
Computer Technicians 204: RAID Guide


Mirror, mirror on the drive

The first true RAID level is nearly as common as striping, and is likewise supported by most modern motherboards without a dedicated RAID controller. Also known as mirroring mode, it works exactly as its name suggests – it takes a drive with stored data and keeps an exact, 1:1 copy of it on a second drive. Its main advantage is that, by writing the same data twice, the failure rate of the whole system becomes the product of the failure rates of both drives – if the chance of one disk failing in a given week is, say, 1 in 1,000, then the chance of both mirrored disks of the same kind failing in that week is 1 in 1,000,000, which is nearly negligible. Should one sector on one drive fail, the other disk can patch it up by providing the data from the same sector, and if data integrity is consistent on both disks, more intelligent controllers can read off both drives at the same time in a zig-zag pattern, increasing read speed.

The obvious disadvantage is capacity – two identical 160 GB drives still offer only 160 GB of usable space (albeit mirrored), and if the disks aren’t the same size, the lower of the two capacities is assumed for both, wasting the difference. However, for critical data storage a RAID 1 array is relatively inexpensive (as RAID was supposed to be in the first place) and, most importantly, redundant, keeping the data safe at least until the dead drive can be swapped and re-mirrored. In some cases the disks can be temporarily separated (known as splitting the mirror), the now-independent drive backed up through other means, and then reintegrated into the array. This is useful for taking snapshots of the drive over a span of time, either for backup or for comparison.
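
To picture the mechanism, here is a minimal Python sketch of mirroring, with two byte arrays standing in for the physical drives (the drive size, offsets and the failover flag are purely illustrative, not any controller’s actual interface): every write lands on both members, and a read can fall back to the surviving copy.

```python
# Two equally sized "drives", modelled as byte arrays (hypothetical stand-ins).
DRIVE_SIZE = 16
drive_a = bytearray(DRIVE_SIZE)
drive_b = bytearray(DRIVE_SIZE)

def mirrored_write(offset, data):
    """RAID 1: identical data is written to both members of the mirror."""
    drive_a[offset:offset + len(data)] = data
    drive_b[offset:offset + len(data)] = data

def mirrored_read(offset, length, a_failed=False):
    """Read from the primary drive; fall back to the mirror if it has failed."""
    source = drive_b if a_failed else drive_a
    return bytes(source[offset:offset + length])

mirrored_write(0, b"critical")
print(mirrored_read(0, 8))                 # b'critical' from drive A
print(mirrored_read(0, 8, a_failed=True))  # same data recovered from drive B
```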

Encoding and parity

RAID levels 2, 3 and 4 have rarely if ever been implemented in RAID controllers, as they are variants on a theme that culminated in RAID level 5. The key element in all of these levels is parity – a method used to detect whether an error has happened and to correct it. To demonstrate, suppose you have three disks in a stripe: disks 1, 2 and 3. If you write a data block A that is three bytes long across them, the bytes become A1, A2 and A3. A very simple way of making a fail-safe byte is to create parity. Parity records, for each bit position, whether the number of “1” bits across the data is odd or even, and it is produced by performing an XOR (eXclusive OR) operation on all involved bytes. That is, the parity byte Ap is actually A1 XOR A2 XOR A3. The usefulness of parity is immediately visible: just as Ap is the XOR of all bytes belonging to the same data block, A1 is the XOR of the remaining bytes of the block and Ap. In other words, should a single byte become unreadable, the controller can reconstruct the missing byte simply by XORing the surviving pieces together. This approach has been used in RAID 3 and RAID 4, with the distinction that in RAID 4 the XOR is applied to entire 512 B sectors rather than individual bytes. RAID 2, being somewhat of an oddball, uses a different encoding method called Hamming coding, which applies the XOR operation to overlapping groups of bits; it generates a pattern called a syndrome that can not only correct the error but also pinpoint which bit was erroneous.
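
To make the XOR arithmetic concrete, here is a minimal Python sketch; the byte values are invented for illustration. The parity byte is the XOR of the three data bytes, and XORing the parity with the two survivors recovers the lost one.

```python
# Three data bytes striped across disks 1, 2 and 3 (values chosen arbitrarily).
a1, a2, a3 = 0x4D, 0x2A, 0x91

# Parity byte Ap = A1 XOR A2 XOR A3
ap = a1 ^ a2 ^ a3

# Suppose disk 2 dies and A2 is lost; XOR the survivors with the parity byte.
recovered_a2 = a1 ^ a3 ^ ap
assert recovered_a2 == a2
print(f"Ap = {ap:#04x}, recovered A2 = {recovered_a2:#04x}")
```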

As mentioned before, all these approaches culminated in the RAID 5 array, which is generally only offered by specialized controllers. For a system holding absolutely critical data, however, RAID 5 is unparalleled – it uses the same block-XOR approach as RAID 4, with the difference that the parity blocks are stored diagonally across the disks, rotating from stripe to stripe, so no single drive acts as a dedicated parity disk and any one member can fail without destroying the array’s fail-safe ability. It’s considered the optimal cross between a stripe and a checksum system – you get most of the combined drive capacity (one drive’s worth goes to parity) along with the block-based safety. As with the methods above, it’s always wiser to use disks that are as similar as possible in terms of capacity. Optionally, some controllers offer an even higher RAID level, level 6, which sacrifices the capacity of one more drive but adds a second, independently calculated parity block, letting the array survive two simultaneous disk failures. Note that a RAID 5 or 6 array is best built from identical drive models taken from different production batches, as this minimizes the probability of multiple disks failing at once.
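
As a rough sketch of the idea (not any controller’s actual algorithm), the Python snippet below rotates the parity block to a different disk on each stripe and rebuilds a missing disk’s block by XORing the survivors; the block contents and the exact rotation rule are illustrative assumptions.

```python
from functools import reduce

DISKS = 3  # three member disks; blocks here are 4-byte strings for illustration

def xor_blocks(blocks):
    """XOR a list of equally sized blocks byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

def build_stripe(stripe_no, data_blocks):
    """Lay out one stripe: the data blocks plus a parity block, with the
    parity position rotating from stripe to stripe (illustrative rule)."""
    parity = xor_blocks(data_blocks)
    parity_disk = (DISKS - 1) - (stripe_no % DISKS)
    stripe = list(data_blocks)
    stripe.insert(parity_disk, parity)
    return stripe  # element i is the block written to disk i

def rebuild(stripe, dead_disk):
    """Recover the block of a failed disk by XORing all surviving blocks."""
    survivors = [blk for i, blk in enumerate(stripe) if i != dead_disk]
    return xor_blocks(survivors)

stripe0 = build_stripe(0, [b"DATA", b"MORE"])
print(rebuild(stripe0, dead_disk=1) == stripe0[1])  # True: the lost block is back
```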

The Cuckoo’s Nest

If RAID arrays by themselves weren’t enough, there are several possible nested arrays, along with proprietary RAID systems (which, again, may or may not be true RAID mechanisms). All nested arrays are written the same way: the lowest level of organization comes first and the last digit denotes the top level, and if any level other than the top one is a zero, a plus sign is added for distinction. So far, the most commonly used are 0+1 (a mirror of stripes), its opposite the RAID 10 array (a stripe of mirrors), RAID 30, 50 and 60 (which combine striping with the parity forms), and RAID 100 (a stripe of RAID 10 arrays, sometimes called “plaid RAID” since it is effectively a stripe of stripes of mirrors).
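
As a rough illustration of the nesting order, the following Python sketch maps a logical chunk number to physical disks for a hypothetical four-disk RAID 10 (stripe of mirrors) and a four-disk RAID 0+1 (mirror of stripes); the disk numbering and chunk mapping are arbitrary and only meant to show which level sits on top.

```python
def raid10_disks(chunk):
    """RAID 10: stripe across two mirrored pairs, disks (0,1) and (2,3)."""
    pair = chunk % 2                 # which mirrored pair this stripe unit hits
    return (2 * pair, 2 * pair + 1)  # both members of that pair get a copy

def raid01_disks(chunk):
    """RAID 0+1: two stripes, disks (0,1) and (2,3), mirrored as whole sets."""
    member = chunk % 2               # which member of each stripe set is hit
    return (member, member + 2)      # same position on both stripe copies

for chunk in range(4):
    print(chunk, raid10_disks(chunk), raid01_disks(chunk))
```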

Of the proprietary RAID models, the most noteworthy are as follows. DPRAID stands for Double Parity RAID, which calculates not only the regular parity but a diagonal one as well, of the form A1 XOR Bp XOR C3 (for a three-disk, two-parity-disk system). Matrix RAID, used in some Intel chipsets, allows the user to partition the same set of drives into separate areas and designate one as a RAID 0 volume and another as a RAID 1 volume, giving speed and safety from the same physical disks. Finally, the Linux md RAID driver (multiple devices driver) can build a RAID 10 array in an unusual mode: it can keep two copies of every chunk on a three-disk configuration by simply repeating each data chunk the desired number of times and wrapping back to the first drive after writing to the third.
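
The chunk-repetition trick described for the md driver can be modelled roughly as follows; this is a simplified Python sketch under the assumption of two copies per chunk and three disks, not the driver’s actual code.

```python
DISKS = 3    # odd number of disks, yet every chunk still exists twice
COPIES = 2

def md_raid10_near_layout(num_chunks):
    """Rough model of a 'near' RAID 10 layout: write each chunk COPIES times
    on consecutive disks, wrapping back to disk 0 after the last disk."""
    layout = {d: [] for d in range(DISKS)}
    slot = 0
    for chunk in range(num_chunks):
        for _ in range(COPIES):
            layout[slot % DISKS].append(chunk)
            slot += 1
    return layout

print(md_raid10_near_layout(6))
# {0: [0, 1, 3, 4], 1: [0, 2, 3, 5], 2: [1, 2, 4, 5]}
```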

 
Written by Boris M



  • Jeff says:

    I like that mirroring is so reliable in comparison to backup software (no client input needed) and that it is up to the moment rather than up to the last scheduled backup. Clients don’t notice it’s working and are not inclined to fiddle with it, as is common with backup software.

  • anis says:

    how to remove laptop bios password
