# Block device driver configuration

menu "Multi-device support (RAID and LVM)"

config MD
	bool "Multiple devices driver support (RAID and LVM)"
	help
	  Support multiple physical spindles through a single logical device.
	  Required for RAID and logical volume management.

config BLK_DEV_MD
	tristate "RAID support"
	depends on MD
	help
	  This driver lets you combine several hard disk partitions into one
	  logical block device. This can be used to simply append one
	  partition to another one or to combine several redundant hard disks
	  into a RAID1/4/5 device so as to provide protection against hard
	  disk failures. This is called "Software RAID" since the combining of
	  the partitions is done by the kernel. "Hardware RAID" means that the
	  combining is done by a dedicated controller; if you have such a
	  controller, you do not need to say Y here.

	  More information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also learn
	  where to get the supporting user space utilities raidtools.
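
	  As a quick illustration, the state of all running MD devices
	  can be inspected at any time with:

	       cat /proc/mdstat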

config MD_LINEAR
	tristate "Linear (append) mode"
	depends on BLK_DEV_MD
	help
	  If you say Y here, then your multiple devices driver will be able to
	  use the so-called linear mode, i.e. it will combine the hard disk
	  partitions by simply appending one to the other.

	  To compile this as a module, choose M here: the module
	  will be called linear.
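
	  For example (device names are illustrative, not defaults), two
	  partitions might be appended into one linear device with mdadm:

	       mdadm --create /dev/md0 --level=linear --raid-devices=2 \
	            /dev/sda1 /dev/sdb1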

config MD_RAID0
	tristate "RAID-0 (striping) mode"
	depends on BLK_DEV_MD
	help
	  If you say Y here, then your multiple devices driver will be able to
	  use the so-called raid0 mode, i.e. it will combine the hard disk
	  partitions into one logical device in such a fashion as to fill them
	  up evenly, one chunk here and one chunk there. This will increase
	  the throughput rate if the partitions reside on distinct disks.

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  To compile this as a module, choose M here: the module
	  will be called raid0.
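
	  For example (device names are illustrative), a two-disk stripe
	  might be created with mdadm:

	       mdadm --create /dev/md0 --level=0 --raid-devices=2 \
	            /dev/sda1 /dev/sdb1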

config MD_RAID1
	tristate "RAID-1 (mirroring) mode"
	depends on BLK_DEV_MD
	help
	  A RAID-1 set consists of several disk drives which are exact copies
	  of each other. In the event of a mirror failure, the RAID driver
	  will continue to use the operational mirrors in the set, providing
	  an error free MD (multiple device) to the higher levels of the
	  kernel. In a set with N drives, the available space is the capacity
	  of a single drive, and the set protects against a failure of
	  (N - 1) drives.

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  If you want to use such a RAID-1 set, say Y. To compile this code
	  as a module, choose M here: the module will be called raid1.
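
	  For example (device names are illustrative), a two-disk mirror
	  might be created with mdadm:

	       mdadm --create /dev/md0 --level=1 --raid-devices=2 \
	            /dev/sda1 /dev/sdb1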

config MD_RAID10
	tristate "RAID-10 (mirrored striping) mode (EXPERIMENTAL)"
	depends on BLK_DEV_MD && EXPERIMENTAL
	help
	  RAID-10 provides a combination of striping (RAID-0) and
	  mirroring (RAID-1) with easier configuration and more flexible
	  layout.

	  Unlike RAID-0, but like RAID-1, RAID-10 requires all devices to
	  be the same size (or at least, only as much as the smallest
	  device will be used anyway).

	  RAID-10 provides a variety of layouts that provide different levels
	  of redundancy and performance.

	  RAID-10 requires mdadm-1.7.0 or later, available at:

	  ftp://ftp.kernel.org/pub/linux/utils/raid/mdadm/
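
	  For example (device names are illustrative), a four-disk set
	  using the default "near=2" layout might be created with:

	       mdadm --create /dev/md0 --level=10 --layout=n2 \
	            --raid-devices=4 /dev/sd[abcd]1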

config MD_RAID5
	tristate "RAID-4/RAID-5 mode"
	depends on BLK_DEV_MD
	help
	  A RAID-5 set of N drives with a capacity of C MB per drive provides
	  the capacity of C * (N - 1) MB, and protects against a failure
	  of a single drive. For a given sector (row) number, (N - 1) drives
	  contain data sectors, and one drive contains the parity protection.
	  For a RAID-4 set, the parity blocks are present on a single drive,
	  while a RAID-5 set distributes the parity across the drives in one
	  of the available parity distribution methods.

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  If you want to use such a RAID-4/RAID-5 set, say Y. To
	  compile this code as a module, choose M here: the module
	  will be called raid5.
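
	  As a worked example of the capacity formula above, three
	  100 MB drives yield a 200 MB RAID-5 set. With illustrative
	  device names, it might be created with:

	       mdadm --create /dev/md0 --level=5 --raid-devices=3 \
	            /dev/sd[abc]1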

config MD_RAID5_RESHAPE
	bool "Support adding drives to a raid-5 array (experimental)"
	depends on MD_RAID5 && EXPERIMENTAL
	help
	  A RAID-5 set can be expanded by adding extra drives. This
	  requires "restriping" the array which means (almost) every
	  block must be written to a different place.

	  This option allows such restriping to be done while the array
	  is online. However it is still EXPERIMENTAL code. It should
	  work, but please be sure that you have backups.

	  You will need a version of mdadm newer than 2.3.1. During the
	  early stage of reshape there is a critical section where live data
	  is being over-written. A crash during this time needs extra care
	  for recovery. The newer mdadm takes a copy of the data in the
	  critical section and will restore it, if necessary, after a crash.

	  The mdadm usage is e.g.
	       mdadm --grow /dev/md1 --raid-disks=6
	  to grow '/dev/md1' to having 6 disks.

	  Note: The array can only be expanded, not contracted.
	  There should be enough spares already present to make the new
	  array workable.

config MD_RAID6
	tristate "RAID-6 mode"
	depends on BLK_DEV_MD
	help
	  A RAID-6 set of N drives with a capacity of C MB per drive
	  provides the capacity of C * (N - 2) MB, and protects
	  against a failure of any two drives. For a given sector
	  (row) number, (N - 2) drives contain data sectors, and two
	  drives contain two independent redundancy syndromes. Like
	  RAID-5, RAID-6 distributes the syndromes across the drives
	  in one of the available parity distribution methods.

	  RAID-6 requires mdadm-1.5.0 or later, available at:

	  ftp://ftp.kernel.org/pub/linux/utils/raid/mdadm/

	  If you want to use such a RAID-6 set, say Y. To compile
	  this code as a module, choose M here: the module will be
	  called raid6.
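
	  As a worked example of the capacity formula above, four
	  100 MB drives yield a 200 MB RAID-6 set. With illustrative
	  device names, it might be created with:

	       mdadm --create /dev/md0 --level=6 --raid-devices=4 \
	            /dev/sd[abcd]1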

config MD_MULTIPATH
	tristate "Multipath I/O support"
	depends on BLK_DEV_MD
	help
	  Multipath-IO is the ability of certain devices to address the same
	  physical disk over multiple 'IO paths'. The code ensures that such
	  paths can be defined and handled at runtime, and ensures that a
	  transparent failover to the backup path(s) happens if I/O errors
	  arrive on the primary path.
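
	  For example (device names are illustrative), two paths to the
	  same physical disk might be grouped into one device with:

	       mdadm --create /dev/md0 --level=multipath \
	            --raid-devices=2 /dev/sda1 /dev/sdb1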

config MD_FAULTY
	tristate "Faulty test module for MD"
	depends on BLK_DEV_MD
	help
	  The "faulty" module allows for a block device that occasionally
	  returns read or write errors. It is useful for testing.
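
	  For example (device name is illustrative), a faulty test
	  device wrapping a single partition might be created with:

	       mdadm --create /dev/md0 --level=faulty --raid-devices=1 \
	            /dev/sda1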

config BLK_DEV_DM
	tristate "Device mapper support"
	depends on MD
	help
	  Device-mapper is a low level volume manager. It works by allowing
	  people to specify mappings for ranges of logical sectors. Various
	  mapping types are available; in addition, people may write their own
	  modules containing custom mappings if they wish.

	  Higher level volume managers such as LVM2 use this driver.

	  To compile this as a module, choose M here: the module will be
	  called dm-mod.
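
	  For example (name, size and device are illustrative), a simple
	  linear mapping onto an existing partition might be created by
	  feeding a table to dmsetup:

	       echo "0 1024 linear /dev/sda1 0" | dmsetup create mydev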

config DM_CRYPT
	tristate "Crypt target support"
	depends on BLK_DEV_DM && EXPERIMENTAL
	select CRYPTO
	help
	  This device-mapper target allows you to create a device that
	  transparently encrypts the data on it. You'll need to activate
	  the ciphers you're going to use in the cryptoapi configuration.

	  Information on how to use dm-crypt can be found on

	  <http://www.saout.de/misc/dm-crypt/>

	  To compile this code as a module, choose M here: the module will
	  be called dm-crypt.
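
	  For example (names are illustrative), a plain-mode encrypted
	  mapping might be set up with the cryptsetup utility:

	       cryptsetup -c aes-cbc-essiv:sha256 create cryptvol /dev/sdb1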

config DM_SNAPSHOT
	tristate "Snapshot target (EXPERIMENTAL)"
	depends on BLK_DEV_DM && EXPERIMENTAL
	help
	  Allow volume managers to take writeable snapshots of a device.
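
	  For example (volume names are illustrative), LVM2 uses this
	  target when a snapshot logical volume is created:

	       lvcreate --snapshot --size 1G --name snap /dev/vg0/lvol0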

config DM_MIRROR
	tristate "Mirror target (EXPERIMENTAL)"
	depends on BLK_DEV_DM && EXPERIMENTAL
	help
	  Allow volume managers to mirror logical volumes, also
	  needed for live data migration tools such as 'pvmove'.
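
	  For example (device names are illustrative), LVM2 uses this
	  target when moving live data off a physical volume:

	       pvmove /dev/sda1 /dev/sdb1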

config DM_ZERO
	tristate "Zero target (EXPERIMENTAL)"
	depends on BLK_DEV_DM && EXPERIMENTAL
	help
	  A target that discards writes, and returns all zeroes for
	  reads. Useful in some recovery situations.
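
	  For example (name and size are illustrative), a 1 GB zero
	  device might be created by feeding a table to dmsetup
	  (2097152 sectors of 512 bytes = 1 GB):

	       echo "0 2097152 zero" | dmsetup create zerodev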

config DM_MULTIPATH
	tristate "Multipath target (EXPERIMENTAL)"
	depends on BLK_DEV_DM && EXPERIMENTAL
	help
	  Allow volume managers to support multipath hardware.
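
	  For example, the userspace multipath-tools package manages
	  this target; once configured, active multipath maps can be
	  listed with:

	       multipath -ll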

config DM_MULTIPATH_EMC
	tristate "EMC CX/AX multipath support (EXPERIMENTAL)"
	depends on DM_MULTIPATH && BLK_DEV_DM && EXPERIMENTAL
	help
	  Multipath support for EMC CX/AX series hardware.

endmenu