#
# Block device driver configuration
#

menu "Multi-device support (RAID and LVM)"

config MD
	bool "Multiple devices driver support (RAID and LVM)"
	help
	  Support multiple physical spindles through a single logical device.
	  Required for RAID and logical volume management.

config BLK_DEV_MD
	tristate "RAID support"
	depends on MD
	---help---
	  This driver lets you combine several hard disk partitions into one
	  logical block device. This can be used to simply append one
	  partition to another one or to combine several redundant hard disks
	  into a RAID1/4/5 device so as to provide protection against hard
	  disk failures. This is called "Software RAID" since the combining of
	  the partitions is done by the kernel. "Hardware RAID" means that the
	  combining is done by a dedicated controller; if you have such a
	  controller, you do not need to say Y here.

	  More information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also learn
	  where to get the supporting user space utilities raidtools.
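
	  Arrays are managed from user space; as a quick sketch of day-to-day
	  use (the mdadm utility and the device names in the examples below
	  are illustrative), the state of all active md arrays can be
	  inspected with:
	       cat /proc/mdstat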

config MD_LINEAR
	tristate "Linear (append) mode"
	depends on BLK_DEV_MD
	---help---
	  If you say Y here, then your multiple devices driver will be able to
	  use the so-called linear mode, i.e. it will combine the hard disk
	  partitions by simply appending one to the other.

	  To compile this as a module, choose M here: the module
	  will be called linear.
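
	  For example (a sketch only; device names are illustrative), a
	  linear array over two partitions could be created with mdadm:
	       mdadm --create /dev/md0 --level=linear --raid-devices=2 \
	             /dev/sda1 /dev/sdb1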

config MD_RAID0
	tristate "RAID-0 (striping) mode"
	depends on BLK_DEV_MD
	---help---
	  If you say Y here, then your multiple devices driver will be able to
	  use the so-called raid0 mode, i.e. it will combine the hard disk
	  partitions into one logical device in such a fashion as to fill them
	  up evenly, one chunk here and one chunk there. This will increase
	  the throughput rate if the partitions reside on distinct disks.

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  To compile this as a module, choose M here: the module
	  will be called raid0.
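
	  For example (a sketch; device names and the 64 KB chunk size are
	  illustrative), a two-disk stripe set could be created with:
	       mdadm --create /dev/md0 --level=0 --chunk=64 \
	             --raid-devices=2 /dev/sda1 /dev/sdb1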

config MD_RAID1
	tristate "RAID-1 (mirroring) mode"
	depends on BLK_DEV_MD
	---help---
	  A RAID-1 set consists of several disk drives which are exact copies
	  of each other. In the event of a mirror failure, the RAID driver
	  will continue to use the operational mirrors in the set, providing
	  an error free MD (multiple device) to the higher levels of the
	  kernel. In a set with N drives, the available space is the capacity
	  of a single drive, and the set protects against a failure of
	  (N - 1) drives.

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  If you want to use such a RAID-1 set, say Y. To compile this code
	  as a module, choose M here: the module will be called raid1.
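
	  For example (a sketch; device names are illustrative), a two-way
	  mirror could be created with:
	       mdadm --create /dev/md0 --level=1 --raid-devices=2 \
	             /dev/sda1 /dev/sdb1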

config MD_RAID10
	tristate "RAID-10 (mirrored striping) mode (EXPERIMENTAL)"
	depends on BLK_DEV_MD && EXPERIMENTAL
	---help---
	  RAID-10 provides a combination of striping (RAID-0) and
	  mirroring (RAID-1) with easier configuration and more flexible
	  layout.
	  Unlike RAID-0, but like RAID-1, RAID-10 requires all devices to
	  be the same size (or at least, only as much as the smallest device
	  will be used).
	  RAID-10 provides a variety of layouts that provide different levels
	  of redundancy and performance.

	  RAID-10 requires mdadm-1.7.0 or later, available at:

	  ftp://ftp.kernel.org/pub/linux/utils/raid/mdadm/
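
	  For example (a sketch; device names are illustrative, and the
	  "near=2" layout is just one of the available choices), a four-disk
	  set could be created with:
	       mdadm --create /dev/md0 --level=10 --layout=n2 \
	             --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1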

config MD_RAID5
	tristate "RAID-4/RAID-5 mode"
	depends on BLK_DEV_MD
	---help---
	  A RAID-5 set of N drives with a capacity of C MB per drive provides
	  the capacity of C * (N - 1) MB, and protects against a failure
	  of a single drive. For a given sector (row) number, (N - 1) drives
	  contain data sectors, and one drive contains the parity protection.
	  For a RAID-4 set, the parity blocks are present on a single drive,
	  while a RAID-5 set distributes the parity across the drives in one
	  of the available parity distribution methods.

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  If you want to use such a RAID-4/RAID-5 set, say Y. To
	  compile this code as a module, choose M here: the module
	  will be called raid5.
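
	  For example, four 100 GB drives in RAID-5 yield 100 * (4 - 1) =
	  300 GB of usable space. As a sketch (device names are
	  illustrative), such a set could be created with:
	       mdadm --create /dev/md0 --level=5 --raid-devices=4 \
	             /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1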

config MD_RAID5_RESHAPE
	bool "Support adding drives to a raid-5 array (experimental)"
	depends on MD_RAID5 && EXPERIMENTAL
	---help---
	  A RAID-5 set can be expanded by adding extra drives. This
	  requires "restriping" the array which means (almost) every
	  block must be written to a different place.

	  This option allows such restriping to be done while the array
	  is online. However it is still EXPERIMENTAL code. It should
	  work, but please be sure that you have backups.

	  You will need mdadm version 2.4.1 or later to use this
	  feature safely. During the early stage of reshape there is
	  a critical section where live data is being over-written. A
	  crash during this time needs extra care for recovery. The
	  newer mdadm takes a copy of the data in the critical section
	  and will restore it, if necessary, after a crash.

	  The mdadm usage is e.g.
	       mdadm --grow /dev/md1 --raid-disks=6
	  to grow '/dev/md1' to having 6 disks.

	  Note: The array can only be expanded, not contracted.
	  There should be enough spares already present to make the new
	  array workable.
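
	  As a sketch (device and array names are illustrative), a spare
	  can be added before growing with:
	       mdadm --add /dev/md1 /dev/sdf1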

config MD_RAID6
	tristate "RAID-6 mode"
	depends on BLK_DEV_MD
	---help---
	  A RAID-6 set of N drives with a capacity of C MB per drive
	  provides the capacity of C * (N - 2) MB, and protects
	  against a failure of any two drives. For a given sector
	  (row) number, (N - 2) drives contain data sectors, and two
	  drives contain two independent redundancy syndromes. Like
	  RAID-5, RAID-6 distributes the syndromes across the drives
	  in one of the available parity distribution methods.

	  RAID-6 requires mdadm-1.5.0 or later, available at:

	  ftp://ftp.kernel.org/pub/linux/utils/raid/mdadm/

	  If you want to use such a RAID-6 set, say Y. To compile
	  this code as a module, choose M here: the module will be
	  called raid6.
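
	  For example, six 100 GB drives in RAID-6 yield 100 * (6 - 2) =
	  400 GB of usable space. As a sketch (device names are
	  illustrative), such a set could be created with:
	       mdadm --create /dev/md0 --level=6 --raid-devices=6 \
	             /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 \
	             /dev/sde1 /dev/sdf1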

config MD_MULTIPATH
	tristate "Multipath I/O support"
	depends on BLK_DEV_MD
	help
	  Multipath-IO is the ability of certain devices to address the same
	  physical disk over multiple 'IO paths'. The code ensures that such
	  paths can be defined and handled at runtime, and ensures that a
	  transparent failover to the backup path(s) happens if an IO error
	  arrives on the primary path.
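
	  As a sketch (assuming mdadm supports the "multipath" level, and
	  with illustrative device names for two paths to the same disk), a
	  multipath md device could be created with:
	       mdadm --create /dev/md0 --level=multipath --raid-devices=2 \
	             /dev/sdc /dev/sdd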

config MD_FAULTY
	tristate "Faulty test module for MD"
	depends on BLK_DEV_MD
	help
	  The "faulty" module allows for a block device that occasionally
	  returns read or write errors. It is useful for testing.
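
	  As a sketch (assuming mdadm supports the "faulty" level; the
	  device name is illustrative), a test device could be created with:
	       mdadm --create /dev/md0 --level=faulty --raid-devices=1 \
	             /dev/sda1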

config BLK_DEV_DM
	tristate "Device mapper support"
	---help---
	  Device-mapper is a low level volume manager. It works by allowing
	  people to specify mappings for ranges of logical sectors. Various
	  mapping types are available; in addition, people may write their own
	  modules containing custom mappings if they wish.

	  Higher level volume managers such as LVM2 use this driver.

	  To compile this as a module, choose M here: the module will be
	  called dm-mod.
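
	  As a sketch of the low-level interface (assuming the dmsetup
	  utility from the device-mapper tools; device names are
	  illustrative), a linear mapping covering a whole partition can
	  be created with:
	       echo "0 `blockdev --getsz /dev/sdb1` linear /dev/sdb1 0" \
	             | dmsetup create mydev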

config DM_CRYPT
	tristate "Crypt target support"
	depends on BLK_DEV_DM && EXPERIMENTAL
	---help---
	  This device-mapper target allows you to create a device that
	  transparently encrypts the data on it. You'll need to activate
	  the ciphers you're going to use in the cryptoapi configuration.

	  Information on how to use dm-crypt can be found on

	  <http://www.saout.de/misc/dm-crypt/>

	  To compile this code as a module, choose M here: the module will
	  be called dm-crypt.
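
	  As a sketch (assuming the cryptsetup utility; the mapping and
	  device names are illustrative), an encrypted mapping over a
	  partition can be set up with:
	       cryptsetup create cryptvol /dev/sdb1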

config DM_SNAPSHOT
	tristate "Snapshot target (EXPERIMENTAL)"
	depends on BLK_DEV_DM && EXPERIMENTAL
	---help---
	  Allow volume managers to take writeable snapshots of a device.
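
	  As a sketch of the raw table format (assuming dmsetup; the device
	  names, the persistent "P" flag and the 16-sector chunk size are
	  illustrative), a snapshot of an origin device with a COW store
	  could be created with:
	       echo "0 `blockdev --getsz /dev/sdb1` snapshot /dev/sdb1 /dev/sdc1 P 16" \
	             | dmsetup create snap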

config DM_MIRROR
	tristate "Mirror target (EXPERIMENTAL)"
	depends on BLK_DEV_DM && EXPERIMENTAL
	---help---
	  Allow volume managers to mirror logical volumes, also
	  needed for live data migration tools such as 'pvmove'.
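
	  As a sketch (assuming LVM2 is in use; device names are
	  illustrative), this target is exercised when migrating data off
	  a physical volume with:
	       pvmove /dev/sdb1 /dev/sdc1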

config DM_ZERO
	tristate "Zero target (EXPERIMENTAL)"
	depends on BLK_DEV_DM && EXPERIMENTAL
	---help---
	  A target that discards writes, and returns all zeroes for
	  reads. Useful in some recovery situations.
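
	  As a sketch (assuming dmsetup; the name and the 2097152-sector,
	  i.e. 1 GB, size are illustrative), a zero device can be created
	  with:
	       echo "0 2097152 zero" | dmsetup create zerodev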

config DM_MULTIPATH
	tristate "Multipath target (EXPERIMENTAL)"
	depends on BLK_DEV_DM && EXPERIMENTAL
	---help---
	  Allow volume managers to support multipath hardware.
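
	  As a sketch (assuming the userspace multipath-tools package, which
	  assembles the multipath tables), paths can be scanned and maps
	  created with:
	       multipath -v2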

config DM_MULTIPATH_EMC
	tristate "EMC CX/AX multipath support (EXPERIMENTAL)"
	depends on DM_MULTIPATH && BLK_DEV_DM && EXPERIMENTAL
	---help---
	  Multipath support for EMC CX/AX series hardware.

endmenu