Dynamic DMA mapping using the generic device
============================================

James E.J. Bottomley <James.Bottomley@HansenPartnership.com>

This document describes the DMA API. For a more gentle introduction
phrased in terms of the pci_ equivalents (and actual examples), see
DMA-mapping.txt.

This API is split into two pieces. Part I describes the API and the
corresponding pci_ API. Part II describes the extensions to the API
for supporting non-consistent memory machines. Unless you know that
your driver absolutely has to support non-consistent platforms (this
is usually only legacy platforms), you should only use the API
described in part I.

Part I - pci_ and dma_ Equivalent API
-------------------------------------

To get the pci_ API, you must #include <linux/pci.h>
To get the dma_ API, you must #include <linux/dma-mapping.h>


Part Ia - Using large dma-coherent buffers
------------------------------------------

void *
dma_alloc_coherent(struct device *dev, size_t size,
		   dma_addr_t *dma_handle, gfp_t flag)
void *
pci_alloc_consistent(struct pci_dev *dev, size_t size,
		     dma_addr_t *dma_handle)

Consistent memory is memory for which a write by either the device or
the processor can immediately be read by the processor or device
without having to worry about caching effects. (You may however need
to make sure to flush the processor's write buffers before telling
devices to read that memory.)

This routine allocates a region of <size> bytes of consistent memory.
It also returns a <dma_handle> which may be cast to an unsigned
integer the same width as the bus and used as the physical address
base of the region.

Returns: a pointer to the allocated region (in the processor's virtual
address space) or NULL if the allocation failed.

Note: consistent memory can be expensive on some platforms, and the
minimum allocation length may be as big as a page, so you should
consolidate your requests for consistent memory as much as possible.
The simplest way to do that is to use the dma_pool calls (see below).

The flag parameter (dma_alloc_coherent only) allows the caller to
specify the GFP_ flags (see kmalloc) for the allocation (the
implementation may choose to ignore flags that affect the location of
the returned memory, like GFP_DMA). For pci_alloc_consistent, you
must assume GFP_ATOMIC behaviour.

void
dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
		  dma_addr_t dma_handle)
void
pci_free_consistent(struct pci_dev *dev, size_t size, void *cpu_addr,
		    dma_addr_t dma_handle)

Free the region of consistent memory you previously allocated. dev,
size and dma_handle must all be the same as those passed into the
consistent allocate. cpu_addr must be the virtual address returned by
the consistent allocate.

Note that unlike their sibling allocation calls, these routines
may only be called with IRQs enabled.
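
For example, a driver might allocate and free a small descriptor ring
like this (a sketch; RING_BYTES and the surrounding error handling are
illustrative, not part of the API):

	void *vaddr;
	dma_addr_t dma_handle;

	vaddr = dma_alloc_coherent(dev, RING_BYTES, &dma_handle, GFP_KERNEL);
	if (!vaddr)
		return -ENOMEM;	/* allocation failed */
	/* program the device with dma_handle; the CPU uses vaddr */
	...
	dma_free_coherent(dev, RING_BYTES, vaddr, dma_handle);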


Part Ib - Using small dma-coherent buffers
------------------------------------------

To get this part of the dma_ API, you must #include <linux/dmapool.h>

Many drivers need lots of small dma-coherent memory regions for DMA
descriptors or I/O buffers. Rather than allocating in units of a page
or more using dma_alloc_coherent(), you can use DMA pools. These work
much like a struct kmem_cache, except that they use the dma-coherent
allocator, not __get_free_pages(). Also, they understand common
hardware constraints for alignment, like queue heads needing to be
aligned on N-byte boundaries.


struct dma_pool *
dma_pool_create(const char *name, struct device *dev,
		size_t size, size_t align, size_t alloc);

struct pci_pool *
pci_pool_create(const char *name, struct pci_dev *dev,
		size_t size, size_t align, size_t alloc);

The pool create() routines initialize a pool of dma-coherent buffers
for use with a given device. They must be called in a context which
can sleep.

The "name" is for diagnostics (like a struct kmem_cache name); dev and size
are like what you'd pass to dma_alloc_coherent(). The device's hardware
alignment requirement for this type of data is "align" (which is expressed
in bytes, and must be a power of two). If your device has no boundary
crossing restrictions, pass 0 for alloc; passing 4096 says memory allocated
from this pool must not cross 4KByte boundaries.


void *dma_pool_alloc(struct dma_pool *pool, gfp_t gfp_flags,
		     dma_addr_t *dma_handle);

void *pci_pool_alloc(struct pci_pool *pool, gfp_t gfp_flags,
		     dma_addr_t *dma_handle);

This allocates memory from the pool; the returned memory will meet the size
and alignment requirements specified at creation time. Pass GFP_ATOMIC to
prevent blocking, or, if it's permitted (not in_interrupt, not holding SMP
locks), pass GFP_KERNEL to allow blocking. Like dma_alloc_coherent(), this
returns two values: an address usable by the cpu, and the dma address usable
by the pool's device.


void dma_pool_free(struct dma_pool *pool, void *vaddr,
		   dma_addr_t addr);

void pci_pool_free(struct pci_pool *pool, void *vaddr,
		   dma_addr_t addr);

This puts memory back into the pool. The pool is what was passed to
the pool allocation routine; the cpu (vaddr) and dma addresses are what
were returned when that routine allocated the memory being freed.


void dma_pool_destroy(struct dma_pool *pool);

void pci_pool_destroy(struct pci_pool *pool);

The pool destroy() routines free the resources of the pool. They must be
called in a context which can sleep. Make sure you've freed all allocated
memory back to the pool before you destroy it.
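
Putting the pool calls together, a driver's setup and teardown might
look like the following sketch (the name string, sizes and alignment
are illustrative):

	struct dma_pool *pool;
	void *desc;
	dma_addr_t desc_dma;

	/* 64-byte descriptors, 64-byte aligned, no boundary restriction */
	pool = dma_pool_create("mydev_desc", dev, 64, 64, 0);
	if (!pool)
		return -ENOMEM;

	desc = dma_pool_alloc(pool, GFP_KERNEL, &desc_dma);
	if (desc) {
		/* hand desc_dma to the hardware, use desc from the CPU */
		...
		dma_pool_free(pool, desc, desc_dma);
	}
	dma_pool_destroy(pool);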


Part Ic - DMA addressing limitations
------------------------------------

int
dma_supported(struct device *dev, u64 mask)
int
pci_dma_supported(struct pci_dev *hwdev, u64 mask)

Checks to see if the device can support DMA to the memory described by
mask.

Returns: 1 if it can and 0 if it can't.

Notes: This routine merely tests to see if the mask is possible. It
won't change the current mask settings. It is more intended as an
internal API for use by the platform than an external API for use by
driver writers.

int
dma_set_mask(struct device *dev, u64 mask)
int
pci_set_dma_mask(struct pci_dev *dev, u64 mask)

Checks to see if the mask is possible and updates the device
parameters if it is.

Returns: 0 if successful and a negative error if not.
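
A common idiom is to try a large mask first and fall back to a smaller
one. As a sketch (using the DMA_BIT_MASK() helper from
<linux/dma-mapping.h>):

	if (dma_set_mask(dev, DMA_BIT_MASK(64)) &&
	    dma_set_mask(dev, DMA_BIT_MASK(32))) {
		dev_warn(dev, "no suitable DMA mask available\n");
		return -EIO;
	}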

u64
dma_get_required_mask(struct device *dev)

After setting the mask with dma_set_mask(), this API returns the mask
(within the one already set) that the platform actually requires to
operate efficiently. Usually this means the returned mask is the
minimum required to cover all of memory. Examining the required mask
gives drivers with variable descriptor sizes the opportunity to use
smaller descriptors as necessary.

Requesting the required mask does not alter the current mask. If you
wish to take advantage of it, you should issue another dma_set_mask()
call to lower the mask again.
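
For example, a device with both 32-bit and 64-bit descriptor formats
might do the following (a sketch; the descriptor-format flag is
illustrative):

	if (dma_get_required_mask(dev) <= DMA_BIT_MASK(32)) {
		/* everything the platform needs covered fits in 32 bits,
		 * so shrink the mask and use the smaller descriptors */
		dma_set_mask(dev, DMA_BIT_MASK(32));
		use_32bit_descriptors = 1;
	}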


Part Id - Streaming DMA mappings
--------------------------------

dma_addr_t
dma_map_single(struct device *dev, void *cpu_addr, size_t size,
	       enum dma_data_direction direction)
dma_addr_t
pci_map_single(struct pci_dev *hwdev, void *cpu_addr, size_t size,
	       int direction)

Maps a piece of processor virtual memory so it can be accessed by the
device and returns the physical handle of the memory.

The direction for both APIs may be converted freely by casting.
However, the dma_ API uses a strongly typed enumerator for its
direction:

DMA_NONE		= PCI_DMA_NONE		no direction (used for
						debugging)
DMA_TO_DEVICE		= PCI_DMA_TODEVICE	data is going from the
						memory to the device
DMA_FROM_DEVICE		= PCI_DMA_FROMDEVICE	data is coming from
						the device to the
						memory
DMA_BIDIRECTIONAL	= PCI_DMA_BIDIRECTIONAL	direction isn't known

Notes: Not all memory regions in a machine can be mapped by this
API. Further, regions that appear to be physically contiguous in
kernel virtual space may not be contiguous as physical memory. Since
this API does not provide any scatter/gather capability, it will fail
if the user tries to map a non-physically contiguous piece of memory.
For this reason, it is recommended that memory mapped by this API be
obtained only from sources which guarantee it to be physically contiguous
(like kmalloc).

Further, the physical address of the memory must be within the
dma_mask of the device (the dma_mask represents a bit mask of the
addressable region for the device, i.e., if the physical address of
the memory ANDed with the dma_mask is still equal to the physical
address, then the device can perform DMA to the memory). In order to
ensure that the memory allocated by kmalloc is within the dma_mask,
the driver may specify various platform-dependent flags to restrict
the physical memory range of the allocation (e.g. on x86, GFP_DMA
guarantees to be within the first 16MB of available physical memory,
as required by ISA devices).

Note also that the above constraints on physical contiguity and
dma_mask may not apply if the platform has an IOMMU (a device which
supplies a physical to virtual mapping between the I/O memory bus and
the device). However, to be portable, device driver writers may *not*
assume that such an IOMMU exists.

Warnings: Memory coherency operates at a granularity called the cache
line width. In order for memory mapped by this API to operate
correctly, the mapped region must begin exactly on a cache line
boundary and end exactly on one (to prevent two separately mapped
regions from sharing a single cache line). Since the cache line size
may not be known at compile time, the API will not enforce this
requirement. Therefore, it is recommended that driver writers who
don't take special care to determine the cache line size at run time
only map virtual regions that begin and end on page boundaries (which
are guaranteed also to be cache line boundaries).

DMA_TO_DEVICE synchronisation must be done after the last modification
of the memory region by the software and before it is handed off to
the device. Once this primitive is used, memory covered by this
primitive should be treated as read-only by the device. If the device
may write to it at any point, it should be DMA_BIDIRECTIONAL (see
below).

DMA_FROM_DEVICE synchronisation must be done before the driver
accesses data that may be changed by the device. This memory should
be treated as read-only by the driver. If the driver needs to write
to it at any point, it should be DMA_BIDIRECTIONAL (see below).

DMA_BIDIRECTIONAL requires special handling: it means that the driver
isn't sure if the memory was modified before being handed off to the
device and also isn't sure if the device will also modify it. Thus,
you must always sync bidirectional memory twice: once before the
memory is handed off to the device (to make sure all memory changes
are flushed from the processor) and once before the data may be
accessed after being used by the device (to make sure any processor
cache lines are updated with data that the device may have changed).

void
dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
		 enum dma_data_direction direction)
void
pci_unmap_single(struct pci_dev *hwdev, dma_addr_t dma_addr,
		 size_t size, int direction)

Unmaps the region previously mapped. All the parameters must be
identical to those passed in (and returned) by the mapping API.

dma_addr_t
dma_map_page(struct device *dev, struct page *page,
	     unsigned long offset, size_t size,
	     enum dma_data_direction direction)
dma_addr_t
pci_map_page(struct pci_dev *hwdev, struct page *page,
	     unsigned long offset, size_t size, int direction)
void
dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
	       enum dma_data_direction direction)
void
pci_unmap_page(struct pci_dev *hwdev, dma_addr_t dma_address,
	       size_t size, int direction)

API for mapping and unmapping pages. All the notes and warnings
for the other mapping APIs apply here. Also, although the <offset>
and <size> parameters are provided to do partial page mapping, it is
recommended that you never use these unless you really know what the
cache width is.
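
For example, mapping one whole page (offset 0, length PAGE_SIZE)
avoids the partial-page issues above (a sketch):

	dma_addr_t dma;

	dma = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE);
	/* ... let the device fill the page via dma ... */
	dma_unmap_page(dev, dma, PAGE_SIZE, DMA_FROM_DEVICE);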

int
dma_mapping_error(dma_addr_t dma_addr)

int
pci_dma_mapping_error(dma_addr_t dma_addr)

In some circumstances dma_map_single and dma_map_page will fail to create
a mapping. A driver can check for these errors by testing the returned
dma address with dma_mapping_error(). A non-zero return value means the mapping
could not be created and the driver should take appropriate action (e.g.
reduce current DMA mapping usage or delay and try again later).
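
For example, a sketch of mapping a kmalloc'd buffer with error
checking (buf and len are illustrative):

	dma_addr_t dma;

	dma = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dma)) {
		/* e.g. reduce outstanding mappings, or retry later */
		return -ENOMEM;
	}
	/* ... start the transfer ... */
	dma_unmap_single(dev, dma, len, DMA_TO_DEVICE);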

int
dma_map_sg(struct device *dev, struct scatterlist *sg,
	   int nents, enum dma_data_direction direction)
int
pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sg,
	   int nents, int direction)

Maps a scatter/gather list from the block layer.

Returns: the number of physical segments mapped (this may be shorter
than <nents> passed in if the block layer determines that some
elements of the scatter/gather list are physically adjacent and thus
may be mapped with a single entry).

Please note that the sg cannot be mapped again once it has been mapped.
The mapping process is allowed to destroy information in the sg.

As with the other mapping interfaces, dma_map_sg can fail. When it
does, 0 is returned and a driver must take appropriate action. It is
critical that the driver do something; in the case of a block driver,
aborting the request or even oopsing is better than doing nothing and
corrupting the filesystem.

With scatterlists, you use the resulting mapping like this:

	int i, count = dma_map_sg(dev, sglist, nents, direction);
	struct scatterlist *sg;

	for (i = 0, sg = sglist; i < count; i++, sg++) {
		hw_address[i] = sg_dma_address(sg);
		hw_len[i] = sg_dma_len(sg);
	}

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. with an IOMMU, or if several pages just happen to be
physically contiguous) and returns the actual number of sg entries it
mapped them to. On failure, 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

void
dma_unmap_sg(struct device *dev, struct scatterlist *sg,
	     int nhwentries, enum dma_data_direction direction)
void
pci_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sg,
	     int nents, int direction)

Unmap the previously mapped scatter/gather list. All the parameters
must be the same as those passed into the scatter/gather mapping
API.

Note: <nents> must be the number you passed in, *not* the number of
physical entries returned.

void
dma_sync_single(struct device *dev, dma_addr_t dma_handle, size_t size,
		enum dma_data_direction direction)
void
pci_dma_sync_single(struct pci_dev *hwdev, dma_addr_t dma_handle,
		    size_t size, int direction)
void
dma_sync_sg(struct device *dev, struct scatterlist *sg, int nelems,
	    enum dma_data_direction direction)
void
pci_dma_sync_sg(struct pci_dev *hwdev, struct scatterlist *sg,
		int nelems, int direction)

Synchronise a single contiguous or scatter/gather mapping. All the
parameters must be the same as those passed into the single mapping
API.

Notes: You must do this:

- Before reading values that have been written by DMA from the device
  (use the DMA_FROM_DEVICE direction)
- After writing values that will be written to the device using DMA
  (use the DMA_TO_DEVICE direction)
- Before *and* after handing memory to the device if the memory is
  DMA_BIDIRECTIONAL

See also dma_map_single().
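
For example, to reuse one DMA_FROM_DEVICE mapping across several
transfers without remapping it each time (a sketch; buf, len and the
loop condition are illustrative):

	dma_addr_t dma;

	dma = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
	while (more_to_receive) {
		/* ... device writes into the buffer ... */
		dma_sync_single(dev, dma, len, DMA_FROM_DEVICE);
		/* the CPU may now safely read buf */
	}
	dma_unmap_single(dev, dma, len, DMA_FROM_DEVICE);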

dma_addr_t
dma_map_single_attrs(struct device *dev, void *cpu_addr, size_t size,
		     enum dma_data_direction dir,
		     struct dma_attrs *attrs)

void
dma_unmap_single_attrs(struct device *dev, dma_addr_t dma_addr,
		       size_t size, enum dma_data_direction dir,
		       struct dma_attrs *attrs)

int
dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
		 int nents, enum dma_data_direction dir,
		 struct dma_attrs *attrs)

void
dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sgl,
		   int nents, enum dma_data_direction dir,
		   struct dma_attrs *attrs)

The four functions above are just like the counterpart functions
without the _attrs suffixes, except that they pass an optional
struct dma_attrs*.

struct dma_attrs encapsulates a set of "dma attributes". For the
definition of struct dma_attrs see linux/dma-attrs.h.

The interpretation of dma attributes is architecture-specific, and
each attribute should be documented in Documentation/DMA-attributes.txt.

If struct dma_attrs* is NULL, the semantics of each of these
functions is identical to those of the corresponding function
without the _attrs suffix. As a result dma_map_single_attrs()
can generally replace dma_map_single(), etc.

As an example of the use of the *_attrs functions, here's how
you could pass an attribute DMA_ATTR_FOO when mapping memory
for DMA:

#include <linux/dma-attrs.h>
/* DMA_ATTR_FOO should be defined in linux/dma-attrs.h and
 * documented in Documentation/DMA-attributes.txt */
...

	DEFINE_DMA_ATTRS(attrs);
	dma_set_attr(DMA_ATTR_FOO, &attrs);
	....
	n = dma_map_sg_attrs(dev, sg, nents, DMA_TO_DEVICE, &attrs);
	....

Architectures that care about DMA_ATTR_FOO would check for its
presence in their implementations of the mapping and unmapping
routines, e.g.:

void whizco_dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
			     int nents, enum dma_data_direction dir,
			     struct dma_attrs *attrs)
{
	....
	int foo = dma_get_attr(DMA_ATTR_FOO, attrs);
	....
	if (foo)
		/* twizzle the frobnozzle */
	....
}


Part II - Advanced dma_ usage
-----------------------------

Warning: These pieces of the DMA API have no PCI equivalent. They
should also not be used in the majority of cases, since they cater for
unlikely corner cases that don't belong in usual drivers.

If you don't understand how cache line coherency works between a
processor and an I/O device, you should not be using this part of the
API at all.

void *
dma_alloc_noncoherent(struct device *dev, size_t size,
		      dma_addr_t *dma_handle, gfp_t flag)

Identical to dma_alloc_coherent() except that the platform will
choose to return either consistent or non-consistent memory as it sees
fit. By using this API, you are guaranteeing to the platform that you
have all the correct and necessary sync points for this memory in the
driver should it choose to return non-consistent memory.

Note: where the platform can return consistent memory, it will
guarantee that the sync points become nops.

Warning: Handling non-consistent memory is a real pain. You should
only ever use this API if you positively know your driver will be
required to work on one of the rare (usually non-PCI) architectures
that simply cannot make consistent memory.

void
dma_free_noncoherent(struct device *dev, size_t size, void *cpu_addr,
		     dma_addr_t dma_handle)

Free memory allocated by the nonconsistent API. All parameters must
be identical to those passed in (and returned by
dma_alloc_noncoherent()).

int
dma_is_consistent(struct device *dev, dma_addr_t dma_handle)

Returns true if the device dev is performing consistent DMA on the memory
area pointed to by the dma_handle.

int
dma_get_cache_alignment(void)

Returns the processor cache alignment. This is the absolute minimum
alignment *and* width that you must observe when either mapping
memory or doing partial flushes.

Notes: This API may return a number *larger* than the actual cache
line, but it will guarantee that one or more cache lines fit exactly
into the width returned by this call. It will also always be a power
of two for easy alignment.
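
For example, to size separately mapped buffers so that they can never
share a cache line (a sketch using the ALIGN() macro; len is
illustrative):

	size_t aligned_len = ALIGN(len, dma_get_cache_alignment());
	void *buf = kmalloc(aligned_len, GFP_KERNEL);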

void
dma_sync_single_range(struct device *dev, dma_addr_t dma_handle,
		      unsigned long offset, size_t size,
		      enum dma_data_direction direction)

Does a partial sync, starting at offset and continuing for size. You
must be careful to observe the cache alignment and width when doing
anything like this. You must also be extra careful about accessing
memory you intend to sync partially.

void
dma_cache_sync(struct device *dev, void *vaddr, size_t size,
	       enum dma_data_direction direction)

Do a partial sync of memory that was allocated by
dma_alloc_noncoherent(), starting at virtual address vaddr and
continuing on for size. Again, you *must* observe the cache line
boundaries when doing this.
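
Putting the non-consistent calls together (a sketch; size, data and
the device-specific doorbell step are illustrative, and error handling
is omitted):

	void *vaddr;
	dma_addr_t dma;

	vaddr = dma_alloc_noncoherent(dev, size, &dma, GFP_KERNEL);

	/* CPU fills the buffer, then pushes it towards the device */
	memcpy(vaddr, data, size);
	dma_cache_sync(dev, vaddr, size, DMA_TO_DEVICE);
	/* ... tell the device to read from dma ... */

	dma_free_noncoherent(dev, size, vaddr, dma);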

int
dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
			    dma_addr_t device_addr, size_t size, int
			    flags)

Declare region of memory to be handed out by dma_alloc_coherent when
it's asked for coherent memory for this device.

bus_addr is the physical address to which the memory is currently
assigned in the bus responding region (this will be used by the
platform to perform the mapping).

device_addr is the physical address the device needs to be programmed
with to actually address this memory (this will be handed out as the
dma_addr_t in dma_alloc_coherent()).

size is the size of the area (must be a multiple of PAGE_SIZE).

flags can be or'd together and are:

DMA_MEMORY_MAP - request that the memory returned from
dma_alloc_coherent() be directly writable.

DMA_MEMORY_IO - request that the memory returned from
dma_alloc_coherent() be addressable using read/write/memcpy_toio etc.

One or both of these flags must be present.

DMA_MEMORY_INCLUDES_CHILDREN - make the declared memory be allocated by
dma_alloc_coherent of any child devices of this one (for memory residing
on a bridge).

DMA_MEMORY_EXCLUSIVE - only allocate memory from the declared regions.
Do not allow dma_alloc_coherent() to fall back to system memory when
it's out of memory in the declared region.

The return value will be either DMA_MEMORY_MAP or DMA_MEMORY_IO and
must correspond to a passed-in flag (i.e. no returning DMA_MEMORY_IO
if only DMA_MEMORY_MAP was passed in) for success, or zero for
failure.

Note, for DMA_MEMORY_IO returns, all subsequent memory returned by
dma_alloc_coherent() may no longer be accessed directly, but instead
must be accessed using the correct bus functions. If your driver
isn't prepared to handle this contingency, it should not specify
DMA_MEMORY_IO in the input flags.

As a simplification for the platforms, only *one* such region of
memory may be declared per device.

For reasons of efficiency, most platforms choose to track the declared
region only at the granularity of a page. For smaller allocations,
you should use the dma_pool() API.
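
For example, a sketch for a device with 1MB of dedicated memory that
responds at bus address 0x80000000 (the addresses are illustrative):

	if (!dma_declare_coherent_memory(dev, 0x80000000, 0x80000000,
					 1024 * 1024, DMA_MEMORY_MAP))
		dev_err(dev, "could not declare coherent memory\n");
	/* dma_alloc_coherent() for this device now allocates from
	 * the declared region */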

void
dma_release_declared_memory(struct device *dev)

Remove the memory region previously declared from the system. This
API performs *no* in-use checking for this region and will return
unconditionally having removed all the required structures. It is the
driver's job to ensure that no parts of this memory region are
currently in use.

void *
dma_mark_declared_memory_occupied(struct device *dev,
				  dma_addr_t device_addr, size_t size)

This is used to occupy specific regions of the declared space
(dma_alloc_coherent() will hand out the first free region it finds).

device_addr is the *device* address of the region requested.

size is the size (and should be a page-sized multiple).

The return value will be either a pointer to the processor virtual
address of the memory, or an error (via PTR_ERR()) if any part of the
region is occupied.