				   inotify
	    a powerful yet simple file change notification system



Document started 15 Mar 2005 by Robert Love <rml@novell.com>


(i) User Interface

Inotify is controlled by a set of three system calls and normal file I/O on a
returned file descriptor.

The first step in using inotify is to initialise an inotify instance:

	int fd = inotify_init ();

Each instance is associated with a unique, ordered queue.

Change events are managed by "watches".  A watch is an (object,mask) pair where
the object is a file or directory and the mask is a bit mask of one or more
inotify events that the application wishes to receive.  See <linux/inotify.h>
for valid events.  A watch is referenced by a watch descriptor, or wd.

Watches are added via a path to the file or directory being watched.

Watches on a directory will return events on any files inside of the directory.

Adding a watch is simple:

	int wd = inotify_add_watch (fd, path, mask);

Where "fd" is the return value from inotify_init(), path is the path to the
object to watch, and mask is the watch mask (see <linux/inotify.h>).

You can update an existing watch in the same manner, by passing in a new mask.
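
For instance (the event bits below are only illustrative values from
<linux/inotify.h>), re-adding the watch on the same path with a different mask
updates the existing watch in place and yields the same wd:

	int wd2 = inotify_add_watch (fd, path, IN_MODIFY | IN_CREATE);
	/* wd2 == wd: the existing watch now uses the new mask */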

An existing watch is removed via

	int ret = inotify_rm_watch (fd, wd);

Events are provided in the form of an inotify_event structure that is read(2)
from a given inotify instance.  The filename is of dynamic length and follows
the struct.  It is of size len.  The filename is padded with null bytes to
ensure proper alignment.  This padding is reflected in len.

You can slurp multiple events by passing a large buffer, for example

	ssize_t len = read (fd, buf, BUF_LEN);

Where "buf" is a pointer to an array of "inotify_event" structures at least
BUF_LEN bytes in size.  The above example will return as many events as are
available and fit in BUF_LEN.
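
A sketch of walking such a buffer might look like the following (BUF_LEN, the
printf() calls and the surrounding error handling are assumptions of this
example, not part of the interface):

	char buf[BUF_LEN];
	ssize_t len;
	size_t i = 0;

	len = read (fd, buf, BUF_LEN);

	while (len > 0 && i < (size_t) len) {
		struct inotify_event *event = (struct inotify_event *) &buf[i];

		printf ("wd=%d mask=%x cookie=%x len=%u\n",
			event->wd, event->mask, event->cookie, event->len);

		if (event->len)
			/* the null-padded filename follows the structure */
			printf ("name=%s\n", event->name);

		/* advance past this event and its padded filename */
		i += sizeof (struct inotify_event) + event->len;
	}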

Each inotify instance fd is also select()- and poll()-able.
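
For example, a minimal poll-based wait might look like this (handle_events()
is a hypothetical helper of this sketch, not part of the API):

	struct pollfd pfd = { .fd = fd, .events = POLLIN };

	int ret = poll (&pfd, 1, 5000);	/* wait up to five seconds */
	if (ret > 0)
		/* queued events are ready; read(2) will not block */
		handle_events (fd);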

You can find the size of the current event queue via the standard FIONREAD
ioctl on the fd returned by inotify_init().
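
For instance (a sketch; error handling omitted):

	int queued;

	ioctl (fd, FIONREAD, &queued);
	/* "queued" is the number of bytes of events ready to be read */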

All watches are destroyed and cleaned up on close.


(ii) Prototypes

	int inotify_init (void);
	int inotify_add_watch (int fd, const char *path, __u32 mask);
	int inotify_rm_watch (int fd, __u32 wd);


(iii) Kernel Interface

Inotify's kernel API consists of a set of functions for managing watches and an
event callback.

To use the kernel API, you must first initialize an inotify instance with a set
of inotify_operations.  You are given an opaque inotify_handle, which you use
for any further calls to inotify.

	struct inotify_handle *ih = inotify_init(my_event_handler);

You must provide a function for processing events and a function for destroying
the inotify watch.

	void handle_event(struct inotify_watch *watch, u32 wd, u32 mask,
			  u32 cookie, const char *name, struct inode *inode)

	watch - the pointer to the inotify_watch that triggered this call
	wd - the watch descriptor
	mask - describes the event that occurred
	cookie - an identifier for synchronizing events
	name - the dentry name for affected files in a directory-based event
	inode - the affected inode in a directory-based event

	void destroy_watch(struct inotify_watch *watch)
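
As an illustrative sketch of what the my_event_handler argument above could
point to (the member names handle_event and destroy_watch are assumed to match
the callbacks above, and my_ops, my_handle_event and my_destroy_watch are
hypothetical names; error handling is omitted):

	static struct inotify_operations my_ops = {
		.handle_event  = my_handle_event,
		.destroy_watch = my_destroy_watch,
	};

	struct inotify_handle *ih = inotify_init(&my_ops);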

You may add watches by providing a pre-allocated and initialized inotify_watch
structure and specifying the inode to watch along with an inotify event mask.
You must pin the inode during the call.  You will likely wish to embed the
inotify_watch structure in a structure of your own which contains other
information about the watch.  Once you add an inotify watch, it is immediately
subject to removal depending on filesystem events.  You must grab a reference if
you depend on the watch hanging around after the call.
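
Such a containing structure might look like this (my_watch and its extra field
are hypothetical, used only to make the snippets below concrete):

	struct my_watch {
		struct inotify_watch	iwatch;		/* embedded; handed to inotify */
		void			*private_data;	/* your own per-watch state */
	};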

	inotify_init_watch(&my_watch->iwatch);
	get_inotify_watch(&my_watch->iwatch);	/* optional */
	s32 wd = inotify_add_watch(ih, &my_watch->iwatch, inode, mask);
	put_inotify_watch(&my_watch->iwatch);	/* optional */

You may use the watch descriptor (wd) or the address of the inotify_watch for
other inotify operations.  You must not directly read or manipulate data in the
inotify_watch.  Additionally, you must not call inotify_add_watch() more than
once for a given inotify_watch structure, unless you have first called either
inotify_rm_watch() or inotify_rm_wd().

To determine if you have already registered a watch for a given inode, you may
call inotify_find_watch(), which gives you both the wd and the watch pointer for
the inotify_watch, or an error if the watch does not exist.

	wd = inotify_find_watch(ih, inode, &watchp);

You may use container_of() on the watch pointer to access your own data
associated with a given watch.  When an existing watch is found,
inotify_find_watch() bumps the refcount before releasing its locks.  You must
put that reference with:

	put_inotify_watch(watchp);
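
Building on the my_watch sketch above, a lookup might look like this
(illustrative only; a negative return indicates the watch was not found):

	struct inotify_watch *watchp;
	s32 wd = inotify_find_watch(ih, inode, &watchp);

	if (wd >= 0) {
		struct my_watch *mw = container_of(watchp, struct my_watch,
						   iwatch);

		/* ... use mw->private_data ... */

		put_inotify_watch(watchp);	/* drop the lookup's reference */
	}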

Call inotify_find_update_watch() to update the event mask for an existing watch.
inotify_find_update_watch() returns the wd of the updated watch, or an error if
the watch does not exist.

	wd = inotify_find_update_watch(ih, inode, mask);

An existing watch may be removed by calling either inotify_rm_watch() or
inotify_rm_wd().

	int ret = inotify_rm_watch(ih, &my_watch->iwatch);
	int ret = inotify_rm_wd(ih, wd);

A watch may be removed while executing your event handler with the following:

	inotify_remove_watch_locked(ih, iwatch);

Call inotify_destroy() to remove all watches from your inotify instance and
release it.  If there are no outstanding references, inotify_destroy() will call
your destroy_watch op for each watch.

	inotify_destroy(ih);

When inotify removes a watch, it sends an IN_IGNORED event to your callback.
You may use this event as an indication to free the watch memory.  Note that
inotify may remove a watch due to filesystem events, as well as by your request.
If you use IN_ONESHOT, inotify will remove the watch after the first event, at
which point you may call the final put_inotify_watch.
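
As a hedged illustration building on the my_watch sketch above (my_handle_event
is a hypothetical name), an event callback might drop its extra reference when
IN_IGNORED arrives:

	static void my_handle_event(struct inotify_watch *watch, u32 wd, u32 mask,
				    u32 cookie, const char *name,
				    struct inode *inode)
	{
		struct my_watch *mw = container_of(watch, struct my_watch, iwatch);

		if (mask & IN_IGNORED) {
			/* the watch has been removed; drop the reference taken
			   with get_inotify_watch() when the watch was added */
			put_inotify_watch(&mw->iwatch);
			return;
		}

		/* ... react to other events using mw, mask and name ... */
	}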

(iv) Kernel Interface Prototypes

	struct inotify_handle *inotify_init(struct inotify_operations *ops);

	void inotify_init_watch(struct inotify_watch *watch);

	s32 inotify_add_watch(struct inotify_handle *ih,
			      struct inotify_watch *watch,
			      struct inode *inode, u32 mask);

	s32 inotify_find_watch(struct inotify_handle *ih, struct inode *inode,
			       struct inotify_watch **watchp);

	s32 inotify_find_update_watch(struct inotify_handle *ih,
				      struct inode *inode, u32 mask);

	int inotify_rm_wd(struct inotify_handle *ih, u32 wd);

	int inotify_rm_watch(struct inotify_handle *ih,
			     struct inotify_watch *watch);

	void inotify_remove_watch_locked(struct inotify_handle *ih,
					 struct inotify_watch *watch);

	void inotify_destroy(struct inotify_handle *ih);

	void get_inotify_watch(struct inotify_watch *watch);
	void put_inotify_watch(struct inotify_watch *watch);


(v) Internal Kernel Implementation

Each inotify instance is represented by an inotify_handle structure.
Inotify's userspace consumers also have an inotify_device which is
associated with the inotify_handle, and on which events are queued.

Each watch is associated with an inotify_watch structure.  Watches are chained
off of each associated inotify_handle and each associated inode.

See fs/inotify.c and fs/inotify_user.c for the locking and lifetime rules.


(vi) Rationale

Q: What is the design decision behind not tying the watch to the open fd of
   the watched object?

A: Watches are associated with an open inotify device, not an open file.
   This solves the primary problem with dnotify: keeping the file open pins
   the file and thus, worse, pins the mount.  Dnotify is therefore infeasible
   for use on a desktop system with removable media as the media cannot be
   unmounted.  Watching a file should not require that it be open.

Q: What is the design decision behind using an fd-per-instance as opposed to
   an fd-per-watch?

A: An fd-per-watch quickly consumes more file descriptors than are allowed,
   more fd's than are feasible to manage, and more fd's than are optimally
   select()-able.  Yes, root can bump the per-process fd limit and yes, users
   can use epoll, but requiring both is a silly and extraneous requirement.
   A watch consumes less memory than an open file; separating the number
   spaces is thus sensible.  The current design is what user-space developers
   want: Users initialize inotify, once, and add n watches, requiring but one
   fd and no twiddling with fd limits.  Initializing an inotify instance two
   thousand times is silly.  If we can implement user-space's preferences
   cleanly--and we can, the idr layer makes stuff like this trivial--then we
   should.

   There are other good arguments.  With a single fd, there is a single
   item to block on, which is mapped to a single queue of events.  The single
   fd returns all watch events and also any potential out-of-band data.  If
   every fd were a separate watch,

   - There would be no way to get event ordering.  Events on file foo and
     file bar would pop poll() on both fd's, but there would be no way to tell
     which happened first.  A single queue trivially gives you ordering.  Such
     ordering is crucial to existing applications such as Beagle.  Imagine
     "mv a b ; mv b a" events without ordering.

   - We'd have to maintain n fd's and n internal queues with state,
     versus just one.  It is a lot messier in the kernel.  A single, linear
     queue is the data structure that makes sense.

   - User-space developers prefer the current API.  The Beagle guys, for
     example, love it.  Trust me, I asked.  It is not a surprise: Who'd want
     to manage and block on 1000 fd's via select?

   - No way to get out-of-band data.

   - 1024 is still too low.  ;-)

   When you talk about designing a file change notification system that
   scales to 1000s of directories, juggling 1000s of fd's just does not seem
   the right interface.  It is too heavy.

   Additionally, it _is_ possible to have more than one instance and
   juggle more than one queue and thus more than one associated fd.  There
   need not be a one-fd-per-process mapping; it is one-fd-per-queue and a
   process can easily want more than one queue.

Q: Why the system call approach?

A: The poor user-space interface is the second biggest problem with dnotify.
   Signals are a terrible, terrible interface for file notification.  Or for
   anything, for that matter.  The ideal solution, from all perspectives, is a
   file descriptor-based one that allows basic file I/O and poll/select.
   Obtaining the fd and managing the watches could have been done either via a
   device file or a family of new system calls.  We decided to implement a
   family of system calls because that is the preferred approach for new kernel
   interfaces.  The only real difference was whether we wanted to use open(2)
   and ioctl(2) or a couple of new system calls.  System calls beat ioctls.