<title>Writing Conformance tests</title>
Note: This part of the documentation is still very much a work in
progress and is in no way complete.
<sect1 id="testing-intro">
<title>Introduction</title>
The Windows API follows no standard; rather, it is itself a de facto
standard, and deviations from that standard, even small ones, often
cause applications to crash or misbehave in some way. Furthermore,
a conformance test suite is the most accurate (if not necessarily
the most complete) form of API documentation and can be used to
supplement the Windows API documentation.
Writing a conformance test suite for more than 10000 APIs is no small
undertaking. Fortunately, it can prove very useful to the development
of Wine long before it is complete.
The conformance test suite must run on Windows. This is
necessary to provide a reasonable way to verify its accuracy.
Furthermore, the tests must pass successfully on all Windows
platforms (tests not relevant to a given platform should be
skipped).
A consequence of this is that the test suite will provide a
great way to detect variations in the API between different
Windows versions. For instance, this can provide insights
into the differences between the, often undocumented, Win9x and
NT families.
However, one must remember that the goal of Wine is to run
Windows applications on Linux, not to be a clone of any specific
Windows version. So such variations must only be tested for when
relevant to that goal.
Writing conformance tests is also an easy way to discover
bugs in Wine. Of course, before fixing the bugs discovered in
this way, one must first make sure that the new tests do pass
successfully on at least one Windows 9x and one Windows NT
platform.
Bugs discovered this way should also be easier to fix. Unlike
some mysterious application crashes, when a conformance test
fails, the expected behavior and the APIs being tested are known,
thus greatly simplifying the diagnosis.
To detect regressions: simply running the test suite regularly
in Wine turns it into a great tool for detecting regressions.
When a test fails, one immediately knows what the expected
behavior was and which APIs are involved. Thus regressions caught
this way should be detected earlier, because it is easy to run
all tests on a regular basis, and be easier to fix because of the
reduced diagnostic work.
Tests written in advance of the Wine development (possibly even
by non-Wine developers) can also simplify the work of the
future implementer by making it easier to check the
correctness of the code.
Conformance tests will also come in handy when testing Wine on
new (or not as widely used) architectures such as FreeBSD,
Solaris x86 or even non-x86 systems. Even when the port does
not involve any significant change in the thread management,
exception handling or other low-level aspects of Wine, new
architectures can expose subtle bugs that can be hard to
diagnose when debugging regular (complex) applications.
<sect1 id="testing-what">
<title>What to test for?</title>
The first thing to test for is the documented behavior of APIs
such as CreateFile. For instance, one can create a file using a
long pathname, check that the behavior is correct when the file
already exists, try to open the file using the corresponding short
pathname, convert the filename to Unicode and try to open it using
CreateFileW, and test all the other things which are documented and
that applications rely on.
While the testing framework is not specifically geared towards this
type of test, it is also possible to test the behavior of Windows
messages. To do so, create a window, preferably a hidden one so that
it does not steal the focus when running the tests, and send messages
to that window or to controls in that window. Then, in the message
procedure, check that you receive the expected messages, with the
expected parameters.
For instance, you could create an edit control and use WM_SETTEXT to
set its contents, possibly check length restrictions, and verify the
results using WM_GETTEXT. Similarly, one could create a listbox and
check the effect of LB_DELETESTRING on the list's number of items,
selected items list, highlighted item, etc.
However, undocumented behavior should not be tested for unless there
is an application that relies on this behavior, in which case the
test should mention that application, or unless applications can
strongly be expected to rely on this behavior, as is typically the
case for APIs that return the required buffer size when the buffer
pointer is NULL.
<sect1 id="testing-perl-vs-c">
<title>Why have both Perl and C tests?</title>

<sect1 id="testing-running">
<title>Running the tests on Windows</title>
The simplest way to run the tests in Wine is to type 'make test' in
the Wine sources top-level directory. This will run all the Wine
conformance tests.
The tests for a specific Wine library are located in a 'tests'
directory in that library's directory. Each test is contained in a
file, either a '.pl' file (e.g. <filename>dlls/kernel/tests/atom.pl</>)
for a test written in Perl, or a '.c' file (e.g.
<filename>dlls/kernel/tests/thread.c</>) for a test written in C. Each
file itself contains many checks concerning one or more related APIs.
So to run all the tests related to a given Wine library, go to the
corresponding 'tests' directory and type 'make test'. This will
compile the C tests, run the tests, and create an
'<replaceable>xxx</>.ok' file for each test that passes successfully.
If you only want to run the tests contained in the
<filename>thread.c</> file of the kernel library, you would do:
<prompt>$ </>cd dlls/kernel/tests
<prompt>$ </>make thread.ok
Note that if the test has already been run and is up to date (i.e. if
neither the kernel library nor the <filename>thread.c</> file has
changed since the <filename>thread.ok</> file was created), then make
will say so. To force the test to be re-run, delete the
<filename>thread.ok</> file, and run the make command again.
You can also run tests manually using a command similar to the
following:
<prompt>$ </>runtest -q -P wine -M kernel32.dll -p kernel32_test.exe.so thread.c
<prompt>$ </>runtest -p kernel32_test.exe.so thread.c
thread.c: 86 tests executed, 5 marked as todo, 0 failures.
The '-P wine' option defines the platform that is currently being
tested; the '-q' option causes the testing framework not to report
statistics about the number of successful and failed tests. Run
<command>runtest -h</> for more details.
<sect1 id="testing-c-test">
<title>Inside a C test</title>
When writing new checks you can either modify an existing test file or
add a new one. If your tests are related to the tests performed by an
existing file, then add them to that file. Otherwise create a new .c
file in the tests directory and add that file to the
<varname>CTESTS</> variable in <filename>Makefile.in</>.
A new test file will look something like the following:
#include <wine/test.h>
#include <winbase.h>

/* Maybe auxiliary functions and definitions here */

START_TEST(paths)
{
    /* Write your checks here or put them in functions you will call
     * from here */
}
The test's entry point is the START_TEST section. This is where
execution will start. You can put all your checks in that section, but
it may be better to split related checks into functions you will call
from the START_TEST section. The parameter to START_TEST must match
the name of the C file. So in the above example the C file would be
called <filename>paths.c</>.
Tests should start by including the <filename>wine/test.h</> header.
This header will provide you access to all the testing framework
functions. You can then include the Windows headers you need, but make
sure not to include any Unix or Wine specific header: tests must
also compile on Windows.
<!-- FIXME: Can we include windows.h now? We should be able to but currently __WINE__ is defined thus making it impossible. -->
<!-- FIXME: Add recommendations about what to print in case of a failure: be informative -->
You can use <function>trace</> to print informational messages. Note
that these messages will only be printed if 'runtest -v' is being used.
trace("testing GlobalAddAtomA");
<!-- FIXME: Make sure trace supports %d... -->
Then just call functions and use <function>ok</> to make sure that
they behaved as expected:
ATOM atom = GlobalAddAtomA( "foobar" );
ok( GlobalFindAtomA( "foobar" ) == atom, "could not find atom foobar" );
ok( GlobalFindAtomA( "FOOBAR" ) == atom, "could not find atom FOOBAR" );
The first parameter of <function>ok</> is an expression which must
evaluate to true if the test was successful. The next parameter is a
printf-compatible format string which is displayed in case the test
failed, and the following optional parameters depend on the format
string.
It is important to display an informative message when a test fails:
a good error message will help the Wine developer identify exactly
what went wrong without having to add too many other printfs. For
instance it may be useful to print the error code if relevant, or the
expected and observed values. In that respect, for some tests
you may want to define a macro such as the following:
#define eq(received, expected, label, type) \
ok((received) == (expected), "%s: got " type " instead of " type, (label),(received),(expected))

eq( b, curr_val, "SPI_{GET,SET}BEEP", "%d" );
<sect1 id="testing-platforms">
<title>Handling platform issues</title>
Some checks may be written before they pass successfully in Wine.
Without some mechanism, such checks would potentially generate
hundreds of known failures for months each time the tests are run.
This would make it hard to detect new failures caused by a regression,
or to detect that a patch fixed a long-standing issue.
Thus the Wine testing framework has the concept of platforms, and
groups of checks can be declared as expected to fail on some of them.
In the most common case, one would declare a group of tests as
expected to fail in Wine. To do so, use the following construct:
todo_wine {
    SetLastError( 0xdeadbeef );
    ok( GlobalAddAtomA(0) == 0 && GetLastError() == 0xdeadbeef, "failed to add atom 0" );
}
On Windows the above check would be performed normally, but on Wine it
would be expected to fail, and would not cause the failure of the whole
test. However, if that check were to succeed in Wine, it would
cause the test to fail, thus making it easy to detect when something
has changed that fixes a bug. Also note that todo checks are accounted
for separately from regular checks so that the testing statistics remain
meaningful. Finally, note that todo sections can be nested, so that if
a test only fails on the cygwin and reactos platforms, one would
nest a todo section for each of these platforms.
<!-- FIXME: Would we really have platforms such as reactos, cygwin, freebsd & co? -->
But specific platforms should not be nested inside a todo_wine section,
since that would be redundant.
When writing tests you will also encounter differences between Windows
9x and Windows NT platforms. Such differences should be treated
differently from the platform issues mentioned above. In particular,
you should remember that the goal of Wine is not to be a clone of any
specific Windows version but to run Windows applications on Unix.
So, if an API returns a different error code on Windows 9x and
Windows NT, your check should just verify that Wine returns one or
the other:
ok ( GetLastError() == WIN9X_ERROR || GetLastError() == NT_ERROR, ...);
If an API is only present on some Windows platforms, then use
LoadLibrary and GetProcAddress to check whether it is implemented
before invoking it. Remember, tests must run on all Windows platforms.
Similarly, conformance tests should not try to correlate the Windows
version returned by GetVersion with whether given APIs are
implemented or not. Again, the goal of Wine is to run Windows
applications (which do not do such checks), and not to be a clone of a
specific Windows version.
FIXME: What about checks that cause the process to crash due to a bug?
<!-- FIXME: Strategies for testing threads, testing network stuff,
file handling, eq macro... -->
<!-- Keep this comment at the end of the file
sgml-parent-document:("wine-doc.sgml" "set" "book" "part" "chapter" "")