# Architectures that offer a FUNCTION_TRACER implementation should
#  select HAVE_FUNCTION_TRACER:
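#
# For example, an architecture would typically advertise its support with
# select statements such as the following (an illustrative fragment only;
# "MYARCH" is a placeholder symbol, not one defined in the kernel):
#
#	config MYARCH
#		bool
#		select HAVE_FUNCTION_TRACER
#		select HAVE_DYNAMIC_FTRACE
#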

config USER_STACKTRACE_SUPPORT

config HAVE_FTRACE_NMI_ENTER

config HAVE_FUNCTION_TRACER

config HAVE_FUNCTION_GRAPH_TRACER

config HAVE_FUNCTION_TRACE_MCOUNT_TEST
	  This gets selected when the arch tests the function_trace_stop
	  variable at the mcount call site. Otherwise, this variable
	  is tested by the called function.

config HAVE_DYNAMIC_FTRACE

config HAVE_FTRACE_MCOUNT_RECORD

config HAVE_HW_BRANCH_TRACER

config HAVE_FTRACE_SYSCALLS

config TRACER_MAX_TRACE

config FTRACE_NMI_ENTER
	depends on HAVE_FTRACE_NMI_ENTER

	select CONTEXT_SWITCH_TRACER

config CONTEXT_SWITCH_TRACER

# All tracer options should select GENERIC_TRACER. The options that are
# enabled by all tracers (the context switch and event tracers) select
# TRACING instead. This allows those options to appear when no other
# tracer is selected, but not to appear when something else selects them.
# We need the two options GENERIC_TRACER and TRACING to avoid circular
# dependencies while still hiding the automatic options.

	select STACKTRACE if STACKTRACE_SUPPORT

# Minimum requirements an architecture has to meet for us to
# be able to offer generic tracing facilities:

config TRACING_SUPPORT
	# PPC32 has no irqflags tracing support, but it can use most of the
	# tracers anyway; they were tested to build and work. Note that new
	# exceptions to this list aren't welcome; better to implement the
	# irqflags tracing for your architecture.
	depends on TRACE_IRQFLAGS_SUPPORT || PPC32
	depends on STACKTRACE_SUPPORT

	default y if DEBUG_KERNEL
	  Enable the kernel tracing infrastructure.

config FUNCTION_TRACER
	bool "Kernel Function Tracer"
	depends on HAVE_FUNCTION_TRACER
	select GENERIC_TRACER
	select CONTEXT_SWITCH_TRACER
	  Enable the kernel to trace every kernel function. This is done
	  by using a compiler feature to insert a small, 5-byte No-Operation
	  instruction at the beginning of every kernel function. This NOP
	  sequence is then dynamically patched into a tracer call when
	  tracing is enabled by the administrator. If it is disabled at
	  run-time (the bootup default), then the overhead of the
	  instructions is very small and not measurable even in
	  micro-benchmarks.
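
	  For example, assuming debugfs is mounted at /debugfs (the same
	  convention used by the other examples in this file), the tracer
	  can be selected and read back at run-time with:

	      echo function > /debugfs/tracing/current_tracer
	      cat /debugfs/tracing/trace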

config FUNCTION_GRAPH_TRACER
	bool "Kernel Function Graph Tracer"
	depends on HAVE_FUNCTION_GRAPH_TRACER
	depends on FUNCTION_TRACER
	depends on !X86_32 || !CC_OPTIMIZE_FOR_SIZE
	  Enable the kernel to trace a function at both its entry
	  and its return.
	  Its first purpose is to trace the duration of functions and
	  draw a call graph for each thread with some information like
	  the return value. This is done by setting the current return
	  address on the current task structure into a stack of calls.
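
	  For example, again assuming debugfs is mounted at /debugfs:

	      echo function_graph > /debugfs/tracing/current_tracer
	      cat /debugfs/tracing/trace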

config IRQSOFF_TRACER
	bool "Interrupts-off Latency Tracer"
	depends on TRACE_IRQFLAGS_SUPPORT
	depends on GENERIC_TIME
	select TRACE_IRQFLAGS
	select GENERIC_TRACER
	select TRACER_MAX_TRACE
	  This option measures the time spent in irqs-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started
	  via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the preempt-off timing option can be
	  used together or separately.)
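
	  The tracer itself is selected at run-time in the usual way
	  (a usage sketch, using the same /debugfs mount point as above):

	      echo irqsoff > /debugfs/tracing/current_tracer
	      cat /debugfs/tracing/tracing_max_latency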

config PREEMPT_TRACER
	bool "Preemption-off Latency Tracer"
	depends on GENERIC_TIME
	select GENERIC_TRACER
	select TRACER_MAX_TRACE
	  This option measures the time spent in preemption-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started
	  via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the irqs-off timing option can be
	  used together or separately.)
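
	  The tracer itself is selected at run-time with (a usage sketch):

	      echo preemptoff > /debugfs/tracing/current_tracer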

config SYSPROF_TRACER
	bool "Sysprof Tracer"
	select GENERIC_TRACER
	select CONTEXT_SWITCH_TRACER
	  This tracer provides the trace needed by the 'Sysprof' userspace
	  tool.
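
	  At run-time it is expected to be selected like the other tracers
	  (a usage sketch; the tracer name is assumed to match the option):

	      echo sysprof > /debugfs/tracing/current_tracer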
188 bool "Scheduling Latency Tracer"
189 select GENERIC_TRACER
190 select CONTEXT_SWITCH_TRACER
191 select TRACER_MAX_TRACE
193 This tracer tracks the latency of the highest priority task
194 to be scheduled in, starting from the point it has woken up.
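
	  At run-time this is exposed as the "wakeup" tracer (a usage
	  sketch, assuming the same /debugfs mount point as above):

	      echo wakeup > /debugfs/tracing/current_tracer
	      cat /debugfs/tracing/tracing_max_latency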

config ENABLE_DEFAULT_TRACERS
	bool "Trace process context switches and events"
	depends on !GENERIC_TRACER
	  This tracer hooks to various trace points in the kernel,
	  allowing the user to pick and choose which trace point they
	  want to trace. It also includes the sched_switch tracer plugin.
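
	  Individual trace points are then enabled through the event
	  interface, for example (assuming debugfs is mounted at /debugfs;
	  the sched_switch event is just one illustration):

	      echo 1 > /debugfs/tracing/events/sched/sched_switch/enable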

config FTRACE_SYSCALLS
	bool "Trace syscalls"
	depends on HAVE_FTRACE_SYSCALLS
	select GENERIC_TRACER
	  Basic tracer to catch the syscall entry and exit events.
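
	  Depending on the kernel version, the syscall events are expected
	  to appear under the event interface; a hedged sketch of enabling
	  all of them at once:

	      echo 1 > /debugfs/tracing/events/syscalls/enable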
214 bool "Trace boot initcalls"
215 select GENERIC_TRACER
216 select CONTEXT_SWITCH_TRACER
218 This tracer helps developers to optimize boot times: it records
219 the timings of the initcalls and traces key events and the identity
220 of tasks that can cause boot delays, such as context-switches.
222 Its aim is to be parsed by the /scripts/bootgraph.pl tool to
223 produce pretty graphics about boot inefficiencies, giving a visual
224 representation of the delays during initcalls - but the raw
225 /debug/tracing/trace text output is readable too.
227 You must pass in ftrace=initcall to the kernel command line
228 to enable this on bootup.

config TRACE_BRANCH_PROFILING
	select GENERIC_TRACER

	prompt "Branch Profiling"
	default BRANCH_PROFILE_NONE
	  Branch profiling is a software profiler. It will add hooks
	  into the C conditionals to test which path a branch takes.

	  The likely/unlikely profiler only looks at the conditions that
	  are annotated with a likely or unlikely macro.

	  The "all branch" profiler will profile every if statement in the
	  kernel. This profiler will also enable the likely/unlikely
	  profiler.

	  Either of the above profilers adds a bit of overhead to the system.
	  If unsure, choose "No branch profiling".

config BRANCH_PROFILE_NONE
	bool "No branch profiling"
	  No branch profiling. Branch profiling adds a bit of overhead.
	  Only enable it if you want to analyse the branching behavior.
	  Otherwise keep it disabled.

config PROFILE_ANNOTATED_BRANCHES
	bool "Trace likely/unlikely profiler"
	select TRACE_BRANCH_PROFILING
	  This tracer profiles all the likely and unlikely macros
	  in the kernel. It will display the results in:

	  /debugfs/tracing/profile_annotated_branch

	  Note: this will add a significant overhead; only turn this
	  on if you need to profile the system's use of these macros.

config PROFILE_ALL_BRANCHES
	bool "Profile all if conditionals"
	select TRACE_BRANCH_PROFILING
	  This tracer profiles all branch conditions. Every if ()
	  in the kernel is recorded, whether the branch was taken or not.
	  The results will be displayed in:

	  /debugfs/tracing/profile_branch

	  This option also enables the likely/unlikely profiler.

	  This configuration, when enabled, will impose a great overhead
	  on the system. It should only be enabled when the system
	  is to be analyzed.

config TRACING_BRANCHES
	  Selected by tracers that will trace the likely and unlikely
	  conditions. This prevents the tracers themselves from being
	  profiled. Profiling the tracing infrastructure can only happen
	  when the likelys and unlikelys are not being traced.
296 bool "Trace likely/unlikely instances"
297 depends on TRACE_BRANCH_PROFILING
298 select TRACING_BRANCHES
300 This traces the events of likely and unlikely condition
301 calls in the kernel. The difference between this and the
302 "Trace likely/unlikely profiler" is that this is not a
303 histogram of the callers, but actually places the calling
304 events into a running trace buffer to see when and where the
305 events happened, as well as their results.
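
	  At run-time this is expected to be the "branch" tracer (a usage
	  sketch, /debugfs mount point assumed as above):

	      echo branch > /debugfs/tracing/current_tracer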
310 bool "Trace power consumption behavior"
312 select GENERIC_TRACER
314 This tracer helps developers to analyze and optimize the kernels
315 power management decisions, specifically the C-state and P-state
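
	  At run-time this is expected to be the "power" tracer (a usage
	  sketch):

	      echo power > /debugfs/tracing/current_tracer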
320 bool "Trace max stack"
321 depends on HAVE_FUNCTION_TRACER
322 select FUNCTION_TRACER
326 This special tracer records the maximum stack footprint of the
327 kernel and displays it in debugfs/tracing/stack_trace.
329 This tracer works by hooking into every function call that the
330 kernel executes, and keeping a maximum stack depth value and
331 stack-trace saved. If this is configured with DYNAMIC_FTRACE
332 then it will not have any overhead while the stack tracer
335 To enable the stack tracer on bootup, pass in 'stacktrace'
336 on the kernel command line.
338 The stack tracer can also be enabled or disabled via the
339 sysctl kernel.stack_tracer_enabled
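
	  For example:

	      echo 1 > /proc/sys/kernel/stack_tracer_enabled
	      cat /debugfs/tracing/stack_trace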

config HW_BRANCH_TRACER
	depends on HAVE_HW_BRANCH_TRACER
	bool "Trace hw branches"
	select GENERIC_TRACER
	  This tracer records all branches on the system in a circular
	  buffer, giving access to the last N branches for each CPU.
352 bool "Trace SLAB allocations"
353 select GENERIC_TRACER
355 kmemtrace provides tracing for slab allocator functions, such as
356 kmalloc, kfree, kmem_cache_alloc, kmem_cache_free etc.. Collected
357 data is then fed to the userspace application in order to analyse
358 allocation hotspots, internal fragmentation and so on, making it
359 possible to see how well an allocator performs, as well as debug
360 and profile kernel code.
362 This requires an userspace application to use. See
363 Documentation/trace/kmemtrace.txt for more information.
365 Saying Y will make the kernel somewhat larger and slower. However,
366 if you disable kmemtrace at run-time or boot-time, the performance
367 impact is minimal (depending on the arch the kernel is built for).

config WORKQUEUE_TRACER
	bool "Trace workqueues"
	select GENERIC_TRACER
	  The workqueue tracer provides some statistical information
	  about each per-CPU workqueue thread, such as the number of
	  work items inserted and executed since its creation. It can
	  help to evaluate the amount of work each of them has to
	  perform. For example, it can help a developer decide whether
	  to use a per-CPU workqueue instead of a singlethreaded one.
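
	  The collected statistics are expected to show up under the
	  trace_stat directory in debugfs (a hedged pointer; the exact
	  file name may differ between versions):

	      cat /debugfs/tracing/trace_stat/workqueues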

config BLK_DEV_IO_TRACE
	bool "Support for tracing block io actions"
	select GENERIC_TRACER
	  Say Y here if you want to be able to trace the block layer actions
	  on a given queue. Tracing allows you to see any traffic happening
	  on a block device queue. For more information (and the userspace
	  support tools needed), fetch the blktrace tools from:

	  git://git.kernel.dk/blktrace.git

	  Tracing is also possible using the ftrace interface, e.g.:

	      echo 1 > /sys/block/sda/sda1/trace/enable
	      echo blk > /sys/kernel/debug/tracing/current_tracer
	      cat /sys/kernel/debug/tracing/trace_pipe

config DYNAMIC_FTRACE
	bool "enable/disable ftrace tracepoints dynamically"
	depends on FUNCTION_TRACER
	depends on HAVE_DYNAMIC_FTRACE
	  This option will modify all the calls to ftrace dynamically
	  (it will patch them out of the binary image and replace them
	  with a No-Op instruction) as they are called. A table is
	  created to dynamically enable them again.

	  This way a CONFIG_FUNCTION_TRACER kernel is slightly larger,
	  but otherwise has native performance as long as no tracing
	  is active.

	  The changes to the code are done by a kernel thread that
	  wakes up once a second and checks to see if any ftrace calls
	  were made. If so, it runs stop_machine (stops all CPUs)
	  and modifies the code to jump over the call to ftrace.
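
	  With this option, the set of traced functions can also be
	  narrowed at run-time through the filter files, for example
	  (a usage sketch; the 'sched*' glob is only an illustration):

	      echo 'sched*' > /debugfs/tracing/set_ftrace_filter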

config FUNCTION_PROFILER
	bool "Kernel function profiler"
	depends on FUNCTION_TRACER
	  This option enables the kernel function profiler. A file is created
	  in debugfs called function_profile_enabled which defaults to zero.
	  When a 1 is echoed into this file profiling begins, and when a
	  zero is entered, profiling stops. A "functions" file in the
	  trace_stat directory shows the list of functions that have
	  been hit and their counters.
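
	  For example:

	      echo 1 > /debugfs/tracing/function_profile_enabled
	      # ... exercise the kernel ...
	      echo 0 > /debugfs/tracing/function_profile_enabled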

config FTRACE_MCOUNT_RECORD
	depends on DYNAMIC_FTRACE
	depends on HAVE_FTRACE_MCOUNT_RECORD

config FTRACE_SELFTEST

config FTRACE_STARTUP_TEST
	bool "Perform a startup test on ftrace"
	depends on GENERIC_TRACER
	select FTRACE_SELFTEST
	  This option performs a series of startup tests on ftrace. On bootup
	  a series of tests is run to verify that the tracer is
	  functioning properly. It will do tests on all the configured
	  tracers of ftrace.
459 bool "Memory mapped IO tracing"
460 depends on HAVE_MMIOTRACE_SUPPORT && PCI
461 select GENERIC_TRACER
463 Mmiotrace traces Memory Mapped I/O access and is meant for
464 debugging and reverse engineering. It is called from the ioremap
465 implementation and works via page faults. Tracing is disabled by
466 default and can be enabled at run-time.
468 See Documentation/trace/mmiotrace.txt.
469 If you are not helping to develop drivers, say N.
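
	  Run-time enabling is expected to work by selecting the tracer
	  (a usage sketch; see the documentation above for the full
	  procedure):

	      echo mmiotrace > /debugfs/tracing/current_tracer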

config MMIOTRACE_TEST
	tristate "Test module for mmiotrace"
	depends on MMIOTRACE && m
	  This is a dumb module for testing mmiotrace. It is very dangerous
	  as it will write garbage to IO memory starting at a given address.
	  However, it should be safe to use on e.g. an unused portion of VRAM.

	  Say N, unless you absolutely know what you are doing.

config RING_BUFFER_BENCHMARK
	tristate "Ring buffer benchmark stress tester"
	depends on RING_BUFFER
	  This option creates a test to stress the ring buffer and
	  benchmark it. It creates its own ring buffer such that it will
	  not interfere with any other users of the ring buffer (such as
	  ftrace). It then creates a producer and consumer that will run
	  for 10 seconds and sleep for 10 seconds. Each interval it will
	  print out the number of events it recorded and give a rough
	  estimate of how long each iteration took.

	  It does not disable interrupts or raise its priority, so it may be
	  affected by processes that are running.
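
	  Built as a module, the benchmark is started by loading it
	  (a hedged sketch; the module name is assumed to match the
	  source file, and the reporting channel may vary by version):

	      modprobe ring_buffer_benchmark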

endif # TRACING_SUPPORT