# Architectures that offer a FUNCTION_TRACER implementation should
# select HAVE_FUNCTION_TRACER:
config USER_STACKTRACE_SUPPORT

config HAVE_FUNCTION_TRACER

config HAVE_FUNCTION_GRAPH_TRACER

config HAVE_FUNCTION_TRACE_MCOUNT_TEST
	  This gets selected when the arch tests the function_trace_stop
	  variable at the mcount call site. Otherwise, this variable
	  is tested by the called function.
config HAVE_DYNAMIC_FTRACE

config HAVE_FTRACE_MCOUNT_RECORD

config HAVE_HW_BRANCH_TRACER

config TRACER_MAX_TRACE

	select STACKTRACE if STACKTRACE_SUPPORT
config FUNCTION_TRACER
	bool "Kernel Function Tracer"
	depends on HAVE_FUNCTION_TRACER
	depends on DEBUG_KERNEL
	select CONTEXT_SWITCH_TRACER
	  Enable the kernel to trace every kernel function. This is done
	  by using a compiler feature to insert a small, 5-byte No-Operation
	  instruction at the beginning of every kernel function; this NOP
	  sequence is then dynamically patched into a tracer call when
	  tracing is enabled by the administrator. If it is disabled at
	  runtime (the boot-up default), the overhead of the instructions
	  is very small and not measurable even in micro-benchmarks.
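Once this option is built in, the function tracer is driven from the debugfs tracing directory. A minimal session might look like this (a sketch, assuming debugfs is mounted at /debugfs as in the help texts in this file; the mount point may differ on your system):

```shell
# Select the function tracer and start tracing
echo function > /debugfs/tracing/current_tracer
echo 1 > /debugfs/tracing/tracing_enabled

# Inspect the recorded function calls
head /debugfs/tracing/trace

# Stop tracing and deselect the tracer again
echo 0 > /debugfs/tracing/tracing_enabled
echo nop > /debugfs/tracing/current_tracer
```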
config FUNCTION_GRAPH_TRACER
	bool "Kernel Function Graph Tracer"
	depends on HAVE_FUNCTION_GRAPH_TRACER
	depends on FUNCTION_TRACER
	  Enable the kernel to trace a function at both its entry and
	  its return.
	  Its primary purpose is to trace the duration of functions and
	  draw a call graph for each thread, with some information such
	  as function timings.
	  This is done by storing the function return addresses in a
	  stack of calls on the task structure.
	bool "Interrupts-off Latency Tracer"
	depends on TRACE_IRQFLAGS_SUPPORT
	depends on GENERIC_TIME
	depends on DEBUG_KERNEL
	select TRACER_MAX_TRACE

	  This option measures the time spent in irqs-off critical
	  sections, with microsecond accuracy.
	  The default measurement method is a maximum search, which is
	  disabled by default and can be (re-)started at runtime via:

	      echo 0 > /debugfs/tracing/tracing_max_latency
	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the preempt-off timing option can be
	  used together or separately.)
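A typical maximum-search session with this tracer might look like the following (a sketch, assuming debugfs is mounted at /debugfs as in the help text above):

```shell
# Select the irqs-off latency tracer
echo irqsoff > /debugfs/tracing/current_tracer

# Reset the recorded maximum and (re-)start the search
echo 0 > /debugfs/tracing/tracing_max_latency

# ... let the system run its workload for a while ...

# Read the worst-case irqs-off latency seen so far (microseconds)
cat /debugfs/tracing/tracing_max_latency

# Show the trace of the maximum-latency critical section
cat /debugfs/tracing/latency_trace
```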
config PREEMPT_TRACER
	bool "Preemption-off Latency Tracer"
	depends on GENERIC_TIME
	depends on DEBUG_KERNEL
	select TRACER_MAX_TRACE
	  This option measures the time spent in preemption-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be (re-)started at runtime via:

	      echo 0 > /debugfs/tracing/tracing_max_latency
	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the irqs-off timing option can be
	  used together or separately.)
config SYSPROF_TRACER
	bool "Sysprof Tracer"

	  This tracer provides the trace needed by the 'Sysprof' userspace
	  tool.
	bool "Scheduling Latency Tracer"
	depends on DEBUG_KERNEL
	select CONTEXT_SWITCH_TRACER
	select TRACER_MAX_TRACE

	  This tracer tracks the latency of the highest priority task
	  to be scheduled in, starting from the point it has woken up.
config CONTEXT_SWITCH_TRACER
	bool "Trace process context switches"
	depends on DEBUG_KERNEL

	  This tracer hooks into the context switch path and records
	  all task switches.
	bool "Trace boot initcalls"
	depends on DEBUG_KERNEL
	select CONTEXT_SWITCH_TRACER

	  This tracer helps developers to optimize boot times: it records
	  the timings of the initcalls and traces key events and the identity
	  of tasks that can cause boot delays, such as context switches.

	  Its aim is to be parsed by the scripts/bootgraph.pl tool to
	  produce pretty graphics about boot inefficiencies, giving a visual
	  representation of the delays during initcalls - but the raw
	  /debug/tracing/trace text output is readable too.
	  (Note that tracing self tests can't be enabled if this tracer is
	  selected, because the self-tests are an initcall as well and that
	  would invalidate the boot trace.)
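After booting with this tracer enabled, the recorded timings are usually turned into a graphic with the tool mentioned above. A sketch of that step (see the header of scripts/bootgraph.pl for the exact input it expects; the kernel log path and options below are assumptions):

```shell
# Capture the boot output and render the initcall timings as an SVG
dmesg > boot.log
perl scripts/bootgraph.pl < boot.log > boot.svg
```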
config TRACE_BRANCH_PROFILING
	bool "Trace likely/unlikely profiler"
	depends on DEBUG_KERNEL
	  This tracer profiles all the likely and unlikely macros
	  in the kernel. It will display the results in:

	  /debugfs/tracing/profile_annotated_branch

	  Note: this will add a significant overhead; only turn this
	  on if you need to profile the system's use of these macros.
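The profiler output is a plain text table and can be inspected directly (a sketch, assuming debugfs is mounted at /debugfs as above):

```shell
# Show per-site hit/miss counts for the annotated branches
head -20 /debugfs/tracing/profile_annotated_branch
```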
config PROFILE_ALL_BRANCHES
	bool "Profile all if conditionals"
	depends on TRACE_BRANCH_PROFILING
	  This tracer profiles all branch conditions. Every if ()
	  executed in the kernel is recorded, whether the branch was
	  taken or missed. The results will be displayed in:

	  /debugfs/tracing/profile_branch

	  This configuration, when enabled, will impose a great overhead
	  on the system. It should only be enabled when the system
	  is to be analyzed.
config TRACING_BRANCHES

	  Selected by tracers that will trace the likely and unlikely
	  conditions. This prevents the tracers themselves from being
	  profiled. Profiling the tracing infrastructure can only happen
	  when the likely and unlikely conditions are not being traced.
	bool "Trace likely/unlikely instances"
	depends on TRACE_BRANCH_PROFILING
	select TRACING_BRANCHES
	  This traces the events of likely and unlikely condition
	  calls in the kernel. The difference between this and the
	  "Trace likely/unlikely profiler" is that this is not a
	  histogram of the callers, but actually places the calling
	  events into a running trace buffer to see when and where the
	  events happened, as well as their results.
	bool "Trace power consumption behavior"
	depends on DEBUG_KERNEL

	  This tracer helps developers to analyze and optimize the kernel's
	  power management decisions, specifically the C-state and P-state
	  behavior.
	bool "Trace max stack"
	depends on HAVE_FUNCTION_TRACER
	depends on DEBUG_KERNEL
	select FUNCTION_TRACER

	  This special tracer records the maximum stack footprint of the
	  kernel and displays it in /debugfs/tracing/stack_trace.
	  This tracer works by hooking into every function call that the
	  kernel executes, keeping a maximum stack depth value and the
	  corresponding stack trace saved. If this is configured with
	  DYNAMIC_FTRACE then it will not have any overhead while the
	  stack tracer is disabled.

	  To enable the stack tracer on bootup, pass in 'stacktrace'
	  on the kernel command line.
	  The stack tracer can also be enabled or disabled via the
	  sysctl kernel.stack_tracer_enabled.
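Runtime control of the stack tracer via the sysctl mentioned above might look like this (a sketch, assuming debugfs is mounted at /debugfs as in the help texts in this file):

```shell
# Enable the stack tracer at runtime
sysctl kernel.stack_tracer_enabled=1

# Show the deepest stack seen so far, frame by frame
cat /debugfs/tracing/stack_trace

# Disable it again
sysctl kernel.stack_tracer_enabled=0
```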
config HW_BRANCH_TRACER
	depends on HAVE_HW_BRANCH_TRACER
	bool "Trace hw branches"

	  This tracer records all branches on the system in a circular
	  buffer, giving access to the last N branches for each CPU.
	bool "Trace SLAB allocations"

	  kmemtrace provides tracing for slab allocator functions, such as
	  kmalloc, kfree, kmem_cache_alloc, kmem_cache_free etc. Collected
	  data is then fed to the userspace application in order to analyse
	  allocation hotspots, internal fragmentation and so on, making it
	  possible to see how well an allocator performs, as well as debug
	  and profile kernel code.

	  This requires a userspace application to use. See
	  Documentation/vm/kmemtrace.txt for more information.

	  Saying Y will make the kernel somewhat larger and slower. However,
	  if you disable kmemtrace at run-time or boot-time, the performance
	  impact is minimal (depending on the arch the kernel is built for).
config DYNAMIC_FTRACE
	bool "enable/disable ftrace tracepoints dynamically"
	depends on FUNCTION_TRACER
	depends on HAVE_DYNAMIC_FTRACE
	depends on DEBUG_KERNEL

	  This option will modify all the calls to ftrace dynamically
	  (it patches them out of the binary image and replaces them
	  with a No-Op instruction) as they are called. A table is
	  created to dynamically enable them again.

	  This way a CONFIG_FUNCTION_TRACER kernel is slightly larger, but
	  otherwise has native performance as long as no tracing is active.
	  The changes to the code are done by a kernel thread that
	  wakes up once a second and checks to see if any ftrace calls
	  were made. If so, it runs stop_machine (which stops all CPUs)
	  and modifies the code to jump over the call to ftrace.
config FTRACE_MCOUNT_RECORD
	depends on DYNAMIC_FTRACE
	depends on HAVE_FTRACE_MCOUNT_RECORD

config FTRACE_SELFTEST

config FTRACE_STARTUP_TEST
	bool "Perform a startup test on ftrace"
	depends on TRACING && DEBUG_KERNEL && !BOOT_TRACER
	select FTRACE_SELFTEST
	  This option performs a series of startup tests on ftrace. On bootup
	  a series of tests are made to verify that the tracer is
	  functioning properly. It will do tests on all the configured
	  tracers of ftrace.
	bool "Memory mapped IO tracing"
	depends on HAVE_MMIOTRACE_SUPPORT && DEBUG_KERNEL && PCI

	  Mmiotrace traces Memory Mapped I/O access and is meant for
	  debugging and reverse engineering. It is called from the ioremap
	  implementation and works via page faults. Tracing is disabled by
	  default and can be enabled at run-time.

	  See Documentation/tracers/mmiotrace.txt.
	  If you are not helping to develop drivers, say N.
config MMIOTRACE_TEST
	tristate "Test module for mmiotrace"
	depends on MMIOTRACE && m

	  This is a dumb module for testing mmiotrace. It is very dangerous
	  as it will write garbage to IO memory starting at a given address.
	  However, it should be safe to use on e.g. an unused portion of VRAM.

	  Say N, unless you absolutely know what you are doing.