# Architectures that offer a FUNCTION_TRACER implementation should
# select HAVE_FUNCTION_TRACER:
config USER_STACKTRACE_SUPPORT

config HAVE_FTRACE_NMI_ENTER

config HAVE_FUNCTION_TRACER

config HAVE_FUNCTION_GRAPH_TRACER

config HAVE_FUNCTION_TRACE_MCOUNT_TEST
	help
	  This gets selected when the arch tests the function_trace_stop
	  variable at the mcount call site. Otherwise, this variable
	  is tested by the called function.
config HAVE_DYNAMIC_FTRACE

config HAVE_FTRACE_MCOUNT_RECORD

config HAVE_HW_BRANCH_TRACER

config TRACER_MAX_TRACE
config FTRACE_NMI_ENTER
	depends on HAVE_FTRACE_NMI_ENTER

config TRACING
	select STACKTRACE if STACKTRACE_SUPPORT
config FUNCTION_TRACER
	bool "Kernel Function Tracer"
	depends on HAVE_FUNCTION_TRACER
	depends on DEBUG_KERNEL
	select CONTEXT_SWITCH_TRACER
	help
	  Enable the kernel to trace every kernel function. This is done
	  by using a compiler feature to insert a small, 5-byte No-Operation
	  instruction at the beginning of every kernel function; this NOP
	  sequence is then dynamically patched into a tracer call when
	  tracing is enabled by the administrator. If it's runtime disabled
	  (the bootup default), then the overhead of the instructions is very
	  small and not measurable even in micro-benchmarks.
config FUNCTION_GRAPH_TRACER
	bool "Kernel Function Graph Tracer"
	depends on HAVE_FUNCTION_GRAPH_TRACER
	depends on FUNCTION_TRACER
	help
	  Enable the kernel to trace a function at both its entry and
	  its return.
	  Its first purpose is to trace the duration of functions and
	  draw a call graph for each thread, with some information such
	  as the time of execution.
	  This is done by saving the current return address on the
	  current task structure into a stack of calls.
config IRQSOFF_TRACER
	bool "Interrupts-off Latency Tracer"
	depends on TRACE_IRQFLAGS_SUPPORT
	depends on GENERIC_TIME
	depends on DEBUG_KERNEL
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in irqs-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the preempt-off timing option can be
	  used together or separately.)
config PREEMPT_TRACER
	bool "Preemption-off Latency Tracer"
	depends on GENERIC_TIME
	depends on DEBUG_KERNEL
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in preemption-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the irqs-off timing option can be
	  used together or separately.)
config SYSPROF_TRACER
	bool "Sysprof Tracer"
	help
	  This tracer provides the trace needed by the 'Sysprof' userspace
	  tool.
config SCHED_TRACER
	bool "Scheduling Latency Tracer"
	depends on DEBUG_KERNEL
	select CONTEXT_SWITCH_TRACER
	select TRACER_MAX_TRACE
	help
	  This tracer tracks the latency of the highest priority task
	  to be scheduled in, starting from the point it has woken up.
config CONTEXT_SWITCH_TRACER
	bool "Trace process context switches"
	depends on DEBUG_KERNEL
	help
	  This tracer gets called from the context-switch code and
	  records all task switches.
config BOOT_TRACER
	bool "Trace boot initcalls"
	depends on DEBUG_KERNEL
	select CONTEXT_SWITCH_TRACER
	help
	  This tracer helps developers to optimize boot times: it records
	  the timings of the initcalls and traces key events and the identity
	  of tasks that can cause boot delays, such as context switches.

	  Its aim is to be parsed by the /scripts/bootgraph.pl tool to
	  produce pretty graphics about boot inefficiencies, giving a visual
	  representation of the delays during initcalls - but the raw
	  /debug/tracing/trace text output is readable too.

	  You must pass in ftrace=initcall to the kernel command line
	  to enable this on bootup.
config TRACE_BRANCH_PROFILING
	bool "Trace likely/unlikely profiler"
	depends on DEBUG_KERNEL
	help
	  This tracer profiles all the likely and unlikely macros
	  in the kernel. It will display the results in:

	  /debugfs/tracing/profile_annotated_branch

	  Note: this will add a significant overhead; only turn this
	  on if you need to profile the system's use of these macros.
config PROFILE_ALL_BRANCHES
	bool "Profile all if conditionals"
	depends on TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all branch conditions. Every if ()
	  encountered in the kernel is recorded, whether the branch was
	  taken or missed. The results will be displayed in:

	  /debugfs/tracing/profile_branch

	  This configuration, when enabled, will impose a great overhead
	  on the system. This should only be enabled when the system
	  is to be analyzed.
config TRACING_BRANCHES
	help
	  Selected by tracers that will trace the likely and unlikely
	  conditions. This prevents the tracers themselves from being
	  profiled. Profiling the tracing infrastructure can only happen
	  when the likely and unlikely conditions are not being traced.
config BRANCH_TRACER
	bool "Trace likely/unlikely instances"
	depends on TRACE_BRANCH_PROFILING
	select TRACING_BRANCHES
	help
	  This traces the events of likely and unlikely condition
	  calls in the kernel. The difference between this and the
	  "Trace likely/unlikely profiler" is that this is not a
	  histogram of the callers, but actually places the calling
	  events into a running trace buffer to see when and where the
	  events happened, as well as their results.
config POWER_TRACER
	bool "Trace power consumption behavior"
	depends on DEBUG_KERNEL
	help
	  This tracer helps developers to analyze and optimize the
	  kernel's power management decisions, specifically the C-state
	  and P-state behavior.
config STACK_TRACER
	bool "Trace max stack"
	depends on HAVE_FUNCTION_TRACER
	depends on DEBUG_KERNEL
	select FUNCTION_TRACER
	help
	  This special tracer records the maximum stack footprint of the
	  kernel and displays it in debugfs/tracing/stack_trace.

	  This tracer works by hooking into every function call that the
	  kernel executes, and keeping a maximum stack depth value and
	  stack-trace saved. If this is configured with DYNAMIC_FTRACE
	  then it will not have any overhead while the stack tracer
	  is disabled.

	  To enable the stack tracer on bootup, pass in 'stacktrace'
	  on the kernel command line.

	  The stack tracer can also be enabled or disabled via the
	  sysctl kernel.stack_tracer_enabled
config HW_BRANCH_TRACER
	depends on HAVE_HW_BRANCH_TRACER
	bool "Trace hw branches"
	help
	  This tracer records all branches on the system in a circular
	  buffer, giving access to the last N branches for each CPU.
config KMEMTRACE
	bool "Trace SLAB allocations"
	help
	  kmemtrace provides tracing for slab allocator functions, such as
	  kmalloc, kfree, kmem_cache_alloc, kmem_cache_free, etc. Collected
	  data is then fed to the userspace application in order to analyse
	  allocation hotspots, internal fragmentation and so on, making it
	  possible to see how well an allocator performs, as well as debug
	  and profile kernel code.

	  This requires a userspace application to use. See
	  Documentation/vm/kmemtrace.txt for more information.

	  Saying Y will make the kernel somewhat larger and slower. However,
	  if you disable kmemtrace at run-time or boot-time, the performance
	  impact is minimal (depending on the arch the kernel is built for).
config WORKQUEUE_TRACER
	bool "Trace workqueues"
	help
	  The workqueue tracer provides some statistical information
	  about each CPU workqueue thread, such as the number of work
	  items inserted and executed since its creation. It can help
	  to evaluate the amount of work each of them has to perform.
	  For example, it can help a developer decide whether to choose
	  a per-CPU workqueue instead of a single-threaded one.
config DYNAMIC_FTRACE
	bool "enable/disable ftrace tracepoints dynamically"
	depends on FUNCTION_TRACER
	depends on HAVE_DYNAMIC_FTRACE
	depends on DEBUG_KERNEL
	help
	  This option will modify all the calls to ftrace dynamically
	  (will patch them out of the binary image and replace them
	  with a No-Op instruction) as they are called. A table is
	  created to dynamically enable them again.

	  This way a CONFIG_FUNCTION_TRACER kernel is slightly larger, but
	  otherwise has native performance as long as no tracing is active.

	  The changes to the code are done by a kernel thread that
	  wakes up once a second and checks to see if any ftrace calls
	  were made. If so, it runs stop_machine (stops all CPUs)
	  and modifies the code to jump over the call to ftrace.
config FTRACE_MCOUNT_RECORD
	depends on DYNAMIC_FTRACE
	depends on HAVE_FTRACE_MCOUNT_RECORD

config FTRACE_SELFTEST
config FTRACE_STARTUP_TEST
	bool "Perform a startup test on ftrace"
	depends on TRACING && DEBUG_KERNEL
	select FTRACE_SELFTEST
	help
	  This option performs a series of startup tests on ftrace. On bootup,
	  a series of tests are made to verify that the tracer is
	  functioning properly. It will do tests on all the configured
	  tracers of ftrace.
config MMIOTRACE
	bool "Memory mapped IO tracing"
	depends on HAVE_MMIOTRACE_SUPPORT && DEBUG_KERNEL && PCI
	help
	  Mmiotrace traces Memory Mapped I/O access and is meant for
	  debugging and reverse engineering. It is called from the ioremap
	  implementation and works via page faults. Tracing is disabled by
	  default and can be enabled at run-time.

	  See Documentation/tracers/mmiotrace.txt.
	  If you are not helping to develop drivers, say N.
config MMIOTRACE_TEST
	tristate "Test module for mmiotrace"
	depends on MMIOTRACE && m
	help
	  This is a dumb module for testing mmiotrace. It is very dangerous
	  as it will write garbage to IO memory starting at a given address.
	  However, it should be safe to use on, e.g., an unused portion of VRAM.

	  Say N, unless you absolutely know what you are doing.