# Architectures that offer a FUNCTION_TRACER implementation should
# select HAVE_FUNCTION_TRACER:
config USER_STACKTRACE_SUPPORT
	bool

config HAVE_FUNCTION_TRACER
	bool

config HAVE_FUNCTION_RET_TRACER
	bool

config HAVE_FUNCTION_TRACE_MCOUNT_TEST
	bool
	help
	  This gets selected when the arch tests the function_trace_stop
	  variable at the mcount call site. Otherwise, this variable
	  is tested by the called function.
config HAVE_DYNAMIC_FTRACE
	bool

config HAVE_FTRACE_MCOUNT_RECORD
	bool

config HAVE_HW_BRANCH_TRACER
	bool
config TRACER_MAX_TRACE
	bool

config TRACING
	bool
	select STACKTRACE if STACKTRACE_SUPPORT
config FUNCTION_TRACER
	bool "Kernel Function Tracer"
	depends on HAVE_FUNCTION_TRACER
	depends on DEBUG_KERNEL
	select CONTEXT_SWITCH_TRACER
	help
	  Enable the kernel to trace every kernel function. This is done
	  by using a compiler feature to insert a small, 5-byte No-Operation
	  instruction at the beginning of every kernel function; that NOP
	  sequence is then dynamically patched into a tracer call when
	  tracing is enabled by the administrator. If tracing is disabled at
	  runtime (the bootup default), the overhead of the instructions is
	  very small and not measurable even in micro-benchmarks.
config FUNCTION_RET_TRACER
	bool "Kernel Function return Tracer"
	depends on HAVE_FUNCTION_RET_TRACER
	depends on FUNCTION_TRACER
	help
	  Enable the kernel to trace a function at its return.
	  Its first purpose is to measure the duration of functions.
	  This is done by saving the current return address in the
	  thread_info structure of the current task.
config IRQSOFF_TRACER
	bool "Interrupts-off Latency Tracer"
	depends on TRACE_IRQFLAGS_SUPPORT
	depends on GENERIC_TIME
	depends on DEBUG_KERNEL
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in irqs-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the preempt-off timing option can be
	  used together or separately.)
config PREEMPT_TRACER
	bool "Preemption-off Latency Tracer"
	depends on GENERIC_TIME
	depends on DEBUG_KERNEL
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in preemption-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the irqs-off timing option can be
	  used together or separately.)
config SYSPROF_TRACER
	bool "Sysprof Tracer"
	help
	  This tracer provides the trace needed by the 'Sysprof' userspace
	  tool.
config SCHED_TRACER
	bool "Scheduling Latency Tracer"
	depends on DEBUG_KERNEL
	select CONTEXT_SWITCH_TRACER
	select TRACER_MAX_TRACE
	help
	  This tracer tracks the latency of the highest priority task
	  to be scheduled in, starting from the point it has woken up.
config CONTEXT_SWITCH_TRACER
	bool "Trace process context switches"
	depends on DEBUG_KERNEL
	help
	  This tracer gets called from the context-switch code and
	  records all task switches.
config BOOT_TRACER
	bool "Trace boot initcalls"
	depends on DEBUG_KERNEL
	select CONTEXT_SWITCH_TRACER
	help
	  This tracer helps developers to optimize boot times: it records
	  the timings of the initcalls and traces key events and the identity
	  of tasks that can cause boot delays, such as context switches.

	  Its aim is to be parsed by the /scripts/bootgraph.pl tool to
	  produce pretty graphics about boot inefficiencies, giving a visual
	  representation of the delays during initcalls - but the raw
	  /debug/tracing/trace text output is readable too.

	  ( Note that tracing self-tests can't be enabled if this tracer is
	  selected, because the self-tests are an initcall as well and that
	  would invalidate the boot trace. )
config TRACE_BRANCH_PROFILING
	bool "Trace likely/unlikely profiler"
	depends on DEBUG_KERNEL
	help
	  This tracer profiles all the likely and unlikely macros
	  in the kernel. It will display the results in:

	  /debugfs/tracing/profile_annotated_branch

	  Note: this will add a significant overhead; only turn this
	  on if you need to profile the system's use of these macros.
config PROFILE_ALL_BRANCHES
	bool "Profile all if conditionals"
	depends on TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all branch conditions. Every if ()
	  condition in the kernel is recorded, whether the branch was
	  taken or missed. The results will be displayed in:

	  /debugfs/tracing/profile_branch

	  This configuration, when enabled, will impose a great overhead
	  on the system. This should only be enabled when the system
	  is to be analyzed.
config TRACING_BRANCHES
	bool
	help
	  Selected by tracers that will trace the likely and unlikely
	  conditions. This prevents the tracers themselves from being
	  profiled. Profiling the tracing infrastructure can only happen
	  when the likely/unlikely conditions are not being traced.
config BRANCH_TRACER
	bool "Trace likely/unlikely instances"
	depends on TRACE_BRANCH_PROFILING
	select TRACING_BRANCHES
	help
	  This traces the events of likely and unlikely condition
	  evaluations in the kernel. The difference between this and the
	  "Trace likely/unlikely profiler" is that this is not a
	  histogram of the callers, but actually places the calling
	  events into a running trace buffer, so you can see when and
	  where the events happened, as well as their results.
config STACK_TRACER
	bool "Trace max stack"
	depends on HAVE_FUNCTION_TRACER
	depends on DEBUG_KERNEL
	select FUNCTION_TRACER
	help
	  This special tracer records the maximum stack footprint of the
	  kernel and displays it in debugfs/tracing/stack_trace.

	  This tracer works by hooking into every function call that the
	  kernel executes, and keeping a maximum stack depth value and
	  stack-trace saved. Because this logic has to execute in every
	  kernel function, all the time, this option can slow down the
	  kernel measurably and is generally intended for kernel
	  developers only.
config HW_BRANCH_TRACER
	depends on HAVE_HW_BRANCH_TRACER
	bool "Trace branches"
	help
	  This tracer records all branches on the system in a circular
	  buffer, giving access to the last N branches for each CPU.
config DYNAMIC_FTRACE
	bool "enable/disable ftrace tracepoints dynamically"
	depends on FUNCTION_TRACER
	depends on HAVE_DYNAMIC_FTRACE
	depends on DEBUG_KERNEL
	help
	  This option will modify all the calls to ftrace dynamically
	  (it will patch them out of the binary image and replace them
	  with a No-Op instruction) as they are called. A table is
	  created to dynamically enable them again.

	  This way a CONFIG_FUNCTION_TRACER kernel is slightly larger,
	  but otherwise has native performance as long as no tracing
	  is active.

	  The changes to the code are done by a kernel thread that
	  wakes up once a second and checks to see if any ftrace calls
	  were made. If so, it runs stop_machine (stops all CPUs)
	  and modifies the code to jump over the call to ftrace.
config FTRACE_MCOUNT_RECORD
	depends on DYNAMIC_FTRACE
	depends on HAVE_FTRACE_MCOUNT_RECORD
config FTRACE_SELFTEST
	bool
config FTRACE_STARTUP_TEST
	bool "Perform a startup test on ftrace"
	depends on TRACING && DEBUG_KERNEL && !BOOT_TRACER
	select FTRACE_SELFTEST
	help
	  This option performs a series of startup tests on ftrace. On bootup
	  a series of tests are made to verify that the tracer is
	  functioning properly. It will do tests on all the configured
	  tracers of ftrace.