# Architectures that offer a FUNCTION_TRACER implementation should
#  select HAVE_FUNCTION_TRACER:
config USER_STACKTRACE_SUPPORT
	bool

config HAVE_FUNCTION_TRACER
	bool

config HAVE_FUNCTION_GRAPH_TRACER
	bool

config HAVE_FUNCTION_TRACE_MCOUNT_TEST
	bool
	help
	  This gets selected when the arch tests the function_trace_stop
	  variable at the mcount call site. Otherwise, this variable
	  is tested by the called function.

config HAVE_DYNAMIC_FTRACE
	bool

config HAVE_FTRACE_MCOUNT_RECORD
	bool

config HAVE_HW_BRANCH_TRACER
	bool
config TRACER_MAX_TRACE
	bool

config TRACING
	bool
	select STACKTRACE if STACKTRACE_SUPPORT
config FUNCTION_TRACER
	bool "Kernel Function Tracer"
	depends on HAVE_FUNCTION_TRACER
	depends on DEBUG_KERNEL
	select CONTEXT_SWITCH_TRACER
	help
	  Enable the kernel to trace every kernel function. This is done
	  by using a compiler feature to insert a small, 5-byte No-Operation
	  instruction at the beginning of every kernel function; this NOP
	  sequence is then dynamically patched into a tracer call when
	  tracing is enabled by the administrator. If it's runtime disabled
	  (the bootup default), then the overhead of the instructions is very
	  small and not measurable even in micro-benchmarks.
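Once this option is built in, the tracer is driven through the tracing debugfs files. A minimal session might look like the following sketch, assuming debugfs is mounted at /debugfs as the examples elsewhere in this file use:

```shell
# Sketch: assumes debugfs is mounted at /debugfs
# (e.g. mount -t debugfs nodev /debugfs).
echo function > /debugfs/tracing/current_tracer   # select the function tracer
cat /debugfs/tracing/trace | head                 # inspect the trace buffer
echo nop > /debugfs/tracing/current_tracer        # stop tracing again
```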
config FUNCTION_GRAPH_TRACER
	bool "Kernel Function Graph Tracer"
	depends on HAVE_FUNCTION_GRAPH_TRACER
	depends on FUNCTION_TRACER
	help
	  Enable the kernel to trace a function at both its return
	  and its entrance.
	  Its first purpose is to trace the duration of functions and
	  draw a call graph for each thread with some information such as
	  the return value.
	  This is done by setting the current return address on the current
	  task structure into a stack of calls.
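The graph tracer is selected the same way as the plain function tracer; a sketch, again assuming debugfs at /debugfs:

```shell
# Sketch: the tracer name "function_graph" matches this option.
echo function_graph > /debugfs/tracing/current_tracer
cat /debugfs/tracing/trace | head    # entry/exit pairs with durations
```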
config IRQSOFF_TRACER
	bool "Interrupts-off Latency Tracer"
	depends on TRACE_IRQFLAGS_SUPPORT
	depends on GENERIC_TIME
	depends on DEBUG_KERNEL
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in irqs-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started
	  via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increases with this option
	  enabled. This option and the preempt-off timing option can be
	  used together or separately.)
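A typical maximum-search session, as a sketch using the same /debugfs mount point as the help text above:

```shell
# Sketch: restart the maximum search, wait, then read the worst case.
echo irqsoff > /debugfs/tracing/current_tracer
echo 0 > /debugfs/tracing/tracing_max_latency   # (re-)start the maximum search
sleep 10
cat /debugfs/tracing/tracing_max_latency        # worst irqs-off time, in usecs
cat /debugfs/tracing/trace                      # trace of that worst section
```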
config PREEMPT_TRACER
	bool "Preemption-off Latency Tracer"
	depends on GENERIC_TIME
	depends on DEBUG_KERNEL
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in preemption-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started
	  via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increases with this option
	  enabled. This option and the irqs-off timing option can be
	  used together or separately.)
config SYSPROF_TRACER
	bool "Sysprof Tracer"
	help
	  This tracer provides the trace needed by the 'Sysprof' userspace
	  tool.
config SCHED_TRACER
	bool "Scheduling Latency Tracer"
	depends on DEBUG_KERNEL
	select CONTEXT_SWITCH_TRACER
	select TRACER_MAX_TRACE
	help
	  This tracer tracks the latency of the highest priority task
	  to be scheduled in, starting from the point it has woken up.
config CONTEXT_SWITCH_TRACER
	bool "Trace process context switches"
	depends on DEBUG_KERNEL
	help
	  This tracer gets called from the context switch and records
	  all switching of tasks.
config BOOT_TRACER
	bool "Trace boot initcalls"
	depends on DEBUG_KERNEL
	select CONTEXT_SWITCH_TRACER
	help
	  This tracer helps developers to optimize boot times: it records
	  the timings of the initcalls and traces key events and the identity
	  of tasks that can cause boot delays, such as context-switches.

	  Its aim is to be parsed by the scripts/bootgraph.pl tool to
	  produce pretty graphics about boot inefficiencies, giving a visual
	  representation of the delays during initcalls - but the raw
	  /debug/tracing/trace text output is readable too.

	  (Note that tracing self tests can't be enabled if this tracer is
	  selected, because the self-tests are an initcall as well and that
	  would invalidate the boot trace.)
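Turning the raw trace into a graphic might look like the following sketch; the /debug mount point is taken from the help text above, and feeding the trace to bootgraph.pl on stdin is an assumption of this example:

```shell
# Sketch: capture the boot trace and render it with bootgraph.pl.
cat /debug/tracing/trace > boot.trace
perl scripts/bootgraph.pl < boot.trace > boot.svg
```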
config TRACE_BRANCH_PROFILING
	bool "Trace likely/unlikely profiler"
	depends on DEBUG_KERNEL
	help
	  This tracer profiles all the likely and unlikely macros
	  in the kernel. It will display the results in:

	  /debugfs/tracing/profile_annotated_branch

	  Note: this will add a significant overhead; only turn this
	  on if you need to profile the system's use of these macros.
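Reading the results is a one-liner; a sketch, assuming the /debugfs mount point used throughout this file:

```shell
# Sketch: show the first profiled likely/unlikely annotations,
# with their correct/incorrect hit counts.
cat /debugfs/tracing/profile_annotated_branch | head -20
```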
config PROFILE_ALL_BRANCHES
	bool "Profile all if conditionals"
	depends on TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all branch conditions. Every if ()
	  taken in the kernel is recorded whether it hit or missed.
	  The results will be displayed in:

	  /debugfs/tracing/profile_branch

	  This configuration, when enabled, will impose a great overhead
	  on the system. This should only be enabled when the system
	  is to be analyzed.
config TRACING_BRANCHES
	bool
	help
	  Selected by tracers that will trace the likely and unlikely
	  conditions. This prevents the tracers themselves from being
	  profiled. Profiling the tracing infrastructure can only happen
	  when the likelys and unlikelys are not being traced.
config BRANCH_TRACER
	bool "Trace likely/unlikely instances"
	depends on TRACE_BRANCH_PROFILING
	select TRACING_BRANCHES
	help
	  This traces the events of likely and unlikely condition
	  calls in the kernel. The difference between this and the
	  "Trace likely/unlikely profiler" is that this is not a
	  histogram of the callers, but actually places the calling
	  events into a running trace buffer to see when and where the
	  events happened, as well as their results.
config POWER_TRACER
	bool "Trace power consumption behavior"
	depends on DEBUG_KERNEL
	help
	  This tracer helps developers to analyze and optimize the kernel's
	  power management decisions, specifically the C-state and P-state
	  behavior.
config STACK_TRACER
	bool "Trace max stack"
	depends on HAVE_FUNCTION_TRACER
	depends on DEBUG_KERNEL
	select FUNCTION_TRACER
	help
	  This special tracer records the maximum stack footprint of the
	  kernel and displays it in debugfs/tracing/stack_trace.

	  This tracer works by hooking into every function call that the
	  kernel executes, and keeping a maximum stack depth value and
	  stack-trace saved. If this is configured with DYNAMIC_FTRACE
	  then it will not have any overhead while the stack tracer
	  is disabled.

	  To enable the stack tracer on bootup, pass in 'stacktrace'
	  on the kernel command line.

	  The stack tracer can also be enabled or disabled via the
	  sysctl kernel.stack_tracer_enabled
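The runtime controls mentioned above can be exercised as in this sketch, assuming debugfs at /debugfs:

```shell
# Sketch: toggle the stack tracer via sysctl and read its result.
sysctl kernel.stack_tracer_enabled=1   # start watching stack usage
cat /debugfs/tracing/stack_trace       # deepest call chain observed so far
sysctl kernel.stack_tracer_enabled=0   # stop watching again
```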
config HW_BRANCH_TRACER
	depends on HAVE_HW_BRANCH_TRACER
	bool "Trace hw branches"
	help
	  This tracer records all branches on the system in a circular
	  buffer giving access to the last N branches for each cpu.
config DYNAMIC_FTRACE
	bool "enable/disable ftrace tracepoints dynamically"
	depends on FUNCTION_TRACER
	depends on HAVE_DYNAMIC_FTRACE
	depends on DEBUG_KERNEL
	help
	  This option will modify all the calls to ftrace dynamically
	  (will patch them out of the binary image and replace them
	  with a No-Op instruction) as they are called. A table is
	  created to dynamically enable them again.

	  This way a CONFIG_FUNCTION_TRACER kernel is slightly larger,
	  but otherwise has native performance as long as no tracing
	  is active.

	  The changes to the code are done by a kernel thread that
	  wakes up once a second and checks to see if any ftrace calls
	  were made. If so, it runs stop_machine (stops all CPUs)
	  and modifies the code to jump over the call to ftrace.
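With dynamic patching available, the set of traced call sites can be restricted at runtime through the set_ftrace_filter file; a sketch, assuming the same /debugfs mount point as the earlier examples:

```shell
# Sketch: limit the function tracer to a subset of call sites.
cat /debugfs/tracing/available_filter_functions | head  # traceable functions
echo 'sched_*' > /debugfs/tracing/set_ftrace_filter     # trace only these
echo function > /debugfs/tracing/current_tracer
```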
config FTRACE_MCOUNT_RECORD
	depends on DYNAMIC_FTRACE
	depends on HAVE_FTRACE_MCOUNT_RECORD

config FTRACE_SELFTEST
	bool
config FTRACE_STARTUP_TEST
	bool "Perform a startup test on ftrace"
	depends on TRACING && DEBUG_KERNEL && !BOOT_TRACER
	select FTRACE_SELFTEST
	help
	  This option performs a series of startup tests on ftrace. On bootup
	  a series of tests are made to verify that the tracer is
	  functioning properly. It will do tests on all the configured
	  tracers of ftrace.