# Architectures that offer a FUNCTION_TRACER implementation should
# select HAVE_FUNCTION_TRACER:
config HAVE_FUNCTION_TRACER
	bool

config HAVE_FUNCTION_RET_TRACER
	bool

config HAVE_FUNCTION_TRACE_MCOUNT_TEST
	bool
	help
	  This gets selected when the arch tests the function_trace_stop
	  variable at the mcount call site. Otherwise, this variable
	  is tested by the called function.
config HAVE_DYNAMIC_FTRACE
	bool

config HAVE_FTRACE_MCOUNT_RECORD
	bool

config TRACER_MAX_TRACE
	bool
	select STACKTRACE if STACKTRACE_SUPPORT
config FUNCTION_TRACER
	bool "Kernel Function Tracer"
	depends on HAVE_FUNCTION_TRACER
	depends on DEBUG_KERNEL
	select CONTEXT_SWITCH_TRACER
	help
	  Enable the kernel to trace every kernel function. This is done
	  by using a compiler feature to insert a small, 5-byte No-Operation
	  instruction at the beginning of every kernel function; this NOP
	  sequence is then dynamically patched into a tracer call when
	  tracing is enabled by the administrator. If it is disabled at
	  runtime (the boot-up default), the overhead of the instructions
	  is very small and not measurable even in micro-benchmarks.
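
# As a usage sketch (the paths are assumptions matching the help texts
# in this file, which take debugfs to be mounted at /debugfs; available
# tracer names can be listed from /debugfs/tracing/available_tracers):
#
#	echo function > /debugfs/tracing/current_tracer
#	cat /debugfs/tracing/trace
#	echo nop > /debugfs/tracing/current_tracer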
config FUNCTION_RET_TRACER
	bool "Kernel Function return Tracer"
	depends on !DYNAMIC_FTRACE
	depends on HAVE_FUNCTION_RET_TRACER
	depends on FUNCTION_TRACER
	help
	  Enable the kernel to trace a function at its return.
	  Its main purpose is to measure the duration of functions.
	  This is done by saving the current return address in the
	  thread_info structure of the current task.
config IRQSOFF_TRACER
	bool "Interrupts-off Latency Tracer"
	depends on TRACE_IRQFLAGS_SUPPORT
	depends on GENERIC_TIME
	depends on DEBUG_KERNEL
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in irqs-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase when this option
	  is enabled. This option and the preempt-off timing option can be
	  used together or separately.)
config PREEMPT_TRACER
	bool "Preemption-off Latency Tracer"
	depends on GENERIC_TIME
	depends on DEBUG_KERNEL
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in preemption-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase when this option
	  is enabled. This option and the irqs-off timing option can be
	  used together or separately.)
config SYSPROF_TRACER
	bool "Sysprof Tracer"
	help
	  This tracer provides the trace needed by the 'Sysprof' userspace
	  tool.
config SCHED_TRACER
	bool "Scheduling Latency Tracer"
	depends on DEBUG_KERNEL
	select CONTEXT_SWITCH_TRACER
	select TRACER_MAX_TRACE
	help
	  This tracer tracks the latency of the highest priority task
	  to be scheduled in, starting from the point it has woken up.
config CONTEXT_SWITCH_TRACER
	bool "Trace process context switches"
	depends on DEBUG_KERNEL
	help
	  This tracer hooks into the context-switch path and records
	  every switch between tasks.
config BOOT_TRACER
	bool "Trace boot initcalls"
	depends on DEBUG_KERNEL
	select CONTEXT_SWITCH_TRACER
	help
	  This tracer helps developers optimize boot times: it records
	  the timings of the initcalls and traces key events and the identity
	  of tasks that can cause boot delays, such as context-switches.

	  Its aim is to be parsed by the scripts/bootgraph.pl tool to
	  produce pretty graphics about boot inefficiencies, giving a visual
	  representation of the delays during initcalls - but the raw
	  /debug/tracing/trace text output is readable too.

	  (Note that tracing self tests can't be enabled if this tracer is
	  selected, because the self-tests are initcalls as well and
	  would invalidate the boot trace.)
config TRACE_UNLIKELY_PROFILE
	bool "Trace likely/unlikely profiler"
	depends on DEBUG_KERNEL
	help
	  This tracer profiles all the likely and unlikely macros
	  in the kernel. It will display the results in:

	  /debugfs/tracing/profile_likely
	  /debugfs/tracing/profile_unlikely

	  Note: this will add a significant overhead; only turn this
	  on if you need to profile the system's use of these macros.
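
# The two profile files above are plain text tables; as a sketch, the
# most frequently hit entries can be pulled out with something like
# (assuming debugfs is mounted at /debugfs, as the help texts in this
# file do):
#
#	sort -rn /debugfs/tracing/profile_unlikely | head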
config TRACING_UNLIKELY
	bool
	help
	  Selected by tracers that will trace the likely and unlikely
	  conditions. This prevents the tracers themselves from being
	  profiled. Profiling the tracing infrastructure can only happen
	  when the likely/unlikely conditions are not being traced.
config UNLIKELY_TRACER
	bool "Trace likely/unlikely instances"
	depends on TRACE_UNLIKELY_PROFILE
	select TRACING_UNLIKELY
	help
	  This traces the events of likely and unlikely condition
	  calls in the kernel. The difference between this and the
	  "Trace likely/unlikely profiler" is that this does not build a
	  histogram of the callers, but instead places the calling
	  events into a running trace buffer, so you can see when and
	  where the events happened, as well as their results.
config STACK_TRACER
	bool "Trace max stack"
	depends on HAVE_FUNCTION_TRACER
	depends on DEBUG_KERNEL
	select FUNCTION_TRACER
	help
	  This special tracer records the maximum stack footprint of the
	  kernel and displays it in debugfs/tracing/stack_trace.

	  This tracer works by hooking into every function call that the
	  kernel executes, and keeping a maximum stack depth value and
	  stack-trace saved. Because this logic has to execute in every
	  kernel function, all the time, this option can slow down the
	  kernel measurably and is generally intended for kernel
	  developers only.
config DYNAMIC_FTRACE
	bool "enable/disable ftrace tracepoints dynamically"
	depends on FUNCTION_TRACER
	depends on HAVE_DYNAMIC_FTRACE
	depends on DEBUG_KERNEL
	help
	  This option will modify all the calls to ftrace dynamically
	  (patching them out of the binary image and replacing them
	  with a No-Op instruction) as they are called. A table is
	  created to dynamically enable them again.

	  This way a CONFIG_FUNCTION_TRACER kernel is slightly larger,
	  but otherwise has native performance as long as no tracing
	  is active.

	  The changes to the code are done by a kernel thread that
	  wakes up once a second and checks to see if any ftrace calls
	  were made. If so, it runs stop_machine (which stops all CPUs)
	  and modifies the code to jump over the call to ftrace.
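
# With DYNAMIC_FTRACE the set of traced functions can also be narrowed
# at runtime; a sketch (the glob pattern and the /debugfs mount point
# are illustrative):
#
#	echo 'sched_*' > /debugfs/tracing/set_ftrace_filter
#	echo function > /debugfs/tracing/current_tracer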
config FTRACE_MCOUNT_RECORD
	def_bool y
	depends on DYNAMIC_FTRACE
	depends on HAVE_FTRACE_MCOUNT_RECORD
config FTRACE_SELFTEST
	bool
config FTRACE_STARTUP_TEST
	bool "Perform a startup test on ftrace"
	depends on TRACING && DEBUG_KERNEL && !BOOT_TRACER
	select FTRACE_SELFTEST
	help
	  This option performs a series of startup tests on ftrace. On bootup
	  a series of tests are made to verify that the tracer is
	  functioning properly. It will do tests on all the configured
	  tracers of ftrace.