author     Dave Airlie <airlied@redhat.com>    2011-05-16 10:45:40 +1000
committer  Dave Airlie <airlied@redhat.com>    2011-05-16 10:45:40 +1000
commit     69f7876b2ab61e8114675d6092ad0b482e233612
tree       a55aefd08d6c5f617d277a99e11b5a707e162585 /Documentation/workqueue.txt
parent     0eacdba3a186e5d5b8a8bb421caacddc135e67e3
parent     645c62a5e95a5f9a8e0d0627446bbda4ee042024
Merge remote branch 'keithp/drm-intel-next' of /ssd/git/drm-next into drm-core-next
* 'keithp/drm-intel-next' of /ssd/git/drm-next: (301 commits)
drm/i915: split PCH clock gating init
drm/i915: add Ivybridge clock gating init function
drm/i915: Update the location of the ringbuffers' HWS_PGA registers for IVB.
drm/i915: Add support for fence registers on Ivybridge.
drm/i915: Use existing function instead of open-coding fence reg clear.
drm/i915: split clock gating init into per-chipset functions
drm/i915: set IBX pch type explicitly
drm/i915: add Ivy Bridge PCI IDs and driver feature structs
drm/i915: add PantherPoint PCH ID
agp/intel: add Ivy Bridge support
drm/i915: ring support for Ivy Bridge
drm/i915: page flip support for Ivy Bridge
drm/i915: interrupt & vblank support for Ivy Bridge
drm/i915: treat Ivy Bridge watermarks like Sandy Bridge
drm/i915: manual FDI training for Ivy Bridge
drm/i915: add swizzle/tiling support for Ivy Bridge
drm/i915: Ivy Bridge has split display and pipe control
drm/i915: add IS_IVYBRIDGE macro for checks
drm/i915: add IS_GEN7 macro to cover Ivy Bridge and later
drm/i915: split enable/disable vblank code into chipset specific functions
...
Diffstat (limited to 'Documentation/workqueue.txt')
-rw-r--r--   Documentation/workqueue.txt   40
1 file changed, 40 insertions, 0 deletions
diff --git a/Documentation/workqueue.txt b/Documentation/workqueue.txt
index 01c513fac40..a0b577de918 100644
--- a/Documentation/workqueue.txt
+++ b/Documentation/workqueue.txt
@@ -12,6 +12,7 @@ CONTENTS
 4. Application Programming Interface (API)
 5. Example Execution Scenarios
 6. Guidelines
+7. Debugging
 
 
 1. Introduction
@@ -379,3 +380,42 @@ If q1 has WQ_CPU_INTENSIVE set,
 * Unless work items are expected to consume a huge amount of CPU
   cycles, using a bound wq is usually beneficial due to the increased
   level of locality in wq operations and work item execution.
+
+
+7. Debugging
+
+Because the work functions are executed by generic worker threads
+there are a few tricks needed to shed some light on misbehaving
+workqueue users.
+
+Worker threads show up in the process list as:
+
+root      5671  0.0  0.0      0     0 ?        S    12:07   0:00 [kworker/0:1]
+root      5672  0.0  0.0      0     0 ?        S    12:07   0:00 [kworker/1:2]
+root      5673  0.0  0.0      0     0 ?        S    12:12   0:00 [kworker/0:0]
+root      5674  0.0  0.0      0     0 ?        S    12:13   0:00 [kworker/1:0]
+
+If kworkers are going crazy (using too much cpu), there are two types
+of possible problems:
+
+	1. Something being scheduled in rapid succession
+	2. A single work item that consumes lots of cpu cycles
+
+The first one can be tracked using tracing:
+
+	$ echo workqueue:workqueue_queue_work > /sys/kernel/debug/tracing/set_event
+	$ cat /sys/kernel/debug/tracing/trace_pipe > out.txt
+	(wait a few secs)
+	^C
+
+If something is busy looping on work queueing, it would be dominating
+the output and the offender can be determined with the work item
+function.
+
+For the second type of problem it should be possible to just check
+the stack trace of the offending worker thread.
+
+	$ cat /proc/THE_OFFENDING_KWORKER/stack
+
+The work item's function should be trivially visible in the stack
+trace.
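
A sketch of post-processing the trace capture described in the patch,
for the case where the raw output is too large to read by eye. It
assumes the workqueue_queue_work tracepoint prints a "function=" field
(verify against
/sys/kernel/debug/tracing/events/workqueue/workqueue_queue_work/format
on the running kernel) and that debugfs is mounted at /sys/kernel/debug;
the file name out.txt and the 10-second window are arbitrary choices,
not something the patch prescribes:

	$ echo workqueue:workqueue_queue_work > /sys/kernel/debug/tracing/set_event
	$ cat /sys/kernel/debug/tracing/trace_pipe > out.txt &
	$ sleep 10; kill %1
	$ # count how often each work function was queued; a busy-looping
	$ # queue_work() caller will dominate the top of this list
	$ grep -o 'function=[^ ]*' out.txt | sort | uniq -c | sort -rn | head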
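
For the second failure mode (one work item burning CPU), the patch
points at /proc/<pid>/stack. One possible way to locate the busiest
kworker first, using ps; the field selection here is only an
illustration, and reading /proc/<pid>/stack typically requires root:

	$ ps -eo pid,comm,pcpu --sort=-pcpu | grep kworker | head -n1
	$ cat /proc/<pid-of-that-kworker>/stack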