path: root/benchmarks/Makefile.sources
Age  Commit message  Author
2016-03-09  benchmarks: Add gem_syslatency  (Chris Wilson)
Instead of measuring the wakeup latency of a GEM client, we turn the tables here and ask: what is the wakeup latency of a normal process competing with GEM? In particular, a realtime process that expects deterministic latency. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
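As a rough illustration of the victim side of such a measurement (a hedged sketch, not the actual gem_syslatency source; the SCHED_FIFO priority, 1 ms period and iteration count are arbitrary choices), a realtime thread can sleep on absolute deadlines and record how late it wakes:

    /* latency-probe sketch: wakeup latency of a realtime thread */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    #define NSEC_PER_SEC 1000000000ULL

    static uint64_t now_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * NSEC_PER_SEC + ts.tv_nsec;
    }

    int main(void)
    {
        struct sched_param param = { .sched_priority = 1 };
        struct timespec deadline;
        uint64_t max_lat = 0;

        /* become a realtime process, like the victim in the benchmark */
        sched_setscheduler(0, SCHED_FIFO, &param);

        clock_gettime(CLOCK_MONOTONIC, &deadline);
        for (int i = 0; i < 1000; i++) {
            /* sleep until an absolute deadline 1ms away ... */
            deadline.tv_nsec += 1000000;
            if (deadline.tv_nsec >= (long)NSEC_PER_SEC) {
                deadline.tv_nsec -= NSEC_PER_SEC;
                deadline.tv_sec++;
            }
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &deadline, NULL);

            /* ... and record how late we actually woke up */
            uint64_t late = now_ns() -
                (deadline.tv_sec * NSEC_PER_SEC + deadline.tv_nsec);
            if (late > max_lat)
                max_lat = late;
        }
        printf("max wakeup latency: %llu ns\n", (unsigned long long)max_lat);
        return 0;
    }

Run alongside a heavy GEM workload, the maximum (and distribution of) lateness shows how much the driver's interrupt and wait paths disturb an unrelated realtime task.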
2015-12-19  benchmarks: Remove gem_wait  (Chris Wilson)
Superseded by gem_latency. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
2015-12-19  benchmark: Measure latency of producers -> consumers, gem_latency  (Chris Wilson)
The goal is to measure how long it takes for clients waiting on results to wake up after a buffer completes, and in doing so ensure scalability of the kernel to a large number of clients. We spawn a number of producers. Each producer submits a busyload to the system and records in the GPU the BCS timestamp of when the batch completes. Then each producer spawns a number of waiters, who wait upon the batch completion, read the current BCS timestamp register and compare it against the recorded value. By varying the number of producers and consumers, we can study different aspects of the design, in particular how many wakeups the kernel does for each interrupt (end of batch). The more wakeups on each batch, the longer it takes for any one client to finish. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
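A hedged sketch of the producer/waiter wakeup pattern, using pthreads and CLOCK_MONOTONIC as a stand-in for the GPU-side BCS timestamps (the real benchmark reads the timestamp register from the ring; thread counts and names below are illustrative):

    /* producer/consumer wakeup-latency sketch */
    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
    static uint64_t complete_ns; /* "timestamp of when the batch completed" */

    static uint64_t now_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
    }

    static void *waiter(void *arg)
    {
        pthread_mutex_lock(&lock);
        while (!complete_ns)
            pthread_cond_wait(&cond, &lock); /* all waiters herd here */
        /* wakeup latency: time from completion to this waiter running */
        printf("waiter woke %llu ns after completion\n",
               (unsigned long long)(now_ns() - complete_ns));
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[8];

        for (int i = 0; i < 8; i++)
            pthread_create(&threads[i], NULL, waiter, NULL);

        usleep(10000); /* the producer "submits a busyload"... */

        pthread_mutex_lock(&lock);
        complete_ns = now_ns(); /* ...and records the completion time */
        pthread_cond_broadcast(&cond); /* one completion wakes every waiter */
        pthread_mutex_unlock(&lock);

        for (int i = 0; i < 8; i++)
            pthread_join(threads[i], NULL);
        return 0;
    }

The spread between the first and last waiter's reported latency is the per-wakeup cost the benchmark is designed to expose.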
2015-11-22  benchmarks: Add a set-domain benchmark  (Chris Wilson)
Benchmark the overhead of changing from GTT to CPU domains and vice versa. Effectively this measures the cost of clflushes, and how well the driver can avoid them. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
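The transition being timed is the DRM_IOCTL_I915_GEM_SET_DOMAIN ioctl. A minimal sketch, assuming fd is an open i915 DRM fd and handle an existing GEM object (set_domain and ping_pong are illustrative names, not the benchmark's own):

    /* set-domain sketch: ping-pong a buffer between GTT and CPU domains */
    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <i915_drm.h> /* from libdrm: pkg-config --cflags libdrm */

    static void set_domain(int fd, uint32_t handle, uint32_t domain)
    {
        struct drm_i915_gem_set_domain arg;

        memset(&arg, 0, sizeof(arg));
        arg.handle = handle;
        arg.read_domains = domain;
        arg.write_domain = domain; /* a write forces the flush/clflush */
        ioctl(fd, DRM_IOCTL_I915_GEM_SET_DOMAIN, &arg);
    }

    /* the benchmark loop then amounts to timing repeated transitions */
    void ping_pong(int fd, uint32_t handle, int reps)
    {
        for (int i = 0; i < reps; i++) {
            set_domain(fd, handle, I915_GEM_DOMAIN_GTT);
            set_domain(fd, handle, I915_GEM_DOMAIN_CPU);
        }
    }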
2015-10-30  benchmark/gem_wait: poc for benchmarking i915_wait_request overhead  (Chris Wilson)
One scenario under recent discussion is that of having a thundering herd in i915_wait_request - where the overhead of waking up every waiter for every batchbuffer was significantly impacting customer throughput. This benchmark tries to replicate something to that effect by having a large number of consumers generating a busy load (a large copy followed by lots of small copies to generate lots of interrupts) and waiting upon all the consumers concurrently (to reproduce the thundering herd effect). To measure the overhead, we have a bunch of CPU hogs - less kernel overhead in waiting should allow more CPU throughput. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
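A hedged sketch of the CPU-hog half of that setup (thread count and runtime are arbitrary choices): the hogs simply count iterations, and any kernel time spent waking the herd shows up as a lower count:

    /* CPU-hog sketch: iterations/sec of the hogs is the figure of merit */
    #include <pthread.h>
    #include <signal.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <unistd.h>

    static volatile sig_atomic_t done;
    static _Atomic unsigned long count;

    static void *hog(void *arg)
    {
        while (!done)
            atomic_fetch_add(&count, 1);
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[4];

        for (int i = 0; i < 4; i++)
            pthread_create(&threads[i], NULL, hog, NULL);

        sleep(10); /* meanwhile, the GEM busy load + waiters would run */
        done = 1;

        for (int i = 0; i < 4; i++)
            pthread_join(threads[i], NULL);

        /* count / 10s / 1e6 = millions of iterations per second */
        printf("%.1f Miter/s\n", atomic_load(&count) / 10e6);
        return 0;
    }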
2015-10-06  benchmarks: Measure BLT performance  (Chris Wilson)
Execute N blits and time how long they take to complete, measuring both GPU-limited bandwidth and submission overhead. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
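The timing harness follows the usual submit-N-then-sync pattern. A sketch with caller-provided submit/sync hooks standing in for real batchbuffer construction (names and signature are illustrative):

    #include <stdio.h>
    #include <time.h>

    /* throughput-harness sketch: submit() queues one copy, sync()
     * waits for all queued work to finish */
    double time_blits(int n, size_t bytes_per_blit,
                      void (*submit)(void), void (*sync)(void))
    {
        struct timespec start, end;

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (int i = 0; i < n; i++)
            submit();
        sync(); /* time completion, not just submission */
        clock_gettime(CLOCK_MONOTONIC, &end);

        double secs = (end.tv_sec - start.tv_sec) +
                      (end.tv_nsec - start.tv_nsec) / 1e9;

        /* large n approaches GPU-limited bandwidth; small n is
         * dominated by the per-batch submission overhead */
        return n * bytes_per_blit / secs / (1024.0 * 1024 * 1024);
    }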
2015-08-11  benchmarks: Add a microbenchmark for relocation overhead  (Chris Wilson)
Allow specification of the many different busyness modes and relocation interfaces, along with the number of buffers and relocations to use. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
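Each relocation handed to EXECBUFFER2 is described by struct drm_i915_gem_relocation_entry from the i915 uapi. A sketch of filling one in (make_reloc and its parameters are illustrative):

    #include <stdint.h>
    #include <string.h>
    #include <i915_drm.h> /* from libdrm */

    /* relocation-entry sketch: one dword in the batch to be patched
     * with the final address of the target object */
    struct drm_i915_gem_relocation_entry
    make_reloc(uint32_t target, uint64_t batch_offset, uint64_t presumed)
    {
        struct drm_i915_gem_relocation_entry reloc;

        memset(&reloc, 0, sizeof(reloc));
        reloc.target_handle = target;     /* object the batch references */
        reloc.offset = batch_offset;      /* where in the batch to patch */
        reloc.presumed_offset = presumed; /* guess; the kernel can skip the
                                           * rewrite if the object hasn't
                                           * moved */
        reloc.read_domains = I915_GEM_DOMAIN_RENDER;
        reloc.write_domain = 0;
        return reloc;
    }

The cost the microbenchmark varies is exactly how many of these entries the kernel must validate (and possibly rewrite) per execbuf.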
2015-08-10  benchmarks: Do not install to system-wide bin/  (Chris Wilson)
These benchmarks are first and foremost development tools, not aimed at general users. As such they should not be installed into the system-wide bin/ directory, but into libexec/ instead. v2: Now actually install beneath ${libexec} Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
2015-08-09  benchmarks: Record and replay calls to EXECBUFFER2  (Chris Wilson)
This slightly idealises the behaviour of clients with the aim of measuring the kernel overhead of different workloads. This test focuses on the cost of relocating batchbuffers. A trace file is generated with an LD_PRELOAD intercept around execbuffer, which we can then replay at our leisure. The replay replaces the real buffers with a set of empty ones, so the only thing the kernel has to do is parse the relocations, but without a real workload we lose the impact of having to rewrite active buffers. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
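The recording side rests on a standard LD_PRELOAD shim around ioctl(). A minimal sketch (write_trace() is a hypothetical serialiser, not the tool's actual trace format); compile with -shared -fPIC -ldl and point LD_PRELOAD at the result:

    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdarg.h>
    #include <stdio.h>
    #include <i915_drm.h> /* from libdrm */

    /* hypothetical serialiser; the real tool records buffers + relocs */
    static void write_trace(struct drm_i915_gem_execbuffer2 *eb)
    {
        fprintf(stderr, "execbuffer2: %u buffers\n", eb->buffer_count);
    }

    int ioctl(int fd, unsigned long request, ...)
    {
        static int (*real_ioctl)(int, unsigned long, void *);
        va_list ap;
        void *arg;

        if (!real_ioctl) /* look up libc's ioctl behind us */
            real_ioctl = (int (*)(int, unsigned long, void *))
                dlsym(RTLD_NEXT, "ioctl");

        va_start(ap, request);
        arg = va_arg(ap, void *);
        va_end(ap);

        if (request == DRM_IOCTL_I915_GEM_EXECBUFFER2)
            write_trace(arg); /* record the call before passing it on */

        return real_ioctl(fd, request, arg);
    }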
2015-07-24  benchmark: Measure allocation time for objects  (Chris Wilson)
A basic measurement: how fast can we create and populate an object with backing storage? Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
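A sketch of the operation pair being timed, via DRM_IOCTL_I915_GEM_CREATE plus a pwrite to force backing storage (assumes fd is an open i915 DRM fd; helper names are illustrative, error handling elided):

    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <i915_drm.h> /* from libdrm */

    uint32_t create_bo(int fd, uint64_t size)
    {
        struct drm_i915_gem_create create;

        memset(&create, 0, sizeof(create));
        create.size = size;
        ioctl(fd, DRM_IOCTL_I915_GEM_CREATE, &create);
        return create.handle; /* backing storage is allocated lazily... */
    }

    void populate_bo(int fd, uint32_t handle, uint64_t size)
    {
        struct drm_i915_gem_pwrite pwrite;
        char byte = 0;

        /* ...so touch the last byte to force the pages into existence */
        memset(&pwrite, 0, sizeof(pwrite));
        pwrite.handle = handle;
        pwrite.offset = size - 1;
        pwrite.size = 1;
        pwrite.data_ptr = (uintptr_t)&byte;
        ioctl(fd, DRM_IOCTL_I915_GEM_PWRITE, &pwrite);
    }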
2015-07-24  benchmarks: Benchmarkify gem_exec_ctx  (Chris Wilson)
Measure the overhead of execution when doing nothing, switching between a pair of contexts, or creating a new context every time. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
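A sketch of the per-iteration cost in the "new context every time" mode, using the context create/destroy ioctls (assumes fd is an open i915 DRM fd; error handling elided):

    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <i915_drm.h> /* from libdrm */

    uint32_t context_create(int fd)
    {
        struct drm_i915_gem_context_create create;

        memset(&create, 0, sizeof(create));
        ioctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_CREATE, &create);
        return create.ctx_id;
    }

    void context_destroy(int fd, uint32_t ctx_id)
    {
        struct drm_i915_gem_context_destroy destroy;

        memset(&destroy, 0, sizeof(destroy));
        destroy.ctx_id = ctx_id;
        ioctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_DESTROY, &destroy);
    }

Timing a create/execute/destroy loop against an execute-only loop with a fixed context isolates the context-management overhead from the plain submission cost.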
2015-07-23  benchmarks: Measure round-trip time for an immediate vblank  (Chris Wilson)
By measuring both the query and the event round-trip time, we can make a reasonable estimate of how long it takes for the query to report the vblank following an interrupt. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
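An "immediate" query can be expressed with libdrm's drmWaitVBlank() and a relative sequence of 0. A hedged sketch of timing that round trip (assumes fd is a DRM fd on a CRTC with vblanks running; error handling elided):

    #include <string.h>
    #include <time.h>
    #include <xf86drm.h> /* libdrm */

    double vblank_query_us(int fd)
    {
        struct timespec start, end;
        drmVBlank vbl;

        memset(&vbl, 0, sizeof(vbl));
        vbl.request.type = DRM_VBLANK_RELATIVE; /* current counter + 0 */
        vbl.request.sequence = 0;

        clock_gettime(CLOCK_MONOTONIC, &start);
        drmWaitVBlank(fd, &vbl); /* returns at once with the count */
        clock_gettime(CLOCK_MONOTONIC, &end);

        return (end.tv_sec - start.tv_sec) * 1e6 +
               (end.tv_nsec - start.tv_nsec) / 1e3;
    }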
2015-07-23  benchmarks: Add simple mmap benchmarks  (Chris Wilson)
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
2015-07-23  benchmarks: Add simple pread/pwrite benchmarks  (Chris Wilson)
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
2015-07-22  benchmarks: Benchmarkify gem_exec_nop  (Chris Wilson)
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
2014-04-25  tests/gem_userptr_benchmark: Benchmarking userptr surfaces and impact  (Tvrtko Ursulin)
This adds a small benchmark for the new userptr functionality. Apart from basic surface creation and destruction, the impact of having userptr surfaces in the process address space is also tested. The reason for that is the impact of MMU notifiers on common address-space operations like munmap(), which is per-process.

v2:
* Moved to benchmarks.
* Added pointer read/write tests.
* Changed output to say iterations per second instead of operations per second.
* Multiply result by batch size for multi-create* tests for a more comparable number with the create-destroy test.

v3:
* Use the ALIGN macro.
* Catch up with the big lib/ reorganization.
* Removed unused code and one global variable.
* Fixed up some warnings.

v4:
* Fixed feature test; does not matter here but makes it consistent with gem_userptr_blits and clearer.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Brad Volkin <bradley.d.volkin@intel.com>
Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
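A sketch of the surface-creation half, wrapping anonymous memory with DRM_IOCTL_I915_GEM_USERPTR (assumes fd is an open i915 DRM fd on a kernel with userptr support; error handling elided). Creating such an object is what registers the MMU notifier whose munmap() cost the benchmark measures:

    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <i915_drm.h> /* from libdrm */

    uint32_t userptr_bo(int fd, size_t size)
    {
        struct drm_i915_gem_userptr arg;
        void *ptr;

        /* userptr memory must be page-aligned, so mmap some */
        ptr = mmap(NULL, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        memset(&arg, 0, sizeof(arg));
        arg.user_ptr = (uintptr_t)ptr;
        arg.user_size = size;
        ioctl(fd, DRM_IOCTL_I915_GEM_USERPTR, &arg);
        return arg.handle;
    }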
2014-04-24  benchmarks: Build them on Android.  (Tvrtko Ursulin)
They build fine, so give them some exposure. Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com> Signed-off-by: Thomas Wood <thomas.wood@intel.com>