author     Radim Krčmář <rkrcmar@redhat.com>    2016-03-02 22:56:41 +0100
committer  Paolo Bonzini <pbonzini@redhat.com>  2016-03-04 09:29:47 +0100
commit     ddf54503e2bbed01958cf5fb16ad6378971d2468
tree       d9c327a9036e3118653ca67181c4a32fabf04608
parent     fd700a00dc2e821be92b0b56fd5d8ebf8c63f9ba
KVM: i8254: use atomic_t instead of pit.inject_lock
The lock was overkill; the same can be done with atomics.
A mb() was added in kvm_pit_ack_irq to pair with the implicit barrier
between pit_timer_fn and pit_do_work. The mb() prevents a race that
could happen if pending == 0 and irq_ack == 0:
  kvm_pit_ack_irq:                 | pit_timer_fn:
   p = atomic_read(&ps->pending);  |
                                   | atomic_inc(&ps->pending);
                                   | queue_work(pit_do_work);
                                   | pit_do_work:
                                   |  atomic_xchg(&ps->irq_ack, 0);
                                   |  return;
   atomic_set(&ps->irq_ack, 1);    |
   if (p == 0) return;             |
where the interrupt would not be delivered in this tick of pit_timer_fn.
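Concretely, the ack side ends up looking roughly like the sketch below.
This is a sketch, not necessarily the exact diff: the field and helper
names (ps->pit, ps->reinject, queue_kthread_work) are assumed from the
i8254 code of this era, and the mb() is written as smp_mb():

  static void kvm_pit_ack_irq(struct kvm_irq_ack_notifier *kian)
  {
          struct kvm_kpit_state *ps = container_of(kian, struct kvm_kpit_state,
                                                   irq_ack_notifier);

          /* Publish the ack first... */
          atomic_set(&ps->irq_ack, 1);
          /* ...then read pending.  The barrier pairs with the implicit
           * barrier between pit_timer_fn and pit_do_work and closes the
           * race shown above. */
          smp_mb();
          if (atomic_dec_if_positive(&ps->pending) > 0 && ps->reinject)
                  queue_kthread_work(&ps->pit->worker, &ps->pit->expired);
  }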
The PIT would have eventually delivered the interrupt, but we sacrifice
performance to make sure that interrupts are not needlessly delayed.
sfence isn't enough: atomic_dec_if_positive does an atomic_read first,
and x86 can reorder loads before stores. lfence isn't enough: a store
can pass lfence, turning it into a nop. A compiler barrier would be
more than enough in practice, as the CPU would need to stall for an
unbelievably long time for the fences to make a difference.
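The pattern is the classic store-buffering case. As a standalone
illustration (hypothetical userspace variables standing in for
ps->irq_ack and ps->pending, not the KVM code), the pair of accesses
that needs a full barrier looks like this:

  #include <stdatomic.h>

  static atomic_int pending;   /* stands in for ps->pending */
  static atomic_int irq_ack;   /* stands in for ps->irq_ack */

  int ack_side(void)
  {
          /* store, then load: x86 may let the load complete while the
           * store still sits in the store buffer */
          atomic_store_explicit(&irq_ack, 1, memory_order_relaxed);
          /* a seq_cst fence compiles to mfence on x86; it forbids the
           * store->load reordering that sfence (stores only) and
           * lfence (loads only) do not */
          atomic_thread_fence(memory_order_seq_cst);
          return atomic_load_explicit(&pending, memory_order_relaxed);
  }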
This patch doesn't do anything in kvm_pit_reset_reinject, because any
ordering of the resets can race; the result differs by at most one
interrupt, which is ok, because it is the same result as if the reset
had happened at a slightly different time. (The original code didn't
protect the reset path with a proper lock either, so users have to be
robust.)
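For context, the reset path in question (added by the parent commit)
amounts, after this patch, to two deliberately unordered atomic stores.
A sketch, with field names assumed from the surrounding i8254 code:

  static void kvm_pit_reset_reinject(struct kvm_pit *pit)
  {
          /* no barrier between the two stores: any interleaving with a
           * concurrent tick or ack is equivalent to a reset at a
           * slightly different time, off by at most one interrupt */
          atomic_set(&pit->pit_state.pending, 0);
          atomic_set(&pit->pit_state.irq_ack, 1);
  }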
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>