KVM: MMU: optimize set_spte for page sync
author		Marcelo Tosatti <mtosatti@redhat.com>
		Tue, 25 Nov 2008 14:58:07 +0000 (15:58 +0100)
committer	Avi Kivity <avi@redhat.com>
		Wed, 31 Dec 2008 14:55:02 +0000 (16:55 +0200)
commit		ecc5589f19a52e7e6501fe449047b19087ae11bb
tree		2b5a0273e2ce67953d8c32d5a60475aa91907815
parent		5319c662522db8995ff9276ba9d80549c64b294a
KVM: MMU: optimize set_spte for page sync

The write protect verification in set_spte is unnecessary for page sync.

It is guaranteed that, if the unsync spte was writable, the target page
does not have a write-protected shadow page (if it did, the spte would
already have been write protected under mmu_lock by rmap_write_protect).
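
The fast path this enables in set_spte can be sketched roughly as below
(the can_unsync flag, is_writeble_pte and mmu_need_write_protect follow
the arch/x86/kvm/mmu.c naming of this period; the hunk is illustrative
of the idea, not the verbatim diff):

	if (pte_access & ACC_WRITE_MASK) {
		spte |= PT_WRITABLE_MASK;

		/*
		 * Sync path (can_unsync == false): if the old spte was
		 * already writable, the gfn cannot have a write-protected
		 * shadow page, so the hash table lookup done by
		 * mmu_need_write_protect() can be skipped.
		 */
		if (!can_unsync && is_writeble_pte(*shadow_pte))
			goto set_pte;

		if (mmu_need_write_protect(vcpu, gfn, can_unsync)) {
			pte_access &= ~ACC_WRITE_MASK;
			spte &= ~PT_WRITABLE_MASK;
		}
	}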

The same reasoning applies to mark_page_dirty: the gfn has already been
marked dirty via the page fault path.
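
In the sketch above, the same early exit also bypasses the memslot
lookup done for dirty tracking, since mark_page_dirty sits before the
label the sync path jumps to (again illustrative of the structure, not
the verbatim code):

	if (pte_access & ACC_WRITE_MASK)
		mark_page_dirty(vcpu->kvm, gfn);

set_pte:
	set_shadow_pte(shadow_pte, spte);
	return ret;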

The cost of the hash table and memslot lookups is quite significant when
the workload is pagetable-write intensive, resulting in increased
mmu_lock contention.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
arch/x86/kvm/mmu.c