|  |  |  |
|---|---|---|
| author | Linus Torvalds <torvalds@linux-foundation.org> | 2020-08-05 13:28:50 -0700 |
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2020-08-05 13:28:50 -0700 |
| commit | fffe3ae0ee84e25d2befe2ae59bc32aa2b6bc77b (patch) | |
| tree | 80db9b520298091787d70772530f51b90afb2709 /include/linux/hmm.h | |
| parent | 8f7be6291529011a58856bf178f52ed5751c68ac (diff) | |
| parent | 7d17e83abec1be3355260b3e4812044c65c32907 (diff) | |
Merge tag 'for-linus-hmm' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma
Pull hmm updates from Jason Gunthorpe:
"Ralph has been working on nouveau's use of hmm_range_fault() and
migrate_vma() which resulted in this small series. It adds reporting
of the page table order from hmm_range_fault() and some optimization
of migrate_vma():
- Report the size of the page table mapping out of hmm_range_fault().
This makes it easier to establish a large/huge/etc mapping in the
device's page table.
- Allow devices to ignore the invalidations during migration in cases
where the migration is not going to change pages.
For instance migrating pages to a device does not require the
device to invalidate pages already in the device.
- Update nouveau and hmm_tests to use the above"
* tag 'for-linus-hmm' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma:
mm/hmm/test: use the new migration invalidation
nouveau/svm: use the new migration invalidation
mm/notifier: add migration invalidation type
mm/migrate: add a flags parameter to migrate_vma
nouveau: fix storing invalid ptes
nouveau/hmm: support mapping large sysmem pages
nouveau: fix mapping 2MB sysmem pages
nouveau/hmm: fault one page at a time
mm/hmm: add tests for hmm_pfn_to_map_order()
mm/hmm: provide the page mapping order in hmm_range_fault()
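The second bullet of the pull message is the one drivers act on in their mmu_interval_notifier callback: migration invalidations are now tagged with MMU_NOTIFY_MIGRATE and the pgmap owner that started the migration, so a driver can recognize and skip invalidations it triggered itself. Below is a minimal sketch of such a callback, modeled on the pattern the hmm test driver uses; struct my_svm and its fields are illustrative placeholders, not types from this series.

```c
#include <linux/kernel.h>
#include <linux/mmu_notifier.h>
#include <linux/mutex.h>

/* Hypothetical per-process SVM state; only the fields used below. */
struct my_svm {
	struct mmu_interval_notifier notifier;
	struct mutex mutex;	/* protects the device page table */
	void *pgmap_owner;	/* owner passed to migrate_vma_setup() */
};

static bool my_svm_invalidate(struct mmu_interval_notifier *mni,
			      const struct mmu_notifier_range *range,
			      unsigned long cur_seq)
{
	struct my_svm *svm = container_of(mni, struct my_svm, notifier);

	/*
	 * Migrations this driver started itself do not change which pages
	 * the device should map, so the invalidation can be ignored: this
	 * is the MMU_NOTIFY_MIGRATE + migrate_pgmap_owner filtering added
	 * by the series.
	 */
	if (range->event == MMU_NOTIFY_MIGRATE &&
	    range->migrate_pgmap_owner == svm->pgmap_owner)
		return true;

	if (mmu_notifier_range_blockable(range))
		mutex_lock(&svm->mutex);
	else if (!mutex_trylock(&svm->mutex))
		return false;

	mmu_interval_set_seq(mni, cur_seq);
	/* ... tear down device mappings for range->start..range->end ... */
	mutex_unlock(&svm->mutex);
	return true;
}

static const struct mmu_interval_notifier_ops my_svm_mni_ops = {
	.invalidate = my_svm_invalidate,
};
```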
Diffstat (limited to 'include/linux/hmm.h')
| -rw-r--r-- | include/linux/hmm.h | 24 |
1 file changed, 22 insertions, 2 deletions
```diff
diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index f4a09ed223ac..866a0fa104c4 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -37,16 +37,17 @@
  * will fail. Must be combined with HMM_PFN_REQ_FAULT.
  */
 enum hmm_pfn_flags {
-	/* Output flags */
+	/* Output fields and flags */
 	HMM_PFN_VALID = 1UL << (BITS_PER_LONG - 1),
 	HMM_PFN_WRITE = 1UL << (BITS_PER_LONG - 2),
 	HMM_PFN_ERROR = 1UL << (BITS_PER_LONG - 3),
+	HMM_PFN_ORDER_SHIFT = (BITS_PER_LONG - 8),
 
 	/* Input flags */
 	HMM_PFN_REQ_FAULT = HMM_PFN_VALID,
 	HMM_PFN_REQ_WRITE = HMM_PFN_WRITE,
 
-	HMM_PFN_FLAGS = HMM_PFN_VALID | HMM_PFN_WRITE | HMM_PFN_ERROR,
+	HMM_PFN_FLAGS = 0xFFUL << HMM_PFN_ORDER_SHIFT,
 };
 
 /*
@@ -62,6 +63,25 @@ static inline struct page *hmm_pfn_to_page(unsigned long hmm_pfn)
 }
 
 /*
+ * hmm_pfn_to_map_order() - return the CPU mapping size order
+ *
+ * This is optionally useful to optimize processing of the pfn result
+ * array. It indicates that the page starts at the order aligned VA and is
+ * 1<<order bytes long. Every pfn within an high order page will have the
+ * same pfn flags, both access protections and the map_order. The caller must
+ * be careful with edge cases as the start and end VA of the given page may
+ * extend past the range used with hmm_range_fault().
+ *
+ * This must be called under the caller 'user_lock' after a successful
+ * mmu_interval_read_begin(). The caller must have tested for HMM_PFN_VALID
+ * already.
+ */
+static inline unsigned int hmm_pfn_to_map_order(unsigned long hmm_pfn)
+{
+	return (hmm_pfn >> HMM_PFN_ORDER_SHIFT) & 0x1F;
+}
+
+/*
  * struct hmm_range - track invalidation lock on virtual address range
  *
  * @notifier: a mmu_interval_notifier that includes the start/end
```
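What the new hmm_pfn_to_map_order() buys a driver is the ability to walk the hmm_range_fault() result array in 1 << order steps and install one large device mapping instead of PAGE_SIZE mappings one by one. The sketch below shows that loop under the locking rules from the kernel-doc above; my_mirror_range(), my_device_map() and the devlock parameter are illustrative stand-ins, not functions from this series, and the retry and alignment handling is deliberately simplified.

```c
#include <linux/hmm.h>
#include <linux/mm.h>
#include <linux/mmu_notifier.h>
#include <linux/mutex.h>

/* Hypothetical device-side helper: install a 1 << order mapping at @va. */
static void my_device_map(struct page *page, unsigned long va,
			  unsigned int order, bool writable)
{
	/* device page-table programming would go here */
}

/* Mirror up to 32 pages starting at @start into the device page table. */
static int my_mirror_range(struct mmu_interval_notifier *notifier,
			   struct mutex *devlock, unsigned long start)
{
	unsigned long hmm_pfns[32];
	struct hmm_range range = {
		.notifier = notifier,
		.start = start,
		.end = start + (ARRAY_SIZE(hmm_pfns) << PAGE_SHIFT),
		.hmm_pfns = hmm_pfns,
		.default_flags = HMM_PFN_REQ_FAULT,
	};
	unsigned long i, npages = ARRAY_SIZE(hmm_pfns);
	int ret;

	range.notifier_seq = mmu_interval_read_begin(notifier);
	mmap_read_lock(notifier->mm);
	ret = hmm_range_fault(&range);
	mmap_read_unlock(notifier->mm);
	if (ret)
		return ret;

	/* @devlock is the driver "user_lock" the kernel-doc refers to. */
	mutex_lock(devlock);
	if (mmu_interval_read_retry(notifier, range.notifier_seq)) {
		mutex_unlock(devlock);
		return -EBUSY;		/* caller retries from the top */
	}

	for (i = 0; i < npages; ) {
		unsigned long pfn = hmm_pfns[i];
		unsigned int order;

		if (!(pfn & HMM_PFN_VALID)) {
			i++;
			continue;
		}
		/* Only meaningful after the HMM_PFN_VALID check above. */
		order = hmm_pfn_to_map_order(pfn);

		my_device_map(hmm_pfn_to_page(pfn),
			      start + (i << PAGE_SHIFT), order,
			      pfn & HMM_PFN_WRITE);
		/*
		 * Every pfn within the high order page carries the same
		 * flags and order, so the rest can be skipped. For brevity
		 * this assumes @start is aligned to the orders that come
		 * back; real code must clamp to the faulted range as the
		 * kernel-doc warns.
		 */
		i += 1UL << order;
	}
	mutex_unlock(devlock);
	return 0;
}
```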
