author      Linus Torvalds <torvalds@linux-foundation.org>   2019-08-02 08:44:33 -0700
committer   Linus Torvalds <torvalds@linux-foundation.org>   2019-08-02 08:44:33 -0700
commit      234172f6bbf8e26fa8407c4bbbf2a36da30d7913 (patch)
tree        b3be4fccbe68761d4813ecfd876c8bb3bc0b2d82 /kernel/dma
parent      35fca9f8a999627e3291907992b299b1207baee5 (diff)
parent      ad3c7b18c5b362be5dbd0f2c0bcf1fd5fd659315 (diff)
Merge tag 'arm-swiotlb-5.3' of git://git.infradead.org/users/hch/dma-mapping
Pull arm swiotlb support from Christoph Hellwig:
"This fixes a cascade of regressions that originally started with the
addition of the ia64 port, but only got fatal once we removed most
uses of block layer bounce buffering in Linux 4.18.
The reason is that i386/PAE, the first architecture to support > 4GB of
memory without an iommu, decided to leave bounce buffering to the
subsystems, which in those days just meant block and networking, as no
one else consumed arbitrary userspace memory.
Later, with the ia64, x86_64 and other ports, we assumed that either an
iommu or something that fakes one up ("software IOTLB" in beautiful
Intel speak) is present and that subsystems can rely on it for dealing
with addressing limitations in devices. Except that ARM LPAE, which
added larger physical addresses to 32-bit ARM, did not follow that
assumption and thus only worked by chance, and only for block and
networking I/O directly to highmem.
Long story, short fix - add swiotlb support to arm when built for LPAE
platforms, which actually turns out to be pretty trivial with the
modern dma-direct / swiotlb code, and fixes the Linux 4.18-ish regression"
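The bounce-buffering idea the pull message refers to is simple: when a buffer lies above what the device can address, copy it into a pre-allocated buffer below the device's DMA limit and hand the device that copy instead. The userspace toy below only sketches that decision; the names (DEV_DMA_LIMIT, bounce_pool, map_for_dma, the virt_to_bus stub) are invented for illustration and are not the kernel's dma-direct/swiotlb API.

/* Toy model of swiotlb-style bounce buffering: not kernel code.
 * The "device" can only address memory below DEV_DMA_LIMIT; buffers
 * above that limit are copied into a low bounce pool first.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define DEV_DMA_LIMIT   0x100000000ULL   /* device can address < 4 GiB */
#define BOUNCE_POOL_SZ  4096

static unsigned char bounce_pool[BOUNCE_POOL_SZ]; /* assumed to sit below the limit */

/* Stand-in for a virtual-to-bus address translation. */
static uint64_t virt_to_bus(const void *p, uint64_t fake_phys)
{
    (void)p;
    return fake_phys;
}

/*
 * Return the address the device should use for "buf". If the buffer is
 * already addressable, map it directly; otherwise copy it into the
 * bounce pool and return the pool's (low) address.
 */
static uint64_t map_for_dma(const void *buf, size_t len, uint64_t fake_phys)
{
    if (virt_to_bus(buf, fake_phys) + len <= DEV_DMA_LIMIT)
        return fake_phys;                      /* direct mapping is fine */

    memcpy(bounce_pool, buf, len);             /* bounce: copy to low memory */
    return virt_to_bus(bounce_pool, 0x1000);   /* pool's pretend low address */
}

int main(void)
{
    unsigned char data[64] = "payload";

    /* A buffer that happens to live above 4 GiB must be bounced on LPAE. */
    uint64_t bus = map_for_dma(data, sizeof(data), 0x180000000ULL);
    printf("device DMA address: %#llx\n", (unsigned long long)bus);
    return 0;
}

In the kernel the copy goes into the pre-allocated swiotlb pool in low memory, which is why wiring swiotlb up on LPAE configs is enough to fix the regression.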
* tag 'arm-swiotlb-5.3' of git://git.infradead.org/users/hch/dma-mapping:
arm: use swiotlb for bounce buffering on LPAE configs
dma-mapping: check pfn validity in dma_common_{mmap,get_sgtable}
Diffstat (limited to 'kernel/dma')
-rw-r--r--   kernel/dma/mapping.c   13
1 file changed, 11 insertions, 2 deletions
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 1f628e7ac709..b945239621d8 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -116,11 +116,16 @@ int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
 	int ret;
 
 	if (!dev_is_dma_coherent(dev)) {
+		unsigned long pfn;
+
 		if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_COHERENT_TO_PFN))
 			return -ENXIO;
-		page = pfn_to_page(arch_dma_coherent_to_pfn(dev, cpu_addr,
-				dma_addr));
+
+		/* If the PFN is not valid, we do not have a struct page */
+		pfn = arch_dma_coherent_to_pfn(dev, cpu_addr, dma_addr);
+		if (!pfn_valid(pfn))
+			return -ENXIO;
+		page = pfn_to_page(pfn);
 	} else {
 		page = virt_to_page(cpu_addr);
 	}
 
@@ -170,7 +175,11 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
 	if (!dev_is_dma_coherent(dev)) {
 		if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_COHERENT_TO_PFN))
 			return -ENXIO;
+
+		/* If the PFN is not valid, we do not have a struct page */
 		pfn = arch_dma_coherent_to_pfn(dev, cpu_addr, dma_addr);
+		if (!pfn_valid(pfn))
+			return -ENXIO;
 	} else {
 		pfn = page_to_pfn(virt_to_page(cpu_addr));
 	}
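The path being hardened here is the one drivers hit when they export a DMA buffer to userspace. Below is a hedged sketch of such a driver-side mmap handler; struct foo_dev and foo_mmap() are hypothetical, only dma_alloc_coherent()/dma_mmap_coherent() are the real API. On a non-coherent LPAE system, if arch_dma_coherent_to_pfn() yields a PFN with no struct page, dma_common_mmap() now fails with -ENXIO instead of operating on an invalid page.

/*
 * Hypothetical driver mmap path that ends up in dma_common_mmap().
 * Only dma_alloc_coherent()/dma_mmap_coherent() are real kernel APIs;
 * struct foo_dev and foo_mmap() are made up for illustration.
 */
#include <linux/dma-mapping.h>
#include <linux/fs.h>
#include <linux/mm.h>

struct foo_dev {
	struct device *dev;	/* the DMA-capable device */
	void *cpu_addr;		/* buffer from dma_alloc_coherent() */
	dma_addr_t dma_handle;
	size_t size;
};

static int foo_mmap(struct file *file, struct vm_area_struct *vma)
{
	struct foo_dev *foo = file->private_data;

	/*
	 * Generic code resolves cpu_addr back to a PFN; with this patch an
	 * invalid PFN makes this return -ENXIO rather than crash.
	 */
	return dma_mmap_coherent(foo->dev, vma, foo->cpu_addr,
				 foo->dma_handle, foo->size);
}

The companion change in dma_common_get_sgtable() applies the same guard to drivers that call dma_get_sgtable() on such buffers.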