From 93ebb6828723b8aef114415c4dc3518342f7dcad Mon Sep 17 00:00:00 2001
From: Halil Pasic
Date: Sat, 24 Jul 2021 01:17:46 +0200
Subject: s390/pv: fix the forcing of the swiotlb

Since commit 903cd0f315fe ("swiotlb: Use is_swiotlb_force_bounce for
swiotlb data bouncing"), if code sets swiotlb_force it needs to do so
before the swiotlb is initialised. Otherwise
io_tlb_default_mem->force_bounce will not get set to true, and devices
that use (the default) swiotlb will not bounce despite swiotlb_force
having the value of SWIOTLB_FORCE.

Let us restore swiotlb functionality for PV by fulfilling this new
requirement.

This change addresses what turned out to be a fragility in commit
64e1f0c531d1 ("s390/mm: force swiotlb for protected virtualization"),
which isn't exactly broken in its original context, but could cause
more headaches if people backport the broken change and forget this
fix.

Signed-off-by: Halil Pasic
Tested-by: Christian Borntraeger
Reviewed-by: Christian Borntraeger
Fixes: 903cd0f315fe ("swiotlb: Use is_swiotlb_force_bounce for swiotlb data bouncing")
Fixes: 64e1f0c531d1 ("s390/mm: force swiotlb for protected virtualization")
Cc: stable@vger.kernel.org #5.3+
Signed-off-by: Konrad Rzeszutek Wilk
---
 arch/s390/mm/init.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
index 8ac710de1ab1..07bbee9b7320 100644
--- a/arch/s390/mm/init.c
+++ b/arch/s390/mm/init.c
@@ -186,9 +186,9 @@ static void pv_init(void)
 		return;
 
 	/* make sure bounce buffers are shared */
+	swiotlb_force = SWIOTLB_FORCE;
 	swiotlb_init(1);
 	swiotlb_update_mem_attributes();
-	swiotlb_force = SWIOTLB_FORCE;
 }
 
 void __init mem_init(void)
--
cgit v1.2.3
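[Editor's note: for context, a minimal sketch of the pv_init() flow after the
fix above. The swiotlb calls match the diff; the is_prot_virt_guest() guard is
reproduced from the surrounding file, and the comments are editorial.]

    static void pv_init(void)
    {
            if (!is_prot_virt_guest())
                    return;

            /* make sure bounce buffers are shared */
            swiotlb_force = SWIOTLB_FORCE;  /* 1: request bouncing first... */
            swiotlb_init(1);                /* 2: ...so the default pool computes
                                             *    force_bounce as true here */
            swiotlb_update_mem_attributes();
    }

With the assignment after swiotlb_init(1), as before this patch,
io_tlb_default_mem->force_bounce would already have been computed as false and
DMA in the protected guest would silently stop going through the shared bounce
buffers.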
From a449ffaf9181b5a2dc705d8a06b13e0068207fd4 Mon Sep 17 00:00:00 2001
From: Will Deacon
Date: Fri, 30 Jul 2021 12:42:31 +0100
Subject: powerpc/svm: Don't issue ultracalls if !mem_encrypt_active()

Commit ad6c00283163 ("swiotlb: Free tbl memory in swiotlb_exit()")
introduced a set_memory_encrypted() call to swiotlb_exit() so that the
buffer pages are returned to an encrypted state prior to being freed.

Sachin reports that this leads to the following crash on a Power
server:

[    0.010799] software IO TLB: tearing down default memory pool
[    0.010805] ------------[ cut here ]------------
[    0.010808] kernel BUG at arch/powerpc/kernel/interrupt.c:98!

Nick spotted that this is because set_memory_encrypted() is issuing an
ultracall which doesn't exist for the processor, and should therefore
be gated by mem_encrypt_active() to mirror the x86 implementation.

Cc: Konrad Rzeszutek Wilk
Cc: Claire Chang
Cc: Christoph Hellwig
Cc: Robin Murphy
Fixes: ad6c00283163 ("swiotlb: Free tbl memory in swiotlb_exit()")
Suggested-by: Nicholas Piggin
Reported-by: Sachin Sant
Tested-by: Sachin Sant
Tested-by: Nathan Chancellor
Link: https://lore.kernel.org/r/1905CD70-7656-42AE-99E2-A31FC3812EAC@linux.vnet.ibm.com/
Signed-off-by: Will Deacon
Signed-off-by: Konrad Rzeszutek Wilk
---
 arch/powerpc/platforms/pseries/svm.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/powerpc/platforms/pseries/svm.c b/arch/powerpc/platforms/pseries/svm.c
index 1d829e257996..87f001b4c4e4 100644
--- a/arch/powerpc/platforms/pseries/svm.c
+++ b/arch/powerpc/platforms/pseries/svm.c
@@ -63,6 +63,9 @@ void __init svm_swiotlb_init(void)
 
 int set_memory_encrypted(unsigned long addr, int numpages)
 {
+	if (!mem_encrypt_active())
+		return 0;
+
 	if (!PAGE_ALIGNED(addr))
 		return -EINVAL;
 
@@ -73,6 +76,9 @@ int set_memory_encrypted(unsigned long addr, int numpages)
 
 int set_memory_decrypted(unsigned long addr, int numpages)
 {
+	if (!mem_encrypt_active())
+		return 0;
+
 	if (!PAGE_ALIGNED(addr))
 		return -EINVAL;
 
--
cgit v1.2.3
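[Editor's note: for reference, a sketch of one of the fixed helpers with the
early return in place. The early-return and PAGE_ALIGNED checks match the
diff; the uv_share_page() ultracall wrapper is assumed from the same file in
this era, and the comments are editorial.]

    int set_memory_decrypted(unsigned long addr, int numpages)
    {
            /* Not a secure guest: no ultravisor, so the ultracall would crash. */
            if (!mem_encrypt_active())
                    return 0;

            if (!PAGE_ALIGNED(addr))
                    return -EINVAL;

            /* Share the pages with the hypervisor via an ultracall. */
            uv_share_page(PHYS_PFN(__pa(addr)), numpages);

            return 0;
    }

This mirrors the x86 behaviour, where the set_memory_encrypted() and
set_memory_decrypted() paths are no-ops unless memory encryption is actually
active, so callers such as swiotlb_exit() can invoke them unconditionally.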