From: Richard Henderson
Subject: [PULL 6/9] softmmu: Always initialize xlat in address_space_translate_for_iotlb
Date: Tue, 21 Jun 2022 13:46:40 -0700

The bug is an uninitialized memory read along the translate_fail
path, which results in garbage being read from iotlb_to_section
and can lead to a crash in io_readx/io_writex.

The bug may be fixed by writing any page-aligned value, i.e. any
value with zero in the ~TARGET_PAGE_MASK bits, so that the call to
iotlb_to_section using the xlat'ed address returns io_mem_unassigned,
as the translate_fail path intends.

It is most useful to record the original physical page address,
which will eventually be logged by memory_region_access_valid
when the access is rejected by unassigned_mem_accepts.
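
As a standalone illustration of that indexing (this is a sketch, not QEMU
code: TARGET_PAGE_BITS, the section-packing helper and the example
addresses below are assumptions), a page-aligned xlat selects section 0,
while a garbage xlat selects an arbitrary section:

/*
 * Illustrative sketch only, not QEMU code.  It mimics the scheme described
 * above: the section index is packed into the page-offset bits of the
 * iotlb value, so a page-aligned xlat selects section 0
 * (PHYS_SECTION_UNASSIGNED), while a garbage xlat selects an arbitrary one.
 */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define TARGET_PAGE_BITS 12
#define TARGET_PAGE_MASK (~(uint64_t)((1u << TARGET_PAGE_BITS) - 1))
#define PHYS_SECTION_UNASSIGNED 0

/* Stand-in for iotlb_to_section(): the low bits select the section. */
static unsigned iotlb_to_section_index(uint64_t xlat)
{
    return (unsigned)(xlat & ~TARGET_PAGE_MASK);
}

int main(void)
{
    uint64_t garbage_xlat = 0x4000a7c3;   /* uninitialized-style value */
    uint64_t orig_addr = 0x4000a000;      /* page-aligned physical address */

    printf("garbage xlat selects section %u\n",
           iotlb_to_section_index(garbage_xlat));

    /* The fix: any value with zero in ~TARGET_PAGE_MASK selects section 0. */
    assert((orig_addr & ~TARGET_PAGE_MASK) == 0);
    printf("page-aligned xlat selects section %u (PHYS_SECTION_UNASSIGNED)\n",
           iotlb_to_section_index(orig_addr));
    return 0;
}
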
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1065
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220621153829.366423-1-richard.henderson@linaro.org>
---
softmmu/physmem.c | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/softmmu/physmem.c b/softmmu/physmem.c
index fb16be57a6..dc3c3e5f2e 100644
--- a/softmmu/physmem.c
+++ b/softmmu/physmem.c
@@ -669,7 +669,7 @@ void tcg_iommu_init_notifier_list(CPUState *cpu)
 
 /* Called from RCU critical section */
 MemoryRegionSection *
-address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr addr,
+address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr orig_addr,
                                   hwaddr *xlat, hwaddr *plen,
                                   MemTxAttrs attrs, int *prot)
 {
@@ -678,6 +678,7 @@ address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr addr,
     IOMMUMemoryRegionClass *imrc;
     IOMMUTLBEntry iotlb;
     int iommu_idx;
+    hwaddr addr = orig_addr;
     AddressSpaceDispatch *d =
         qatomic_rcu_read(&cpu->cpu_ases[asidx].memory_dispatch);
 
@@ -722,6 +723,16 @@ address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr addr,
     return section;
 
  translate_fail:
+    /*
+     * We should be given a page-aligned address -- certainly
+     * tlb_set_page_with_attrs() does so.  The page offset of xlat
+     * is used to index sections[], and PHYS_SECTION_UNASSIGNED = 0.
+     * The page portion of xlat will be logged by memory_region_access_valid()
+     * when this memory access is rejected, so use the original untranslated
+     * physical address.
+     */
+    assert((orig_addr & ~TARGET_PAGE_MASK) == 0);
+    *xlat = orig_addr;
     return &d->map.sections[PHYS_SECTION_UNASSIGNED];
 }
 
--
2.34.1
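
For the logging point in the commit message, a rough mock (not the QEMU
implementation; the function shapes and log format here are assumptions)
of how a rejected access surfaces the address stored in xlat, which after
this patch is the original untranslated physical address:

/*
 * Rough mock, not the QEMU implementation.  It only illustrates the point
 * above: when the access lands on the unassigned section, the accept hook
 * refuses it and the address that was stored in xlat is what gets logged,
 * so storing the original physical address gives a useful message.
 */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t hwaddr;

/* Stand-in for unassigned_mem_accepts(): always rejects the access. */
static bool unassigned_mem_accepts(hwaddr addr, unsigned size, bool is_write)
{
    (void)addr;
    (void)size;
    (void)is_write;
    return false;
}

/* Stand-in for memory_region_access_valid(): logs the rejected address. */
static bool memory_region_access_valid(hwaddr addr, unsigned size,
                                       bool is_write)
{
    if (!unassigned_mem_accepts(addr, size, is_write)) {
        fprintf(stderr, "Invalid %s at addr 0x%" PRIx64 ", size %u\n",
                is_write ? "write" : "read", addr, size);
        return false;
    }
    return true;
}

int main(void)
{
    hwaddr xlat = 0x4000a000;   /* original, page-aligned physical address */

    memory_region_access_valid(xlat, 4, false);
    return 0;
}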