Message ID | 1454059965-23402-8-git-send-email-a.rigo@virtualopensystems.com (mailing list archive)
---|---
State | New, archived
Alvise Rigo <a.rigo@virtualopensystems.com> writes:

> The new helpers rely on the legacy ones to perform the actual read/write.
>
> The LoadLink helper (helper_ldlink_name) prepares the way for the
> following StoreCond operation. It sets the linked address and the size
> of the access. The LoadLink helper also updates the TLB entry of the
> page involved in the LL/SC to all vCPUs by forcing a TLB flush, so that
> the following accesses made by all the vCPUs will follow the slow path.
>
> The StoreConditional helper (helper_stcond_name) returns 1 if the
> store has to fail due to a concurrent access to the same page by
> another vCPU. A 'concurrent access' can be a store made by *any* vCPU
> (although some implementations allow stores made by the CPU that issued
> the LoadLink).
>
> Suggested-by: Jani Kokkonen <jani.kokkonen@huawei.com>
> Suggested-by: Claudio Fontana <claudio.fontana@huawei.com>
> Signed-off-by: Alvise Rigo <a.rigo@virtualopensystems.com>
> ---
>  cputlb.c                |   3 ++
>  include/qom/cpu.h       |   5 ++
>  softmmu_llsc_template.h | 133 ++++++++++++++++++++++++++++++++++++++++++++++++
>  softmmu_template.h      |  12 +++++
>  tcg/tcg.h               |  31 +++++++++++
>  5 files changed, 184 insertions(+)
>  create mode 100644 softmmu_llsc_template.h
>
> diff --git a/cputlb.c b/cputlb.c
> index f6fb161..ce6d720 100644
> --- a/cputlb.c
> +++ b/cputlb.c
> @@ -476,6 +476,8 @@ tb_page_addr_t get_page_addr_code(CPUArchState *env1, target_ulong addr)
>
>  #define MMUSUFFIX _mmu
>
> +/* Generates LoadLink/StoreConditional helpers in softmmu_template.h */
> +#define GEN_EXCLUSIVE_HELPERS
>  #define SHIFT 0
>  #include "softmmu_template.h"
>
> @@ -488,6 +490,7 @@ tb_page_addr_t get_page_addr_code(CPUArchState *env1, target_ulong addr)
>  #define SHIFT 3
>  #include "softmmu_template.h"
>  #undef MMUSUFFIX
> +#undef GEN_EXCLUSIVE_HELPERS
>
>  #define MMUSUFFIX _cmmu
>  #undef GETPC_ADJ
> diff --git a/include/qom/cpu.h b/include/qom/cpu.h
> index 682c81d..6f6c1c0 100644
> --- a/include/qom/cpu.h
> +++ b/include/qom/cpu.h
> @@ -351,10 +351,15 @@ struct CPUState {
>       */
>      bool throttle_thread_scheduled;
>
> +    /* Used by the atomic insn translation backend. */
> +    bool ll_sc_context;
>      /* vCPU's exclusive addresses range.
>       * The address is set to EXCLUSIVE_RESET_ADDR if the vCPU is not
>       * in the middle of a LL/SC. */
>      struct Range excl_protected_range;
> +    /* Used to carry the SC result but also to flag a normal store access made
> +     * by a stcond (see softmmu_template.h). */
> +    bool excl_succeeded;
>
>      /* Note that this is accessed at the start of every TB via a negative
>         offset from AREG0. Leave this field at the end so as to make the
> diff --git a/softmmu_llsc_template.h b/softmmu_llsc_template.h
> new file mode 100644
> index 0000000..101f5e8
> --- /dev/null
> +++ b/softmmu_llsc_template.h
> @@ -0,0 +1,133 @@
> +/*
> + * Software MMU support (exclusive load/store operations)
> + *
> + * Generate helpers used by TCG for qemu_ldlink/stcond ops.
> + *
> + * Included from softmmu_template.h only.
> + *
> + * Copyright (c) 2015 Virtual Open Systems
> + *
> + * Authors:
> + *  Alvise Rigo <a.rigo@virtualopensystems.com>
> + *
> + * This library is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU Lesser General Public
> + * License as published by the Free Software Foundation; either
> + * version 2 of the License, or (at your option) any later version.
> + *
> + * This library is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
> + * Lesser General Public License for more details.
> + *
> + * You should have received a copy of the GNU Lesser General Public
> + * License along with this library; if not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +/* This template does not generate the LE and BE versions together, but only
> + * one of the two, depending on whether BIGENDIAN_EXCLUSIVE_HELPERS has been
> + * set. The same nomenclature as softmmu_template.h is used for the exclusive
> + * helpers. */
> +
> +#ifdef BIGENDIAN_EXCLUSIVE_HELPERS
> +
> +#define helper_ldlink_name  glue(glue(helper_be_ldlink, USUFFIX), MMUSUFFIX)
> +#define helper_stcond_name  glue(glue(helper_be_stcond, SUFFIX), MMUSUFFIX)
> +#define helper_ld glue(glue(helper_be_ld, USUFFIX), MMUSUFFIX)
> +#define helper_st glue(glue(helper_be_st, SUFFIX), MMUSUFFIX)
> +
> +#else /* LE helpers + 8bit helpers (generated only once for both LE and BE) */
> +
> +#if DATA_SIZE > 1
> +#define helper_ldlink_name  glue(glue(helper_le_ldlink, USUFFIX), MMUSUFFIX)
> +#define helper_stcond_name  glue(glue(helper_le_stcond, SUFFIX), MMUSUFFIX)
> +#define helper_ld glue(glue(helper_le_ld, USUFFIX), MMUSUFFIX)
> +#define helper_st glue(glue(helper_le_st, SUFFIX), MMUSUFFIX)
> +#else /* DATA_SIZE <= 1 */
> +#define helper_ldlink_name  glue(glue(helper_ret_ldlink, USUFFIX), MMUSUFFIX)
> +#define helper_stcond_name  glue(glue(helper_ret_stcond, SUFFIX), MMUSUFFIX)
> +#define helper_ld glue(glue(helper_ret_ld, USUFFIX), MMUSUFFIX)
> +#define helper_st glue(glue(helper_ret_st, SUFFIX), MMUSUFFIX)
> +#endif
> +
> +#endif
> +
> +WORD_TYPE helper_ldlink_name(CPUArchState *env, target_ulong addr,
> +                             TCGMemOpIdx oi, uintptr_t retaddr)
> +{
> +    WORD_TYPE ret;
> +    int index;
> +    CPUState *cpu, *this = ENV_GET_CPU(env);

I'd rename this to this_cpu and move the *cpu definition to inside the
if {} where it is used so no confusion occurs.

> +    CPUClass *cc = CPU_GET_CLASS(this);
> +    hwaddr hw_addr;
> +    unsigned mmu_idx = get_mmuidx(oi);
> +
> +    /* Use the proper load helper from cpu_ldst.h */
> +    ret = helper_ld(env, addr, oi, retaddr);
> +
> +    index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
> +
> +    /* hw_addr = hwaddr of the page (i.e. section->mr->ram_addr + xlat)
> +     * plus the offset (i.e. addr & ~TARGET_PAGE_MASK) */
> +    hw_addr = (env->iotlb[mmu_idx][index].addr & TARGET_PAGE_MASK) + addr;
> +    if (likely(!(env->tlb_table[mmu_idx][index].addr_read & TLB_MMIO))) {
> +        /* If all the vCPUs have the EXCL bit set for this page there is no need
> +         * to request any flush. */
> +        if (!cpu_physical_memory_is_excl(hw_addr)) {
> +            cpu_physical_memory_set_excl(hw_addr);
> +            CPU_FOREACH(cpu) {
> +                if (current_cpu != cpu) {

Why use current_cpu if we have this_cpu? I'd argue the check should use
what we've been passed over the global (and future TLS value).

> +                    tlb_flush(cpu, 1);
> +                }
> +            }
> +        }
> +    } else {
> +        hw_error("EXCL accesses to MMIO regions not supported yet.");
> +    }
> +
> +    cc->cpu_set_excl_protected_range(this, hw_addr, DATA_SIZE);
> +
> +    /* For this vCPU, just update the TLB entry, no need to flush. */
> +    env->tlb_table[mmu_idx][index].addr_write |= TLB_EXCL;
> +
> +    /* From now on we are in LL/SC context */
> +    this->ll_sc_context = true;
> +
> +    return ret;
> +}
> +
> +WORD_TYPE helper_stcond_name(CPUArchState *env, target_ulong addr,
> +                             DATA_TYPE val, TCGMemOpIdx oi,
> +                             uintptr_t retaddr)
> +{
> +    WORD_TYPE ret;
> +    CPUState *cpu = ENV_GET_CPU(env);
> +
> +    if (!cpu->ll_sc_context) {
> +        ret = 1;
> +    } else {
> +        /* We set it preventively to true to distinguish the following legacy
> +         * access as one made by the store conditional wrapper. If the store
> +         * conditional does not succeed, the value will be set to 0. */
> +        cpu->excl_succeeded = true;
> +        helper_st(env, addr, val, oi, retaddr);
> +
> +        if (cpu->excl_succeeded) {
> +            ret = 0;
> +        } else {
> +            ret = 1;
> +        }
> +    }
> +
> +    /* Unset LL/SC context */
> +    cpu->ll_sc_context = false;
> +    cpu->excl_succeeded = false;
> +    cpu->excl_protected_range.begin = EXCLUSIVE_RESET_ADDR;
> +
> +    return ret;
> +}
> +
> +#undef helper_ldlink_name
> +#undef helper_stcond_name
> +#undef helper_ld
> +#undef helper_st
> diff --git a/softmmu_template.h b/softmmu_template.h
> index 6279437..4332db2 100644
> --- a/softmmu_template.h
> +++ b/softmmu_template.h
> @@ -622,6 +622,18 @@ void probe_write(CPUArchState *env, target_ulong addr, int mmu_idx,
>  #endif
>  #endif /* !defined(SOFTMMU_CODE_ACCESS) */
>
> +#ifdef GEN_EXCLUSIVE_HELPERS
> +
> +#if DATA_SIZE > 1 /* The 8-bit helpers are generated along with LE helpers */
> +#define BIGENDIAN_EXCLUSIVE_HELPERS
> +#include "softmmu_llsc_template.h"
> +#undef BIGENDIAN_EXCLUSIVE_HELPERS
> +#endif
> +
> +#include "softmmu_llsc_template.h"
> +
> +#endif /* GEN_EXCLUSIVE_HELPERS */
> +
>  #undef READ_ACCESS_TYPE
>  #undef SHIFT
>  #undef DATA_TYPE
> diff --git a/tcg/tcg.h b/tcg/tcg.h
> index a696922..3e050a4 100644
> --- a/tcg/tcg.h
> +++ b/tcg/tcg.h
> @@ -968,6 +968,21 @@ tcg_target_ulong helper_be_ldul_mmu(CPUArchState *env, target_ulong addr,
>                                      TCGMemOpIdx oi, uintptr_t retaddr);
>  uint64_t helper_be_ldq_mmu(CPUArchState *env, target_ulong addr,
>                             TCGMemOpIdx oi, uintptr_t retaddr);
> +/* Exclusive variants */
> +tcg_target_ulong helper_ret_ldlinkub_mmu(CPUArchState *env, target_ulong addr,
> +                                         TCGMemOpIdx oi, uintptr_t retaddr);
> +tcg_target_ulong helper_le_ldlinkuw_mmu(CPUArchState *env, target_ulong addr,
> +                                        TCGMemOpIdx oi, uintptr_t retaddr);
> +tcg_target_ulong helper_le_ldlinkul_mmu(CPUArchState *env, target_ulong addr,
> +                                        TCGMemOpIdx oi, uintptr_t retaddr);
> +uint64_t helper_le_ldlinkq_mmu(CPUArchState *env, target_ulong addr,
> +                               TCGMemOpIdx oi, uintptr_t retaddr);
> +tcg_target_ulong helper_be_ldlinkuw_mmu(CPUArchState *env, target_ulong addr,
> +                                        TCGMemOpIdx oi, uintptr_t retaddr);
> +tcg_target_ulong helper_be_ldlinkul_mmu(CPUArchState *env, target_ulong addr,
> +                                        TCGMemOpIdx oi, uintptr_t retaddr);
> +uint64_t helper_be_ldlinkq_mmu(CPUArchState *env, target_ulong addr,
> +                               TCGMemOpIdx oi, uintptr_t retaddr);
>
>  /* Value sign-extended to tcg register size. */
>  tcg_target_ulong helper_ret_ldsb_mmu(CPUArchState *env, target_ulong addr,
> @@ -1010,6 +1025,22 @@ uint32_t helper_be_ldl_cmmu(CPUArchState *env, target_ulong addr,
>                              TCGMemOpIdx oi, uintptr_t retaddr);
>  uint64_t helper_be_ldq_cmmu(CPUArchState *env, target_ulong addr,
>                              TCGMemOpIdx oi, uintptr_t retaddr);
> +/* Exclusive variants */
> +tcg_target_ulong helper_ret_stcondb_mmu(CPUArchState *env, target_ulong addr,
> +                                        uint8_t val, TCGMemOpIdx oi, uintptr_t retaddr);
> +tcg_target_ulong helper_le_stcondw_mmu(CPUArchState *env, target_ulong addr,
> +                                       uint16_t val, TCGMemOpIdx oi, uintptr_t retaddr);
> +tcg_target_ulong helper_le_stcondl_mmu(CPUArchState *env, target_ulong addr,
> +                                       uint32_t val, TCGMemOpIdx oi, uintptr_t retaddr);
> +uint64_t helper_le_stcondq_mmu(CPUArchState *env, target_ulong addr,
> +                               uint64_t val, TCGMemOpIdx oi, uintptr_t retaddr);
> +tcg_target_ulong helper_be_stcondw_mmu(CPUArchState *env, target_ulong addr,
> +                                       uint16_t val, TCGMemOpIdx oi, uintptr_t retaddr);
> +tcg_target_ulong helper_be_stcondl_mmu(CPUArchState *env, target_ulong addr,
> +                                       uint32_t val, TCGMemOpIdx oi, uintptr_t retaddr);
> +uint64_t helper_be_stcondq_mmu(CPUArchState *env, target_ulong addr,
> +                               uint64_t val, TCGMemOpIdx oi, uintptr_t retaddr);
> +
>
>  /* Temporary aliases until backends are converted. */
>  #ifdef TARGET_WORDS_BIGENDIAN

--
Alex Bennée
On Thu, Feb 11, 2016 at 5:33 PM, Alex Bennée <alex.bennee@linaro.org> wrote:
>
> Alvise Rigo <a.rigo@virtualopensystems.com> writes:
>
[...]
>> +WORD_TYPE helper_ldlink_name(CPUArchState *env, target_ulong addr,
>> +                             TCGMemOpIdx oi, uintptr_t retaddr)
>> +{
>> +    WORD_TYPE ret;
>> +    int index;
>> +    CPUState *cpu, *this = ENV_GET_CPU(env);
>
> I'd rename this to this_cpu and move the *cpu definition to inside the
> if {} where it is used so no confusion occurs.
>
[...]
>> +            CPU_FOREACH(cpu) {
>> +                if (current_cpu != cpu) {
>
> Why use current_cpu if we have this_cpu? I'd argue the check should use
> what we've been passed over the global (and future TLS value).

Indeed, it makes more sense to do as you suggested.

Thank you,
alvise

[...]