[PATCH v3 20/51] tcg/optimize: Use fold_masks_zs in fold_exts
From: Richard Henderson
Subject: [PATCH v3 20/51] tcg/optimize: Use fold_masks_zs in fold_exts
Date: Sun, 22 Dec 2024 08:24:15 -0800

Avoid the use of the OptContext slots.  Find TempOptInfo once.
Sign-extend z_mask explicitly, via a cast to a signed type, instead
of propagating the sign bit by hand.

Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
tcg/optimize.c | 29 ++++++++++++-----------------
1 file changed, 12 insertions(+), 17 deletions(-)
diff --git a/tcg/optimize.c b/tcg/optimize.c
index f05110cb9f..ab8ce1de2a 100644
--- a/tcg/optimize.c
+++ b/tcg/optimize.c
@@ -1761,49 +1761,44 @@ static bool fold_extract2(OptContext *ctx, TCGOp *op)
static bool fold_exts(OptContext *ctx, TCGOp *op)
{
- uint64_t s_mask_old, s_mask, z_mask, sign;
+ uint64_t s_mask_old, s_mask, z_mask;
bool type_change = false;
+ TempOptInfo *t1;
if (fold_const1(ctx, op)) {
return true;
}
- z_mask = arg_info(op->args[1])->z_mask;
- s_mask = arg_info(op->args[1])->s_mask;
+ t1 = arg_info(op->args[1]);
+ z_mask = t1->z_mask;
+ s_mask = t1->s_mask;
s_mask_old = s_mask;
switch (op->opc) {
CASE_OP_32_64(ext8s):
- sign = INT8_MIN;
- z_mask = (uint8_t)z_mask;
+ s_mask |= INT8_MIN;
+ z_mask = (int8_t)z_mask;
break;
CASE_OP_32_64(ext16s):
- sign = INT16_MIN;
- z_mask = (uint16_t)z_mask;
+ s_mask |= INT16_MIN;
+ z_mask = (int16_t)z_mask;
break;
case INDEX_op_ext_i32_i64:
type_change = true;
QEMU_FALLTHROUGH;
case INDEX_op_ext32s_i64:
- sign = INT32_MIN;
- z_mask = (uint32_t)z_mask;
+ s_mask |= INT32_MIN;
+ z_mask = (int32_t)z_mask;
break;
default:
g_assert_not_reached();
}
- if (z_mask & sign) {
- z_mask |= sign;
- }
- s_mask |= sign << 1;
-
- ctx->z_mask = z_mask;
- ctx->s_mask = s_mask;
if (0 && !type_change && fold_affected_mask(ctx, op, s_mask & ~s_mask_old)) {
return true;
}
- return fold_masks(ctx, op);
+ return fold_masks_zs(ctx, op, z_mask, s_mask);
}
static bool fold_extu(OptContext *ctx, TCGOp *op)
--
2.43.0
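
For reference, a minimal standalone sketch, not QEMU code (the loop bound
and variable names are illustrative), showing that the signed-cast idiom
the patch adopts produces the same result as the manual sign-bit
propagation it removes, here for the ext8s case:

/* sign_extend_demo.c -- illustrative sketch only, not part of QEMU. */
#include <assert.h>
#include <stdint.h>

int main(void)
{
    uint64_t z_mask;

    for (z_mask = 0; z_mask < 0x200; z_mask++) {
        /* Old style: truncate to the operand width, then propagate
         * the sign bit by hand, as the removed lines did. */
        uint64_t manual = (uint8_t)z_mask;
        uint64_t sign = INT8_MIN;            /* widens to 0xffffffffffffff80 */
        if (manual & sign) {
            manual |= sign;
        }

        /* New style: the int8_t -> uint64_t conversion sign-extends
         * in one step, matching "z_mask = (int8_t)z_mask" above. */
        uint64_t cast = (uint64_t)(int8_t)z_mask;

        assert(manual == cast);
    }
    return 0;
}

The same widening rule is why "s_mask |= INT8_MIN" in the patch works:
INT8_MIN converts to 0xffffffffffffff80 as a uint64_t, setting bit 7 and
every bit above it in the sign mask.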