target/s390x: Fix LPSW

Currently LPSW does not invert the mask bit 12 and incorrectly copies
the BA bit into the address.

Fix by generating code similar to what s390_cpu_load_normal() does.

Reported-by: Nina Schoetterl-Glausch <nsg@linux.ibm.com>
Co-developed-by: Nina Schoetterl-Glausch <nsg@linux.ibm.com>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20230315020408.384766-2-iii@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
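
As a reference, here is a minimal, self-contained C sketch of the conversion the new translation performs (the same steps the commit message attributes to s390_cpu_load_normal()). The PSW_MASK_* values below are assumptions mirroring what target/s390x/cpu.h is expected to define; they are repeated only so the sketch compiles on its own:

#include <stdint.h>

/* Assumed values, mirroring target/s390x/cpu.h; not part of this patch. */
#define PSW_MASK_SHORTPSW    0x0008000000000000ULL  /* PSW mask bit 12 */
#define PSW_MASK_SHORT_CTRL  0xffffffff80000000ULL  /* control bits, including BA */
#define PSW_MASK_SHORT_ADDR  0x000000007fffffffULL  /* 31-bit instruction address */

/* Split an 8-byte short PSW into the (mask, addr) pair of the normal PSW. */
static inline void short_psw_to_normal(uint64_t spsw,
                                        uint64_t *mask, uint64_t *addr)
{
    *addr = spsw & PSW_MASK_SHORT_ADDR;   /* BA is not copied into the address */
    *mask = spsw & PSW_MASK_SHORT_CTRL;   /* BA stays in the mask */
    *mask ^= PSW_MASK_SHORTPSW;           /* invert the short-PSW bit (bit 12) */
}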

@@ -2910,19 +2910,21 @@ static DisasJumpType op_lpp(DisasContext *s, DisasOps *o)
 
 static DisasJumpType op_lpsw(DisasContext *s, DisasOps *o)
 {
-    TCGv_i64 t1, t2;
+    TCGv_i64 mask, addr;
 
     per_breaking_event(s);
 
-    t1 = tcg_temp_new_i64();
-    t2 = tcg_temp_new_i64();
-    tcg_gen_qemu_ld_i64(t1, o->in2, get_mem_index(s),
-                        MO_TEUL | MO_ALIGN_8);
-    tcg_gen_addi_i64(o->in2, o->in2, 4);
-    tcg_gen_qemu_ld32u(t2, o->in2, get_mem_index(s));
-    /* Convert the 32-bit PSW_MASK into the 64-bit PSW_MASK. */
-    tcg_gen_shli_i64(t1, t1, 32);
-    gen_helper_load_psw(cpu_env, t1, t2);
+    /*
+     * Convert the short PSW into the normal PSW, similar to what
+     * s390_cpu_load_normal() does.
+     */
+    mask = tcg_temp_new_i64();
+    addr = tcg_temp_new_i64();
+    tcg_gen_qemu_ld_i64(mask, o->in2, get_mem_index(s), MO_TEUQ | MO_ALIGN_8);
+    tcg_gen_andi_i64(addr, mask, PSW_MASK_SHORT_ADDR);
+    tcg_gen_andi_i64(mask, mask, PSW_MASK_SHORT_CTRL);
+    tcg_gen_xori_i64(mask, mask, PSW_MASK_SHORTPSW);
+    gen_helper_load_psw(cpu_env, mask, addr);
     return DISAS_NORETURN;
 }
 
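
For illustration only, a small standalone program (using the same assumed PSW_MASK_* values as the sketch above and a made-up example short PSW) that contrasts the old and the new translation: with the old scheme the BA bit leaks into the address and mask bit 12 is left as loaded, with the new one the address is the pure 31-bit address and bit 12 is inverted:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Assumed values, mirroring target/s390x/cpu.h; not part of this patch. */
#define PSW_MASK_SHORTPSW    0x0008000000000000ULL  /* PSW mask bit 12 */
#define PSW_MASK_SHORT_CTRL  0xffffffff80000000ULL  /* control bits, including BA */
#define PSW_MASK_SHORT_ADDR  0x000000007fffffffULL  /* 31-bit instruction address */

int main(void)
{
    /* Hypothetical short PSW: bit 12 set, BA (bit 32) set, address 0x1000. */
    uint64_t spsw = PSW_MASK_SHORTPSW | 0x80000000ULL | 0x1000ULL;

    /* Old translation: word 1 shifted into the mask, word 2 used as the address. */
    uint64_t old_mask = spsw & 0xffffffff00000000ULL;
    uint64_t old_addr = spsw & 0x00000000ffffffffULL;   /* BA leaks into the address */

    /* New translation: BA stays in the mask, bit 12 is inverted. */
    uint64_t new_mask = (spsw & PSW_MASK_SHORT_CTRL) ^ PSW_MASK_SHORTPSW;
    uint64_t new_addr = spsw & PSW_MASK_SHORT_ADDR;

    printf("old: mask=%016" PRIx64 " addr=%016" PRIx64 "\n", old_mask, old_addr);
    printf("new: mask=%016" PRIx64 " addr=%016" PRIx64 "\n", new_mask, new_addr);
    return 0;
}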