b6c636f2cd
Replace the fast_memmove() variants with access_memmove() variants, which
first probe access to all affected pages (the maximum is two pages).

Introduce access_get_byte()/access_set_byte(). We might be able to speed up
memmove in special cases even further (do single-byte accesses, use memmove()
for the remaining bytes in the page); however, we'll skip that for now.

In MVCOS, simply always call access_memmove_as() and drop the TODO about LAP.
LAP is already handled in the MMU.

Get rid of adj_len_to_page(), which is now unused.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
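The idea behind the access_memmove() variants is to probe every affected page up front, so that any access exception is raised before a single byte has been modified and a fault can never leave a partial copy behind. Below is a minimal, self-contained C sketch of that probe-before-copy pattern under simplified assumptions: the flat guest_mem array, the 4 KiB PAGE_SIZE, and the probe_page() helper are hypothetical stand-ins for illustration and are not the QEMU helpers this commit introduces.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096u
#define PAGE_MASK (PAGE_SIZE - 1u)

/* Hypothetical flat "guest memory"; real code translates each page. */
static uint8_t guest_mem[4 * PAGE_SIZE];

/* Hypothetical probe: check that [addr, addr+len) is accessible.
 * In the real helpers this is where an access exception would be raised. */
static bool probe_page(uint64_t addr, size_t len, bool is_write)
{
    (void)is_write;
    return addr + len <= sizeof(guest_mem);
}

static bool access_memmove_sketch(uint64_t dst, uint64_t src, size_t len)
{
    /* Bytes remaining in the first source/destination page. */
    size_t src_first = PAGE_SIZE - (src & PAGE_MASK);
    size_t dst_first = PAGE_SIZE - (dst & PAGE_MASK);

    /* Probe all affected pages (at most two per operand here) before
     * any byte is written. */
    if (!probe_page(src, len < src_first ? len : src_first, false) ||
        (len > src_first &&
         !probe_page(src + src_first, len - src_first, false)) ||
        !probe_page(dst, len < dst_first ? len : dst_first, true) ||
        (len > dst_first &&
         !probe_page(dst + dst_first, len - dst_first, true))) {
        return false; /* a real implementation would raise an exception */
    }

    /* Every page is known to be accessible; the copy itself cannot fault. */
    memmove(&guest_mem[dst], &guest_mem[src], len);
    return true;
}

int main(void)
{
    memset(guest_mem, 0xaa, sizeof(guest_mem));
    /* Copy 100 bytes across a page boundary. */
    printf("ok=%d\n", access_memmove_sketch(PAGE_SIZE - 50, 0, 100));
    return 0;
}
```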