Message ID | 20180405134702.5020-1-jcmvbkbc@gmail.com (mailing list archive)
---|---
State | New, archived
On 05/04/2018 at 15:47, Max Filippov wrote:
> preadv/pwritev accept low and high parts of file offset in two separate
> parameters. When host bitness doesn't match guest bitness these parts
> must be appropriately recombined.
> Introduce target_to_host_low_high that does this recombination and use
> it in preadv/pwritev syscalls.
>
> This fixes glibc testsuite test misc/tst-preadvwritev64.
>
> Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
> ---
> Changes v2->v3:
> - rename helper to target_to_host_low_high;
> - handle cases TARGET_LONG_BITS * 2 > HOST_LONG_BITS and
>   TARGET_LONG_BITS < 2 * HOST_LONG_BITS correctly.
>
> Changes v1->v2:
> - fix host high computation in TARGET_LONG_BITS > HOST_LONG_BITS case
>
>  linux-user/syscall.c | 34 ++++++++++++++++++++++++++++++++--
>  1 file changed, 32 insertions(+), 2 deletions(-)
>
> diff --git a/linux-user/syscall.c b/linux-user/syscall.c
> index 5ef517613577..13ad572e0d3b 100644
> --- a/linux-user/syscall.c
> +++ b/linux-user/syscall.c
> @@ -3386,6 +3386,30 @@ static abi_long do_getsockopt(int sockfd, int level, int optname,
>      return ret;
>  }
>
> +static void target_to_host_low_high(abi_ulong tlow,
> +                                    abi_ulong thigh,
> +                                    unsigned long *hlow,
> +                                    unsigned long *hhigh)
> +{
> +#if TARGET_LONG_BITS == HOST_LONG_BITS
> +    *hlow = tlow;
> +    *hhigh = thigh;
> +#elif TARGET_LONG_BITS < HOST_LONG_BITS
> +    *hlow = tlow | (unsigned long)thigh << TARGET_LONG_BITS;
> +#if TARGET_LONG_BITS * 2 <= HOST_LONG_BITS
> +    *hhigh = 0;
> +#else
> +    *hhigh = (unsigned long)thigh >> (HOST_LONG_BITS - TARGET_LONG_BITS);
> +#endif
> +#else
> +    *hlow = (unsigned long)tlow;
> +    *hhigh = (unsigned long)(tlow >> HOST_LONG_BITS);
> +#if TARGET_LONG_BITS < 2 * HOST_LONG_BITS
> +    *hhigh |= (unsigned long)thigh << (TARGET_LONG_BITS - HOST_LONG_BITS);
> +#endif
> +#endif
> +}

Why don't you try to de-construct then re-construct the offset?

Kernel commit 601cc11d054a "Make non-compat preadv/pwritev use native
register size" is interesting.
static inline abi_llong target_pos_from_hilo(abi_ulong high, abi_ulong low)
{
#define TARGET_HALF_LONG_BITS (TARGET_LONG_BITS / 2)
    return (((abi_llong)high << TARGET_HALF_LONG_BITS) <<
            TARGET_HALF_LONG_BITS) | low;
}

#define HOST_HALF_LONG_BITS (HOST_LONG_BITS / 2)

abi_llong pos = target_pos_from_hilo(arg4, arg5);
ret = get_errno(safe_preadv(arg1, vec, arg3, pos,
                            (pos >> HOST_HALF_LONG_BITS) >> HOST_HALF_LONG_BITS));

It looks simpler, but perhaps I miss something (it's just cut&paste, I
don't pretend to understand that code...)?

Richard?

Thanks,
Laurent
On Thu, Apr 5, 2018 at 8:52 AM, Laurent Vivier <laurent@vivier.eu> wrote:
> Why don't you try to de-construct then re-construct the offset?

It would require 128-bit arithmetic on a 64-bit host.

> Kernel commit 601cc11d054a "Make non-compat preadv/pwritev use native
> register size" is interesting.
>
> static inline abi_llong target_pos_from_hilo(abi_ulong high,
>                                              abi_ulong low)
> {
> #define TARGET_HALF_LONG_BITS (TARGET_LONG_BITS / 2)
>     return (((abi_llong)high << TARGET_HALF_LONG_BITS) <<
>             TARGET_HALF_LONG_BITS) | low;
> }
>
> #define HOST_HALF_LONG_BITS (HOST_LONG_BITS / 2)
>
> abi_llong pos = target_pos_from_hilo(arg4, arg5);
> ret = get_errno(safe_preadv(arg1, vec, arg3, pos,
>                             (pos >> HOST_HALF_LONG_BITS) >> HOST_HALF_LONG_BITS));
>
> It looks simpler, but perhaps I miss something (it's just cut&paste, I
> don't pretend to understand that code...)?

If we decide that the host unsigned long long is wide enough to represent
meaningful file positions, then this function may be simplified to
something like

    unsigned long long off = (unsigned long long)tlow |
        ((unsigned long long)thigh << (TARGET_LONG_BITS / 2)) << (TARGET_LONG_BITS / 2);
    *hlow = (unsigned long)off;
    *hhigh = (unsigned long)((off >> (HOST_LONG_BITS / 2)) >> (HOST_LONG_BITS / 2));

There's also a bug that the target may specify an offset not representable
on the host; it will get truncated and point to a different location in the
file, possibly resulting in data corruption.
On 05/04/2018 at 18:27, Max Filippov wrote:
> On Thu, Apr 5, 2018 at 8:52 AM, Laurent Vivier <laurent@vivier.eu> wrote:
>> Why don't you try to de-construct then re-construct the offset?
>
> It would require 128-bit arithmetic on 64-bit host.
>
>> Kernel commit 601cc11d054a "Make non-compat preadv/pwritev use native
>> register size" is interesting.
>>
>> static inline abi_llong target_pos_from_hilo(abi_ulong high,
>>                                              abi_ulong low)
>> {
>> #define TARGET_HALF_LONG_BITS (TARGET_LONG_BITS / 2)
>>     return (((abi_llong)high << TARGET_HALF_LONG_BITS) <<
>>             TARGET_HALF_LONG_BITS) | low;
>> }
>>
>> #define HOST_HALF_LONG_BITS (HOST_LONG_BITS / 2)
>>
>> abi_llong pos = target_pos_from_hilo(arg4, arg5);
>> ret = get_errno(safe_preadv(arg1, vec, arg3, pos,
>>                             (pos >> HOST_HALF_LONG_BITS) >> HOST_HALF_LONG_BITS));
>>
>> It looks simpler, but perhaps I miss something (it's just cut&paste, I
>> don't pretend to understand that code...)?
>
> If we decide that host unsigned long long is wide enough to represent
> meaningful file positions then this function may be simplified to
> something like
>
>     unsigned long long off = (unsigned long long)tlow |
>         ((unsigned long long)thigh << (TARGET_LONG_BITS / 2)) << (TARGET_LONG_BITS / 2);
>     *hlow = (unsigned long)off;
>     *hhigh = (unsigned long)((off >> (HOST_LONG_BITS / 2)) >> (HOST_LONG_BITS / 2));
>
> There's also a bug that the target may specify an offset not representable
> on the host, that will get truncated and point to a different location in the
> file, possibly resulting in data corruption.

I don't think that can happen, because "long long" is always 64-bit, so it
fits in two 32-bit halves (long is 32-bit on a 32-bit host and 64-bit on a
64-bit host).

Thanks,
Laurent
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
index 5ef517613577..13ad572e0d3b 100644
--- a/linux-user/syscall.c
+++ b/linux-user/syscall.c
@@ -3386,6 +3386,30 @@ static abi_long do_getsockopt(int sockfd, int level, int optname,
     return ret;
 }
 
+static void target_to_host_low_high(abi_ulong tlow,
+                                    abi_ulong thigh,
+                                    unsigned long *hlow,
+                                    unsigned long *hhigh)
+{
+#if TARGET_LONG_BITS == HOST_LONG_BITS
+    *hlow = tlow;
+    *hhigh = thigh;
+#elif TARGET_LONG_BITS < HOST_LONG_BITS
+    *hlow = tlow | (unsigned long)thigh << TARGET_LONG_BITS;
+#if TARGET_LONG_BITS * 2 <= HOST_LONG_BITS
+    *hhigh = 0;
+#else
+    *hhigh = (unsigned long)thigh >> (HOST_LONG_BITS - TARGET_LONG_BITS);
+#endif
+#else
+    *hlow = (unsigned long)tlow;
+    *hhigh = (unsigned long)(tlow >> HOST_LONG_BITS);
+#if TARGET_LONG_BITS < 2 * HOST_LONG_BITS
+    *hhigh |= (unsigned long)thigh << (TARGET_LONG_BITS - HOST_LONG_BITS);
+#endif
+#endif
+}
+
 static struct iovec *lock_iovec(int type, abi_ulong target_addr,
                                 abi_ulong count, int copy)
 {
@@ -10449,7 +10473,10 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
         {
             struct iovec *vec = lock_iovec(VERIFY_WRITE, arg2, arg3, 0);
             if (vec != NULL) {
-                ret = get_errno(safe_preadv(arg1, vec, arg3, arg4, arg5));
+                unsigned long low, high;
+
+                target_to_host_low_high(arg4, arg5, &low, &high);
+                ret = get_errno(safe_preadv(arg1, vec, arg3, low, high));
                 unlock_iovec(vec, arg2, arg3, 1);
             } else {
                 ret = -host_to_target_errno(errno);
@@ -10462,7 +10489,10 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
         {
             struct iovec *vec = lock_iovec(VERIFY_READ, arg2, arg3, 1);
             if (vec != NULL) {
-                ret = get_errno(safe_pwritev(arg1, vec, arg3, arg4, arg5));
+                unsigned long low, high;
+
+                target_to_host_low_high(arg4, arg5, &low, &high);
+                ret = get_errno(safe_pwritev(arg1, vec, arg3, low, high));
                 unlock_iovec(vec, arg2, arg3, 0);
             } else {
                 ret = -host_to_target_errno(errno);
preadv/pwritev accept low and high parts of file offset in two separate
parameters. When host bitness doesn't match guest bitness these parts
must be appropriately recombined.

Introduce target_to_host_low_high that does this recombination and use
it in preadv/pwritev syscalls.

This fixes glibc testsuite test misc/tst-preadvwritev64.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
---
Changes v2->v3:
- rename helper to target_to_host_low_high;
- handle cases TARGET_LONG_BITS * 2 > HOST_LONG_BITS and
  TARGET_LONG_BITS < 2 * HOST_LONG_BITS correctly.

Changes v1->v2:
- fix host high computation in TARGET_LONG_BITS > HOST_LONG_BITS case

 linux-user/syscall.c | 34 ++++++++++++++++++++++++++++++++--
 1 file changed, 32 insertions(+), 2 deletions(-)