Wilco Dijkstra
Nick Maclaren said: I am getting tired of simply pointing out factual errors, and this
will be my last on this sub-thread.
Which factual errors?
|> Most compilers, including the highly optimizing ones, do almost all
|> optimization at a far lower level. This not only avoids most of the issues
|> you're talking about, but it also ensures badly behaved programs are
|> correctly optimized, while well behaved programs are still optimized
|> aggressively.
I spent 10 years managing a wide range of HPC machines (and have advised
on such uses for much longer). You are wrong in all respects, as you
can find out if you look. Try Sun's and IBM's compiler documentation,
for a start, and most of the others (though I can't now remember which).
Your claims that it isn't a problem would make anyone with significant
HPC experience laugh hollowly. Few other people use aggressive
optimisation on whole, complicated programs. Even I don't, for most
code.
And I laugh in their faces at their claims of creating a "highly optimizing
compiler" that generates incorrect code! Any idiot can write a highly
optimizing compiler if it doesn't need to be correct... I know that many
of the issues are caused by optimizations originally written for other
languages (e.g. Fortran has pretty loose aliasing rules) which require
more checks to be safe in C.
My point is that compilers have to compile existing code correctly - even
if it is written badly. It isn't hard to recognise the nasty cases; for example,
it's common to do *(T*)&var to convert between integer and floating point.
Various compilers treat this as an idiom and use direct int<->FP moves,
which are more efficient. So this particular case wouldn't even show up
when doing type-based alias analysis.
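To illustrate, a rough sketch of that idiom (the function names are mine,
and it assumes 32-bit floats and the <stdint.h> types):

    #include <stdint.h>
    #include <string.h>

    /* The idiom discussed above: reinterpret the bits of a float as a
       32-bit integer. Strictly speaking this violates the aliasing rules,
       but many compilers recognise it and emit a direct FP->int move. */
    static inline uint32_t float_bits(float f)
    {
        return *(uint32_t *)&f;
    }

    /* The well-defined way of writing the same thing; a decent compiler
       turns the memcpy into the same single move. */
    static inline uint32_t float_bits_safe(float f)
    {
        uint32_t u;
        memcpy(&u, &f, sizeof u);
        return u;
    }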
|> S370, Alpha and PA-RISC all support arithmetic right shifts. There
|> is no information available on the S-3600.
All or almost all of those use only the bottom few bits of the shift.
That is typical of all implementations, but it is not a big issue, and the
standard is correct in this respect.
I can't remember the recent systems that had only unsigned shifts, but
they may have been in one of the various SIMD extensions to various
architectures.
Even if you only have unsigned shifts, you can still emulate arithmetic
ones. My point is there is no excuse for getting them wrong, even if
your name is Cray and you can improve cycle time by not supporting
them in hardware.
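To show what I mean, here is a minimal sketch (my own code, assuming two's
complement and a shift count smaller than the type width) of building an
arithmetic right shift out of a logical one:

    #include <stdint.h>

    /* Emulate x >> n (arithmetic) using only a logical shift: shift the
       bits, then re-extend the sign with an XOR/subtract pair. Assumes
       two's complement and 0 <= n < 32. */
    static inline int32_t asr_emulated(int32_t x, unsigned n)
    {
        uint32_t sign = UINT32_C(1) << 31;
        uint32_t t = ((uint32_t)x >> n) ^ (sign >> n);
        /* The final conversion relies on the usual two's complement
           wrap-around when the value doesn't fit in int32_t. */
        return (int32_t)(t - (sign >> n));
    }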
|> > Signed left shifts are undefined only if they overflow; that is undefined
|> > because anything can happen (including the CPU stopping). Signed right
|> > shifts are only implementation defined for negative values; that is
|> > because they might be implemented as unsigned shifts.
|>
|> No. The standard is quite explicit that any left shift of a negative value
|> is undefined, even if there is no overflow. This is an inconsistency,
|> as compilers change multiplies by a power of 2 into a left shift and vice
|> versa. There is no similar undefined behaviour for multiplies, however.
From the standard:
[#4] The result of E1 << E2 is E1 left-shifted E2 bit
positions; vacated bits are filled with zeros. If E1 has an
unsigned type, the value of the result is E1×2^E2, reduced
modulo one more than the maximum value representable in the
result type. If E1 has a signed type and nonnegative value,
and E1×2^E2 is representable in the result type, then that is
the resulting value; otherwise, the behavior is undefined.
Exactly my point. It clearly states that ALL left shifts of negative values are
undefined, EVEN if the result would be representable. The "and nonnegative value"
excludes negative values! The correct wording should be something like:
"If E1 has a signed type and E1×2^E2 is representable in the result type, then
that is the resulting value; otherwise, the behavior is implementation defined."
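A minimal example of the inconsistency (nothing more than an illustration):

    #include <stdio.h>

    int main(void)
    {
        int x = -3;
        int a = x * 4;    /* well defined: -12, no overflow occurs */
        int b = x << 2;   /* undefined behaviour by the paragraph quoted
                             above, even though -12 is representable */
        printf("%d %d\n", a, b);
        return 0;
    }

The multiply is perfectly well defined, yet the shift that a compiler would
like to turn it into is not.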
Wilco