Wilco Dijkstra
Nick Maclaren said:
|> > |> It's certainly true that the C standard is one of the worst specified. However,
|> > |> most compiler writers agree about the major omissions, and platforms have ABIs
|> > |> that specify everything else needed for binary compatibility (that includes
|> > |> features like volatile, bitfield details, etc.). So things are not as bad in reality.
|> >
|> > Er, no. I have a LOT of experience with serious code porting, and
|> > am used as an expert of last resort. Most niches have their own
|> > interpretations of C, but none of them use the same ones, and only
|> > programmers with a very wide experience can write portable code.
|>
|> Can you give examples of such different interpretations? There are a
|> few areas that people disagree about, but it often doesn't matter much.
> It does as soon as you switch on serious optimisation, or use a CPU
> with unusual characteristics; both are common in HPC and rare outside
> it. Note that compilers like gcc do not have any options that count
> as serious optimisation.
Which particular loop optimizations do you mean? I worked on a compiler
that did advanced HPC loop optimizations. I found a lot of bugs in those
optimizations, but none had anything to do with the interpretation of the
C standard. Do you have an example?
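For concreteness, one area where the standard's aliasing rules do meet
loop optimization; a minimal sketch invented for illustration, not an
example from either poster's compilers:

    /* Whether this loop may be vectorized depends on what the compiler
     * is allowed to assume about overlap between x and y.  The C99
     * 'restrict' qualifier makes the no-alias guarantee explicit. */
    void axpy(int n, double k, double * restrict y,
              const double * restrict x)
    {
        for (int i = 0; i < n; i++)
            y[i] += k * x[i];
    }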
> I could send you my Objects diatribe, which describes one aspect,
> unless you already have it. You can also add anything involving
> sequence points (including functions in the library that may be
> implemented as macros), anything involving alignment, when a library
> function must return an error (if ever), and when it is allowed to
> flag no error and go bananas. And more.
You have to give more specific examples of differences of interpretation.
I'd like to hear about failures of real software as a direct result of these
differences. I haven't seen any in over 12 years of compiler design, apart
from obviously broken compilers.
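To make the sequence-point issue concrete, a minimal invented example
(not one of the cases referred to above):

    #include <stdio.h>

    int main(void)
    {
        int i = 1;
        int a[3] = { 0, 0, 0 };

        /* Undefined behaviour: i is modified and also read for another
         * purpose without an intervening sequence point.  One compiler
         * may store into a[1], another into a[2]; neither is broken. */
        a[i] = i++;

        printf("%d %d %d\n", a[0], a[1], a[2]);
        return 0;
    }

    /* Similarly, a library function such as getc() may legally be a
     * macro that evaluates its argument more than once, so
     * getc(*fpp++) is a portability trap. */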
|> Interestingly, most code is widely portable despite most programmers
|> having little understanding of portability and violating the C standard in
|> almost every respect.
> That is completely wrong, as you will discover if you ever need to
> port to a system that isn't just a variant of one you are familiar
> with. Perhaps 1% of even the better 'public domain' sources will
> compile and run on such systems - I got a lot of messages from
> people flabbergasted that my C did.
I bet that most code will compile and run without too much trouble.
C doesn't allow that much variation in targets. And the variation it
does allow (e.g. ones' complement) is not something sane CPU
designers would consider nowadays.
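To illustrate the kind of representation variation in question, a
minimal invented example: bitwise operations on negative values are
where the three signed representations the standard permits diverge.

    #include <stdio.h>

    int main(void)
    {
        /* Bitwise AND operates on the representation, so the result
         * depends on which of the three permitted forms is in use:
         *   two's complement:  -1 is ...1111, so -1 & 3 == 3
         *   ones' complement:  -1 is ...1110, so -1 & 3 == 2
         *   sign-magnitude:    -1 is 10...01, so -1 & 3 == 1
         * Almost all real code silently assumes the first answer. */
        printf("%d\n", -1 & 3);
        return 0;
    }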
|> Actually you don't need any "autoconfiguring" in C. Much of that was
|> needed due to badly broken non-conformant Unix compilers. I do see
|> such a terrible mess every now and again, with people declaring builtin
|> functions incorrectly because otherwise "it wouldn't compile on compiler X"...
> Many of those are actually defects in the standard, if you look more
> closely.
I did look closely at some of the issues at the time, but they had nothing
to do with the standard; it was just working around broken compilers.
There is also a lot of software around that blatantly assumes there is
|> Properly sized types like int32_t have finally been standardized, so the
|> only configuration you need is the selection between the various extensions
|> that have not yet been standardized (although things like __declspec are
|> widely accepted nowadays).
"Properly sized types like int32_t", forsooth! Those abominations
are precisely the wrong way to achieve portability over a wide range
of systems or over the long term. I shall be dead and buried when
the 64->128 change hits, but people will discover their error then,
oh, yes, they will!
Not specifying the exact size of types is one of C's worst mistakes.
Using sized types is the right way to achieve portability over a wide
range of existing and future systems (including ones that have different
register sizes). The change to 128-bit is not going to affect software
written this way, precisely because it already uses correctly sized types.
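A minimal sketch of the style defended here, using the C99 <stdint.h>
types (invented for illustration, not code from the thread):

    #include <stdint.h>

    /* A file/wire format header: every field has the same width and
     * range on every conforming implementation, regardless of the
     * native register size (byte order and padding still need care). */
    struct packet_header {
        uint32_t magic;      /* exactly 32 bits everywhere */
        uint16_t version;    /* exactly 16 bits everywhere */
        uint16_t length;
        int64_t  timestamp;  /* unchanged by a 64->128 transition */
    };

    /* Written with plain 'int' and 'long' instead, the layout would
     * differ between ILP32, LP64 and any future 128-bit ABI. */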
|> > A simple question: have you ever ported a significant amount of
|> > code (say, > 250,000 lines in > 10 independent programs written
|> > by people you have no contact with) to a system with a conforming
|> > C system, based on different concepts to anything the authors
|> > were familiar with? I have.
|>
|> I've done a lot of porting and know most of the problems. It's not nearly
|> as bad as you claim. Many "porting" issues are actually caused by bugs
|> and limitations in the underlying OS. I suggest that your experience is
|> partly colored by the fact that people ask you as a last resort.
> Partly, yes. But I am pretty certain that my experience is a lot
> wider than yours. I really do mean different CONCEPTS - start with
> IBM MVS and move on to a Hitachi SR2201, just during the C era.
It's true that supercomputers of the past had wacky integer sizes and formats,
or only supported 64-bit int/double and nothing else. But these systems
weren't designed to run off-the-shelf C; they were built to run FP code fast
(i.e. Fortran, not C). In any case, I'm pretty certain my experience applies to a
much larger market than yours.
Wilco