Maker Pro

Re: Intel details future Larrabee graphics chip

Nick Maclaren

This is getting ridiculously off-topic, and this will be my last
posting on this sub-thread.

|>
|> I know we can go on, but probably mean the same thing in the end,
|> but 'the program would not need inspecting (or changing)' sounds a
|> bit, eh, daring, if not wrong.
|> Take the example of a program that concatenates some wave files to
|> one larger one.
|> It will first read all headers, add the sizes, and then, if it finds the
|> output exceeds 4GB say: 'myprogram: output exceeds 4GB, aborting.'
|> So, the size check, and the reporting, would need to change, in any case.

That is not how to approach such a problem. Inter alia, it prevents
the program from concatenating files in a format with a 4 GB limit
and writing them to one with a larger limit. You should write it
like this:

Each header is read and decoded, and the length is put in an internal-format
integer.

The concatenation code adds the lengths, checking that they don't
overflow, and giving a diagnostic if they do.

It then writes the result to the output, checking that the file will
fit, and diagnosing if it won't.
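
A minimal sketch in C of that structure (my illustration, not from any real
program; the function name and the use of uintmax_t as the internal type are
assumptions):

#include <stdint.h>
#include <stdio.h>

/* Sum the decoded lengths in a wide internal type, checking for overflow,
 * then check the total against whatever limit the output format imposes. */
int check_total_length(const uintmax_t *len, size_t n, uintmax_t out_limit)
{
    uintmax_t total = 0;

    for (size_t i = 0; i < n; i++) {
        if (len[i] > UINTMAX_MAX - total) {
            fprintf(stderr, "total length overflows the internal counter\n");
            return -1;
        }
        total += len[i];
    }
    if (total > out_limit) {
        fprintf(stderr, "output would exceed the output format's limit\n");
        return -1;
    }
    return 0;
}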


Regards,
Nick Maclaren.
 
Wilco Dijkstra

Martin Brown said:
The worst pointer-related faults I have ever had to find were as an outsider diagnosing faults in a customer's large
software base. The crucial mode of failure was a local copy of a pointer to an object that was subsequently
deallocated but stayed around unmolested for long enough for the program to mostly still work except when it didn't.

Those are very nasty indeed. However, they aren't strictly pointer-related:
a language without pointers suffers from the same issue (even garbage
collection doesn't solve this kind of problem). Valgrind is good at finding
issues like this, and automatic checking tools have improved significantly
over the last 10 years.

The worst problem I've seen was a union of a pointer and an integer that
was used as a set of booleans, and the code confused the two. So the
last few bits of the pointer were sometimes being changed by setting or
clearing the booleans. Similarly, the values of the booleans were different
on different systems, or if you changed command-line options, compiled
for debug, etc.
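
A contrived sketch of that kind of bug (my own, not the code in question; the
names are made up):

#include <stdint.h>
#include <stdio.h>

union handle {
    int      *ptr;   /* sometimes holds a real pointer...                 */
    uintptr_t bits;  /* ...sometimes the low bits are treated as booleans */
};

int main(void)
{
    int value = 42;
    union handle h;

    h.ptr = &value;
    h.bits |= 0x3;   /* "set two booleans": silently changes the low bits of
                        the pointer, so it only works if the pointer happened
                        to be suitably aligned and the bits are masked again */

    int *p = (int *)(h.bits & ~(uintptr_t)0x3);   /* forget this mask and the
                                                     dereference goes wrong  */
    printf("%d\n", *p);
    return 0;
}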

Wilco
 
Jan Panteltje

This is getting ridiculously off-topic, and this will be my last
posting on this sub-thread.
OK.


|>
|> I know we can go on, but probably mean the same thing in the end,
|> but 'the program would not need inspecting (or changing)' sounds a
|> bit, eh, daring, if not wrong.
|> Take the example of a program that concatenates some wave files to
|> one larger one.
|> It will first read all headers, add the sizes, and then, if it finds the
|> output exceeds 4GB say: 'myprogram: output exceeds 4GB, aborting.'
|> So, the size check, and the reporting, would need to change, in any case.

That is not how to approach such a problem. Inter alia, it prevents
the program from concatenating files in a format with a 4 GB limit
and writing them to one with a larger limit. You should write it
like this:

Each header is read and decoded, and the length is put in an internal-format
integer.

Yes, this is what I do, so?

The concatenation code adds the lengths, checking that they don't
overflow, and giving a diagnostic if they do.

Yes, and 'overflow' is set by the format it reads, in this case the
wave format, and that is fixed at 4GB.

It then writes the result to the output, checking that the file will
fit, and diagnosing if it won't.

No, it only writes the output if it fits; if it does not, then it switches to
raw mode (actually it proposes a new command line), as raw mode has no
file limit (other than OS and filesystem limitations).

See, I *wrote* this program for real, you are only dreaming about one.
 
Nick Maclaren

|>
|> I'd certainly be interested in the document. My email is above, just make
|> the obvious edit.

Sent.

|> > |> I bet that most code will compile and run without too much trouble.
|> > |> C doesn't allow that much variation in targets. And the variation it
|> > |> does allow (eg. one-complement) is not something sane CPU
|> > |> designers would consider nowadays.
|> >
|> > The mind boggles. Have you READ the C standard?
|>
|> More than that. I've implemented it. Have you?

Some of it, in an extremely hostile environment. However, that is a lot
LESS than having written programs that get ported to radically different
systems - especially ones that you haven't heard of when you wrote the
code. And my code has been so ported, often without any changes needed.

|> It's only when you implement the standard you realise many of the issues are
|> irrelevant in practice. Take sequence points for example. They are not even
|> modelled by most compilers, so whatever ambiguities there are, they simply
|> cannot become an issue.

They are relied on, heavily, by ALL compilers that do any serious
optimisation. That is why I have seen many problems caused by them,
and one reason why HPC people still prefer Fortran.
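
For readers who have not fought this battle: a classic example (mine, not
Nick's) of the kind of code an aggressive optimiser is entitled to assume
never occurs:

/* Between consecutive sequence points an object may be modified at most once,
 * and its prior value read only to determine the new value. This violates
 * that rule, so the behaviour is undefined and the optimiser may reorder or
 * combine the accesses in any way it likes. */
int sequence_point_bug(int i, int a[])
{
    a[i] = i++;   /* undefined: i is modified and also read for another
                     purpose with no intervening sequence point */
    return i;
}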

|> Similarly various standards pedants are moaning
|> about shifts not being portable, but they can never mention a compiler that
|> fails to implement them as expected...

Shifts are portable if you code them according to the rules, and don't
rely on unspecified behaviour. I have used compilers that treated
signed right shifts as unsigned, as well as ones that used only the
bottom 5/6/8 bits of the shift value, and ones that raised a 'signal'
on left shift overflow. There are good reasons for all of the
constraints.

No, I can't remember which, offhand, but they included the ones for
the System/370 and Hitachi S-3600. But there were also some
microprocessor ones - PA-RISC? Alpha?

|> Btw Do you happen to know the reasoning behind signed left shifts being
|> undefined while right shifts are implementation defined.

Signed left shifts are undefined only if they overflow; that is undefined
because anything can happen (including the CPU stopping). Signed right
shifts are only implementation defined for negative values; that is
because they might be implemented as unsigned shifts.
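
As an illustration of coding shifts "according to the rules" (a sketch of
mine, not from the thread): keep the count in range and do the bit
manipulation on an unsigned type, and none of those implementation
differences matter.

#include <stdint.h>

/* Rotate a 32-bit value left without ever shifting by 32 (undefined) or
 * shifting a signed value (implementation-defined or undefined). */
uint32_t rotate_left32(uint32_t x, unsigned n)
{
    n &= 31;              /* keep the shift count in 0..31                */
    if (n == 0)
        return x;         /* avoid a right shift by 32, which is undefined */
    return (x << n) | (x >> (32 - n));
}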

|> It will work as long as the compiler supports a 32-bit type - which it will of
|> course. But in the infinitesimal chance it doesn't, why couldn't one
|> emulate a 32-bit type, just like 32-bit systems emulate 64-bit types?

Because then you can't handle the 64-bit objects returned from the
library or read in from files! Portable programs will handle whatever
size of object the system supports, without change - 32-bit, 64-bit,
48-bit, 128-bit or whatever.

|> Actually various other languages support sized types and most software
|> used them long before C99. In many cases it is essential for correctness
|> (imagine writing 32 bits to a peripheral when it expects 16 bits etc). So
|> you really have to come up with some extraordinary evidence to explain
|> why you think sized types are fundamentally wrong.

Not at all. That applies ONLY to the actual external interface, and
Terje and I have explained why C fixed-size types don't help.


Regards,
Nick Maclaren.
 
Jan Vorbrüggen

That's not what happened. They hired David Cutler from DEC, where he
had worked on VMS, and pretty much left him alone. The chaos was and
is part of the culture of modern programming.

His work was significantly more disciplined when he worked for DEC than
what was the result from Redmond. But he didn't have a choice: Backward
compatibility, bug for bug and misfeature for misfeature, rule(d|s)
supreme in the Windows realm.

Jan
 
Nick Maclaren

|>
|> > That's not what happened. They hired David Cutler from DEC, where he
|> > had worked on VMS, and pretty much left him alone. The chaos was and
|> > is part of the culture of modern programming.
|>
|> His work was significantly more disciplined when he worked for DEC than
|> what was the result from Redmond. But he didn't have a choice: Backward
|> compatibility, bug for bug and misfeature for misfeature, rule(d|s)
|> supreme in the Windows realm.

Oh, it was worse than that! After he had done the initial design
(which was reasonable, if not excellent), he was elbowed out, and
half of his design was thrown out to placate the god Benchmarketing.

The aspect that I remember was that the GUI was brought back from
where he had exiled it to the 'kernel' - and, as we all know, the
GUIs are the source of all ills on modern systems :-(


Regards,
Nick Maclaren.
 
Chris M. Thomasson

Jan Vorbrüggen said:
His work was significantly more disciplined when he worked for DEC than
what was the result from Redmond.





But he didn't have a choice: Backward compatibility, bug for bug and
misfeature for misfeature, rule(d|s) supreme in the Windows realm.

;^(...
 
Nick Maclaren

|>
|> That did confuse me a little. The book has him holding out, against
|> Gates even, for a small kernel with a client-server relationship to
|> everything else, including all the graphics. The story ends happily
|> there, with nothing left to do but fix the circa 1000 bugs initially
|> shipped. I suppose the kernel was trashed/bloated later in the name of
|> speed.

Right in one - at least according to my understanding!

I read the book, and felt that it was a significant advance, though
not as far as the best of the research systems (e.g. the Cambridge
CHAOS system on CAP).


Regards,
Nick Maclaren.
 
Jan Vorbrüggen

Oh, it was worse than that! After he had done the initial design
(which was reasonable, if not excellent), he was elbowed out, and
half of his design was thrown out to placate the god Benchmarketing.

The aspect that I remember was that the GUI was brought back from
where he had exiled it to the 'kernel' - and, as we all know, the
GUIs are the source of all ills on modern systems :-(

Yep - I think that was part of the 3.51 to 4.0 transition. As I
understand it, the thing was just too resource-hungry for the hardware
of the day to be marketable in that state.

But in addition, there were things like not checking syscall arguments
in the kernel - something the VMS guys had been religious about. It was
only after people came out with a CRASHME variant that MS was shamed
into fixing those, for instance. That really shows their lack of discipline.

Jan
 
Nick Maclaren

|>
|> > Oh, it was worse than that! After he had done the initial design
|> > (which was reasonable, if not excellent), he was elbowed out, and
|> > half of his design was thrown out to placate the god Benchmarketing.
|> >
|> > The aspect that I remember was that the GUI was brought back from
|> > where he had exiled it to the 'kernel' - and, as we all know, the
|> > GUIs are the source of all ills on modern systems :-(
|>
|> Yep - I think that was part of the 3.51 to 4.0 transition. As I
|> understand it, the thing was just too resource-hungry for the
|> hardware of the day to be marketable in that state.

As I heard it, that was as much an excuse as a reason. What
I heard was that it did perform like a dog, but that didn't
distinguish it from any of the other major releases. And that
problem was temporary.

The other reason I heard was that the GUI (and other components?)
were so repulsive that moving all of their privileged actions to
the other side of an interface (ANY interface) was beyond the
programmers. But they didn't want to admit that, so they mounted
an internal propaganda campaign about the performance.

However, you know what such stories are like. I neither believe
nor disbelieve it.


Regards,
Nick Maclaren.
 
Dirk Bruere at NeoPax

Jan said:
Interesting.
For me, I have a hardware background, but also software, the two
came together with FPGA, when I wanted to implement DES as fast as possible.
I did wind up with just a bunch of gates and 1 clock cycle, so no program :)
No loops (all unfolded in hardware).
So, you need to define some boundary between hardware resources (that one used a lot of gates),
and software resources, I think.

Unless you blur the boundary further by using on-the-fly reprogrammable
gate arrays.

--
Dirk

http://www.transcendence.me.uk/ - Transcendence UK
http://www.theconsensus.org/ - A UK political party
http://www.onetribe.me.uk/wordpress/?cat=5 - Our podcasts on weird stuff
 
Michel Hack

Btw Do you happen to know the reasoning behind signed left shifts being
undefined while right shifts are implementation defined?

On some machines the high-order bit is shifted out; on others (e.g. S/370)
it remains unchanged: 0x80000001 << 1 can become 0x80000002 and not
0x00000002 in a 32-bit register. The S/370 way parallels the common
sign-propagation method of arithmetic right shifts: the sign does not change.
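
In C terms, the S/370 behaviour Michel describes amounts to something like
the following sketch (mine; it assumes n < 32 and a 32-bit register, per his
description):

#include <stdint.h>

/* Left shift that leaves the sign bit untouched: only the 31 magnitude bits
 * move, and zeros come in from the right. */
uint32_t shift_left_sign_preserving(uint32_t x, unsigned n)
{
    uint32_t sign = x & 0x80000000u;
    return sign | ((x << n) & 0x7fffffffu);
}

/* shift_left_sign_preserving(0x80000001u, 1) == 0x80000002, not 0x00000002 */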
 
Wilco Dijkstra

Nick Maclaren said:
|>
|> I'd certainly be interested in the document. My email is above, just make
|> the obvious edit.

Sent.

Thanks, I've received it, I'll have a look at it soon (it's big...).
|> > |> I bet that most code will compile and run without too much trouble.
|> > |> C doesn't allow that much variation in targets. And the variation it
|> > |> does allow (eg. one-complement) is not something sane CPU
|> > |> designers would consider nowadays.
|> >
|> > The mind boggles. Have you READ the C standard?
|>
|> More than that. I've implemented it. Have you?

Some of it, in an extremely hostile environment. However, that is a lot
LESS than having written programs that get ported to radically different
systems - especially ones that you haven't heard of when you wrote the
code. And my code has been so ported, often without any changes needed.

My point is that such weird systems no longer get designed. The world has
standardized on two's complement, 8-bit char, 32-bit int etc., and that is
unlikely to change. Given that, there isn't much variation possible.

Putting in extra effort to allow for a theoretical system with a sign-magnitude
5-bit char or a 31-bit ones'-complement int is completely insane.
|> It's only when you implement the standard you realise many of the issues are
|> irrelevant in practice. Take sequence points for example. They are not even
|> modelled by most compilers, so whatever ambiguities there are, they simply
|> cannot become an issue.

They are relied on, heavily, by ALL compilers that do any serious
optimisation. That is why I have seen many problems caused by them,
and one reason why HPC people still prefer Fortran.

It's only source-to-source optimizers that might need to consider these
issues, but these are very rare (we bought one of the few still available).

Most compilers, including the highly optimizing ones, do almost all
optimization at a far lower level. This not only avoids most of the issues
you're talking about, but it also ensures badly behaved programs are
correctly optimized, while well behaved programs are still optimized
aggressively.
|> Similarly various standards pedants are moaning
|> about shifts not being portable, but they can never mention a compiler that
|> fails to implement them as expected...

Shifts are portable if you code them according to the rules, and don't
rely on unspecified behaviour. I have used compilers that treated
signed right shifts as unsigned, as well as ones that used only the
bottom 5/6/8 bits of the shift value, and ones that raised a 'signal'
on left shift overflow. There are good reasons for all of the
constraints.

No, I can't remember which, offhand, but they included the ones for
the System/370 and Hitachi S-3600. But there were also some
microprocessor ones - PA-RISC? Alpha?

S370, Alpha and PA-RISC all support arithmetic right shifts. There
is no information available on the S-3600.
|> Btw Do you happen to know the reasoning behind signed left shifts being
|> undefined while right shifts are implementation defined.

Signed left shifts are undefined only if they overflow; that is undefined
because anything can happen (including the CPU stopping). Signed right
shifts are only implementation defined for negative values; that is
because they might be implemented as unsigned shifts.

No. The standard is quite explicit that any left shift of a negative value
is undefined, even if there is no overflow. This is an inconsistency, as
compilers change multiplies by a power of 2 into a left shift and vice
versa. There is no similar undefined behaviour for multiplies, however.
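
To make the inconsistency concrete (my example, not Wilco's):

/* The multiply is fully defined for any value that does not overflow,
 * including negative ones; the "equivalent" shift of a negative value is
 * formally undefined, even though no overflow is involved. */
int twice_mul(int x) { return x * 2;  }   /* twice_mul(-3) is -6, well defined  */
int twice_shl(int x) { return x << 1; }   /* twice_shl(-3) is undefined per the
                                             standard, though compilers give -6 */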
|> It will work as long as the compiler supports a 32-bit type - which it will of
|> course. But in the infinitesimal chance it doesn't, why couldn't one
|> emulate a 32-bit type, just like 32-bit systems emulate 64-bit types?

Because then you can't handle the 64-bit objects returned from the
library or read in from files!

You're missing the point. A theoretical 64-bit CPU that only supports
64-bit operations could emulate support for 8-bit char, 16-bit short,
32-bit int. Without such emulation it would need 64-bit char, 128-bit
short/int, 256-bit int/long in order to support C. Alpha is proof this is
perfectly feasible: the early versions emulated 8/16-bit types in
software without too much overhead.
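
A rough sketch of the kind of emulation meant here (mine, not actual Alpha
code; it assumes little-endian byte numbering within the word):

#include <stdint.h>

/* Emulate a byte store on a machine that only has 64-bit loads and stores:
 * load the containing word, merge the byte in, store the word back. */
void store_byte_emulated(uint64_t *mem, uint64_t byte_addr, uint8_t value)
{
    uint64_t word  = mem[byte_addr / 8];            /* word-sized load       */
    unsigned shift = (unsigned)(byte_addr % 8) * 8; /* byte position in word */

    word &= ~((uint64_t)0xff << shift);             /* clear the target byte */
    word |=  (uint64_t)value << shift;              /* insert the new byte   */
    mem[byte_addr / 8] = word;                      /* word-sized store      */
}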

Once we agree that it is feasible to emulate types, it is reasonable to
mandate that each implementation supports the sized types.

Wilco
 
Wilco Dijkstra

Terje Mathisen said:
<BG>

That is _identical_ to the code I originally wrote as part of my post, but then deleted as it didn't really add to my
argument. :)

There are of course many possible alternative methods, including inline asm to use a hardware bitscan opcode.

Here's a possibly faster version:

int log2_floor(unsigned x)
{
    int n = 0;

    if (x == 0)
        return -1;              /* log2 of zero is undefined */
    while (x >= 0x10000) {
        n += 16;
        x >>= 16;
    }
    if (x >= 0x100) {
        n += 8;
        x >>= 8;
    }
    if (x >= 0x10) {
        n += 4;
        x >>= 4;
    }
    /* At this point x has been reduced to the 0-15 range, use a
     * register-internal lookup table (2 bits per entry, holding
     * floor(log2(x)) for each x):
     */
    uint32_t lookup_table = 0xffffaa50;
    int lookup = (int) (lookup_table >> (x+x)) & 3;

    return n + lookup;
}

I like the lookup in a register method. I once did something like this:

uint8 table[32] = { ... };

int log2_floor(unsigned x)
{
    if (x == 0)
        return -1;
    x |= x >> 1;
    x |= x >> 2;
    x |= x >> 4;
    x |= x >> 8;
    x |= x >> 16;
    x *= 0x...             // multiply with magic constant
    return table[x >> 27]; // index into table
}

The shifted ORs force all bits after the leading one to be set too. This
reduces the number of possibilities to just 32. The multiply then shifts
the magic constant by N bits. It is chosen so that the top 5 bits end up
containing a unique bit pattern for each of the 32 possible values of x.
It took 10 instructions plus 32 bytes of table. Placing the table immediately
after the return instruction allowed the use of the LDRB r0,[PC,r0,LSR #27]
instruction, so it didn't even need an instruction to create the table address...

Wilco
 
Wilco Dijkstra

Michel Hack said:
On some machines the high-order bit is shifted out; on others (e.g. S/370)
it remains unchanged: 0x80000001 << 1 can become 0x80000002 and not
0x00000002 in a 32-bit register. The S/370 way parallels the common
sign-propagation method of arithmetic right shifts: the sign does not change.

It would be correct as long as there is no overflow, i.e. 0xffffffff << 1
becomes 0xfffffffe as expected. It's only when you have overflow that things
are different (for both positive and negative signed numbers). The obvious
implementations of left shift on ones'-complement and sign-magnitude systems
also work as expected. So it looks like the C standard is incorrect here.

Wilco
 
Thanks, I've received it, I'll have a look at it soon (it's big...).



My point is that such weird systems no longer get designed. The world has
standardized on two's complement, 8-bit char, 32-bit int etc., and that is
unlikely to change. Given that, there isn't much variation possible.

Byte addressability is still uncommon in the DSP world. And no, C
compilers for DSPs do not emulate char in the manner that you suggested
below. They simply treat char and short as the same thing; on 32-bit
systems char, short and long are all the same. I am pretty sure that
what they do is in full compliance with the C standard.

Putting in extra effort to allow for a theoretical system with a sign-magnitude
5-bit char or a 31-bit ones'-complement int is completely insane.

Agreed

Once we agree that it is feasible to emulate types, it is reasonable to
mandate that each implementation supports the sized types.

Wilco

It seems you overlooked the main point of Nick's concern - sized types
prevent automagical forward compatibility of the source code with
larger problems on bigger machines.
 
Andrew Reilly

Byte addressability is still uncommon in the DSP world. And no, C compilers
for DSPs do not emulate char in the manner that you suggested below. They
simply treat char and short as the same thing; on 32-bit systems char,
short and long are all the same. I am pretty sure that what they do is
in full compliance with the C standard.

To be fair to Wilco, I think that even the DSP world has been moving more
in the direction of idiomatic C support for the last decade or so. The
TI-C6000 series are all byte-addressable 32-bit systems for exactly that
reason. The poster child for char/short/int/long=32 bit was probably the
TI-C3x and C4x families, and they're not much used any more, I think.
SHARC is in that boat, but is still used a fair bit. The WE-DSP32C was a
32-bit float DSP with byte addressability some 20 years ago, so it's not
that new an idea.

I doubt that anyone would design a new DSP architecture these days that
didn't have good (i.e., idiomatic) C support, unless it's something a bit
peculiar, like the TAS3108, which is not expected to have C support at
all.

Cheers,
 
Andrew Reilly

Do you know if this is true of the 16-bitters as well? Last time I used
TI's C54x family of (16-bit) DSP, all data types were... 16 bits.

Sure, they're all fairly pure 16-bitters. I wouldn't call them new
architectures, though, which is what I was getting at. The FreeScale
56k3 series are still 24-bit word addressed too, but I'm not counting
those as modern, either.

For "modern", besides the C6000 series, I'd include the ADI/Intel
Blackfin and the VLSI ZSP series. I haven't used either of those in
anger, but I believe that they're both more-or-less "C" compliant. The
main other "newness" in DSP-land are all of the DSP-augmented RISC
processors, and they're all essentially pure "C" machines, too (alignment
issues can be worse though, and you often have to use asm or intrinsics
to get at the DSP features.)
Some years ago I used the TI GSP (graphical signal processor) 34020.

Not the 34010? I don't remember hearing about an 020 version.
Given the graphical orientation, data addresses were *bit* addresses --
it was perfectly happy to add, e.g., 16 bits somewhere in the middle of
4 bytes to 16 bits completely "misaligned" in some other set of 4 bytes.
Fun to play around with, although as one would expect performance was
better when everything was aligned.

Apart from the 8-times worse fan-out of the multiplexers, which might
limit clock speed, I don't really see how this would be much worse than
unaligned byte-addressed operations. I don't think that there are too
many people interested in single-bit-deep graphics systems any more,
though.

Cheers,
 
Nick Maclaren

I am getting tired of simply pointing out factual errors, and this
will be my last on this sub-thread.


|>
|> > |> It's only when you implement the standard you realise many of the issues are
|> > |> irrelevant in practice. Take sequence points for example. They are not even
|> > |> modelled by most compilers, so whatever ambiguities there are, they simply
|> > |> cannot become an issue.
|> >
|> > They are relied on, heavily, by ALL compilers that do any serious
|> > optimisation. That is why I have seen many problems caused by them,
|> > and one reason why HPC people still prefer Fortran.
|>
|> It's only source-to-source optimizers that might need to consider these
|> issues, but these are very rare (we bought one of the few still available).
|>
|> Most compilers, including the highly optimizing ones, do almost all
|> optimization at a far lower level. This not only avoids most of the issues
|> you're talking about, but it also ensures badly behaved programs are
|> correctly optimized, while well behaved programs are still optimized
|> aggressively.

I spent 10 years managing a wide range of HPC machines (and have advised
on such uses for much longer). You are wrong in all respects, as you
can find out if you look. Try Sun's and IBM's compiler documentation,
for a start, and most of the others (though I can't now remember which).

Your claims that it isn't a problem would make anyone with significant
HPC experience laugh hollowly. Few other people use aggressive
optimisation on whole, complicated programs. Even I don't, for most
code.

|> > |> Similarly various standards pedants are moaning
|> > |> about shifts not being portable, but they can never mention a compiler that
|> > |> fails to implement them as expected...
|> >
|> > Shifts are portable if you code them according to the rules, and don't
|> > rely on unspecified behaviour. I have used compilers that treated
|> > signed right shifts as unsigned, as well as ones that used only the
|> > bottom 5/6/8 bits of the shift value, and ones that raised a 'signal'
|> > on left shift overflow. There are good reasons for all of the
|> > constraints.
|> >
|> > No, I can't remember which, offhand, but they included the ones for
|> > the System/370 and Hitachi S-3600. But there were also some
|> > microprocessor ones - PA-RISC? Alpha?
|>
|> S370, Alpha and PA-RISC all support arithmetic right shifts. There
|> is no information available on the S-3600.

All or almost all of those use only the bottom few bits of the shift.
I can't remember the recent systems that had only unsigned shifts, but
they may have been in one or other of the various SIMD extensions to various
architectures.

|> > Signed left shifts are undefined only if they overflow; that is undefined
|> > because anything can happen (including the CPU stopping). Signed right
|> > shifts are only implementation defined for negative values; that is
|> > because they might be implemented as unsigned shifts.
|>
|> No. The standard is quite explicit that any left shift of a negative value
|> is undefined, even if they there is no overflow. This is an inconsistency
|> as compilers change multiplies by a power of 2 into a left shift and visa
|> versa. There is no similar undefined behaviour for multiplies however.

From the standard:

[#4] The result of E1 << E2 is E1 left-shifted E2 bit
positions; vacated bits are filled with zeros. If E1 has an
unsigned type, the value of the result is E1 × 2^E2, reduced
modulo one more than the maximum value representable in the
result type. If E1 has a signed type and nonnegative value,
and E1 × 2^E2 is representable in the result type, then that is
the resulting value; otherwise, the behavior is undefined.

[ E1 × 2^E2 means E1 times 2 to the power E2. ]

|> Once we agree that it is feasible to emulate types, it is reasonable to
|> mandate that each implemenation supports the sized types.

That is clearly your opinion. Almost all of those of us with experience
of when that was claimed before for the previous 'universal' standard
disagree.


Regards,
Nick Maclaren.
 
Nick Maclaren

|>
|> Byte addressability is still uncommon in DSP world. And no, C
|> compilers for DSPs do not emulate char in a manner that you suggested
|> below. They simply treat char and short as the same thing, on 32-bit
|> systems char, short and long are all the same. I am pretty sure that
|> what they do is in full compliance with the C standard.

Well, it is and it isn't :-( There was a heated debate on SC22WG14,
both in C89 and C99, where the UK wanted to get the standard made
self-consistent. We failed. The current situation is that it is in
full compliance for a free-standing compiler, but not really for a
hosted one (think EOF). This was claimed not to matter, as all DSP
compilers are free-standing!
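
A small illustration of the "(think EOF)" point (my sketch): with char and
int the same width, a perfectly valid input value can compare equal to EOF,
so this standard hosted-C idiom can stop early.

#include <stdio.h>

int count_chars(FILE *f)
{
    int n = 0;
    int c;

    while ((c = getc(f)) != EOF)   /* fine when int is wider than char; on a
                                      char == int target a legitimate value
                                      can be indistinguishable from EOF      */
        n++;
    return n;
}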

|> > Putting in extra effort to allow for a theoretical system with
|> > sign-magnitude 5-bit char or a 31-bit one-complement int is
|> > completely insane.
|>
|> Agreed

However, allowing for ones with 16- or 32-bit chars, or signed
magnitude integers is not. The former is already happening, and there
are active, well-supported attempts to introduce the latter (think
IEEE 754R). Will they ever succeed? Dunno.

|> It seems you overlooked the main point of Nick's concern - sized types
|> prevent automagical forward compatibility of the source code with
|> larger problems on bigger machines.

Precisely.


Regards,
Nick Maclaren.
 