Maker Pro

AREF bypass capacitance on ATMega2560?

rickman

Oh, I do. I just reviewed a design that has some rather fat ones in
there, and it would hardly have been possible to do this without an FPGA.
However, there are circuits where FPGAs are a perfect fit and others
where they just aren't. In ordinary radios they usually aren't.


The last few posts were about car radios, because the topic was
electronics and temperature exposure in cars.

When Piotr started talking about SDR he wasn't talking about car radios
anymore.
 
Stef

In comp.arch.embedded,
Joerg said:
The thread has moved there; Rickman advocates that a lot of things can
be better handled by an FPGA. In the end it doesn't matter, programmable is
programmable and that gets scrutinized. Has to be.



I generally have that. But this does not always suffice. Take dosage,
for example. Suppose a large patient needs a dose of 25 units while a
kid should never get more than 5. How would the hardware limiter know
whether the person sitting outside the machine is a heavy-set adult or a
skinny kid?

There are cases where a hardware limiter is an option and there are cases
where it isn't. In your example above, however, the biggest risk is
the nurse calculating and setting the dose, but that's another
part of the risk analysis. ;-)
You can take your chances but it carries risks. For example, I have seen
a system blowing EMC just because the driver software for the barcode
reader was changed. The reason turned out not to be the machine but the
barcode reader itself. One never knows.

Yes, there are always chances and you have to weigh the risks. Making
sure all units pass EMC testing can only be done by fully testing each
unit under all circumstances, which is of course impossible.

Your barcode scanner example is unfortunate. But such a scanner could
also change its behaviour when scanning different codes and lighting
conditions. Did you perform EMC testing with all available barcodes
and foreseeable lighting conditions?
The other factor is the agency. If they mandate re-testing after certain
changes you have to do it. In aerospace it can also be the customer
demanding it, for example an airline.

Yes, if the agency or customer demands re-testing, there's nothing you
can do but re-test.

--
Stef (remove caps, dashes and .invalid from e-mail address to reply by mail)

She asked me, "What's your sign?"
I blinked and answered "Neon,"
I thought I'd blow her mind...
 
Piotr Wyderski

Joerg said:
But wait, there's more. Blue blinkenlights, yellow blinkenlights.

Sure you need them! A friend of my brother produces and sells some
simple ultrasound marten repelling devices. The sales went through
the roof when he added a blinking LED to the box. So blinking is of
crucial importance. :)
I never had much fun with SDR because it's expensive

You calculate costs differently when it's a hobby project.
I.e. your time is basically free and the parts are expensive,
which is exactly the opposite of professional prototyping.
and generally inferior to the classic circuits when it comes to performance.

Why should it be? The RF front-end is mostly the same.
What is the difference between feeding an ADC and feeding
an I/Q demodulator?
I do now have one spectrum analyzer (the Signalhound) that is basically an SDR.

I think most (or at least a significant fraction) of modern
radio receivers are a form of SDR. I mean the radios in cell
phones. You have a powerful CPU on board, often with DSP capabilities
like the NEON instruction set, so all you need to do is to build
a homodyne and move everything else to the digital domain. FM
demodulation, stereo and RDS decoding -- all that is easy.
All the PC TV USB dongles also work on this principle.

In my design I also moved the IF filtering to the FPGA.
It was so much easier to build a digital filter with
configurable passband than to do it the old school way...
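Roughly like this, as a bare-bones sketch (the entity name, widths and the
coefficient-load port are only illustrative, not taken from my actual design):

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- FIR stage with run-time loadable coefficients, so the passband can be
-- reprogrammed without touching the hardware.
entity fir_configurable is
    generic ( TAPS : integer := 16 );
    port (
        clk       : in  std_logic;
        coef_we   : in  std_logic;                      -- coefficient write strobe
        coef_addr : in  unsigned(3 downto 0);           -- enough for 16 taps
        coef_data : in  signed(15 downto 0);
        x_in      : in  signed(15 downto 0);            -- input sample stream
        y_out     : out signed(31 downto 0)
    );
end entity;

architecture rtl of fir_configurable is
    type coef_array is array (0 to TAPS-1) of signed(15 downto 0);
    type dly_array  is array (0 to TAPS-2) of signed(15 downto 0);
    signal coefs : coef_array := (others => (others => '0'));
    signal dly   : dly_array  := (others => (others => '0'));
begin
    process (clk)
        variable acc : signed(35 downto 0);
    begin
        if rising_edge(clk) then
            if coef_we = '1' then                       -- CPU updates one coefficient
                coefs(to_integer(coef_addr)) <= coef_data;
            end if;

            acc := resize(x_in * coefs(0), acc'length); -- multiply-accumulate
            for i in 1 to TAPS-1 loop
                acc := acc + dly(i-1) * coefs(i);
            end loop;
            y_out <= acc(31 downto 0);

            for i in TAPS-2 downto 1 loop               -- shift the delay line
                dly(i) <= dly(i-1);
            end loop;
            dly(0) <= x_in;
        end if;
    end process;
end architecture;

Reloading the coefficient set changes the passband on the fly.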

Best regards, Piotr
 
Piotr Wyderski

rickman said:
The issue is not the cost of an MCU. There are any number of reasons
why not to use a separate chip for the CPU.

I don't mean a separate chip. All I want to say is that you don't
want your MCU to be reconfigurable. It should be as standard as
possible for many reasons. The most important of them is that
a standard CPU has a mature and debugged toolchain. The next one is
that my personal memory is too precious a resource to remember
the peculiarities of niche architectures, e.g. NIOS. What is NIOS?
A variant of an ARM core? Or a PowerPC? Or perhaps a brand name
of one of the MIPSes? No, it turns out not to be the case. Then,
my dear FPGA vendor, please go and... :)

I don't insist that the word "standard" should mean ARM, though it would
be good. The Virtex family used to have up to 4 PowerPC cores, which
is fine. The problem is that a soft core eats my reconfigurable
resources, which artificially boosts the cost of the FPGA chip, because
I need a chip one or two grades bigger than is really necessary with
a hard core. And the el cheapo lines do not have them -- a Spartan
is all I would ever need; I am not crazy enough to buy a Virtex or
Stratix for advanced amateur designs. Not to mention that most of them
come in BGA, which is a big no-go for many reasons.
The one I encounter most often is board space.

In my case it's PCB routing complexity. I am not going
to use 4+ layers of copper just to satisfy the monster's
signal integrity requirements. 2 layers is all I can have.
If an MCU does the entire job you need, then fine, use it. But there
are plenty of times you need both and you can always include a CPU on
your FPGA, but it is hard to make a CPU do what the FPGA does.

Exactly. But then you use 60% of the chip's reconfigurable
capacity and most of the BRAMs just to embed an MCU? Makes
no sense to me. The need for a hard macrocell providing
a decent 32-bitter is so obvious...

As I said, I need ~40 PWM channels, 4 CANs and/or 6 RS485 + Ethernet.
Impossible on an MCU, so an FPGA (probably a Spartan 6 in TQFP144)
is the most promising candidate. But the rest is so much CPUish...
That's not a good idea for either an MCU or an FPGA.

Taken literally -- surely. But often you don't know all
the requirements or just didn't have the right idea at
the time of initial design. If you have spare FPGA resources,
you can use them to extend the device's abilities.
There are plenty of dedicated I/Os on an FPGA. Clock lines are the most
obvious. Although you can bring a clock into the chip on any I/O pin,
if you don't use a clock pin it will have extra delay getting to the
clocked elements. This can cause timing issues on clocked I/Os.

Most of my signals are so slow that signal integrity issues
rarely matter.
I'm not sure what this means.

I mean you don't need to design your own PCIe controller.
Or a DDRx controller. Or Ethernet. Or CANBUS. Name your
own typical interface. It's a waste of resources (and power)
to implement them using the reconfigurable fabric. You
would like to have a hardware core + be able to add your
own when you need something fancy. Some of these have
already been "hardened", e.g. the multipliers (all
post-Cyclone era FPGAs) or the DDR controller (in Spartan 6).
An MCU, please?
It is not possible to implement an ARM on an FPGA without paying huge
license fees.

It was just an example, my favourite architecture.
Any standard core would be fine.
But no matter, what is magical about an ARM?

The toolchain is mature.
There is little utility to sticking with
one instruction set unless you program in assembly language.

No, there is no need to be "portable" when there are no
real reasons for it.
I don't know, do you?

In the digital part? 3.3V is enough for me. Most of the
ARMs (from the ball park I care about) need 1.8V, but are
kind enough to produce it themselves with a built-in LDO.
You can override it with a switcher if you care about
efficiency. No Cyclone/Spartan is equally kind.
How about your designs?

The current one: 12V rectified AC for the power part,
~8V DC (for the remote boards to compensate the wire resistance
and to feed the gates of power MOSFETs), 5V (for CAN), 4.4V for
the GSM module and 3.3V for the rest. I can get rid of the
5V rail if I buy the 3.3V TI CAN PHYs.
Are you talking about the place and route time?

The time between the act of pressing "build" and getting the resulting
bitstream in Quartus. Don't care what the tool does there.
I don't know, do you? Which processor are you thinking of using?

I mostly use AVRs and ARMs.
BTW, have you read the errata sheet for any MCU?

Touche! :)
I suggest you look at the Microsemi Fusion and Smart Fusion devices.
Fusion means it has analog on chip and Smart Fusion means it has
analog plus an ARM CPU. I have not worked with it, so I don't know
much about it. It is also a bit pricey for the original conversation
we had in this thread.

Sounds very interesting, will have a look. What can I have for, say,
$50? Something comparable to Spartan 3S200 would be enough...
And where can I buy them at the tremendous amount of 1 piece? :)

Best regards, Piotr
 
rickman

I don't mean a separate chip. All I want to say is that you don't
want your MCU to be reconfigurable. It should be as standard as
possible for many reasons. The most important of them is that
a standard CPU has a mature and debugged toolchain. The next one is
that my personal memory is too precious a resource to remember
the peculiarities of niche architectures, e.g. NIOS. What is NIOS?
A variant of an ARM core? Or a PowerPC? Or perhaps a brand name
of one of the MIPSes? No, it turns out not to be the case. Then,
my dear FPGA vendor, please go and... :)

I will grant you that the tools may be better for a mainstream
instruction set, but otherwise I don't see the advantage. I don't see
how your personal memory has much to do with it. Most programmers use
the tools, meaning HLL compilers. The main point of using an HLL is to
*not* need to know anything about the instruction set. If you can't
write HLL code for the specific CPU involved, then you have other issues.

I don't insist that the word "standard" should mean ARM, though it would
be good. The Virtex family used to have up to 4 PowerPC cores, which
is fine. The problem is that a soft core eats my reconfigurable
resources, which artificially boosts the cost of the FPGA chip, because
I need a chip one or two grades bigger than is really necessary with
a hard core. And the el cheapo lines do not have them -- a Spartan
is all I would ever need; I am not crazy enough to buy a Virtex or
Stratix for advanced amateur designs. Not to mention that most of them
come in BGA, which is a big no-go for many reasons.

You are creating a straw man argument. I can fit a soft core CPU into
nearly any FPGA you hand me. Lattice is making some very tiny ones
without memory blocks, but otherwise soft cores are not so large. In
fact, it may have been this thread where someone pointed out that the
NIOS2, a 32 bit processor, can fit in fewer than 600 LUTs! That is
small enough to allow multiple CPUs in most FPGAs with tons of room left
over for peripherals and custom logic.

In my case it's PCB routing complexity. I am not going
to use 4+ layers of copper just to satisfy the monster's
signal integrity requirements. 2 layers is all I can have.

Uh, what monster???

Exactly. But then you use 60% of the chip's reconfigurable
capacity and most of the BRAMs just to embed an MCU? Makes
no sense to me. The need for a hard macrocell providing
a decent 32-bitter is so obvious...

I won't argue that having a hard CPU on an FPGA isn't a nice thing. But
it is not such a bad thing to use a soft core CPU. 60% of the
*smallest* FPGA with block RAM I have seen is enough to embed a 32 bit
soft CPU. In fact, I had to do a bitware upgrade to an existing design
and was worried about not having enough room for the logic. My fall
back plan was to remove the slow functions from the logic and use a soft
CPU because it would be so compact in comparison.

As I said, I need ~40 PWM channels, 4 CANs and/or 6 RS485 + Ethernet.
Impossible on an MCU, so an FPGA (probably a Spartan 6 in TQFP144)
is the most promising candidate. But the rest is so much CPUish...

I'm not sure that is impossible on an MCU. Maybe it is impossible on an
MCU you can buy, but a custom design might do nicely. Just saying 40
PWM channels doesn't tell me how many CPU cycles are needed. But since
you can have so many CPUs on even a smallish FPGA, I expect you could
divide and conquer quite easily. Or you can use dedicated logic and a
soft core for the Ethernet and any other functions that are better suited.

Taken literally -- surely. But often you don't know all
the requirements or just didn't have the right idea at
the time of initial design. If you have spare FPGA resources,
you can use them to extend the device's abilities.

You can wave the same hands for CPU cycles. The only difference is MCU
I/Os are not as flexible, typically being constrained to one set of pins
or a small selection of I/Os. I guess that was your point?

Most of my signals are so slow that signal integrity issues
rarely matter.

I'm not talking about SI, I'm talking about the large (by comparison)
delays in I/O routing to the clock net. The clock inputs have a direct
connection which allows deterministic timing for I/O setup and hold
times. If your designs are so slow that isn't an issue, fine, but that
is not very common.

I mean you don't need to design your own PCIe controller.
Or a DDRx controller. Or Ethernet. Or CANBUS. Name your
own typical interface. It's a waste of resources (and power)
to implement them using the reconfigurable fabric. You
would like to have a hardware core + be able to add your
own when you need something fancy. Some of these have
already been "hardened", e.g. the multipliers (all
post-Cyclone era FPGAs) or the DDR controller (in Spartan 6).
An MCU, please?

Ok, I agree. I think that is the direction FPGAs will be headed, but
not very fast. FPGA makers see their market as the deep pockets of the
data/telecoms providers pumping out many, many boxes that keep our
communications running at warp speed (lol). So far those markets have
been served well by the same model, bigger, faster, very expensive FPGAs
pushing the limits of silicon processing just like the mainstream GP
CPUs. But just like GP CPUs, I see the market changing over the next 5
or 10 years. I think FPGAs will need to incorporate CPU cores, but not
exactly like embedding an MCU, possibly more like making a small CPU a
functional block like a LUT.

Check out the GA144 from greenarrays.com. They don't market their chip
this way, but I see it as a Field Programmable Processor Array (FPPA) to
be used in a similar manner to an FPGA. These CPUs can't be used like
conventional CPUs. The memory is too small and the CPUs are only
connected to adjacent CPUs. But if you can master the concept, I think
it can be powerful.

It was just an example, my favourite architecture.
Any standard core would be fine.


The toolchain is mature.

How mature do you need?

No, there is no need to be "portable" when there are no
real reasons for it.


In the digital part? 3.3V is enough for me. Most of the
ARMs (from the ball park I care about) need 1.8V, but are
kind enough to produce it themselves with a built-in LDO.
You can override it with a switcher if you care about
efficiency. No Cyclone/Spartan is equally kind.

Then don't use a Cyclone/Spartan part...

The current one: 12V rectified AC for the power part,
~8V DC (for the remote boards to compensate the wire resistance
and to feed the gates of power MOSFETs), 5V (for CAN), 4.4V for
the GSM module and 3.3V for the rest. I can get rid of the
5V rail if I buy the 3.3V TI CAN PHYs.

So you have some five power rails? Sounds to me like adding a 1.x
supply would be no big deal. I included a tiny 1.2 volt switcher on my
last design so I could use either version of the FPGA, the one with the
internal LDO or the one without.

The time between the act of pressing "build" and getting the resulting
bitstream in Quartus. Don't care what the tool does there.

I know you don't care, but what takes so long? My compiles only take a
few minutes.

I mostly use AVRs and ARMs.


Touche! :)


Sounds very interesting, will have a look. What can I have for, say,
$50? Something comparable to Spartan 3S200 would be enough...
And where can I buy them at the tremendous amount of 1 piece? :)

Try Digikey, I believe they sell Microsemi. I know I have checked
prices and if I didn't get it from Digikey I don't know where I did.
The ballpark price for the Smart Fusion is under $50 I believe, but not
by a lot. The packages may not make you happy though. They seem to
think the Smart Fusion chips need a bazillion I/Os. I have no idea what
market they are pursuing. I guess they are trying to keep the ASP high.
The CPU is an ARM CM3, which should make you happy... :)
 
Piotr Wyderski

rickman said:
I don't see how your personal memory has much to do with it. Most programmers use
the tools, meaning HLL compilers. The main point of using an HLL is to
*not* need to know anything about the instruction set. If you can't
write HLL code for the specific CPU involved, then you have other issues.

Rickman, professionally I am a low-level programmer. I mostly do
weird optimizations, often at the assembly level. Can read assembly
output generated by a compiler for several ISAs without any problem.
Everyone is smart when things go right. When something fails, without
that knowledge you are like a child in the fog. Please, do not teach
me my craft. :)
Uh, what monster???

A great big FPGA chip whose package imposes crazy
(for a hobbyist) PCB routing requirements.
I'm not sure that is impossible on an MCU. Maybe it is impossible on an
MCU you can buy, but a custom design might do nicely.

Rickman, please... ;-)
But since you can have so many CPUs on even a smallish FPGA, I expect you could
divide and conquer quite easily.

But what for? A PWM generator is a no-brainer in VHDL. One can also
stream out the content of a BRAM in a loop directly to the IO pins,
which allows one to implement fancy spectrum spreading techniques,
equalize the amount of power consumed by shifting the relative
phases of the PWM channels, etc. One BRAM = 18 channels. Cheap. :)
And a CPU to generate the actual waveforms off-line, even a tiny one.
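A minimal sketch of the streaming idea (names and widths are illustrative,
and a plain inferred RAM stands in for the BRAM primitive):

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- One 18-bit wide memory read in an endless loop; each bit drives one PWM pin.
-- A CPU (soft or external) fills the memory with precomputed waveforms.
entity pwm_bram_streamer is
    generic ( DEPTH_BITS : integer := 10 );             -- 1024 time slots per period
    port (
        clk     : in  std_logic;
        wr_en   : in  std_logic;                         -- waveform write port
        wr_addr : in  unsigned(DEPTH_BITS-1 downto 0);
        wr_data : in  std_logic_vector(17 downto 0);
        pwm_out : out std_logic_vector(17 downto 0)      -- 18 PWM channels
    );
end entity;

architecture rtl of pwm_bram_streamer is
    type ram_t is array (0 to 2**DEPTH_BITS - 1) of std_logic_vector(17 downto 0);
    signal ram     : ram_t := (others => (others => '0'));
    signal rd_addr : unsigned(DEPTH_BITS-1 downto 0) := (others => '0');
begin
    process (clk)
    begin
        if rising_edge(clk) then
            if wr_en = '1' then
                ram(to_integer(wr_addr)) <= wr_data;
            end if;
            pwm_out <= ram(to_integer(rd_addr));         -- registered read, maps to BRAM
            rd_addr <= rd_addr + 1;                      -- wraps around: continuous replay
        end if;
    end process;
end architecture;

Spreading and the relative channel phases then come down to what the CPU
writes into the memory.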
You can wave the same hands for CPU cycles. The only difference is MCU
I/Os are not as flexible, typically being constrained to one set of pins
or a small selection of I/Os. I guess that was your point?

More or less. I wanted to highlight that a CPU has e.g. a timer input
with input capture timestamping connected to a dedicated pin. When the
PCB is etched and you discover that connecting another pin to that input
capture allows you to do something smart, you have a problem. In case
of an FPGA you just provide an additional internal "wire" in the
routing section and presto, problem solved.
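Something along these lines, as a bare-bones sketch with made-up names:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Free-running counter plus edge detector; nothing ties event_in to a fixed pin.
entity input_capture is
    port (
        clk       : in  std_logic;
        event_in  : in  std_logic;                  -- route to whatever pin/node you like
        stamp     : out unsigned(31 downto 0);
        stamp_vld : out std_logic
    );
end entity;

architecture rtl of input_capture is
    signal cnt  : unsigned(31 downto 0) := (others => '0');
    signal ev_d : std_logic_vector(1 downto 0) := "00";
begin
    process (clk)
    begin
        if rising_edge(clk) then
            cnt  <= cnt + 1;                        -- free-running timebase
            ev_d <= ev_d(0) & event_in;             -- ev_d(1) = older, ev_d(0) = newer
            -- (add a synchronizer stage first if event_in is asynchronous)
            stamp_vld <= '0';
            if ev_d = "01" then                     -- rising edge detected
                stamp     <= cnt;
                stamp_vld <= '1';
            end if;
        end if;
    end process;
end architecture;

If the interesting edge turns out to live on a different pin after the PCB
is etched, only the internal routing changes.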
If your designs are so slow that isn't an issue, fine, but that
is not very common.

If it is necessary, I can use a dedicated pin. But I don't use
those multi-gigabit transceivers etc., so, except for the clock,
a generic IO pin is fine for most of my signals.
Check out the GA144 from greenarrays.com.

I checked that years ago and it still is in the mental bin
labelled "bizarre". It is neither a CPU, nor an FPGA, gracefully
merging disadvantages of both. :)
How mature do you need?

The so-called industrial quality. As seen in case of
x86/ARM/PowerPC/SPARC and maybe MIPS.
Then don't use a Cyclone/Spartan part...

That's complicated. First of all, I (would) use an FPGA I can
easily buy in small quantities. Secondly, I know the toolchain.
The experience with mediaeval-quality Lattice software ~10 years
ago has considerably chilled my enthusiasm about the "alternatives".
The packages may not make you happy though. They seem to think
the Smart Fusion chips need a bazillion I/Os.

TQFP is the only package I can handle. But will have a look,
just to learn something new.

Best regards, Piotr
 
Piotr Wyderski

Piotr said:
The experience with mediaeval-quality Lattice software ~10 years
ago has considerably chilled my enthusiasm about the "alternatives".

Sorry, it was ACTEL, not Lattice, and the family was ProASIC with some
number.

There is a TQFP144-variant of SmartFusion. But hear, hear!
It's an improved and rebranded ProASIC3... :-D

Best regards, Piotr
 
rickman

Rickman, professionally I am a low-level programmer. I mostly do
weird optimizations, often at the assembly level. Can read assembly
output generated by a compiler for several ISAs without any problem.
Everyone is smart when things go right. When something fails, without
that knowledge you are like a child in the fog. Please, do not teach
me my craft. :)

I won't teach you anything if you don't want to learn.

A great big FPGA chip whose package imposes crazy
(for a hobbyist) PCB routing requirements.

You mean like a 100 pin quad flat pack? Are you even trying to look at
possibilities? This is the sort of bias about FPGAs that I keep running
into. Here is a board I make with an FPGA, a CODEC, some buffering and
analog drivers.

http://arius.com/images/IRIGB_board_1-0.png

The board is 0.85" x 4.5". An MCU could not provide the SPI "like"
control interface from the motherboard and it would have been *very*
hard to generate the clock timing for the CODEC which in one mode has to
be slaved to the incoming data rate on an RS-422 interface.

Rickman, please... ;-)

Please what?

Is it possible that you aren't aware of all CPUs out there?

But what for? A PWM generator is a no-brainer in VHDL. One can also
stream out the content of a BRAM in a loop directly to the IO pins,
which allows one to implement fancy spectrum spreading techniques,
equalize the amount of power consumed by shifting the relative
phases of the PWM channels, etc. One BRAM = 18 channels. Cheap. :)
And a CPU to generate the actual waveforms off-line, even a tiny one.

But something has to control the PWM. So if you have software
controlling the PWM you have to decide how much is in software and how
much is in hardware. I don't know your requirements so I can't speak as
to where the optimal trade off point would be.

More or less. I wanted to highlight that a CPU has e.g. a timer input
with input capture timestamping connected to a dedicated pin. When the
PCB is etched and you discover that connecting another pin to that input
capture allows you to do something smart, you have a problem. In case
of an FPGA you just provide an additional internal "wire" in the
routing section and presto, problem solved.


If it is necessary, I can use a dedicated pin. But I don't use
those multi-gigabit transceivers etc., so, except for the clock,
a generic IO pin is fine for most of my signals.

You are thinking of SERDES which are specialized functions... because
they are impossible to do in the FPGA fabric. But dedicated clock pins
have been around almost since the beginning of FPGAs.

I checked that years ago and it still is in the mental bin
labelled "bizarre". It is neither a CPU, nor an FPGA, gracefully
merging disadvantages of both. :)

It does have its limitations, I agree. But it has some great features.
I would use it to redo the board in the image above but I don't have
enough confidence in the survival of the company. The redo is because
of one of the very few EOL notices on an FPGA that I just happen to be
using. The GA144 could do the job pretty well I think.

The so-called industrial quality. As seen in case of
x86/ARM/PowerPC/SPARC and maybe MIPS.

I don't know what "industrial quality" is. I think the tools for the
MicroBlaze, the NIOS, NIOS2, etc. are all widely used and well debugged.
Have you heard any complaints?

That's complicated. First of all, I (would) use an FPGA I can
easily buy in small quantities. Secondly, I know the toolchain.
The experience with mediaeval-quality Lattice software ~10 years
ago has considerably chilled my enthusiasm about the "alternatives".

The project I posted the image of above uses a Lattice part and I had no
trouble with the tools. We all have our biases.

TQFP is the only package I can handle. But will have a look,
just to learn something new.

Then I think you are out of luck with the SmartFusion. The GA144 is
available on a mounting board from Schmartboard. They sent me one of
the boards without the chip. Interesting. They route the top layer of
fiberglass down to an inner copper layer and drop a bead of solder in
it. I think the idea is to work with leaded parts like the QFP, but
they say it works with leadless parts too. Essentially the PCB forms a
one or two mm high solder mask and the solder acts like a heat pipe to
allow connection to QFN pins on the underside of the chip.

http://blog.schmartboard.com/blogsc...tboardand-you-can-try-it-save-some-green.html
 
rickman

Sorry, it was ACTEL, not Lattice, and the family was ProASIC with some
number.

Ah yes, I've heard pro and con about the Actel software.

There is a TQFP144-variant of SmartFusion. But hear, hear!
It's an improved and rebranded ProASIC3... :-D

Yes, similar. There is the SmartFusion and the SmartFusion2. I don't
think the SF2 comes in any TQFPs. The ProASIC lines are so old I don't
think I would design them into anything. They are available in 100 pin
QFPs though.
 
Joerg

Stef said:
In comp.arch.embedded,
[...]
You can take your chances but it carries risks. For example, I have seen
a system blowing EMC just because the driver software for the barcode
reader was changed. The reason turned out not to be the machine but the
barcode reader itself. One never knows.

Yes, there are always chances and you have to weigh the risks. Making
sure all units pass EMC testing can only be done by fully testing each
unit under all circumstances, which is of course impossible.

Some companies EMC-test every machine that leaves production though.

Your barcode scanner example is unfortunate. But such a scanner could
also change its behaviour when scanning different codes and lighting
conditions. Did you perform EMC testing with all available barcodes
and foreseeable lighting conditions?

That usually isn't necessary. I told the client to get lots of different
new readers, and fast. They did that and it turned out that many that
were claimed to be "class B" failed badly. One didn't, and it had so much
margin that there was no need to test it under lots of conditions. I took
it apart to make sure that the designers had done a good job.

[...]
 
Joerg

Piotr said:
Sure you need them! A friend of my brother produces and sells some
simple ultrasound marten repelling devices. The sales went through
the roof when he added a blinking LED to the box. So blinking is of
crucial importance. :)

Oh yeah :)

You calculate costs differently when it's a hobby project.
I.e. your time is basically free and the parts are expensive,
which is exactly the opposite of professional prototyping.

I am usually a cheapskate when it comes to hobby projects. Not because of
budget issues like I had when I was a kid, but because I like finding a
real MacGyver solution.

Why should it be? The RF front-end is mostly the same.
What is the difference between feeding an ADC and feeding
an I/Q demodulator?

My gear mostly has nice 8-pole crystal filters. Neither ADC nor I/Q
demodulator can (so far) touch that when it comes to large signal
handling. On shortwave the only fence between you and a plethora of
close-by noise is this filter.

I think most (or at least a significant fraction) of modern
radio receivers are a form of SDR. I mean the radios in cell
phones. You have a powerful CPU on board, often with DSP capabilities
like the NEON instruction set, so all you need to do is to build
a homodyne and move everything else to the digital domain. FM
demodulation, stereo and RDS decoding -- all that is easy.
All the PC TV USB dongles also work on this principle.

Even modern ham radio gear is like that. Which is why I prefer the older
rigs.

In my design I also moved the IF filtering to the FPGA.
It was so much easier to build a digital filter with
configurable passband than to do it the old school way...

Well, head to the shortwave band on a very busy day and compare it to an
NRD-515, a Drake TR-7 or something similar. That's where the rubber
really meets the road.
 
Funny that you mention that cost doesn't matter because the "taxpayer"
is footing the bill. I worked at a company making exactly those radios
and the engineering discussions would be about the safety of the
soldiers rather than "who cares, the taxpayer is paying for it". The
very best performance is demanded because there are many times when the
mission depends on it, including lives. But you are right in that cost
is not a primary factor.

There is waste. But mostly that comes from misdirected government
influences. In the case of these radios the government wanted to "save
money" by reusing designs in multiple platforms. But that usually meant
making the design more expensive to perform a wider mission with more
capabilities. In the end it was a massive effort that may have saved
some money or may well have cost more than it saved.

Nonsense. There is waste from top to bottom, some from incompetence, some
because cost just isn't an important parameter, and yes, some because
the whole idea is just dumb. Your admiration for the government's
forward thinking is cute, though.
 