
Electronic components aging

Discussion in 'Electronic Design' started by Piotr Wyderski, Oct 15, 2013.

  1. Guest

    It's a loser, all the way around. You might find an acceptable hard
    core but soft cores are a loser, for many reasons.
  2. whit3rd

    whit3rd Guest

    On Sunday, October 20, 2013 9:40:24 AM UTC-7, Don Y wrote:

    [on squeezing performance from small CPUs]
    Speaking of which, what IS available in off-the-shelf counter/timer
    support? I've still got a few DAQ cards with AMD's 9513 counter
    chips, which I KNOW are obsolete, but what's the modern replacement?

    The 9513 had five 16-bit counters, lots of modes, and topped out
    at 10 MHz; you could make an 80-bit counter, and test it once
    during the next many-lifetimes-of-the-universe.
  3. Don Y

    Don Y Guest

    I don't think there is a "free-standing" counter/timer "chip".
    Nowadays, most MCUs have counters of varying degrees of
    capability/bugginess built in. So, we're supposed to learn
    to live with whatever happens to be on the die.

    But many of the 9513's modes were silly. I.e., configuring it as a
    time-of-day clock/calendar? Sheesh! What idiot decided that
    a MICROPROCESSOR PERIPHERAL needed that capability? Can you
    spell "software"?

    It also had some funky bugs, was a *huge* die (for its time
    and functionality), etc.

    A lot of counter/timers are really "uninspired" designs, lately.
    It's as if the designer had *one* idea about how it should be
    used and that's how you're going to use it!

    The Z80's CTC, crippled as it was (not bad for that era), could
    be coaxed to add significant value -- if you thought carefully
    about how you *used* it!

    E.g., you could set it up to "count down once armed", initialize the
    "count" to 1 (so, it "times out" almost immediately after being
    armed), set it to arm on a rising (or falling) edge AND PRELOAD
    THE NEXT COUNT VALUE AS '00' (along with picking an appropriate
    prescaler and enabling that interrupt source).

    As a result, when the desired edge comes along, the timer arms
    at that instant (neglecting synchronization issues). Then, "one"
    cycle (depends on prescaler) later, it times out and generates an
    interrupt. I.e., you now effectively have an edge triggered interrupt
    input -- but one that has a FIXED, built-in latency before it is
    signalled to the processor.

    The magic happens when the counter reloads on this timeout and, instead
    of reloading with '1', uses that nice *big* number that you preloaded
    in the "reload register". AND, STARTS COUNTING that very same timebase!

    So, when your ISR comes along, it can read the current count and know
    exactly how long ago (to the precision of the timebase) the actual
    edge occurred -- even if the processor was busy doing something else
    for a big portion of this time! (or, was busy servicing a competing
    ISR, etc.).

    The naive way of doing this would configure the device as a COUNTER,
    preload the count at '1' and program the input to the desired polarity.
    Once the edge comes along, the counter hits '0'/terminal_count and
    generates an IRQ. Then, you *hope* you get around to noticing it
    promptly (or, at least *consistently*).

    The "smarter" approach lets you actually measure events with some
    degree of precision without having to stand on your head trying to
    keep interrupt latencies down to <nothing>.
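A rough sketch of the arithmetic behind that trick, in C. The names and values here are hypothetical stand-ins, not actual Z80 CTC register definitions: assume the channel was armed by the edge with an initial count of 1, timed out one prescaled tick later (raising the interrupt), and reloaded with a full-scale time constant ('00' on an 8-bit CTC means 256) that it has been counting down ever since.

```c
#include <stdint.h>

/* Hypothetical sketch of the scheme above (illustrative, not real Z80
 * CTC register definitions).  After the edge: one tick to the timeout
 * that raised the IRQ, then the counter reloaded with RELOAD_TICKS and
 * has been counting down ever since. */
#define RELOAD_TICKS 256u   /* time constant '00' on an 8-bit CTC = 256 */

/* Given the down-counter value read inside the ISR, return how many
 * prescaled ticks have elapsed since the edge itself. */
static uint16_t ticks_since_edge(uint16_t current_count)
{
    /* elapsed since reload = RELOAD_TICKS - current_count;
     * add the single tick consumed by the initial count of 1 */
    return (uint16_t)(RELOAD_TICKS - current_count) + 1u;
}
```

The ISR's scheduling jitter thus drops out of the measurement entirely: however late the handler runs (up to one full reload period), the counter has been keeping score since the edge.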
  4. I first designed high-reliability products for aerospace in 1975 using MIL-HDBK-217. It was based on generic components with stress factors for environment and design stress levels or margin, based on actual field reliability data.

    It is based on the assumption that the design is defect-free and proven by test validation methods, and that the material quality is representative of the field data collected, which would be validated by vendor and component qualification. The overall product would be validated for reliability with Highly Accelerated Stress Screening (HASS) and Highly Accelerated Life Test (HALT) methods to investigate the weak links in the design or components.

    Failures in Test (FIT) must be fixed by design to prevent future occurrences, and MTBF hours are recorded with confidence levels.

    The only thing that prevents a design from meeting a 50 yr goal is lack of experience in knowing how to design and verify the above assumptions for design, material & process quality.

    You have to know how to predict every stress that a product will see, and test it with an acceptable margin requirement for aging, which means you must have the established failure rate of each part.

    This means you cannot use new components without an established reliability record. COTS parts must be tested and verified with HALT/HASS methods.

    In the end, premature failures occur due to oversights in awareness of bad parts, design or process, and in the statistical process used to measure reliability.
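For concreteness, a sketch of the usual bookkeeping behind those MTBF figures, under the standard constant-failure-rate assumption (FIT taken as failures per 10^9 device-hours; in a series system the part failure rates simply add). The part values used in the test are invented for illustration:

```c
#include <stddef.h>

/* Constant-failure-rate model: each part contributes its FIT
 * (failures per 1e9 device-hours).  A series system fails when any
 * part fails, so the rates add, and MTBF is the reciprocal. */
static double mtbf_hours(const double fit[], size_t n)
{
    double total_fit = 0.0;
    for (size_t i = 0; i < n; i++)
        total_fit += fit[i];        /* series: failure rates add */
    return 1e9 / total_fit;         /* device-hours per failure */
}
```

E.g. three hypothetical parts at 10, 40, and 50 FIT give 100 FIT total, i.e. an MTBF of 10^7 hours (over a thousand years) -- which is exactly why a 50-year goal hinges on credible failure-rate data for every part.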
  5. Robert Baer

    Robert Baer Guest

    Well... adding parts reduces OVERALL reliability, due to the fact that
    they can (and will) fail.
    Some parts, when they fail, can induce spikes or surges that will
    stress "protected" parts.
    So, in some (specific) cases it is not silly.
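The arithmetic behind that trade-off, as a sketch (probabilities invented for illustration): every part wired in series multiplies its own survival probability into the total, so on paper an added protection network can only lower the number -- even when in practice it prevents worse failures:

```c
#include <stddef.h>

/* Series system: it works only if every part works, so the survival
 * probabilities multiply.  Adding any part with r < 1 lowers the total. */
static double series_reliability(const double r[], size_t n)
{
    double total = 1.0;
    for (size_t i = 0; i < n; i++)
        total *= r[i];
    return total;
}
```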
  6. Guest

    That's true but these components will also prevent others from
    failing. That isn't what the reliability numbers show, however.
    The numbers are silly and the way they're normally used is even
    worse. It's the old "be careful what you ask for because you're
    likely to get it".
  7. Phil Hobbs

    Phil Hobbs Guest

    I find it very hard to believe that input protection networks increase
    the field return rate.


    Phil Hobbs

    Dr Philip C D Hobbs
    Principal Consultant
    ElectroOptical Innovations LLC
    Optics, Electro-optics, Photonics, Analog Electronics

    160 North State Road #203
    Briarcliff Manor NY 10510

    hobbs at electrooptical dot net
  8. josephkk

    josephkk Guest

    Unfortunately not so. The relevant manager has her/his eye altogether too
    closely on the quarterly figures that determine his/her bonus to do that.
    That preempts the preventative maintenance.
    And it might have been the result of a PUC or Court order, both rather
    non-negotiable. It is amazing how the CA PUC suddenly developed some
    teeth after the San Bruno disaster.
  9. josephkk

    josephkk Guest

    That's not all; the new memory wasn't any faster or that much lower power.
    A long time ago (~40 years) I worked on a computer that used core with
    120 ns access time and 300 ns cycle time, faster than DRAM nearly 20 years
    later.
  10. Tim Williams

    Tim Williams Guest

    Which still hasn't changed much; RAS/CAS cycle times are around, erm, I
    see figures around 10ns. Quite a bit less in absolute terms, but with CPU
    clock rates over a thousand times higher, it simply hasn't scaled
    accordingly. I/O is even worse; one figure puts PCIe latency on the order
    of a microsecond (I forget if that's fractional or multiple us?),
    absolutely no different from the old ISA bus (8MHz, though only 8 bit).
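A back-of-the-envelope version of that scaling argument, with illustrative numbers (not precise figures for any particular machine): express the latency in CPU clock cycles rather than nanoseconds.

```c
/* Memory latency measured in CPU clock cycles.
 * MHz = cycles per microsecond, so cycles = ns * MHz / 1000.
 * Numbers used with this are illustrative only. */
static double latency_cycles(double latency_ns, double cpu_mhz)
{
    return latency_ns * cpu_mhz / 1e3;
}
```

A 300 ns core cycle on a ~2 MHz minicomputer costs well under one CPU cycle, while a 10 ns RAS/CAS figure on a 3 GHz CPU costs 30 cycles: the absolute time fell ~30x, but the cost in cycles grew ~50x.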

  11. josephkk

    josephkk Guest

    This discussion reminds me of a bit of conversation i once overheard. The
    discussion was about some new (back then) high energy density metallized
    film capacitors. The goal energy density was something like 20 mF*V per
    in^3 at 400 V. The manufacturer could make them with a lifetime of 200 to
    300 hours. The operational goal was 168 hours. The problem arose when
    the customer insisted on 168 hour burn-in on 100% of the parts. After
    burn-in they could no longer meet operational goals due to the limited
    life. IIRC the infant mortality period was something like 2 hours at
    120% rated voltage. Never did hear how it all worked out though.
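Taking the quoted density at face value -- reading "20 mF*V per in^3 at 400 V" as C*V = 0.020 F*V, i.e. 50 uF per cubic inch (this interpretation of the unit is an assumption) -- the stored energy works out as follows:

```c
/* E = C * V^2 / 2 : energy stored in a capacitor.
 * Under the unit reading stated above, 50 uF charged to 400 V gives
 * 0.5 * 50e-6 * 400^2 = 4 J per cubic inch. */
static double stored_energy_joules(double capacitance_f, double volts)
{
    return 0.5 * capacitance_f * volts * volts;
}
```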
