
what uC do you prefer?

Discussion in 'Electronic Basics' started by feebo, Jul 20, 2007.

  1. feebo

    feebo Guest

    So with the plethora of micro-controllers out there, which do you
    use/prefer and why?

    I use PICs pretty much exclusively but I just had a look at ATMELs
    range - to be honest, I don't see any reason to change from what I
    know - am I missing anything?

    Do all uCs use Harvard architecture? Sometimes it would be nice to
    have data available as bytes without having to wrap it in code and
    form tables. Tonight I scared myself - I caught myself googling
    "microcontroller z80 core" eek!
     
  2. Robin

    Robin Guest

    Stay with RISC particularly if you use assembler.

    If you changed now to a CISC you *would* hate it - they are a silly
    remnant from the days when memory was expensive.

    Robin
     
  3. DJ Delorie

    DJ Delorie Guest

    I use Renesas's R8C chips (from their R8C/M16C/M32C family) in my
    furnace. Lots of peripherals and timers and such in a small package,
    and the assembly isn't as convoluted as some RISC chips can get (it's
    very i386-like). GCC is available for it (disclaimer: I'm the m32c
    maintainer for gcc). They also support 2.7v to 5v operation (the
    m32c's support split supplies - like 3.3v for the cpu bus side, and 5v
    for the peripheral side)

    Digikey carries many of the chips in this family, and they're easily
    programmed with a serial port and some GPIOs. Mine are programmed
    from a gumstix (embedded xscale/linux board).
    No, most don't. The R8Cs aren't Harvard, although they do have "near" and
    "far" data (only the m16c variant really cares, the r8c is usually too
    small to have flash "out there" and the m32c has 24 bit address
    pointers anyway).
     
  4. Many micros use flash memory for code (the transition continues) and
    include a small amount of RAM. Since these are naturally separate
    memory systems, often with differing needs for address lines and
    directional latches to access them, and since a processor often keeps
    a separate set of latches for the program counter and current
    instruction under decode, it's almost like falling off a log to make
    it Harvard. It may take a little extra effort to make it appear as
    von-Neumann. But there are a number of micros that take the trouble
    (the MSP430, for example.) And also cores which have an external
    address/data bus, like the 8051 core, can often be easily adapted,
    externally, as a von-Neumann arranged memory space. (The basic 8051
    chips, in fact, often are wired exactly that way and certainly can be
    arranged that way in their various incarnations by manufacturers
    including both flash and ram.)

    PICs are fine. I like them for hobbyist use because they do well at
    supporting small-qty users without a lot of hassle about it (like
    asking you to work through distribution.) I also like the Atmel AVRs
    and would use them, too. There is a very nice programming system for
    them that isn't too expensive and includes buttons, leds, and so on so
    that you can program and test some ideas right away. However, with
    Atmel, I am usually routed through local distribution and/or a local
    FAE (Eric Feign, in my area, has long been filling that role) for
    questions. For a hobbyist, this can be a small problem, though not
    necessarily.

    Unless you have some reason to change, though, I think PIC is a great
    place to focus on. A fairly wide range of options and a company that
    will support its development tools quite literally, it seems, forever.
    When you pony up the money to buy something from them, and especially
    if it is a professional level tool, they will pretty much jump at your
    whim and ask what you want on the way up. I've had them replace
    entire units and modules at the mere suggestion that the On/Off switch
    might be a "little flaky." And this, an old tool they no longer even
    sell!

    So I'd say stay there unless you think there is something worth having
    elsewhere. It's a good company from my experience.

    Jon
     
  5. Eeyore

    Eeyore Guest

    8051 family.

    A. Because I know it very well.
    B. Because I have a PLM/51 compiler and I like to use PL/M
    C. Because there are as many variants of an 8051 with various inbuilt goodies as
    there are flavours of PICs (enough at least anyway).
    D. Because it's not single sourced like certain MCUs.
    E. Because you can get ultra-fast versions should you need one.
    F. Because they're almost as cheap as PICs
    G. And these days they're low power, low voltage, have flash memory and are
    fully static too.

    Graham
     
  6. I like them, too. My very first microcontroller project, where I
    actually did all my own investigation of the unique interface needs
    and then designed all of the hardware for it, used an 80C31. I
    wire-wrapped that project and it worked the very first time, placing
    the newly burned EPROM into its socket. This included a serial-port
    for use by an IBM PC (the year I did this was 1984) and it interfaced
    directly into the reed-relays of an IBM Electronic typewriter to turn
    it into a printer, for me. At the time, the Electronic 85 was fairly
    new and there was no information on doing this, so I 'scoped out the
    signals on my own and developed a table. It was such a thrill to see
    it work, right off the bat.

    The source code was written in assembly and used a table assembler,
    which had been originally developed by someone in Washington state for
    those interested in using 8051 cores to make products for the hearing
    disabled, if I recall. I may still have that tool floating about,
    though I don't use it anymore.

    I have a box of some hundreds of 80C32s (with small printed circuit
    cards neatly soldered to the back of each one, which provides a built
    in power-on reset and software accessible reset line.) Got them very
    cheap.

    Jon
     
  7. Eeyore

    Eeyore Guest

    You can find the second application for an 8051 I designed here.... This is a late
    revision, it actually dates from about 1993.

    http://mysite.wanadoo-members.co.uk/studiomaster/service/schem/r8a7-2.pdf

    Graham
     
  8. John Larkin

    John Larkin Guest


    Harvard is an anomaly. And a nasty anomaly. Decent processors have a
    unified code/data/IO address space, without specific i/o opcodes. So
    you can apply the same instructions to anything, and put data
    structures and code wherever they fit best. A table can contain mixed
    data and flags and routine addresses, for example.
    I mostly use the MC68332, a very CISCy 32-bit machine. It's a pleasure
    to program in assembly, and has a beautiful orthogonal instruction
    set, including some handy 32 and 64 bit mul/div things. The later
    versions of this architecture are the Coldfire parts.

    Like, we often use a single 8-bit wide eprom, to save board space. But
    that makes instruction fetches slow. So for a subroutine that has to
    run fast, we just copy the code from eprom to CPU internal ram, and
    run the copy there, blindingly fast. We can even reuse the ram
    workspace, overlaying different blocks of code. It's easy, with a
    decent architecture.

    The TI 16-bitter, the MSP430, is pretty nice too, and very fast. Like
    the 68332, it has a register-rich, symmetric, very PDP-11 looking
    architecture.


    John
     
  9. feebo

    feebo Guest

    I must say that the Harvard architecture has got in the way of what I
    want to do a *lot* more times than it has helped - in fact I can't
    think of a single time when having separate data/program structures
    has been anything better than neutral, i.e. I don't care about it. I
    am fully aware of the dangers of having buffers and data structures
    butting up to the edges of code (buffer over-run exploits - a
    speciality of Billy boi) but that is hardly a problem for a lot of
    embedded stuff. I guess as we march to more advanced uCs and
    solutions it might have a place, but I have always considered buffer
    over-runs and the like to be shite programming... I have never
    written a piece of code that didn't check limits on storage areas as
    it worked... oh, wait... I have... years ago with Z80 stuff... never
    really kept an eye on the stack pointer moving down through RAM but
    never had a crash - the code was not sufficiently complex or
    recursive enough to use up the DMZ between last_byte and SP :o)

    I do hanker for a contiguous RAM area where I can point to sections
    of code and modify them on the fly, place data in line, etc. PICs are
    cheap and pretty well spec'ed, but I am starting to feel constrained:
    the constant bank switching on calls and jumps (or checks thereof),
    and the encoding method for tables - even a simple message to be
    output has to be wrapped in code and the table read with a
    progressing pointer - are a real pain. I just feel the core is
    limiting things, and all the ways of doing stuff, instead of being
    "works of creativity" by the programmer, are a series of
    work-arounds :o( Don't get me wrong, I think PICs are great and dead
    easy to program, so long as you keep constantly aware of the
    wrinkles - and I only ever write assembler, so I don't have some HLL
    keeping a check on this for me, with the resultant increase in code
    size and reduction in speed.

    That does sound a nice piece, but way beyond what we have a need for
    here, which is mainly small controllers/converters etc. I really
    liked Motorola assembler on the 68K family - is it much different?
    (looks on shelf - sees "Programming the 68000" - sighs :o)

    Yep - standard procedure - the preamble gets everything ready for
    fast code. The PIC is fast (in its place) so code execution is not a
    problem, but if you want to update the code down the phone or
    something and store it in a serial EEPROM, I think only now is
    Microchip producing a part that allows sections of the program to be
    "blown on the fly".

    PDP... <wipes a tear from her eye>
     
  10. feebo

    feebo Guest

    I did look at AVR stuff last night but they didn't offer much over
    the PICs that we currently use heavily in our controller designs -
    did I miss something?

    I only ever write assembler - control freak that I am :o)
     
  11. John Larkin

    John Larkin Guest

    Take a look at the '430 instruction set; you'll get downright
    nostalgic.

    John
     
  12. John Larkin

    John Larkin Guest

    The PIC is descended from the PDP-8, as I recall. Nasty little things.
    Me too. The code density is so low that there's lots of room for
    comments, even running dialogs. Most C programs are literally "code",
    in that they require massive amounts of decoding to figure out what's
    going on.

    John
     
  13. I like the PDP-11 a lot, as well. I used to code assembly for both it
    and the PDP-8 (and rarely, the PDP-10) from DEC. (Also programmed in
    Bliss-32 and Macro-32 on the VAX, but that's another story.)

    The MSP430 is very nice, hardware-wise. Its instruction set has been
    sacrificed on the altar of "lots of registers," though. In supporting
    16 equivalent registers, they had to literally demolish the careful
    and well-hewn balance found in the PDP-11's instruction design. As a
    small trade-back, they've included the concept of a very modest
    constant generator -- not enough to even come close to making up the
    other damage, but useful. But like I said, its hardware is really
    great. Timers can come with up to 7 capture/compares in a single
    unit, many peripherals enjoy the benefits of a DMA controller, etc. It
    breaks up the flash memory into nice-sized pieces allowing it to be
    erased in small bits. It allows you to place code into RAM, where you
    can program the flash without waiting... or you can execute code from
    flash that can modify and erase other banks of flash (but where the
    code is automatically suspended for the duration.)

    Have a look at it.

    Jon
     
  14. Eeyore

    Eeyore Guest

    Why would any intelligent person fret over an instruction set ?

    Graham
     
  15. Because even if you don't code in assembly it makes a difference in
    code space requirements, execution time, memory bandwidth, .... and
    more.

    Jon
     
  16. Eeyore

    Eeyore Guest

    And why fret over those either ? If you're so close to the bone that it's an
    issue, you're probably using the wrong device in the first place.

    Besides, the PL/M compiler I use seems to produce compact fast code anyway.

    Graham
     
  17. Eeyore

    Eeyore Guest

    That sounds interesting !

    Graham
     
  18. It was a lot of fun to do. I got started in actually trying my hand
    at thoroughly examining and modifying an existing BASIC interpreter
    that resided on an HP 2000F timesharing system, modifying the system
    to support time-shared assembly. Doing the assembler deepened some
    parsing concepts in me and taught me more about writing symbolic
    linkers (since I needed to write that, as well.) This was 1975. I
    believe I first read about PL/M in about 1977, from some Intel doc
    (I'd gotten interested in Intel, because I built an Altair 8800 from
    the kit.) I'd also been reading Wirth's 1976 book, Algorithms+Data
    Structures=Programs, where he talks about a very simple PL/0 example.
    I'd also secured and read fairly thoroughly, Aho and Ullman's 'Dragon
    Book,' actually named Principles of Compiler Design, probably in 1978.
    (I also bought the second edition when it came out in 1986 and still
    have that one on my shelf here.) I didn't get around to writing
    something close to PL/M, though, until some years later -- would have
    been early 1980's. And it wasn't for commercial use, it was more to
    'complete the circle' of interest I had at the time in just learning
    about compiler development. I also wrote a toy c compiler around this
    time, as well. But I'm not, nor have I ever been, a trained
    professional compiler developer. I just enjoy knowing a little
    something about how they work inside.

    Jon
     
  19. John Larkin

    John Larkin Guest

    I think TI calls it RISC because it's not microcoded and executes
    single-word instructions in one clock. That seems fair to me, and it
    is very fast.

    They do play some games to create addressing modes... it took me a
    while to figure it out, and I was a bit miffed at the tricks. But it
    still looks like a good chip for a low-power, small, fast embedded
    thing. We're thinking about doing some 430-based solid-state relays
    maybe.

    Motorola's 68K is another PDP-11 derivative, and they made the same
    decision to break the beautiful src/dst addressing symmetry in favor
    of a much bigger instruction set and 8/16/32/sometimes 64-bit ops. The
    only unrestricted 68K opcode is MOVE, where anything goes. Most
    dual-operand instructions allow only a register as the destination.
    They also added "immediate" instructions, like ADDI, so they could
    have general addressing on destinations. Still, it's light-years more
    elegant than an Intel or PIC architecture.

    Coldfire is a RISC implementation of the very CISC, microcode-heavy
    68K instruction set... no microcode, mostly single-clock execution,
    lots of gates!

    I used to mentally assemble PDP-11 programs and toggle them in, the
    instruction set was that elegant. That's not likely on a 68K.


    John
     