
RS485 Timeout

Discussion in 'Electronic Design' started by [email protected], Jan 9, 2008.

  1. Guest

    Hi,

    I am writing an application for an RS485 network: basically, a
    client/server program. When the server sends a command out, the
    client that receives it sends some data back.
    My question is: how long should the server's timeout be? How long
    should the server wait before reporting a timeout to the user
    interface?

    Many Thanks
     
  2. Hot Jock

    8.267196e-007 fortnights seems a good choice to me.
     
  3. Joel Koltner

    You're in a much better position to answer that than we are. How long do the
    clients normally take to do whatever it is they're supposed to do? If it's
    pretty much "instantaneous," having timeouts of 100ms is entirely reasonable.
    If it takes a second or so, a 5 second timeout is probably good. If it takes
    more than 5 seconds, be sure to have a "cancel" button so that a user isn't
    forced to sit through, e.g., a 30-second timeout when they're quite certain
    something is amiss.

    Also, don't design a system the way I saw one implemented once: All commands
    *but one* would complete in <100ms. That *one* command would take something
    like 5-10 seconds, so some "clever" programmer decided that ALL commands
    should have a 30 second timeout, since they didn't want to make a "special
    case" out of the one 5-10 second command. What a pain...

    If you're feeling like doing really robust design, for any processing that
    takes more than a second or so you can have the client issue successive "still
    working..." messages. This is the nicest thing to do for your user -- you can
    use ~one second timeouts so they'll know right away if something is amiss --
    but it does tend to make the entire protocol noticeably more complicated than
    the dead-easy command/response model. (You still need an "overall" timeout in
    case the client gets stuck repeating "still busy...", at which point you're
    tempted to add a "global reset" command that will always un-stick the
    client... and so on.)
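
    A rough sketch of what that could look like on the server side. All the
    function names here (read_frame, is_still_busy, millis) are assumed, not
    from any particular library:

    /* Sketch only: per-message timeout catches a dead client quickly;
       an overall timeout caps an endless stream of "still busy". */
    #include <stdbool.h>
    #include <stdint.h>

    #define MSG_TIMEOUT_MS     1000u   /* per-message: spot a dead client fast */
    #define OVERALL_TIMEOUT_MS 30000u  /* hard cap if "still busy" never ends */

    enum frame_status { FRAME_OK, FRAME_TIMEOUT, FRAME_ERROR };

    enum frame_status read_frame(uint8_t *buf, uint32_t timeout_ms); /* assumed */
    bool is_still_busy(const uint8_t *buf);                          /* assumed */
    uint32_t millis(void);                                           /* assumed */

    bool wait_for_reply(uint8_t *buf)
    {
        uint32_t start = millis();

        for (;;) {
            if (read_frame(buf, MSG_TIMEOUT_MS) != FRAME_OK)
                return false;              /* client went quiet: report timeout */
            if (!is_still_busy(buf))
                return true;               /* the real reply arrived */
            if (millis() - start > OVERALL_TIMEOUT_MS)
                return false;              /* stuck repeating "still busy" */
        }
    }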
     
  4. Steve

    Depends on the application, of course. You didn't say if you have control of
    both ends or not. RS485 is usually implemented as a "master/slave(s)"
    relationship. You call it client/server system, but is it just two devices?
    Multiple uncoordinated clients sending inquiries to a common server on RS485
    sounds like a collision-fest. I'm assuming your "server" has the role of
    master if there are multiple "clients".

    As long as everyone is operating, there is no impact no matter how long you
    make the timeout, of course. But if you are polling a bunch of devices and
    some are dead/unpowered, you can waste lots of bandwidth this way. I usually
    assume a few hundred millisecs when all slaves are interrupt driven, baud
    rates in the range of 9.6 - 38k, and their responses don't take too long to
    create. I suspect others use a much smaller timeout, and use higher baud
    rates than I do. But it allows me to power up the network in random order
    and poll all 32 devices in under 10 sec (assuming they are all
    non-responsive when first tried). After they all stabilize, the latency for
    any device is under 1 second, and a single missed message runs the latency
    for all devices out to about 1.25 sec.
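
    As a rough sketch (poll_slave and mark_offline are made-up names), the
    arithmetic works out: 32 slaves times a 300 ms timeout is 9.6 s for a
    cold start where nobody answers:

    /* Sketch only: master polling loop with a fixed per-slave timeout. */
    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_SLAVES       32
    #define REPLY_TIMEOUT_MS 300u   /* "a few hundred millisecs" */

    bool poll_slave(uint8_t addr, uint32_t timeout_ms);  /* true if it replied */
    void mark_offline(uint8_t addr);                     /* assumed */

    void poll_cycle(void)
    {
        for (uint8_t addr = 1; addr <= NUM_SLAVES; addr++) {
            /* each dead/unpowered slave costs one full timeout per cycle,
               which is where the wasted bandwidth goes */
            if (!poll_slave(addr, REPLY_TIMEOUT_MS))
                mark_offline(addr);
        }
    }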

    Occasionally, with small micros, I've had the slaves maintain certain long
    responses "continuously", in background mode, so that when they get a
    request, they can respond quickly and the timeout can remain small. Some
    library routines, like C's sprintf(), can take a very long time in
    real-time or high-speed systems, and many can't be called within ISRs,
    because they are not re-entrant and/or they keep interrupts disabled
    too long.
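
    A minimal sketch of that idea; read_sensors, uart_send, and the
    interrupt enable/disable calls are placeholders for whatever your
    micro provides:

    /* Sketch only: keep a long reply pre-formatted in the main loop so
       the request path never calls sprintf(). */
    #include <stdio.h>
    #include <string.h>

    int  read_sensors(void);                   /* assumed */
    void uart_send(const char *s, size_t n);   /* assumed */
    void disable_interrupts(void);             /* assumed */
    void enable_interrupts(void);              /* assumed */

    static char g_reply[64];     /* always holds a ready-to-send response */

    void background_task(void)   /* called from the main loop, never an ISR */
    {
        char tmp[sizeof g_reply];
        snprintf(tmp, sizeof tmp, "TEMP=%d\r\n", read_sensors());

        disable_interrupts();    /* brief critical section: a copy, not a format */
        strcpy(g_reply, tmp);
        enable_interrupts();
    }

    void on_request(void)        /* fast path: the reply is already built */
    {
        uart_send(g_reply, strlen(g_reply));
    }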

    On the flip side, I'll mention the turn-around delay needs to be considered
    too. Neither end can reply until the previous sender has had time to turn
    off its driver and free the line. If the reply starts too soon, the first
    character in the reply will be garbled at the destination. I usually set
    this time in the 1-5 msec range. Then I turn off the driver in the ISR
    (allow one extra interrupt after last character is sent). If you have to
    rely on some handshake method in your foreground code, or a callback, to
    turn off the driver, you'll probably find that you need longer times, and
    that the disable time has a lot of variation to it.
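
    Roughly like this (all the HAL names are placeholders; the point is to
    key off "shift register empty", not just "transmit buffer empty"):

    /* Sketch only: TX ISR that releases the RS485 driver one interrupt
       after the last character has fully left the shift register. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    void uart_write_byte(uint8_t b);     /* assumed */
    void uart_disable_dre_irq(void);     /* "data register empty", assumed */
    void uart_enable_txc_irq(void);      /* "transmit complete", assumed */
    void uart_disable_txc_irq(void);     /* assumed */
    void rs485_driver_enable(bool on);   /* assumed: drives the DE pin */

    static const uint8_t *tx_buf;
    static size_t tx_len, tx_pos;

    void uart_tx_isr(void)
    {
        if (tx_pos < tx_len) {
            uart_write_byte(tx_buf[tx_pos++]);
            if (tx_pos == tx_len) {
                uart_disable_dre_irq();  /* nothing left to load */
                uart_enable_txc_irq();   /* interrupt again when the last
                                            byte leaves the shift register */
            }
        } else {
            /* the "one extra interrupt": the frame is really on the wire,
               so it's safe to turn off the driver and free the line */
            rs485_driver_enable(false);
            uart_disable_txc_irq();
        }
    }
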

    Hope this helps,
    Steve
     
  5. Rich Grise

    How long do you want to wait? Since you're writing the app, you can have
    it wait as long as you want to. You might want to do some kind of reality
    check: write a really crappy, unresponsive client program and see how
    long it actually takes to respond.

    But, bottom line, it's entirely up to you. Personally I wouldn't want to
    wait more than, say, 30 seconds; a minute max. (a minute is a looooooong
    time when you're sitting there waiting for some remote computer to respond.)

    Good Luck!
    Rich
     
  6. Rene

    What is the longest delay of a slave? You could keep a table of how
    long each command takes to execute, and just wait that long.
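
    For instance (the command codes and times below are made-up examples):

    /* Sketch only: per-command timeout table, as suggested above. */
    #include <stddef.h>
    #include <stdint.h>

    struct cmd_timeout {
        uint8_t  cmd;
        uint16_t timeout_ms;   /* worst-case execution time plus margin */
    };

    static const struct cmd_timeout timeouts[] = {
        { 0x01,  100 },        /* READ_STATUS: near-instant */
        { 0x02,  250 },        /* READ_LOG */
        { 0x10, 8000 },        /* ERASE_FLASH: the one slow command */
    };

    uint16_t timeout_for(uint8_t cmd)
    {
        for (size_t i = 0; i < sizeof timeouts / sizeof timeouts[0]; i++)
            if (timeouts[i].cmd == cmd)
                return timeouts[i].timeout_ms;
        return 1000;           /* sane default for an unknown command */
    }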

    Rene
     