Electrical gigabit transmission?

Discussion in 'CAD' started by Michael Weiss, Jan 30, 2007.

  1. Hi all,

    I wonder what is currently the state of the art in serial high-speed
    transmission and what the prevailing data rates are. I know about some
    SerDes in the gigabit-per-second range, but I cannot tell whether 10 Gbps
    is really the challenge for the methods applied, or whether it's 1 Gbps
    (or something in between)...?
    I recently heard about 60 GHz in the mobile communication sector and about
    10 Gbit Ethernet, but as far as I know there are multi-level modulation
    methods (QAM, for example) that can provide 10 Gbit/s of throughput at a
    symbol rate of only some Mbaud (is that correct?).
    I'm not so interested in those higher modulation methods (nor in
    optical transmission) but in baseband communication where bitrate =
    clockrate, i.e. the line rate. What can be efficiently transmitted today
    electrically (over wire or PCB)? What is the prevailing technology of those
    circuits: is it CMOS, or are there alternatives?
    I am a senior electrical engineer and unfortunately did not manage to keep
    up to date. After googling all night I'm really depressed, because in the
    end I couldn't find an unambiguous answer.
    Maybe some people in the silicon business, or practitioners, know the
    answer and are willing to share their knowledge with me?

    Best regards
    Geronimo
     
  2. |>
    |> I wonder what is currently the state of the art in serial high-speed
    |> transmission and what the prevailing data rates are. I know about some
    |> SerDes in the gigabit-per-second range, but I cannot tell whether 10 Gbps
    |> is really the challenge for the methods applied, or whether it's 1 Gbps
    |> (or something in between)...?

    Oh, it's a challenge, all right. I went to a very interesting talk on
    it, and heard about the issues. The worst problem seems to be cross-talk,
    but losses are pretty bad, too. It's feasible, for short distances, but
    is a lot harder than 1 Gbps. One of the reasons that 60 Gbps is being
    touted is that some people are doubtful about being able to get to 100
    Gbps in a realistic timescale for a feasible cost.

    |> I am a senior electrical engineer and unfortunately did not manage to keep
    |> up to date. After googling all night I'm really depressed, because in the
    |> end I couldn't find an unambiguous answer.

    Unfortunately, I am not, so I can merely tell you the above; there is
    little point in me trying to go into details of what I remember, as I
    will probably get them wrong.

    What I am certain of is that an optoelectronic breakthrough (and there
    are several possibilities) would kill medium distance, high speed
    electrical transmission dead - almost overnight. As 'they' have spent
    a couple of decades putting serious money into optoelectronic research,
    I am not holding my breath. But, as with flat screens, it could happen
    at any time.

    Unfortunately, none of that gets you a lot further :)


    Regards,
    Nick Maclaren.
     
  3. Del Cecchi

    I'll go along with the crosspost this time....

    You are talking about what is called "NRZ" or "non-return-to-zero", and
    the state of the art for commercial products is in the 10-12 Gbit/second
    range for copper wires on backplanes or short cables. These
    serializer/deserializer (serdes) products are usually done in CMOS.

    QAM and other modulation schemes have been proposed but never really
    caught on. Likewise, advanced coding schemes like trellis or viterbi
    coding and forward error correction such as are used in long haul
    optical and in disk drives haven't caught on in the copper world. QAM
    only halves the baud or symbol rate compared to the data rate by encoding
    2 bits per baud.
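
    To put numbers on that baud-rate bookkeeping: an M-point constellation
    carries log2(M) bits per symbol, so the symbol rate is just the bit rate
    divided by log2(M). A rough sketch in Python:

        import math

        def symbol_rate(bit_rate_bps, constellation_points):
            """Baud rate needed to carry bit_rate_bps when each symbol
            carries log2(M) bits for an M-point constellation."""
            return bit_rate_bps / math.log2(constellation_points)

        print(symbol_rate(10e9, 4) / 1e9)    # QAM-4: 5.0 GBd for a 10 Gb/s stream
        print(symbol_rate(10e9, 256) / 1e9)  # even QAM-256 still needs 1.25 GBd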

    People use CMOS because it is the cheapest, although some of the chips
    involved with optics are made with more exotic materials.

    del cecchi
     
  4. The fastest signaling over copper that I'm (being a software guy, and not
    involved in bleeding edge hardware development) aware of (in production) is
    3Gig SAS/SATA cables. I'm not sure what the "baud" of the protocol is.

    Perhaps Infiniband is faster?

    - Tim
     
  5. PeteS

    Well, one of the architects of InfiniBand posted right above you ;)

    The 1.2 spec has details for 2.5, 5 and 10Gb/s signaling per pair,
    although as I recall from the discussions we had 10Gb/s was not easily
    realisable on 'ordinary' materials at the time the 1.2 spec was being
    written.

    Cheers

    PeteS
     
  6. PeteS

    Optics are expensive compared to copper - very expensive. I designed a
    4x InfiniBand optical interface board some 3 years ago using POP4
    transceivers and although it worked great, it was too expensive for any
    sort of large installation.

    Cheers

    PeteS
     
  7. 10 Gb/sec is commonplace (we're close to every PC having a 10 G port). 40
    Gb/sec is available (Cisco sells 40 G line cards today). 40 G exists
    because it was mostly developed during the bubble. Development has leveled
    off since then...

    The main disadvantages of these high speed serial and optical interfaces
    are heat and the size of the optical modules. They use much more power
    than the equivalent-bandwidth parallel interface.

    There are challenges at every level for these interfaces, but here's one
    example: at 10 G, the packet rate for packet-over-SONET is 25 M packets /
    sec. This means you need to make a routing decision at this rate, and that
    you need random access from your buffer at this rate. So, for example,
    RLDRAM can do 50 M random accesses / sec, which supports one 10 G interface
    (25 M for the write side, and 25 M for the read side). The raw bandwidth is
    an easier problem because you can always do muxing (either wavelength
    division muxing or electrical SONET-level muxing). The disadvantage of
    muxing is that you then cannot support a single flow greater than any one
    input to your mux.
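
    Those figures are easy to reproduce; the ~50-byte worst-case POS packet
    size below is an assumption, chosen because it is roughly a minimum IP
    datagram plus HDLC/POS overhead and it reproduces the 25 Mpps number:

        def packet_rate(line_rate_bps, packet_bytes):
            """Worst-case packets per second at line rate for a given packet size."""
            return line_rate_bps / (packet_bytes * 8)

        MIN_POS_PACKET = 50  # bytes (assumed: ~40-byte IP datagram + HDLC/POS overhead)

        pps_10g = packet_rate(10e9, MIN_POS_PACKET)  # ~25e6 packets/sec
        pps_40g = packet_rate(40e9, MIN_POS_PACKET)  # ~100e6 packets/sec

        # one buffer write plus one read per packet:
        print(2 * pps_10g)  # ~50e6 random accesses/sec, i.e. one RLDRAM per 10 G port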

    It does not help that the internet protocols (for example HDLC) were
    designed for a word size of one byte (which is better than the previous
    standards of one bit, but a word size of 64 bits would be much easier).

    Now at 40 G, the packet rate is 100 M / sec for POS... you can see where
    this is going :)
     
  8. |> In article <45bf1bb9$0$18833$-online.net>,
    |>
    |> >I wonder what is currently the state of the art in serial high-speed
    |> >transmission and what the prevailing data rates are. I know about some
    |> >SerDes in the gigabit-per-second range, but I cannot tell whether 10 Gbps
    |> >is really the challenge for the methods applied, or whether it's 1 Gbps
    |> >(or something in between)...?
    |>
    |> 10 Gb/sec is commonplace (we're close to every PC having a 10 G port). ...

    However, that doesn't help without affordable, reliable and usable cables
    and connectors - and they are the problem.


    Regards,
    Nick Maclaren.
     
  9.  
  10. Joel Kolstad

    I'd wager there's a better chance that optical transceivers will become dirt
    cheap before flexible waveguides do.
     
  11. Rick Jones

    [trimmed the followups a bit...]
    Unless you can get at least de facto agreement on a larger MTU, the
    whole thing is moot for the end systems. Unless the 100G NIC
    can take advantage of a score of cores (or more), one isn't going to
    get anywhere near 100G speeds anyway... And even then, the small
    size of most traffic (not all, of course) makes even a de facto
    larger MTU moot for anything other than netperf TCP_STREAM, FTP and
    some other bulk-transfer stuff.
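
    To put numbers on the MTU point (standard Ethernet framing overheads
    assumed: 18 bytes of header+FCS, 8 bytes of preamble and a 12-byte
    inter-frame gap):

        def ethernet_pps(line_rate_bps, payload_bytes):
            """Frames per second at line rate, including preamble, IFG and header+FCS."""
            wire_bytes = max(payload_bytes + 18, 64) + 20
            return line_rate_bps / (wire_bytes * 8)

        print(ethernet_pps(100e9, 46))    # ~148.8e6 pps for minimum-size frames
        print(ethernet_pps(100e9, 1500))  # ~8.1e6 pps at the standard 1500-byte MTU
        print(ethernet_pps(100e9, 9000))  # ~1.4e6 pps with a 9000-byte jumbo MTU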

    rick jones
     
  12. Wes Felter

    Don't forget parallel copper. The cheapest version of 10GigE is CX4 and
    will probably remain so. 100GigE could be 12 lanes of 10GHz over
    copper, although people might not put up with the huge connectors.
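
    For reference, the lane arithmetic works the same way CX4/XAUI does today:
    lanes times per-lane baud rate times coding efficiency. The 12-lane 100GigE
    line below just takes the speculation above at face value, with an assumed
    10.3125 GBd per lane and 8b/10b-style coding:

        def aggregate_rate(lanes, lane_baud_hz, coding_efficiency):
            """Usable bit rate of a multi-lane link."""
            return lanes * lane_baud_hz * coding_efficiency

        # 10GBASE-CX4 / XAUI: 4 lanes at 3.125 GBd with 8b/10b coding
        print(aggregate_rate(4, 3.125e9, 8 / 10) / 1e9)     # 10.0 Gb/s

        # hypothetical 12-lane 100GigE (assumed 10.3125 GBd/lane, 8b/10b-style coding)
        print(aggregate_rate(12, 10.3125e9, 8 / 10) / 1e9)  # ~99 Gb/s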
     
  13. Joel Kolstad

    Many of them would, I imagine... that's ~25 pins, right? -- which even in a
    high-density D-sub is "game port" (traditional DB-15) sized, and denser
    connectors (such as the newer parallel printer port connector) are readily
    available.

    I'm sure I'm not the only one who remembers some of the truly enormous SCSI
    connectors in years past.

    Or 60 pin IDCs!
     
  14. |> I'm sure I'm not the only one who remembers some of the truly enormous
    |> SCSI connectors in years past.

    SCSI connectors "truly enormous"? Surely not! Take a Massbus connector instead!

    Jan
     
  15. jasen

    a flexible waveguide of the size required is little more than coaxial cable
    without the inner conductor.

    OTOH monomode fibre-optic cable is a waveguide too,...
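
    The size claim is easy to check: the dominant TE11 mode of a hollow
    circular guide cuts off at f_c = 1.8412*c/(pi*d), so a coax-sized bore
    already passes millimetre-wave signals. A sketch (the RG-6-like 4.6 mm
    bore is just an assumed example):

        import math

        C = 3.0e8  # speed of light, m/s

        def te11_cutoff_hz(diameter_m):
            """Cutoff of the dominant TE11 mode in a circular waveguide (k_c * a = 1.8412)."""
            return 1.8412 * C / (math.pi * diameter_m)

        # a hollow tube about the bore of RG-6 coax (assumed ~4.6 mm):
        print(te11_cutoff_hz(4.6e-3) / 1e9)  # ~38 GHz cutoff: mm-wave fits in a coax-sized tube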

    Bye.
    Jasen
     
  16. From the RF design point of view, one should remember that the power
    is no longer carried inside the conductors, but instead propagates
    as a field between the conductor and the ground plane (or between
    conductors in a balanced system). Thus the dielectric losses of the
    PCB or coaxial-cable insulation materials become important, so
    ordinary glass-fiber boards and PE-insulated cables may be
    inappropriate at higher frequencies, and more expensive materials may
    have to be used.
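
    As a rough way to quantify that: the dielectric contribution to
    stripline loss is about 8.686*pi*f*sqrt(eps_r)*tan_delta/c in dB per
    metre, i.e. it grows linearly with both frequency and loss tangent.
    A sketch with ballpark (assumed) material constants, dielectric loss
    only, conductor loss not included:

        import math

        C = 3.0e8  # speed of light, m/s

        def dielectric_loss_db_per_m(freq_hz, eps_r, tan_delta):
            """Approximate TEM/stripline dielectric loss (conductor loss excluded)."""
            return 8.686 * math.pi * freq_hz * math.sqrt(eps_r) * tan_delta / C

        # ballpark material constants, assumed for illustration, evaluated at 5 GHz:
        for name, eps_r, tan_d in [("FR-4", 4.3, 0.020), ("low-loss laminate", 3.5, 0.004)]:
            loss = dielectric_loss_db_per_m(5e9, eps_r, tan_d)
            print(name, round(loss, 1), "dB/m =", round(loss * 0.0254, 2), "dB/inch")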

    Paul
     
  17. |> |>
    |> > Don't forget parallel copper. The cheapest version of 10GigE is CX4 and will
    |> > probably remain so. 100GigE could be 12 lanes of 10GHz over copper, although
    |> > people might not put up with the huge connectors.
    |>
    |> Many of them would, I imagine... that's ~25 pins, right? -- which even in a
    |> high-density D-sub is "game port" (traditional DB-15) sized, and denser
    |> connectors (such as the newer parallel printer port connector) are readily
    |> available.
    |>
    |> I'm sure I'm not the only one who remembers some of the truly enormous SCSI
    |> connectors in years past.
    |>
    |> Or 60 pin IDCs!

    You ain't seen nothing yet, folks!


    Regards,
    Nick Maclaren.
     
  18. Joel Kolstad

    Hmm... (Googles for a Massbus connector image, finds it)... yeah, you're
    right, that is worse!
     
  19. PE is still a pretty good dielectric in the GHz range, but FR-4
    substrate starts hurting by the time you get to a GHz (Del Cecchi made
    similar rude noises about FR-4 as well - wonder what he would think of
    PVC...). The microwave guys have had a lot of experience with PCB
    performance in the GHz range - dielectric loss will have a much worse
    effect on a stripline filter than it will on a digital signal.

    - Erik
     
  20. Rob Warnock

    +---------------
    | QAM and other modulation schemes have been proposed but never really
    | caught on. ... QAM only halves the baud or symbol rate compared to
    | the data rate by encoding 2 bits per baud.
    +---------------

    True, QAM is seldom used in baseband copper, though note that GbE
    uses PAM-5 (2 b/Baud + some slight coding). Where QAM *has* been
    really pushed is in CATV RF nets, where QAM-64 (6 b/Baud) and
    QAM-256 (8 b/Baud) are fairly common.
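
    The 1000BASE-T case makes a nice worked example of that bookkeeping:
    four pairs at 125 MBd, each symbol carrying 2 information bits (PAM-5's
    extra level goes to coding), gives the 1 Gb/s aggregate, while a CATV
    QAM channel carries log2(M) bits per symbol:

        import math

        def line_rate(pairs, baud_per_pair, info_bits_per_symbol):
            """Aggregate bit rate of a multi-pair baseband link."""
            return pairs * baud_per_pair * info_bits_per_symbol

        # 1000BASE-T: 4 pairs x 125 MBd x 2 information bits per PAM-5 symbol
        print(line_rate(4, 125e6, 2) / 1e9)   # 1.0 Gb/s

        print(math.log2(64), math.log2(256))  # QAM-64: 6 b/Baud, QAM-256: 8 b/Baud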

    Of course, now that the RF nets (including RF-over-fiber) are being
    displaced by digital fiber... ;-}


    -Rob
     