Maker Pro

Why do we have cross-over cables?

Sylvia Else

Or maybe the question should be why do we have cables that are not
crossover.

My first encounter with this concept came when connecting serial ports
together. Turned out there were two kinds - data set, and data terminal.

Not content with that, when UTP cables came out, we had a similar situation.

Maybe I've missed something, but it's always seemed to me that a logical
approach would be to define some pins as input and some as output, and
for cables to connect input pins to output pins, thus obviating the need
for two different ways of wiring up connecting cables.

Did I miss something? Is there a reason this situation persists?

Sylvia.
 
Sylvia Else

VWWall said:
Consider the case when you want to connect two of the same type
together. For example, a normal PC is considered a data terminal. If
you want to connect two together, the transmit pins on one must connect
to the receive pins on the other. Hence the crossover cable, often
called a "null modem" for the serial cable case.

If the transmit and receive pins on a data set (typically a modem) were
transposed in the original design, then the same cables could have been
used in all cases.
The same reasoning applies to CAT5 cables, but in this case there are a
total of four pairs, two transmit and two receive. Not all of them are
always used, but the ones in use must be "crossed over" to connect two
pieces of the same type of equipment. Many routers can sense the type
of cable required and will automatically do the crossover internally if
required.

Ditto.

Sylvia.
 
Richard Henry

If the transmit and receive pins on a data set (typically a modem) were
transposed in the original design, then the same cables could have been
used in all cases.
They were.
 
StickThatInYourPipeAndSmokeIt

If the transmit and receive pins on a data set (typically a modem) were
transposed in the original design, then the same cables could have been
used in all cases.


Serial cables are pin for pin. This makes creating one an easy process
with a low error rate during assembly, which was ALL done by hand at
that time.

Switching the gear was far more reliable than switching cable
conductors around and counting on low-paid assemblers to do so
consistently.

And that's a fact, Jack.
 
StickThatInYourPipeAndSmokeIt

Same reason. MAKING an RJ-11 cable is easier if the crimped-on
connector is wired the same way ALL the time. Especially during field
service scenarios. The error rate in cables is much higher if specific
cross-overs have to be made from end to end. Also, one would need to
examine the completed end to work out what was needed on the other
end. Doing them all the same practically guarantees success. Requiring
cross-overs practically guarantees a much lower prime pass yield in
manufacturing circles, and a higher failure rate in field installations
as well.

And that's a fact, Jack.
 
StickThatInYourPipeAndSmokeIt

I agree, it does make a great deal of sense to set up the interfaces
in a symmetrical fashion whenever possible. I've grown fond of the
Yost method of doing serial-port hookup... there's no differentiation
between DTE (e.g. terminal or PC) and DCE (e.g. modem), and you can
hook either to the other.
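
A quick way to see that symmetry, using the Yost pin order as I remember it (verify against the published spec before crimping anything):

# Yost RJ45 serial layout, pins 1..8 - from memory, so treat it as
# an assumption, not gospel.
YOST = ["RTS", "DTR", "TxD", "GND", "GND", "RxD", "DSR", "CTS"]

# Each signal's complementary partner at the far end.
MATE = {"TxD": "RxD", "RxD": "TxD", "RTS": "CTS", "CTS": "RTS",
        "DTR": "DSR", "DSR": "DTR", "GND": "GND"}

# A "rolled" cable connects pin n on one end to pin 9-n on the other,
# so every output lands on the matching input - no DTE/DCE distinction.
for pin, signal in enumerate(YOST, start=1):
    far_signal = YOST[8 - pin]
    assert MATE[signal] == far_signal
    print(f"pin {pin} {signal} -> pin {9 - pin} {far_signal}")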

10BaseT could also have been designed with this sort of symmetry in
mind, I suppose, if we'd started out using the sort of
highly-intelligent auto-adaptive interfaces that we use today on e.g.
100BaseT.


The problem is that when these interface systems were designed, there
was almost no automated assembly.

Hand assembly means failures, unless error conditions are reduced to a
minimum. Wiring both ends identically meant that fewer errors were made
in the manufacture of said interface devices and systems. Making the
switch at the hardware itself was easy, and 100% repeatable.

Prime pass yield was a huge consideration in labor-intensive hand
assembly procedures, and still is. That is why most interface cables
are pin-for-pin, particularly those that have the same or very similar
connectors on each end.
 
Or maybe the question should be why do we have cables that are not
crossover.

My first encounter with this concept came when connecting serial ports
together. Turned out there were two kinds - data set, and data terminal.

Not content with that, when UTP cables came out, we had a similar situation.

Maybe I've missed something, but it's always seemed to me that a logical
approach would be to define some pins as input and some as output, and
for cables to connect input pins to output pins, thus obviating the need
for two different ways of wiring up connecting cables.

Did I miss something? Is there a reason this situation persists?

The reason this situation persists is that it is written into the
international standards, and a huge installed base of hardware out in
the field conforms to those standards.

I used to know about this when I worked for ITT-Creed in the U.K. back
in 1979-1982, in a group that used to send people to the CCITT
standards committee meetings.

The concept dates back to the Telex and teleprinter networks. The ASR
33 Teletype printer was originally a data set (IIRC - 1982 is the last
time I was seriously involved) produced in huge numbers for the AT&T
network, and its use as a computer terminal was never more than a
minor spin-off.

http://www.iso.org/iso/livelinkgetfile?llNodeId=21523&llVolId=-2000
 
StickThatInYourPipeAndSmokeIt

Also (minor issue) it's a trifle easier to extend a straight-through
cable with another... you just use a second straight-through cable and
a one-to-one butt-heads splicer. You can't do this with two crossover
cables, as you'll end up crossing everything over twice and creating
the equivalent of a straight-through cable... your splicer needs to
include a *third* crossover!
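
Spelling out that double-crossover arithmetic (using 10/100 Ethernet pin numbers, where the swap is pair 1,2 with pair 3,6):

# Two crossovers in series cancel out: composing the pin swap with
# itself gives the identity mapping, i.e. a straight-through path.
CROSS = {1: 3, 2: 6, 3: 1, 6: 2, 4: 4, 5: 5, 7: 7, 8: 8}
composed = {pin: CROSS[CROSS[pin]] for pin in CROSS}
assert composed == {pin: pin for pin in CROSS}  # straight-through again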


Also a very good reason to steer clear of the idea. The crossover can
easily be incorporated into a gender-changer DONGLE as well.

Yes, folks, that term was in use LONG before software security keys used
it.
 
StickThatInYourPipeAndSmokeIt

The reason this situation persists is that it is written into the
international standards, and a huge installed base of hardware out in
the field conforms to those standards.

I used to know about this when I worked for ITT-Creed in the U.K. back
in 1979-1982, in a group that used to send people to the CCITT
standards committee meetings.

The concept dates back to the Telex and teleprinter networks. The ASR
33 Teletype printer was originally a data set (IIRC - 1982 is the last
time I was seriously involved) produced in huge numbers for the AT&T
network, and its use as a computer terminal was never more than a
minor spin-off.

http://www.iso.org/iso/livelinkgetfile?llNodeId=21523&llVolId=-2000


Western Electric ALSO hand-assembled practically everything back then.
They knew about error rates in cable assemblies, and they knew the best
way to reduce them: make the wiring procedure easy to remember.
 
Sylvia Else

StickThatInYourPipeAndSmokeIt said:
Same reason. MAKING an RJ-11 cable is easier if the crimped-on
connector is wired the same way ALL the time. Especially during field
service scenarios. The error rate in cables is much higher if specific
cross-overs have to be made from end to end. Also, one would need to
examine the completed end to work out what was needed on the other
end. Doing them all the same practically guarantees success. Requiring
cross-overs practically guarantees a much lower prime pass yield in
manufacturing circles, and a higher failure rate in field installations
as well.

OK, I can accept the field installation error issue, though the
existence of two different standard ways of wiring the plugs seems at
least partly to defeat the goal of reducing errors.

I'd have thought that in a mass-production environment it would be as
simple as having one person do one end and another person do the other end.

Sylvia.
 
atec 77

Sylvia said:
Or maybe the question should be why do we have cables that are not
crossover.

My first encounter with this concept came when connecting serial ports
together. Turned out there were two kinds - data set, and data terminal.

Not content with that, when UTP cables came out, we had a similar
situation.

Maybe I've missed something, but it's always seemed to me that a logical
approach would be to define some pins as input and some as output, and
for cables to connect input pins to output pins, thus obviating the need
for two different ways of wiring up connecting cables.

Did I miss something? Is there a reason this situation persists?

Sylvia.
TROLL TROLL TROLL
Has to be?
No one could be that fucking stupid.
 
JW

The same reasoning applies to CAT5 cables, but in this case there are a
total of four pairs, two transmit and two receive. Not all of them are
always used, but the ones in use must be "crossed over" to connect two
pieces of the same type of equipment. Many routers can sense the type
of cable required and will automatically do the crossover internally if
required.

Many (most? all?) Intel GB NICs will also cross over automatically.
 
Jasen Betts

If the transmit and receive pins on a data set (typically a modem) were
transposed in the original design, then the same cables could have been
used in all cases.

This would presumably require the DTE to have outputs for carrier and
ring, although those concepts apply only to DCE.


The standard was developed before the invention of the Smartmodem(tm),
which could indicate those two conditions as serial data.


Early modems were, loosely speaking, analogue filters coupled to serial
line drivers; if you were lucky you could pulse-dial by toggling the
DTR line with the correct cadence...
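
For flavor, a sketch of that trick with pyserial. The function name and timings here are mine; roughly 10 pulses per second with a 60/40 ms break/make ratio is the classic loop-disconnect figure, and whether it works depends entirely on the modem treating DTR as the hook switch:

import time
import serial  # pyserial

def pulse_dial(port, number):
    # Pulse-dial by toggling DTR - only works if the far end treats
    # DTR as the hook switch. Classic loop-disconnect timing is about
    # 10 pulses/second: ~60 ms break, ~40 ms make, a pause between digits.
    ser = serial.Serial(port)
    ser.dtr = True               # go off hook
    time.sleep(1.0)
    for digit in number:
        pulses = 10 if digit == "0" else int(digit)
        for _ in range(pulses):
            ser.dtr = False      # break (on hook)
            time.sleep(0.060)
            ser.dtr = True       # make (off hook)
            time.sleep(0.040)
        time.sleep(0.7)          # inter-digit pause
    ser.close()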
 
Sylvia Else

Jasen said:
This would presumably require the DTE to have outputs for carrier and
ring, although those concepts apply only to DCE.

Just means those pins would be tied to ground. This would have required
more than the 9 pins on modern serial ports, but would have been easily
done with the original 25-pin standard.

The standard was developed before the invention of the Smartmodem(tm),
which could indicate those two conditions as serial data.

Early modems were, loosely speaking, analogue filters coupled to serial
line drivers; if you were lucky you could pulse-dial by toggling the
DTR line with the correct cadence...

Did something similar with a telephone in my teens, by pulsing the
handset rest.

Shame no TV suspense movie ever used it - devious villain leaves a phone
with its dial detached so that he can call his imprisoned victims - but
one captive knows better and calls the police despite the absence of a dial.

Sylvia.
 
krw

Rod Speed of old?

How is the "Rod Speed of old" [*] any different than the Rod Speed
of new? He still infests many groups.

[*] Ron Reaugh of old
 
qrk

Or maybe the question should be why do we have cables that are not
crossover.

My first encounter with this concept came when connecting serial ports
together. Turned out there were two kinds - data set, and data terminal.

Not content with that, when UTP cables came out, we had a similar situation.

Maybe I've missed something, but it's always seemed to me that a logical
approach would be to define some pins as input and some as output, and
for cables to connect input pins to output pins, thus obviating the need
for two different ways of wiring up connecting cables.

Did I miss something? Is there a reason this situation persists?

Sylvia.

For modern Ethernet, you don't need cross-over cables. The devices
figure out the pair sorting. All the NICs, routers, and switches I've
come across in the past 3 to 5 years have auto-sorting of the pairs.
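
For reference, the swap those devices automate is small. For 10/100 it is just the two pairs in use (standard T568 pin numbering; gigabit crosses all four pairs):

# The 10/100BASE-T crossover, pin by pin. Auto-MDIX does this swap
# inside the PHY instead of in the cable. (Gigabit uses all four
# pairs and crosses 4,5 with 7,8 as well.)
CROSSOVER = {1: 3, 2: 6, 3: 1, 6: 2, 4: 4, 5: 5, 7: 7, 8: 8}

for pin in sorted(CROSSOVER):
    status = "swapped" if CROSSOVER[pin] != pin else "straight"
    print(f"pin {pin} -> pin {CROSSOVER[pin]}  ({status})")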
 
Franc Zabkar

Or maybe the question should be why do we have cables that are not
crossover.

My first encounter with this concept came when connecting serial ports
together. Turned out there were two kinds - data set, and data terminal.

Not content with that, when UTP cables came out, we had a similar situation.

Maybe I've missed something, but it's always seemed to me that a logical
approach would be to define some pins as input and some as output, and
for cables to connect input pins to output pins, thus obviating the need
for two different ways of wiring up connecting cables.

Did I miss something? Is there a reason this situation persists?

Sylvia.

An even more logical approach is used by USB OTG ("On The Go"). A
device can be either a host (power provider) or a peripheral (power
consumer) depending on the status of a fifth pin. After power-up, both
devices can negotiate to swap functions. The USB data interface is
bidirectional.

See http://en.wikipedia.org/wiki/USB_On-The-Go
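
A toy model of that role selection (the names and structure here are illustrative, not from the spec):

from enum import Enum

class Role(Enum):
    HOST = "A-device: supplies power, starts as host"
    PERIPHERAL = "B-device: draws power, starts as peripheral"

def initial_role(id_pin_grounded):
    # The plug that grounds the ID pin makes its end the A-device;
    # a floating ID pin means B-device. Host Negotiation Protocol
    # can swap the roles later without re-cabling.
    return Role.HOST if id_pin_grounded else Role.PERIPHERAL

print(initial_role(True).value)    # the end whose ID pin is grounded
print(initial_role(False).value)   # the end whose ID pin floats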

- Franc Zabkar
 
krw

For modern Ethernet, you don't need cross-over cables. The devices
figure out the pair sorting. All the NICs, routers, and switches I've
come across in the past 3 to 5 years have auto-sorting of the pairs.

Don't count on it. We found a bunch of 'em that don't work as
advertised. :-(
 