John Larkin
We have a perfect-storm clock problem. A stock 16 MHz crystal
oscillator drives a CPU and two Spartan-3 FPGAs. The chips are arranged
linearly in that order (xo, cpu, Fpga1, Fpga2), spaced about 1.5"
apart. The clock trace is 8 mils wide, mostly on layer 6 of the board,
the bottom layer. We did put footprints for a series RC at the end (at
Fpga2) as terminators, just in case.
Now it gets nasty: for other reasons, the ground plane was moved to
layer 5, so we have about 7 mils of dielectric under the clock
microstrip, which calcs to roughly 60 ohms. Add the chips, a couple of
tiny stubs, and a couple of vias, and we're at 50 ohms, or likely
less.
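For anyone who wants to check the numbers, here's the standard IPC-2141 microstrip approximation. The copper thickness and Er are my assumptions (1 oz copper, garden-variety FR-4), not from the actual stackup:

```python
import math

def microstrip_z0(w_mil, h_mil, t_mil=1.4, er=4.3):
    """IPC-2141 microstrip approximation: Z0 in ohms for trace width w,
    dielectric height h, and copper thickness t (all in mils) over a plane,
    with relative permittivity er. Rough, but fine for sanity checks."""
    return (87.0 / math.sqrt(er + 1.41)) * \
        math.log(5.98 * h_mil / (0.8 * w_mil + t_mil))

# 8 mil trace over 7 mils of dielectric, as on this board
print(round(microstrip_z0(8, 7), 1))  # about 61 ohms, bare trace
```

The chip loading and vias then pull that down toward 50 or below, as described.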
And the crystal oscillator turns out to be both fast and weak. On its
rising edge, it launches a step of about 1.2 volts into the line in
well under 1 ns, and doesn't drive to the Vcc rail until many ns
later. At Fpga1,
the clock has a nasty flat spot on its rising edge, just about halfway
up. And it screws up, of course. The last FPGA, at the termination, is
fine, and the CPU is ancient 99-micron technology or something and
couldn't care less.
Adding termination at Fpga2 helps a little, but Fpga1 still glitches
now and then. If it's not truly double-clocking, then the noise margin
must be zilch during the plateau, and the termination can't help that.
One fix is to replace the xo with something slower, or to kluge a
series inductor (150 nH works) right at the xo output pin, to slow the
rise. Unappealing, as some boards are already in the field; they
tested fine, but we're concerned they may be marginal.
So we want to deglitch the clock edges *in* the FPGAs, so we can just
send the customers an upgrade ROM chip, and not have to kluge any
boards.
Some ideas:
1. Use the DCM to multiply the clock by, say, 8. Run the 16 MHz clock
as data through a dual-rank d-flop resynchronizer, clocked at 128 MHz
maybe, and use the second flop's output as the new clock source. A
Xilinx FAE claims this won't work. As far as we can interpret his
English, the DCM is not a true PLL (OK, then what is it?) and will
propagate the glitches, too. He claims there *is* no solution inside
the chip.
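A toy simulation of idea 1, for what it's worth. Everything here is invented for illustration (the dip model, the timing); the point it shows is that dual-rank resampling at 128 MHz only removes a runt if no sample instant lands inside it, so the cleanup is statistical rather than guaranteed, which may be part of what the FAE was getting at:

```python
def resample(level_at, n=256):
    """Dual-rank sample level_at(phase_ns) at 128 MHz (7.8125 ns steps;
    8 samples per 62.5 ns clock period) and count rising edges of the
    second flop's output."""
    q1 = q2 = 0
    edges = 0
    for i in range(n):
        phase = (i % 8) * 7.8125       # ns into the 16 MHz period
        prev = q2
        q1, q2 = level_at(phase), q1   # two D-flops in series
        if q2 and not prev:
            edges += 1
    return edges

def glitchy_clock(dip_ns):
    """16 MHz square wave with a 2 ns low runt starting dip_ns into
    the high half -- a crude stand-in for the flat-spot glitch."""
    def level(phase):
        if dip_ns <= phase < dip_ns + 2.0:
            return 0
        return 1 if phase < 31.25 else 0
    return level

# Same runt, two placements: missed by the samples vs straddling one.
print(resample(glitchy_clock(3.0)), resample(glitchy_clock(7.0)))
# 32 rising edges (clean) vs 64 (double-clocked) over 32 periods
```

So even ignoring what the DCM itself does with a dirty input, the resynchronizer alone doesn't kill a runt that's comparable to the 7.8 ns sample period.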
2. Run the clock in as a regular logic pin. That drives a delay chain,
a string of buffers, maybe 4 or 5 ns worth; call the input and output
of the string A and B. Next, set up an RS flipflop; set it if A and B
are both high, and clear it if both are low. Drive the new clock net
from that flop. Maybe include a midpoint tap or two in the logic, just
for fun.
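Idea 2 can be sketched behaviorally. The delay, chatter model, and step size below are all made up for illustration; the mechanism is that during the plateau the raw pin A chatters while the delayed copy B is still holding the old level, so the RS flop (set on A·B, clear on /A·/B, hold otherwise) ignores the chatter:

```python
DT = 0.5          # sim step, ns (illustrative)
DELAY = 10        # 5 ns delay chain, in steps
PERIOD = 125      # 62.5 ns period at 0.5 ns/step

def raw_clock(i):
    """16 MHz clock whose rising edge chatters for 2 ns -- a crude
    model of the flat-spot plateau recrossing the input threshold."""
    phase = i % PERIOD
    if phase < 4:                      # 2 ns of chatter on the edge
        return phase % 2
    return 1 if phase < PERIOD // 2 else 0

def deglitch(samples, delay):
    """RS-flop deglitcher: set when input and its delayed copy are
    both 1, clear when both are 0, otherwise hold."""
    out, q = [], 0
    for i, a in enumerate(samples):
        b = samples[i - delay] if i >= delay else 0
        if a and b:
            q = 1
        elif not a and not b:
            q = 0
        out.append(q)
    return out

def edges(bits):
    return sum(1 for x, y in zip(bits, bits[1:]) if y and not x)

raw = [raw_clock(i) for i in range(8 * PERIOD)]
print(edges(raw), edges(deglitch(raw, DELAY)))
# 16 raw rising edges vs 8 clean ones over 8 periods
```

The catch in real hardware is that the buffer-string delay is unconstrained and will wander with temperature, voltage, and place-and-route, so it needs margin on both sides: longer than the plateau, shorter than the half period.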
3. Program the clock logic threshold to be lower. It's not clear to us
if this is possible without changing Vccio on the FPGAs. Marginal at
best.
Any other thoughts/ideas? Has anybody else fixed clock glitches inside
an FPGA?
John