[email protected] wrote:
: DJ Delorie wrote:
:>
[email protected] writes:
:> > It really needs to be the same tool,
:>
:> Or at least *seem* like it's the same tool. Otherwise, I agree.
: On larger designs, memory is already being pushed just to hold the
: lists and objects instantiated. Paging severely cuts into
: performance. When running as a separate application, there is
: substantial page replication introduced for every data page across a
: long list of shared-library instances, plus replication of the
: netlists. Likewise, performance is critically tied to working set;
: having a second application running concurrently with an equally
: large working set will provoke substantial cache thrashing, which
: shows up as memory-latency-induced jerkiness in the UI as the cache
: is flushed out and reloaded between contexts. While these may seem
: like parameters of the application architecture that can be ignored,
: perceived UI performance is heavily dependent on them. Similarly,
: communication between separate applications results in context
: switches, which cause additional cache thrashing by pulling large
: sections of the kernel into the working set. Consider that the
: processor is some 20-100 times faster than L2/L3 cache these days,
: and the cache is frequently another 10-50 times or more faster than
: main memory. Exceeding the cache working set effectively turns the
: machine into a 50MHz processor again.
: There are substantial performance reasons suggesting that it should
: be the same application (a different thread at most), to conserve
: memory resources and improve performance. While these issues may not
: be critical for toy student projects, for many real-life projects,
: which are much larger, they become critical UI problems. The sample
: ProofOfConcept design I sent you is about 1/5 the size of several
: production designs I have done using PCB.
: When the typical desktop CPU comes standard with 10MB or more of L2
: cache, these issues might go away. Last time I checked, that much
: cache was only available on high-end Itanium processors, well outside
: the reach of most mortals in cost (including me, right now).
Interesting points. My comments/questions:
* From your experience, can you quantify how large a design must be
before it begins to hit memory limits when using gEDA/PCB? How many
nets/components? This information would be interesting to the
developers. (Or, if your observations are about general computer
performance rather than gEDA/PCB specifically, perhaps you could make
that clear, so we don't worry unnecessarily about performance
enhancements we might make?)
* As for making schematic capture and layout separate threads of the
same process: they weren't designed together and don't share
datastructures or an API, so integration represents a lot of work.
At the level of interoperability, the schematic capture program and
the layout program work great together. But they're not the same
program, and combining them into one program is not only difficult,
but is not necessarily a good thing. This is basically a FAQ:
http://geda.seul.org/wiki/geda:faq#...grams_and_not_a_single_integrated_application
I'll note again that a board flow commonly found in the Boston area
is ViewDraw -> Allegro. ViewDraw is totally unrelated to Allegro, and
nowadays they are products of competitors. Nonetheless, this flow
works great, since ViewDraw has the ability to netlist to Allegro
quite easily. (Let's hope Mentor and Cadence don't try to break this
link moving forward.) Gschem and PCB have the same relationship: You
can netlist quite handily from gschem to PCB. Moving forward, you
will see backward annotation as well as a feature allowing you to
select a component in PCB and see the symbol light up in gschem.
Therefore, this business about "better integration" is just nonsense.
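For reference, the gschem-to-PCB netlisting step described above is a one-liner. A hedged sketch, assuming the gEDA suite is installed; `board.sch`, `board.net`, and `board.pcb` are placeholder names:

```shell
# Generate a PCB-format netlist from a gschem schematic with gnetlist,
# using the PCB netlist backend:
gnetlist -g PCB -o board.net board.sch

# Many users instead drive the forward-annotation step with gsch2pcb,
# which creates or updates the .pcb file and its element list:
gsch2pcb board.sch
```

Either way, the two programs stay separate and communicate through files, exactly like the ViewDraw -> Allegro flow.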
* As for this business about "toy student projects", my experience is
that a good chunk of boards are of the 6" x 8" 4-6 layer type, both in
academia as well as in the real world. Think test boards, knock-off
boards for the manufacturing floor, quick data acquisition boards,
amplifier modules, medium-sized microcontroller boards, connector
aggregation boards, sensor boards, protocol conversion boards, boards
for test fixtures, peripheral boards, PCI boards, audio boards, ham
radio boards, hobby robot boards, motor control boards, position
sensor boards, etc., etc., etc.
That's the target audience for gEDA/PCB. Of course, some people
have created larger boards than that using the gEDA tools, and bully
for them! But my opinion is this: If you're designing a 15"x24" 20
layer router board with controlled impedance, high-speed,
matched-length differential busses, you should probably go out and buy
one of the fine high-end products from Mentor or Cadence.
* Next, you made this comment:
: When the typical desktop CPU comes standard with 10MB or more of L2
: cache, these issues might go away. Last time I checked, that much
: cache was only available on high-end Itanium processors, well outside
: the reach of most mortals in cost (including me, right now).
My answer to you is: Which do you prefer, shelling out a few
thousand $$$ for a better computer to run a powerful open-source
design suite, or shelling out tens of thousands of $$$ to run a
secret-source design suite (likely requiring a high-end workstation
anyway)?
* Finally, I'll point out that gEDA/PCB is an open-source project, so
people interested in new features are always welcome to submit patches
for incorporation into the code base. We get a large number of people
complaining about one or another imagined misfeature in the gEDA
suite. However, the ratio of code patches to suggestions/complaints
is pitifully small. I sometimes tell the folks with suggestions: A
patch is worth a thousand suggestions.
Stuart