Maker Pro

My Vintage Dream PC

jmfbahciv
John said:
Why? Why not run the ultimate kernel on one dedicated processor?
Processors are getting cheaper every day.


The nanokernel could run on one CPU, and needn't be multithreaded,
because it only does one thing: manage the rest of the CPUs. Device
drivers and file systems certainly should not be part of the kernel...
should not even run on the same processor. Running lots of flaky
stuff in supervisor space creates the messes we have in Windows and
Linux and other "big kernel" OSes, where any one of hundreds of modules
and drivers and GUI interfaces can take down the entire system or
compromise its security.

Sure, multithread things that need it, like stacks and GUIs. But never
allow those things to crash the core OS.

Simple: use one CPU out of 256 as the ultimate manager, and waste as
many of its cycles as you like. CPUs will soon be free.
You have obviously never been intimate with a timesharing OS at the
machine level.

/BAH
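
A minimal C sketch of the dedicated-manager-core arrangement John describes: one core runs nothing but a loop that hands work to the others through shared-memory mailboxes. The mailbox layout, message types, and core count are illustrative assumptions, not any real OS's interface.

/* Minimal sketch of the "one manager CPU" idea: a single core runs only
 * this loop, handing work to the other cores through shared-memory
 * mailboxes.  All names (mailbox_t, REQ_*, NCORES) are illustrative
 * assumptions, not any real OS's API. */
#include <stdio.h>
#include <string.h>

#define NCORES 256

typedef enum { REQ_NONE, REQ_RUN_TASK, REQ_CORE_IDLE } req_t;

typedef struct {
    req_t request;        /* what the worker core is asking for        */
    int   task_id;        /* task the manager assigns in its reply     */
} mailbox_t;

static mailbox_t mbox[NCORES];   /* one mailbox per worker core */
static int next_task = 1;

/* The manager core does one thing: scan the mailboxes and assign work.
 * It never runs drivers, file systems, or user code itself. */
static void manager_loop(int iterations)
{
    while (iterations-- > 0) {
        for (int core = 1; core < NCORES; core++) {
            if (mbox[core].request == REQ_CORE_IDLE) {
                mbox[core].task_id = next_task++;   /* hand out a task    */
                mbox[core].request = REQ_RUN_TASK;  /* worker picks it up */
                printf("core %d -> task %d\n", core, mbox[core].task_id);
            }
        }
    }
}

int main(void)
{
    memset(mbox, 0, sizeof mbox);
    mbox[1].request = REQ_CORE_IDLE;   /* pretend core 1 reported idle */
    mbox[7].request = REQ_CORE_IDLE;   /* ...and core 7                */
    manager_loop(1);
    return 0;
}

Even in this toy form, /BAH's objection is visible: every worker's progress depends on how quickly that one loop can get around to its mailbox.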
 
jmfbahciv
FatBytestard said:
Nice unsubstantiated, peanut gallery mentality comment.

WHAT have you heard?

Mine runs fine. Vista has run fine for over three years, and W7 has
been running fine for several months now. You naysayer retards are
idiots.

I love how folks who have ZERO actual experience with things
expound on them like they actually know what is going on.

You do not.

<snort> You don't know what you're talking about.

/BAH
 
jmfbahciv
FatBytestard said:
Likely asking you if you know what a modem is since it came right after
your cryptic description.

Since you used the term (modem) in your reply, it raises even more
questions. You really are a doofus.

Sounds like I was moving bits before your Daddy's sperm started
swimming uphill.

/BAH
 
jmfbahciv
TheQuickBrownFox said:
But memory caches, buffers, etc. HAVE changed, and your analysis (and
training) is about three decades OLD, minimum.

Speed scales over time. The number of transistors that can be integrated
into a given die area scales over time.

We all already know that. Your reply is meaningless.

Did you notice that I used the word "proportion"? Do you know what
that means?
The paradigm by which we utilize the hardware can and has changed, and
will continue to change. You claiming it is all the same is a sad
hilarity.

Your mindset is what has stagnated.

Do you even know what current mode logic is, for example?

I don't have to know.

/BAH
 
jmfbahciv
TheQuickBrownFox said:
Exactly. "multiple processes" on a single CPU is only one thread, in
the final analysis, even if it has little execution-ordering functions,
etc., helping things out.

Oh, good grief. You are confused. It sounds like you've changed the
meanings of all those terms.

/BAH
 
jmfbahciv
Lawrence said:
You seem to be deluded by the belief that BAH will listen to reality.
Only those systems that were designed in the 60s to run on hardware of
the 60s are acceptable to her.

Sigh! There will have to be communication between cores. How that
gets implemented will determine the security, reliability,
and consistency of performance. Morten is using QNX today. How
this will be changed to run within a mega-core complex system
is the subject. Morten has talked about this in another post.
This concept has the monitor (your nanokernel) not knowing about
any hardware which implies that the I/O will be done using network
protocol. Now think about that.

/BAH
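
A rough C sketch of what "I/O done using network protocol" costs per request: the user's bytes get wrapped in a message and shipped to whichever core owns the device, and the reply pays the same framing cost on the way back. The io_msg_t layout and field sizes are illustrative assumptions only.

/* Sketch of the point above: if the monitor knows no hardware, every I/O
 * request from a worker core has to be wrapped in a protocol message and
 * shipped to whichever core owns the device.  The io_msg_t layout and the
 * header fields are illustrative assumptions chosen only to show the
 * per-request overhead. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

typedef struct {
    uint32_t src_core;      /* requesting core                 */
    uint32_t dst_core;      /* core that owns the disk driver  */
    uint32_t opcode;        /* e.g. "write these bytes"        */
    uint32_t payload_len;   /* bytes of user data that follow  */
    uint8_t  payload[512];
} io_msg_t;

int main(void)
{
    const char user_data[] = "one sector of user bits";
    io_msg_t msg = { .src_core = 42, .dst_core = 1,
                     .opcode = 3, .payload_len = sizeof user_data };
    memcpy(msg.payload, user_data, sizeof user_data);

    /* Every request pays the framing cost, and the reply pays it again
     * on the way back -- that round trip is the "comm protocol" tax. */
    printf("payload: %zu bytes, message on the wire: %zu bytes\n",
           sizeof user_data, sizeof msg);
    return 0;
}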
 
jmfbahciv
John said:
John said:
Ahem A Rivet's Shot wrote:
On Tue, 26 May 2009 09:13:45 -0700

If
every task in turn has its own CPU, there would be no context
switching in the entire system, so there's even less for the
supervisor CPU to do.
Wheee it's MP/M all over again.

He's reinventing what didn't work well at all.
Times have changed, guys.
Not really. The proportions of speed seem to remain the same.
When CPU chips have 64 cores each running 4
threads, scattering bits of the OS and bits of applications all over
the place dynamically, and virtualizing a dozen OS copies on top of
that mess... is that going to make things more reliable?
First of all, the virtual OSes will merely be apps and be treated
that way. Why in the world would you equate 4 threads/core?
Furthermore,
it is extremely insecure to insist that the computer system have
a single point of failure which includes the entire running
monitor.

Fewer points of failure must be better than many points.
You need to think some more. If the single point of failure is
the monitor, you have no security at all.
Few security
vuls must be better than many, many. The "entire running monitor"
could be tiny, and could run on a fully protected, conservatively
designed and clocked CPU core.
It isn't going to be tiny. You are now thinking about the size of the
code. You have to include its database and interrupt system.
The scheduler and memory handler alone will be huge to handle I/O.
Its MTBF, hardware and software, could
be a million hours.

That doesn't matter at all if the monitor is responsible for the
world power grid.


Hardware basically doesn't break any more; software does.
That is a very bad assumption. You need soft failovers.
Hardware can't take water or falling into a fault caused
by an earthquake or a bomb or a United Nations quarantine
[can't think of the word where a nation or group are
declared undesirable].
The virtualizing trend is a way to have a single, relatively simple
kernel manage multiple unreliable OSes, and kill/restart them as they
fail. So why not cut out the middlemen?
Honey, you still need an OS to deal with the virtuals. Virtuals are
applications.

/BAH

Yes. Applications are million-line chunks written by applications
programmers who will make mistakes. Their stuff will occasionally
crash. And the people who write device drivers and file systems and
comm stacks, while presumably better, make mistakes too. So get all
that stuff out of the OS space. Hell, get it out of the OS CPU.

You did not read what I wrote. Those virtual OS spaces you were talking
about are applications w.r.t. the monitor which is running.
How can an OS ever be reliable when twelve zillion Chinese video card
manufacturers are hacking device drivers that run in OS space?

You seem to be confusing OSes with monitors.
The top-level OS should be small, simple, absolutely in charge of the
entire system, totally protected, and never crash.

Why not?
If it's absolutely in charge of the entire system, then it has to
be able to access all of the hardware, including the other cores. This
implies that some kind of comm protocol and/or pathway has to go
from all those other cores to the master core. This sucks w.r.t.
performance. The system will run only as fast as the master core
can send/receive bits. All other cores will constantly be in
"master-core I/O wait".

/BAH
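
A small C sketch of the "master-core I/O wait" described above, with worker threads standing in for cores and one mutex-protected channel standing in for the master core; every request serializes behind that single point. The names and counts are illustrative assumptions. Build with -pthread.

/* Sketch of the bottleneck: if one master core must see every request,
 * all other cores serialize behind it.  Worker threads stand in for
 * worker cores, and a single mutex-protected counter stands in for the
 * master core's request channel. */
#include <pthread.h>
#include <stdio.h>

#define WORKERS  8
#define REQUESTS 100000

static pthread_mutex_t master_channel = PTHREAD_MUTEX_INITIALIZER;
static long requests_handled = 0;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < REQUESTS; i++) {
        /* Every request funnels through the one channel the master owns;
         * while we hold it, every other "core" is in master-core wait. */
        pthread_mutex_lock(&master_channel);
        requests_handled++;
        pthread_mutex_unlock(&master_channel);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[WORKERS];
    for (int i = 0; i < WORKERS; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < WORKERS; i++)
        pthread_join(t[i], NULL);
    printf("master core handled %ld requests, one at a time\n",
           requests_handled);
    return 0;
}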
 
jmfbahciv
Charles said:
jmfbahciv said:
JosephKK said:
[snip...] [snip...] [snip...]

Inside the OS there are usually a switcher, a scheduler, a process
manager, a file manager, an IO manager, and other basic parts.
Optional parts relate to hardware configuration and possibly dynamic
hardware configuration and temporary file systems.

Now how do UUOs and CALLIs relate to the above-mentioned
interfaces? (If at all)

My user mode code has some buffers I want to be written to the
disk. I do a series of UUOs and CALLIs to convey to the
monitor's file system handler, memory handler, device routines,
and controller device routines that I want my bits to be copied
from here to there and labelled XYZ. I ask the monitor what
date/time it thinks it is by doing a CALLI. I set up certain
rules about subsequent usage of the computer system by telling
the monitor through CALLIs and UUOs. These are the only
ways the monitor and I, the user, communicate.

How is that for a start?

/BAH

Translating the DEC-isms, UUOs and CALLIs are what are more generically
known as "system calls". IBM "big iron" would call them "Supervisor
Calls", but then that's IBM.

These calls provide system-wide information (like TIME and DATE), and
protect the OS and other users by *preventing* the normal user from
directly programming dangerous and potentially system-wide damaging code.
These calls also provide the communication mechanisms that allow a
user mode program or human to tell the monitor what to do, and the
monitor has to obey. Our UUO and CALLI implementations enforced
an etiquette and were strict about how the user could ask for these
services.

/BAH
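
For readers who have never met a UUO or CALLI, the same two interactions described above (copy my buffer to a file, tell me what date/time it is) look like this as modern POSIX system calls; on TOPS-10 each would be a monitor call instead. The file name and buffer contents are just placeholders.

/* The user-mode side of "write my buffer to a file named XYZ" and "what
 * time does the monitor think it is", expressed as POSIX system calls.
 * On TOPS-10 each of these would be a UUO or CALLI into the monitor. */
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    const char buffer[] = "bits I want copied from here to there\n";

    /* Ask the monitor (kernel) to create a file and copy my bytes into
     * it.  I never touch the disk controller myself. */
    int fd = open("XYZ", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (write(fd, buffer, sizeof buffer - 1) < 0) { perror("write"); return 1; }
    close(fd);

    /* Ask the monitor what date/time it thinks it is. */
    time_t now = time(NULL);
    printf("monitor says it is %s", ctime(&now));
    return 0;
}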
 
jmfbahciv
Roland said:
Indeed, Windows 7 (of which you can download the final beta and run it for
free for the next year or so) is widely held to be, as advertised, the most
secure Microsoft operating system ever.

But Microsoft is going to fix that with their next OS.
Just remember that damnation with faint praise is still damnation.

Well, there are degrees of "most secure". ;-)

/BAH
 
jmfbahciv
JosephKK said:
I see your point. Get some crap going just well enough to be useful
and pretend it is the second coming.

No. You don't see my point. Distribution has a completely different
set of problems that need to be solved. I worked long and hard
to try to solve the simplest of them (this implies that the simplest
were extremely complicated). When your business is distribution,
then the tradeoffs of all design decisions will be made in favor
of distribution and not anything else.

On-line distribution and support means that backdoors have to be
wide open.

<snip snot>

/BAH
 
jmfbahciv
JosephKK said:
Not so much forgotten as (inappropriately) devalued, and thus
untaught.

Not at all. People learn how to deal with the stuff they use.
When the only computer systems available are single-user/owner,
everybody learns how to think in small-computer terms. It is
very rare to be able to break out of that kind of thinking once
it's been burnt in. Very few bit gods have been able to do this
mindset change.
I must agree, I found out the hard way. Dealing more deeply with IBM
MVS in the early 1990s taught me a lot. Not even a misbehaving
"system" program could bring the system down. It got trapped,
blocked, and killed, with rather thorough diagnostic logs available. I
know, I used them a few times.

Using IBM systems means that you were exposed to a mindset that was
based on handling huge amounts of data processing, not huge numbers
of users demanding instant gratification. Both require different
tradeoffs when developing the monitor and supporting software.

/BAH
 
jmfbahciv
JosephKK said:
Again I must agree, a non-reentrant kernel is a time bomb.

A very slow time bomb. It will tick once a minute, maximum. ;-)

/BAH
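
A contrived C illustration of why a non-reentrant routine is a time bomb: it keeps its state in one static buffer, so a second entry (say, from an interrupt) silently clobbers the first caller's result. The routine here is invented purely for illustration.

/* A non-reentrant routine keeps its state in one static buffer, so a
 * second entry overwrites the first caller's data. */
#include <stdio.h>
#include <string.h>

static char scratch[32];          /* the one shared buffer -- the bomb */

static const char *format_name(const char *who)
{
    snprintf(scratch, sizeof scratch, "[%s]", who);
    return scratch;               /* every caller gets the same buffer */
}

int main(void)
{
    const char *first = format_name("disk driver");
    /* An "interrupt" arrives and re-enters the same routine... */
    const char *second = format_name("tty driver");

    /* ...and the first caller's result has been clobbered. */
    printf("first:  %s\n", first);    /* prints [tty driver], not [disk driver] */
    printf("second: %s\n", second);
    return 0;
}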
 
jmfbahciv
JosephKK said:
You clearly do not have a clue as to what you are talking about.
Please leave or self-destruct.
I don't want these people to leave. The only way for old knowledge
to get relearned is to teach the newbies.

/BAH
 
jmfbahciv
JosephKK said:
Actually helpful to me, can't speak for anyone else.

Oh, good!
Does the user directly specify the UUOs and CALLIs when using say a
text editing program?

Probably not. However, TOPS-10 tended to supply commands that would
match most of these.
This is programmer territory, isn't it? This
is OK, it establishes context.

For instance, you can write code to do a flavor of CALLI we called
TTCALLs which would set the speed or characteristics of your terminal.
There were also commands that could do the same thing (SET TTY args).

So, if I were editing and needed to change the TTY setting, I could
^C out of the edit, do a SET TTY foo, type CONTINUE and resume
editing.
Now UUO and CALLI seem to be acronyms or abbreviations.
Yes.

An expansion
of each in this modified context seems to be really to the point.
Please provide these. Perhaps even discuss an example or three of
each. Posting links is quite acceptable, as i expect the explanations
to be more than a few paragraphs.

<grin> The explanations were two notebooks' worth of 70x80 lines of
documentation.

The most concise of all of these are in a file called UUOSYM.MAC.
Try to scan through that file and then see if that satisfies
your request.

For TOPS-20, I think the file MONSYM.MAC will describe the -20's
definitions.

/BAH
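
As a very loose modern analogue of the TTCALL/SET TTY example above (not a reconstruction of the TOPS-10 mechanism), here is how a program asks the system to change a characteristic of its own terminal using POSIX termios, in this case turning echo off and back on.

/* Ask the system to change a terminal characteristic: turn echo off,
 * wait a moment, then restore the saved settings. */
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    struct termios saved, raw;

    if (tcgetattr(STDIN_FILENO, &saved) != 0) {  /* read current settings */
        perror("tcgetattr");
        return 1;
    }

    raw = saved;
    raw.c_lflag &= ~(tcflag_t)ECHO;              /* turn echo off */
    tcsetattr(STDIN_FILENO, TCSANOW, &raw);
    printf("echo is now off (type something -- it won't appear)\n");
    sleep(3);

    tcsetattr(STDIN_FILENO, TCSANOW, &saved);    /* restore the terminal */
    printf("echo restored\n");
    return 0;
}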
 
jmfbahciv
JosephKK said:
jmfbahciv said:
JosephKK wrote:
[snip...] [snip...] [snip...]

Inside the OS there are usually a switcher, a scheduler, a process
manager, a file manager, an IO manager, and other basic parts.
Optional parts relate to hardware configuration and possibly dynamic
hardware configuration and temporary file systems.

Now how do UUOs and CALLIs relate to the above-mentioned
interfaces? (If at all)
My user mode code has some buffers I want to be written to the
disk. I do a series of UUOs and CALLIs to convey to the
monitor's file system handler, memory handler, device routines,
and controller device routines that I want my bits to be copied
from here to there and labelled XYZ. I ask the monitor what
date/time it thinks it is by doing a CALLI. I set up certain
rules about subsequent usage of the computer system by telling
the monitor through CALLIs and UUOs. These are the only
ways the monitor and I, the user, communicate.

How is that for a start?

/BAH
Translating the DEC-isms, UUOs and CALLIs are what are more generically
known as "system calls". IBM "big iron" would call them "Supervisor
Calls", but then that's IBM.

These calls provide system-wide information (like TIME and DATE), and
protect the OS and other users by *preventing* the normal user from
directly programming dangerous and potentially system-wide damaging code.

The insight I am looking for is rather deeper than that. Things like
what is the difference between a UUO and a CALLI? And why

The CALLI was created because we were running out of UUO number
assignments. If you look at the format of the CALLI, there is
a field which contains a number. That number is the dispatch
index in the CALLI table of the monitor.

/BAH
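
A C sketch of the dispatch scheme described above: the call carries a small number, and the monitor uses that number as an index into a table of handler routines. The handlers and their numbering below are hypothetical, not the actual TOPS-10 CALLI table.

/* A monitor-call dispatch table: the call number selects the handler.
 * Handlers and numbering are hypothetical. */
#include <stdio.h>
#include <time.h>

typedef long (*calli_handler)(long arg);

static long do_date(long arg)  { (void)arg; return (long)time(NULL); }
static long do_exit(long arg)  { printf("exit(%ld)\n", arg); return 0; }

/* The monitor's dispatch table. */
static calli_handler calli_table[] = {
    do_exit,   /* call 0 (hypothetical numbering) */
    do_date,   /* call 1 (hypothetical numbering) */
};

static long calli(unsigned number, long arg)
{
    if (number >= sizeof calli_table / sizeof calli_table[0])
        return -1;                       /* unknown call: reject it */
    return calli_table[number](arg);     /* dispatch through the table */
}

int main(void)
{
    printf("date handler returned %ld\n", calli(1, 0));
    calli(0, 0);
    return 0;
}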
 
jmfbahciv
Kim said:
No problem. If core in this context means memory for you,

No. The context of the term "core" in this discussion has been CPU.
These CPUs will be more sensitive than the old-fashioned memory
doughnuts.

<snip>

/BAH
 
jmfbahciv
Christopher said:
I still have a bunch of thermal terminal printouts from around 1974
that are holding up OK, despite not being very careful with them.
It's the ASR-33 teletype printouts from back then that are the most degraded.

Kewl.

/BAH
 
jmfbahciv
Greegor said:
Grace Hopper USN predicted that 30 years ago.

She also predicted that each pixel on a computer
screen would at some point have its own processor.

http://www.waterholes.com/~dennette/1996/hopper/grace86.gif


Either approach can work if properly executed.

Wouldn't it already be difficult to find a new
PC that isn't a dual or quad processor?

The genie's already out of the bottle.

I played with one on a retail shelf. Managed to kill it
within 3 minutes.

/BAH
 