Maker Pro

Are PC surge protectors needed in the UK?

w_tom

You knew that an IDE port is not a computer and has no
intelligent functions? A disk drive is an embedded computer
complete with voltage monitoring circuits. This controller is
not the same as an IDE port.

Long before the voltage drops below what the digital circuits
on the disk drive require, the drive has detected the falling
voltage and stopped writing. Disk drive hardware protects
itself from voltage drop, which is also why a power-down will
not interfere even with normal disk drive housekeeping.
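
(For illustration only -- not real firmware -- a minimal Python
sketch of the behaviour described above: the drive's own controller
checks its supply rail before committing a write, so a sector is
either written whole or refused. The threshold and names are
invented assumptions.)

ABORT_THRESHOLD = 4.5   # hypothetical trip point for a nominal 5.0 V rail

def handle_write_request(supply_voltage, sector, data, platter):
    # If the rail is already sagging, refuse to start the write at all;
    # the host sees a refusal, never a half-written sector.
    if supply_voltage < ABORT_THRESHOLD:
        return False
    platter[sector] = data      # otherwise the whole sector is committed
    return True

platter = {}
print(handle_write_request(5.0, 7, b"payload", platter))   # True: written
print(handle_write_request(4.2, 8, b"payload", platter))   # False: refused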

The idea that an IDE port monitors voltage is an erroneous
assumption and was not made in any previous post. An IDE port
is nothing more than a bidirectional repeater. An IDE port
has no intelligent functions and does not monitor voltage.
Where did you get this idea that IDE port functions were even
being discussed? Are you confusing IDE with some other
hardware interface? Or did you just fail to read that
previous post with sufficient care?

Posted previously was:
The disk drive controller is a complete computer that
also monitors voltage.

Where is an IDE interface even implied here?
 
That objection misses the point. You may lose a lot of data because of
the buffering, but the commit/rollback transaction processing supposedly
means that what you lose is a complete transaction and what you have
left will always be consistent.

Sigh! Those problems were solved (one product was JMF's) in a variety
of ways in an OS that smells like NT. Consistency wasn't the major
problem with the airline reservation system.
At least, that's the theory...

Yea. And then somebody tries to program the thing. Then you
find out what the real theory is ;-).

/BAH


Subtract a hundred and four for e-mail.
 
You knew that an IDE port is not a computer and has no
intelligent functions? A disk drive is an embedded computer
complete with voltage monitoring circuits.

Oh, my! You are young.


<pins>

/BAH

Subtract a hundred and four for e-mail.
 
Richard Herring

That's really, really, really too bad. It used to know how.

That objection misses the point. You may lose a lot of data because of
the buffering, but the commit/rollback transaction processing supposedly
means that what you lose is a complete transaction and what you have
left will always be consistent.

At least, that's the theory...
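
(A minimal sketch of the commit/rollback idea being discussed, not
any particular product's journal format; the file name and record
layout are invented. The point is that recovery keeps only whole
transactions, so whatever survives a crash is consistent.)

import os

LOG = "journal.log"   # hypothetical journal file for this sketch

def commit(records):
    # Buffer the whole transaction, then make it durable in one step.
    with open(LOG, "a") as f:
        for r in records:
            f.write("DATA " + r + "\n")
        f.write("COMMIT\n")        # marker: the transaction is complete
        f.flush()
        os.fsync(f.fileno())       # only now does it count as committed

def recover():
    # After a crash, keep committed transactions and roll back anything
    # that never reached its COMMIT marker.
    committed, pending = [], []
    try:
        with open(LOG) as f:
            for line in f:
                line = line.rstrip("\n")
                if line == "COMMIT":
                    committed.extend(pending)
                    pending = []
                elif line.startswith("DATA "):
                    pending.append(line[5:])
    except FileNotFoundError:
        pass
    return committed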
 
Folkert Rienstra

Keith said:
Which would be?

Certainly, at least to some point.


You're *ONCE AGAIN* talking through your ass.


Really? I'm not from Missouri, but close enough. An IDE port monitors
its supply voltage? You're simply talking out your ass, since it's been
shot off repeatedly.

And yours blew up just now when you can't make the distinction between
an IDE Disk Controller and an IDE Hostbus Adapter.

*And* to the drive when it may encounter a bad sector afterwards,
no matter what filesystem is in use, though the 'damage' is temporary.
 
Folkert Rienstra

Oh, my! You are young.

Well, in that case you are probably as old as Methuselah.
Not a working brain cell left in your cranium.
 
w_tom

Richard Herring also continues to help you understand the
concepts:
 
In sci.physics, [email protected]
<[email protected]>
wrote
Consider a file system that writes to empty blocks in numerical
sequential order. Now think of a file that's deleted. This leaves
an empty "hole" in the filled blocks. Now make a file whose size
is less than the "hole". Now you have a smaller hole that will be
filled with the next file that is written. That file is larger
than the hole, so the hole gets filled, then the next block that
isn't filled is found and written into. Over time, all files,
when viewed from the geometry of the physical disk, look like
swiss cheese.
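
(The sequence described above is easy to reproduce with a toy
first-fit allocator; the block count and file sizes below are
invented purely to show the "swiss cheese" layout.)

disk = [None] * 16              # None marks a free block

def write_file(name, nblocks):
    # First-fit: take the lowest-numbered free blocks, wherever they are.
    placed = 0
    for i in range(len(disk)):
        if disk[i] is None:
            disk[i] = name
            placed += 1
            if placed == nblocks:
                break

def delete_file(name):
    for i in range(len(disk)):
        if disk[i] == name:
            disk[i] = None

write_file("A", 4); write_file("B", 3); write_file("C", 5)
delete_file("B")                # leaves a 3-block hole between A and C
write_file("D", 2)              # smaller file: a 1-block hole remains
write_file("E", 4)              # fills the leftover hole, then spills past C
print(disk)                     # E ends up split across two extents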

A defragmenter takes the whole file system and rewrites each file
such that all its block numbers are monotonically increasing.

Now, where this gets really, really fucked up is when the defragger
program "forgets" which should be the next block (real easy to do
with off-by-one bugs) or has to call its error handling when it
can't do a fit or the block chain pointers become broken. The
last one is a feature of all Misoft OSes because of memory
management problems--but that's another nightmare in the not-an-OS
biz.
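
(In the same toy terms, a defragmenter is a pass that rewrites each
file into consecutive blocks so its block numbers come out
monotonically increasing. The starting layout below is invented;
the cursor arithmetic is where the off-by-one bugs mentioned above
would bite.)

disk = ["A", "A", "D", "D", "C", "C", "C", None, "D", "A", None, None]

def defragment(disk):
    order = []
    for owner in disk:              # remember files in first-seen order
        if owner is not None and owner not in order:
            order.append(owner)
    new_disk = [None] * len(disk)
    cursor = 0                      # next block to be written
    for name in order:
        for _ in range(disk.count(name)):
            new_disk[cursor] = name
            cursor += 1
    return new_disk

print(defragment(disk))
# ['A', 'A', 'A', 'D', 'D', 'D', 'C', 'C', 'C', None, None, None]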

Indeed. In Linux, there's no defragger [*], because the file
code in Linux is a little smarter.

Unix also has a different OS philosophy w.r.t. file organizations.
... I'd admittedly have
to look for the details though, and ext2's organization
is quite different from FAT's or NTFS's. FAT in particular
is terrible; basically every file is a single chain --
but you probably knew that already.

No, it's worse than that. Note that I have never read the code
nor the specs of FAT. However, based on the way it "behaves"
on my machine, FAT treats the whole disk as a single chain.
This does not honor directory boundaries the way sane people
would expect. This would also explain all the weird-assed bugs
DOS and its layers have.
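
(For anyone else who hasn't read the FAT spec either: the
allocation table is in effect a linked list per file -- each entry
names the next cluster, and a sentinel marks end-of-chain -- which
is why one broken pointer loses the rest of the chain. Cluster
numbers below are invented.)

fat = {10: 11, 11: 57, 57: 58, 58: None}   # one file over 4 scattered clusters

def file_clusters(first_cluster):
    clusters = []
    c = first_cluster
    while c is not None:
        clusters.append(c)
        c = fat[c]                         # follow the chain, hop by hop
    return clusters

print(file_clusters(10))                   # [10, 11, 57, 58]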

... NTFS is more or less
as I've described it in my prior post, at a high level,
and it feels like an engineered solution,

I would hope so. It's been getting "developed" since 1971.
...whereas Linux's
ext2 is more elegant, even if it's still engineered.
But there's no perfect solution anyway;

There isn't going to be one, and only one, solution because
each choice solves different problems and has orthogonal
design goals. I don't know about today but in the olden
days, the choice was between "fast" retrieval or humungous
files, a.k.a. data bases. If your system had to maintain
one file that fit on 100 disk packs, your file system OS
code would look and behave differently from a system that
needed to maintain 200,000 small files for 10,000 different
users who accessed them on unpredictable days and times.





... as you've described
the problem, there's always going to be a hole or two,
and a determined program can probably fragment any file
system if it does something like the following:

open big file
write block to big file
open little file
write block to little file
close it
write block to big file
open little file
write block to little file
close it
write block to big file
....
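
(A literal Python rendering of the pattern above, in case anyone
wants to try it against a real filesystem; the file names, loop
count and block size are invented.)

BLOCK = 4096

with open("big.dat", "wb") as big:
    for i in range(100):
        big.write(b"B" * BLOCK)                  # write block to big file
        big.flush()
        with open("little_%d.dat" % i, "wb") as little:
            little.write(b"l" * BLOCK)           # write block to little file
        # little file closed; on many allocators the next big-file block
        # now lands just past it, so big.dat ends up in many small extents
print("done")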

(It's a good thing the trend is towards centralized dedicated-machine
syslog-type logging. :) )

See my description above; I just threw a kink in your POV.
I'll admit to wondering whether NT had the rather interesting
capability or not of "let's just write it here".

That depends on the definition of "here". Are you talking
about the logical placement of the file or the physical
placement of the file? There are other flavors of "here"
but I won't go into them. :)
.. I base
this hypothesis on observations using DiskKeeper Lite, a copy
of which I had at the time on a machine at my then-employer.
Basically, the notion is to simply write the new block at
an open sector in the cylinder over which the head is
flying.

Ah, you were talking about physical. Note that physical has
nothing to do with either FAT or NTFS.

Now the problem with cylinders is that they spin. The problem
with tracks is that they're a circle.

Of course this would fragment things terribly, and I have no proof.

No, no, no. Now you're confusing logical bit placement with physical
bit placement. The two really (TW's going to kill me) don't have
much to do with each other.
But things did fragment pretty badly when I used such tools
as Visual C++.

Now you're talking about logical placement.

You are confused :). However, I can't help you very much
because it would be a case of the blind leading the blind.
There are other people who are bit gods in this area.

/BAH

Subtract a hundred and four for e-mail.
 