Maker Pro

Are PC surge protectors needed in the UK?


Ron Reaugh

Jan 1, 1970
0
w_tom said:
IOW there is no claim in that APC sales brochure for common
mode transient protection. Even worse, they no longer make the
specific claim for normal mode protection. Ron again
demonstrates propaganda used to promote plug-in UPSes for
ineffective surge protection.

Your wacko claims have already been refuted by my citation earlier in this
thread.
APC does include "common mode" as I cited.
 

Mike Tomlinson

Jan 1, 1970
0
Ron Reaugh said:
Vaporizing...are you gonna bring in Klingons now as we seem to be having a
bit to drink?

I suggest you consult the thread with the same title crossposted to the
following groups before wasting any more time on w_tom:

uk.comp.vendors, uk.comp.homebuilt, alt.comp.hardware,
alt.comp.hardware.pc-homebuilt
 

No Spam

Jan 1, 1970
0
w_tom said:
Hard drives can be corrupted for various reasons, depending in
part upon what the filesystem is. For example, if using FATxx
filesystems, then a loss of electrical power at the right time
can even erase files from that drive. Just another reason why
the technically informed want NTFS filesystems on drives; not
FAT.

Or ReiserFS. But that is used by OSes other than Gate$
Lemmingware! :)
 

The Ghost In The Machine

Jan 1, 1970
0
In sci.physics, John Gilmer
<[email protected]>
wrote
I'll bite:

I think I understand FAT systems.

But the only thing I "think" I know about the NTFS is that, effectively, the
system first makes a record of what it is about to do, then it does it, and
then it either erases the original record or somehow marks it.

SO: can someone "explain" the NTFS to me. (Please don't tell me to "look
it up.")

NTFS stands for NT File System, presumably. (I've no idea what NT
stands for. Certain jokesters have their own opinions, mine among
them.)

A file system is a method by which the unorganized data in
a disk partition -- basically, a very long chain of blocks,
or perhaps a mapping from an integer (the logical block
address) to a fixed-size chunk of data (the block) -- can
be organized into something more appetizing to humans:
files, directories, symbolic links, or in Microsoft
parlance (perhaps), documents, folders, and shortcuts.
DOS 1.0's FAT filesystem didn't even have directories
(that was added in 1.1 or 2.0; I forget which). NTFS is
fairly sophisticated; it has, among other things:

- per-file locking (to the intense annoyance of UNIX and Linux
programmers, this appears to be on by default)
- resource streams a la Macintosh (which aren't apparently used yet?)
- Access Control Lists
- Unicode support
- Case-preserving filenames
- a master file table, which is where the small files live
- sparse files (files with "holes" in their blocklists -- a
useful capability in some contexts related to databases, AIUI)
- short file name capability for DOS backwards compatibility
- hidden files
- support for running a defragmenter while the volume is mounted.
(Don't ask.)

There are a few other capabilities but I'd have to look.
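
To make one of those bullets concrete -- the sparse files -- here is a
rough sketch in Python (POSIX assumed, filename invented): seek far past
the end of an empty file and write a few bytes. On a filesystem that
supports holes, the apparent size is huge while almost nothing is
actually allocated.

import os

path = "sparse_demo.bin"            # throwaway scratch file
with open(path, "wb") as f:
    f.seek(100 * 1024 * 1024)       # jump 100 MB into an empty file
    f.write(b"end")                 # only these 3 bytes are real data

st = os.stat(path)
print("apparent size:", st.st_size)                # roughly 100 MB
print("512-byte blocks allocated:", st.st_blocks)  # a handful, not ~200,000
os.remove(path)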

If you really do want to look it up, you can try
the Linux source code -- an NTFS implementation
is in the kernel under /usr/src/linux/fs/ntfs or
/usr/src/linux/Documentation/filesystems/ntfs.txt. It is
definitely not for the faint of heart. There should
be some documentation somewhere on Microsoft's website,
of course; again, I'd have to look.

HTH
 

JM

Jan 1, 1970
0
quoting:

The files aren't actually gone forever. If the power fails during the
writing of the file, but the FAT has not been updated yet, the data will be
found as "lost clusters" by Scandisk. It'll probably be in a bunch of
pieces though, due to fragmentation.

There are little backup programs that back up the disk's FATs, boot sector,
etc. in the event that any of the disk's reserved sectors get
obliterated by some other means.
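
A toy model of how those lost clusters arise (this is not real FAT code;
the layout and names are invented): the data clusters get chained
together in the FAT, but power dies before the directory entry is
written, so nothing points at the chain any more and Scandisk sweeps it
up as lost clusters.

FREE, EOF = 0, -1

fat = [FREE] * 16          # fat[i] = next cluster in the chain, EOF, or FREE
directory = {}             # filename -> first cluster

def write_file(name, clusters, crash_before_dir_update=False):
    for a, b in zip(clusters, clusters[1:]):   # chain the clusters in the FAT
        fat[a] = b
    fat[clusters[-1]] = EOF
    if crash_before_dir_update:
        return                                 # power fails right here
    directory[name] = clusters[0]              # the step that never happened

write_file("GOOD.TXT", [1, 2])
write_file("DOOMED.TXT", [3, 4, 7], crash_before_dir_update=True)

def lost_clusters():
    reachable = set()
    for cluster in directory.values():         # walk every chain the directory knows
        while cluster != EOF:
            reachable.add(cluster)
            cluster = fat[cluster]
    return [c for c, nxt in enumerate(fat) if nxt != FREE and c not in reachable]

print(lost_clusters())     # [3, 4, 7] -- allocated but unnamed, i.e. "lost clusters"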
 

John Gilmer

Jan 1, 1970
0
NTFS stands for NT File System, presumably. (I've no idea what NT
stands for. Certain jokesters have their own opinions, mine among
them.)

Yeah, yeah.

And NT stands for New Technology. It was written as a "Windows-like"
operating system to run on hardware from the likes of Sun Micro and
the old DEC, which used UNIX. At some point Micro$oft intends to make
NT and Windows essentially the same operating system.

A file system is a method by which the unorganized data in
a disk partition -- basically, a very long chain of blocks,
or perhaps a mapping from an integer (the logical block
address) to a fixed-size chunk of data (the block) -- can
be organized into something more appetizing to humans:
files, directories, symbolic links, or in Microsoft
parlance (perhaps), documents, folders, and shortcuts.
DOS 1.0's FAT filesystem didn't even have directories
(that was added in 1.1 or 2.0; I forget which).

Well, you lost me again. The directory is supposed to point to the entry
in the FAT corresponding to the first "allocation unit" of the file. From
my old memory, the Intel development system just had a fixed directory.
Directory entry #1 was the first allocation unit, etc. Longer files were
accommodated by "chaining" directories.

NTFS is
fairly sophisticated; it has, among other things:

- per-file locking (to the intense annoyance of UNIX and Linux
programmers, this appears to be on by default)

Well, WTF does it "lock"?
- resource streams a la Macintosh (which aren't apparently used yet?)

That doesn't help.
- Access Control Lists
OK.

- Unicode support

Which means ...
- Case-preserving filenames
OK

- a master file table, which is where the small files live
Huh?

- sparse files (files with "holes" in their blocklists -- a
useful capability in some contexts related to databases, AIUI)
Neat!

- short file name capability for DOS backwards compatibility
OK

- hidden files

Old stuff.
- support for running a defragmenter while the volume is mounted.
(Don't ask.)

Well, I understand what the defragmenter does in a FAT system, but since I
still don't understand how files are stored I can't understand how they are
either fragmented or defragmented.
There are a few other capabilities but I'd have to look.
OK


If you really do want to look it up, you can try
the Linux source code -- an NTFS implementation
is in the kernel under /usr/src/linux/fs/ntfs or
/usr/src/linux/Documentation/filesystems/ntfs.txt. It is
definitely not for the faint of heart. There should
be some documentation somewhere on Microsoft's website,
of course; again, I'd have to look.

Sorry, you have just asked me to think and work harder than I care to.
 
w_tom said:
Ron. Did you read your citation before posting it? Where
is the reference to common mode protection? Where are any
numbers that apply to common mode protection?
Where is the common mode protection? Not in that citation.
Not in what Ron describes.
<snip>

Yes, it IS in that citation, exactly where he said it is.
It is the line below the two you quoted from the page
he cited.

It says: "Surge response time: 0 ns (instantaneous)
normal mode, < 5ns common mode."
 
Well, I understand what the defragmenter does in a FAT system but
since I still don't understand how files are stored I can't
understand how they are either fragmented or defragmented.

Consider a file system that writes empty blocks in numerical sequential
order. Now think of a file that's deleted. This leaves an empty
"hole" in the filled blocks. Now make a file whose size is less
than the "hole". Now you have a smaller hole that will be filled
with the next file that is written. That file is larger than the
hole so the hole gets filled, then the next block that isn't filled
is found and written into. Over time, all files, when viewed from
the geometry of the physical disk look like swiss cheese.

A defragmenter takes the whole file system and rewrites each file
such that all its block numbers are monotonically increasing.

Now, where this gets really, really fucked up is when the defragger
program "forgets" which should be the next block (real easy to do
with off-by-one bugs) or has to call its error handling when it
can't do a fit or the block chain pointers become broken. The
last one is a feature of all Misoft OSes because of memory
management problems--but that's another nightmare in the not-an-OS
biz.
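
To put numbers on that swiss-cheese picture, here is a toy allocator in
Python -- nothing to do with any real filesystem, just first-fit into
the lowest free blocks -- plus a naive defragmenter that rewrites each
file onto consecutive block numbers:

disk = {}                  # block number -> (filename, piece number)
N_BLOCKS = 32

def allocate(name, nblocks):
    free = [b for b in range(N_BLOCKS) if b not in disk]
    for i, b in enumerate(free[:nblocks]):     # first-fit: lowest free blocks win
        disk[b] = (name, i)

def delete(name):
    for b in [b for b, (n, _) in disk.items() if n == name]:
        del disk[b]

def layout(name):
    return sorted(b for b, (n, _) in disk.items() if n == name)

allocate("A", 6); allocate("B", 4); allocate("C", 6)
delete("B")                # leaves a 4-block hole between A and C
allocate("D", 2)           # partly fills the hole
allocate("E", 6)           # fills the rest of the hole, then spills past C
print("E before:", layout("E"))    # [8, 9, 16, 17, 18, 19] -- swiss cheese

def defrag():
    pieces = {}
    for b in sorted(disk):                     # gather each file's pieces in block order
        pieces.setdefault(disk[b][0], []).append(disk[b][1])
    disk.clear()
    for name, plist in pieces.items():         # rewrite every file contiguously
        allocate(name, len(plist))

defrag()
print("E after: ", layout("E"))    # consecutive, monotonically increasing blocks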


<snip>

/BAH

Subtract a hundred and four for e-mail.
 

Leonard Caillouet

Jan 1, 1970
0
Mike Tomlinson said:
Q. What's the most annoying thing on Usenet?

A. Cross posting. Second only to people who bitch about top posters.

Leonard
 

The Ghost In The Machine

Jan 1, 1970
0
In sci.physics, [email protected]
<[email protected]>
wrote
Consider a file system that writes empty blocks in numerical sequential
order. Now think of a file that's deleted. This leaves an empty
"hole" in the filled blocks. Now make a file whose size is less
than the "hole". Now you have a smaller hole that will be filled
with the next file that is written. That file is larger than the
hole so the hole gets filled, then the next block that isn't filled
is found and written into. Over time, all files, when viewed from
the geometry of the physical disk look like swiss cheese.

A defragmenter takes the whole file system and rewrites each file
such that all its block numbers are monotonically increasing.

Now, where this gets really, really fucked up is when the defragger
program "forgets" which should be the next block (real easy to do
with off-by-one bugs) or has to call its error handling when it
can't do a fit or the block chain pointers become broken. The
last one is a feature of all Misoft OSes because of memory
management problems--but that's another nightmare in the not-an-OS
biz.

Indeed. In Linux, there's no defragger [*], because the file
code in Linux is a little smarter. I'd admittedly have
to look for the details though, and ext2's organization
is quite different from FAT's or NTFS. FAT in particular
is terrible, basically every file is a single chain --
but you probably knew that already. NTFS is more or less
as I've described it in my prior post, at a high level,
and it feels like an engineered solution, whereas Linux's
ext2 is more elegant, even if it's still engineered.
But there's no perfect solution anyway; as you've described
the problem, there's always going to be a hole or two,
and a determined program can probably fragment any file
system if it does something like the following:

open big file
write block to big file
open little file
write block to little file
close it
write block to big file
open little file
write block to little file
close it
write block to big file
....

(It's a good thing the trend is towards centralized dedicated-machine
syslog-type logging. :) )
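
(Here is that open/write/close sequence as runnable Python, with made-up
file names; on a naive first-fit allocator the big file's blocks end up
interleaved with the little file's blocks, which is exactly the
fragmentation pattern sketched above.)

BLOCK = 4096

with open("big.log", "wb") as big:
    for _ in range(100):
        big.write(b"B" * BLOCK)        # write block to big file
        big.flush()                    # push it out before the little write
        with open("little.log", "ab") as little:   # open little file
            little.write(b"l" * BLOCK)              # write block to little file
        # little file closed here; back to the big one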

I'll admit to wondering whether or not NT had the rather
interesting capability of "let's just write it here". I base
this hypothesis on observations using DiskKeeper Lite, a copy
of which I had at the time on a machine at my then-employer.
Basically, the notion is to simply write the new block at
an open sector in the cylinder over which the head is
flying.

Of course this would fragment things terribly, and I have no proof.
But things did fragment pretty badly when I used such tools
as Visual C++.
<snip>

/BAH

Subtract a hundred and four for e-mail.

[*] actually, there is, but it's very rarely used.
 

w_tom

Jan 1, 1970
0
Common mode to what? To the safety ground? How much? Does
it conduct 1 microamp in 5 ns to the safety ground? What kind
of protection is that? Based upon the facts and numbers provided,
my digital multimeter is an even better surge protector -
a claim I can make because specs are better called an
'executive summary'.

At least that manufacturer once provided specs - insufficient
ones - for the normal mode protection that the manufacturer
does claim to provide:
Normal mode surge voltage let through <5% of test peak
voltage when subjected to IEEE 587 Cat. A 6kVA test
Now manufacturer cannot bother to provide even that
insufficient information. After all, they are not trying to
sell a 'common mode' claim to the informed. They dumb down
the numbers into rubbish so that one who wishes MOVs absorb
the energy of a surge will see what he wants.
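
(Taking that quoted normal-mode figure at face value, and assuming the
"6kVA" means the 6 kV IEEE 587 Cat. A test peak -- my reading, not the
brochure's -- the arithmetic is trivial:)

test_peak = 6000               # volts, assumed Cat. A test peak
let_through = 0.05 * test_peak
print(let_through)             # 300.0 -- "<5%" still means up to ~300 V at the load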

My car tires have a common mode response time AND that
proves those tires are effective protection? Common mode
what? Does not matter. That tiny phrase would be enough even
for a poet to believe what he wants to believe.

How much common mode current in less than 5 ns? From what
or which one wire is that common mode response? Is that
common mode response really just a response inside the UPS
controller circuit? Or is that a common mode response on the
serial port? RS-232 is a common mode communication port. So
does the serial port have a less than 5 ns response? Wow.
That means the UPS must provide massive lightning protection -
if living in the world of Harry Potter.

IOW they mention 'common mode response' but give not one
indication that the UPS provides common mode surge
protection. It only does something - and they don't even say
what or how much. That woefully insufficient and deceptive
information is enough for some to loudly declare that a UPS
provides lightning protection. IOW another urban myth has
been promoted.

There are no claims of common mode transient protection on
the incoming AC input. Provided are words without relevance so
that a poet might hope for common mode response to something -
which therefore must be a direct lightning strike? It's
called wild speculation on your part - the same person who
foolishly believes shunt mode devices (such as wire) are
designed to absorb energy. But an engineer says, "What is
this crap. There is no numerical information to work with."

That UPS does not claim common mode protection. It simply
claims some undefined response to common mode noise from or
to an undefined location. It does not even say those 160
joules are involved in such protection. Furthermore it admits
to being grossly undersized - only 160 joules. A poet then
can assume the response time means the UPS will conduct 50,000
amps? A poet can. So can Harry Potter. Those who must deal
in reality cannot.

There is nothing in those specs beyond gobbledygook. Using
ehsjr's and Ron Reaugh's reasoning, should we assume the UPS is
sufficient even for aeronautical environments? After all, they
do claim 'something' that myth purveyors can distort into a real
world miracle.

ehsjr - when will you claim that a Faraday cage also makes
that UPS so effective?

I have this 741 op amp (a semiconductor amplifier). It also
has a common mode rating. So that operational amplifier (that
little IC) is also a lightning protector? Yes according to
how ehsjr reasons. Give me a break. That UPS does not even
claim to provide common mode protection - which is why they
must all but encrypt their specifications. It's called name
dropping. They dropped the phrase "common mode". That
without any numbers is enough for ehsjr to loudly claim the
UPS provides common mode protection. It is called Junk
Science reasoning.
 

w_tom

Jan 1, 1970
0
A power supply damages the motherboard when a computer assembler
purchases power supplies without consulting specifications.
Intel's specs for ATX power supplies demand that the PSU not
damage the motherboard and other components:
Section 3.5.1 Overvoltage Protection
The overvoltage sense circuitry and reference shall reside
in packages that are separate and distinct from the
regulator control circuitry and reference. No single
point fault shall be able to cause a sustained overvoltage
condition on any or all outputs. The supply shall provide
latch-mode overvoltage protection as defined below.
Table 11: Overvoltage Protection
Output      OVP trip point
+12 VDC     15.6 volts (max)
+5 VDC      6.3 volts (nominal)
+3.3 VDC    4.2 volts (nominal)

Too many computer assemblers don't have the necessary technical
knowledge and therefore don't even know that overvoltage
protection (OVP) has been a de facto standard for 30+ years.
That motherboard damage may be traceable to the ill-informed
computer assembler (who does not demand specs) or to a power
supply manufacturer who outright lies in his specifications.

There is nothing in a UPS that will accomplish the necessary
OVP.
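
(A minimal sketch of what those Table 11 trip points amount to -- the
measured values below are invented, and real OVP lives in the supply's
supervisor circuitry, not in software; this is only to show the
comparison being specified.)

OVP_TRIP = {"+12V": 15.6, "+5V": 6.3, "+3.3V": 4.2}    # volts, per Table 11 above

measured = {"+12V": 12.1, "+5V": 5.05, "+3.3V": 3.38}  # hypothetical rail readings

for rail, volts in measured.items():
    ok = volts < OVP_TRIP[rail]
    print(rail, volts, "V:", "OK" if ok else "OVP should latch the supply off")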

Other essential functions that should be found in the power
supply specification, but that many 'bean-counter' selected
supplies may be missing:
Specification compliance: ATX 2.03 & ATX12V v1.1
Short circuit protection on all outputs
Over voltage protection
Over power protection
EMI/RFI compliance: CE, CISPR22 & FCC part 15 class B
Safety compliance: VDE, TUV, D, N, S, Fi, UL, C-UL & CB
Hold up time, full load: 16ms. typical
Efficiency; 100-120VAC and full range: >65%
Dielectric withstand, input to frame/ground: 1800VAC, 1sec.
Dielectric withstand, input to output: 1800VAC, 1sec.
Ripple/noise: 1%
MTBF, full load @ 25°C amb.: >100k hrs

Power supplies missing these and other functions are sold at
good profit in the North American computer clone market. OVP
must be in all computer supplies but is often missing in clone
computers.
 

Charles Perry

Jan 1, 1970
0
There is nothing in a UPS that will accomplish the necessary
OVP.


You need to specify which kind of UPS. Some UPSes will provide excellent
overvoltage protection.

Charles Perry P.E.
 

w_tom

Jan 1, 1970
0
The overvoltage protection being discussed is on the 3.3, 5, and
12 volt outputs. Table 11 from the Intel specs even defines where
and what that OVP must do. There is nothing in a plug-in UPS
- outputting 120 VAC or 230 VAC to the power supply - that is
going to overvoltage-protect those DC outputs. Nothing.
The only OVP that a UPS can provide is to limit the 120 VAC or
230 VAC. That will not solve an overvoltage problem on the DC
output of a 'defective by design' power supply.

J.J. asked:
If a computer PSU fails then I have heard that it may
(or may not) blow the mainboard and perhaps various
other components ...

Yes, if the power supply does not have OVP. No, if the supply
does have the required OVP. No UPS will solve this missing-OVP
problem on the DC outputs of the power supply.
 

w_tom

Jan 1, 1970
0
He has no knowledge, education or experience. He has not
one technical fact to post in response. At least a junk
scientist would try to invent a fact. Ron Reaugh is even
worse than a junk scientist. He insults. Some claim that a
plug-in UPS provided hardware protection. They can insult.
That alone proves they must be right. Hey Ron. Is god on
your side? No wonder you just 'know' these things.
 

Keith

Jan 1, 1970
0
Again, the trashed filesystem is a problem of FAT and other
simplistic file systems. It is not a problem for superior
(journaling) filesystems.

Which would be?
Will a disk drive write to the platter as voltage drops?

Certainly, at least to some point.
Of course not.

You're *ONCE AGAIN* talking through your ass.
The disk drive controller is a complete computer
that also monitors voltage.

Really? I'm not from Missouri, but close enough. An IDE port monitors
its supply voltage? You're simply talking out your ass, since it's been
shot off repeatedly.

It does not matter to disk
hardware when power is turned off. But it does matter to some
'simplistic' disk filesystems that power is not removed during a write
operation.

Just another reason why FAT was obsoleted by HPFS which in
turn was obsoleted by NTFS.

You haven't a clue (as usual). NTFS is a slight modification to HPFS
(written by the same SB, AFAIK) to make sure that OS/2 couldn't access NT
systems. Neither is a JFS, nor is either less corruptible than FAT.
Indeed NT systems are far more susceptible to corruption than other
similar OSes because of the aggressive write buffering. Even (non-JFS) OS/2
systems are better at self-healing than NT. Of course JFS is a standard
part of OS/2 now. Windows? YMBK!
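
(For anyone wondering what the journaling being argued about actually
buys you: below is a toy write-ahead log in Python -- not how NTFS,
HPFS, or OS/2's JFS really does it -- in the spirit of the "record what
you're about to do, do it, then mark it done" description given
upthread. Replaying uncommitted entries after a crash is what keeps
things consistent.)

import json
import os

JOURNAL = "journal.log"            # made-up on-disk journal file

def journaled_write(path, data):
    entry = {"op": "write", "path": path, "data": data}
    with open(JOURNAL, "a") as j:              # 1. record the intent...
        j.write(json.dumps(entry) + "\n")
        j.flush(); os.fsync(j.fileno())
    with open(path, "w") as f:                 # 2. ...do the work...
        f.write(data)
        f.flush(); os.fsync(f.fileno())
    with open(JOURNAL, "a") as j:              # 3. ...then mark it done
        j.write(json.dumps({"op": "commit", "path": path}) + "\n")
        j.flush(); os.fsync(j.fileno())

def replay_after_crash():
    if not os.path.exists(JOURNAL):
        return
    pending = {}                               # intents that never got a commit
    with open(JOURNAL) as j:
        for line in j:
            try:
                entry = json.loads(line)
            except ValueError:
                break                          # torn final record from the crash
            if entry["op"] == "write":
                pending[entry["path"]] = entry["data"]
            else:                              # "commit"
                pending.pop(entry["path"], None)
    for path, data in pending.items():         # redo the unfinished writes
        with open(path, "w") as f:
            f.write(data)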
 
People might take you a tad more seriously if you exhibited
a scintilla of trainability. The next line is a hint...

<snip topposting>

/BAH

Subtract a hundred and four for e-mail.
 
Which would be?

Certainly, at least to some point.


You're *ONCE AGAIN* talking through your ass.


Really? I'm not from Missouri, but close enough. An IDE port monitors
its supply voltage? You're simply talking out your ass, since it's been
shot off repeatedly.



You haven't a clue (as usual). NTFS is a slight modification to HPFS
(written by the same SB, AFAIK) to make sure that OS/2 couldn't access NT
systems. Neither is a JFS, nor is either less corruptible than FAT.

How in the world does he think that FAT has anything to do with
physical disk specs?
Indeed NT systems are far more susceptible to corruption than other
similar OSes because of the aggressive write buffering.

That's really, really, really too bad. It used to know how.

<snip>

/BAH

Subtract a hundred and four for e-mail.
 
Reference Microsoft :)

"Because you didn't shut down Windows properly, Scandisk is now trashing
your disk to complete the job.

In future, always shut down Windows properly"

Scandisk is not *always* going to pull your nuts out of the fire after a
power interruption. In fact, it might make things worse :)

Wouldn't you *expect* data corruption, if the system was writing/about to
write cached data, and the mains power went off?

Not if you have an OS that knows what it's doing. A great part
of file system code is supposed to anticipate such things. For
instance, one approach is to not put the file entry in the user
file directory file until after the last bit is written and the
user's prog has issued a close.
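
(One concrete shape of that ordering, sketched in Python with invented
names: write everything to a temporary file, force it to disk, and only
then rename it into place. A crash at any point leaves either the old
file or the complete new one, never a half-written directory entry.)

import os
import tempfile

def safe_write(path, data):
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)   # scratch file in the same directory
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())      # the bits are really on disk before...
        os.replace(tmp, path)         # ...the name shows up in the directory
    except Exception:
        os.remove(tmp)                # failure: throw away the partial file
        raise
    # (a belt-and-braces version would also fsync the directory itself)
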
You don't need references to figure out that it's a bad idea to just
lose power in an uncontrolled way.

Of course it's a bad idea :) but most systems that went through
a design allocate quite a bit of their code to Murphy anticipation.

/BAH

Subtract a hundred and four for e-mail.
 