Maker Pro

Disk geometries

D Yuniskis

Hi,

I'm wondering what factors drive disk geometries (and, thus,
capacities). I.e., what makes certain sizes common and
others less common (e.g., you never? saw 7GB disks).

Of course, the sizes of the magnetics determine the sizes of the
magnetic domains that can be resolved, etc. But, I don't see
anything else in the design of a disk system that forces
capacities to the values that are commonplace.
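
For what it's worth, the arithmetic in question is just a product of factors; a quick sketch (the geometry below is the well-known ATA 8.4GB-limit CHS triple, used purely as an illustration, not any particular drive's):

```python
# Disk capacity under a CHS (cylinder/head/sector) geometry is simply
# the product of four factors; improve any one and capacity scales
# linearly -- nothing in the formula prefers powers of two.
def chs_capacity(cylinders, heads, sectors_per_track, bytes_per_sector=512):
    return cylinders * heads * sectors_per_track * bytes_per_sector

# The classic ATA-limit geometry, for illustration:
print(chs_capacity(16383, 16, 63))  # 8455200768 bytes, i.e. ~8.4GB
```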

E.g., semiconductor memory has reasons for wanting to be
sized in powers of two -- there is no manufacturing advantage
to making a 5KB device (e.g.). If a foundry can improve its
process, it can make a *smaller* 4KB device and (hopefully)
improve market share, profit margin, etc. that way. Ultimately,
make an 8KB device that's the size of the old 4KB device, etc.

But, disk platters are fixed sizes (?). There are no economies (?)
to be gained by shrinking platter sizes as your magnetics improve.
There are no "standards" (e.g., like with removable media) that
force the magnetic domains to be of a particular size.

(you can pursue this reasoning to considerable depth)

So, why don't we see disks with 7% more capacity as magnetics
shrink by 7% (e.g.)? Or, is it just not economical to retool
for anything less than a 2X capacity increase?

--don
 
Jon Kirwan

I'm wondering what factors drive disk geometries (and, thus,
capacities). I.e., what makes certain sizes common and
others less common (e.g., you never? saw 7GB disks).

Of course, the sizes of the magnetics determine the sizes of the
magnetic domains that can be resolved, etc. But, I don't see
anything else in the design of a disk system that forces
capacities to the values that are commonplace.

E.g., semiconductor memory has reasons for wanting to be
sized in powers of two -- there is no manufacturing advantage
to making a 5KB device (e.g.). If a foundry can improve its
process, it can make a *smaller* 4KB device and (hopefully)
improve market share, profit margin, etc. that way. Ultimately,
make an 8KB device that's the size of the old 4KB device, etc.

But, disk platters are fixed sizes (?). There are no economies (?)
to be gained by shrinking platter sizes as your magnetics improve.
There are no "standards" (e.g., like with removable media) that
force the magnetic domains to be of a particular size.

(you can pursue this reasoning to considerable depth)

So, why don't we see disks with 7% more capacity as magnetics
shrink by 7% (e.g.)? Or, is it just not economical to retool
for anything less than a 2X capacity increase?

Good questions. I can say that at one point drives did
increase in smaller percentages. My earliest hard drive that
I purchased myself was 10M (I'd used smaller drives before.)
The next step up, a year later, was 20M (and 30M,
optionally.) Then 40M. Then 45M. Then 60M. Then 80M. Then
100M and 120M and 150M and 180M. Roughly, about that time
the market quickly switched from very expensive MFM (well,
very expensive for the larger sizes, anyway) to the cheaper,
less repairable, but faster IDE drives, and the numbers
rose comparatively quickly. But I definitely
remember even then 1.2G, 1.6G, 1.7G and 1.76G, 2G and 2.1G,
and so on. So your suggestion about incremental increases
being a reasonable expectation is consistent with _some_ of
the history I recall.

Jon
 
D Yuniskis

Hi Vladimir,

Vladimir said:
If you look at the actual amount of usable space on disk, it is quite
different from model to model and even from version to version, although
the "rated" capacity is the same.

But those changes seem to be "in the noise". E.g., as if an
extra cylinder or two were added.

In the PC market, you start to see 50% increases being common
(1TB, 1.5TB, 2TB, etc.) instead of 2X. But, nothing finer
grained than this (and, marketing to The Unwashed Masses, you
would think there would be a push to pitch a 1.1TB disk over
a competitor's 1.0TB drive -- perhaps this is what drives the
"50%" number?)

I.e., do *all* the disk fabs use the same magnetics, coatings,
etc.? One would think there would be more "natural variation"
in product offerings (?)
 
Hi,

I'm wondering what factors drive disk geometries (and, thus,
capacities). I.e., what makes certain sizes common and
others less common (e.g., you never? saw 7GB disks).

Number of surfaces. Three platters give 1 through 6 possible surfaces.
Delete iron (platters) or heads as required by the marketeering department.
Of course, the sizes of the magnetics determine the sizes of the
magnetic domains that can be resolved, etc. But, I don't see
anything else in the design of a disk system that forces
capacities to the values that are commonplace.

E.g., semiconductor memory has reasons for wanting to be
sized in powers of two -- there is no manufacturing advantage
to making a 5KB device (e.g.). If a foundry can improve its
process, it can make a *smaller* 4KB device and (hopefully)
improve market share, profit margin, etc. that way. Ultimately,
make an 8KB device that's the size of the old 4KB device, etc.

But, disk platters are fixed sizes (?). There are no economies (?)
to be gained by shrinking platter sizes as your magnetics improve.

There is. Small disks are more stable than large ones. That's why 3.5"
disks overtook 5.25" disks.
There are no "standards" (e.g., like with removable media) that
force the magnetic domains to be of a particular size.

(you can pursue this reasoning to considerable depth)

So, why don't we see disks with 7% more capacity as magnetics
shrink by 7% (e.g.)? Or, is it just not economical to retool
for anything less than a 2X capacity increase?

When 2X is next month, why bother? ;-)
 
D Yuniskis

Hi Richard,

Richard said:
When I was working with PC's I found that we could optimize some disks
beyond the rated capacity just by fiddling with the prime factors of
the number of actual addressable sectors on the drive.

This seems a contradiction in terms. If you are factoring
the total sector count and coming up with a different "physical"
geometry, the capacity remains the same (since the product of all
of the factors is a constant).

With modern drives (last 10+ years?), the actual physical
geometry "published" bears little resemblance to the actual
geometry due to things like ZDR.

Most drives now use LBA -- either explicitly or implicitly
(the days of a drive *requiring* an INITIALIZE command to
tell *it* what its physical geometry is are long gone :> )
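
For reference, the conventional CHS-to-LBA translation looks like this (the formula is the standard ATA one; the sample geometry below is arbitrary, chosen just for the round-trip check):

```python
# Standard CHS -> LBA mapping per the classic ATA convention:
# sectors are 1-based, cylinders and heads are 0-based.
def chs_to_lba(c, h, s, heads_per_cyl, sectors_per_track):
    return (c * heads_per_cyl + h) * sectors_per_track + (s - 1)

def lba_to_chs(lba, heads_per_cyl, sectors_per_track):
    c, rem = divmod(lba, heads_per_cyl * sectors_per_track)
    h, s = divmod(rem, sectors_per_track)
    return c, h, s + 1

# Round trip with an arbitrary 16-head, 63-sector "geometry".
# Note the capacity (the product of the factors) is unchanged no
# matter how you re-factor the sector count into C/H/S.
lba = chs_to_lba(100, 5, 32, 16, 63)
print(lba, lba_to_chs(lba, 16, 63))
```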
 
Grant

Ah, the fond memories of manually entering the known-bad sectors list
hand-written on the top of the drive into the formatting utility so as to let
the OS avoid storing any files there... :)

Of course that was in the days of, e.g., 20MB drives. I bet there's already
thousands of bad sectors in a 1TB drive the day it leaves the factory!

Grab a brand new Seagate drive, hook it up to a Linux box, run smartctl -a,
walk away for a couple of hours, then check the numbers again -- all by
itself the drive busily does its runtime calibration and such; lots of soft
errors in the first few hours of power-on time.

So these days I leave the drive to itself for a few hours before installing
the OS; it seems much more reliable after that. The old argument was to let
the drive warm up properly before formatting.

I also do a surface write of zeroes prior to formatting -- superstition,
perhaps? (dd if=/dev/zero of=/dev/sd$new_drive bs=1M). It gives the
controller a chance to remap iffy sectors before they've got my data on
them.
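
A sketch of the kind of check I mean -- pulling a couple of reallocation-related raw values out of `smartctl -A` output (the attribute names are the usual ATA ones, but output formats vary by vendor, so treat this as a sketch; the sample text below is made up):

```python
# Watch the attributes that tend to foreshadow trouble. The parser
# assumes the standard smartctl ATA attribute table: ten whitespace-
# separated fields, name in column 2, raw value in column 10.
WATCH = {"Reallocated_Sector_Ct", "Current_Pending_Sector",
         "Offline_Uncorrectable"}

def parse_smart_attributes(text):
    values = {}
    for line in text.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[1] in WATCH:
            values[fields[1]] = int(fields[9])
    return values

sample = """  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       2"""
print(parse_smart_attributes(sample))
```

Run it before and after the burn-in period and compare; rising raw values are the thing to worry about.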

Grant.
 
Dirk Bruere at NeoPax

If you look at the actual amount of usable space on disk, it is quite
different from model to model and even from version to version, although
the "rated" capacity is the same.

VLV

When the last mechanical memory drive has ceased production, the true
future will have arrived.
 
Archimedes' Lever

When the last mechanical memory drive has ceased production, the true
future will have arrived.


It is gonna be a while yet.
 
D Yuniskis

Hi Joel,

Joel said:
Ah, the fond memories of manually entering the known-bad sectors list
hand-written on the top of the drive into the formatting utility so as
to let the OS avoid storing any files there... :)

I periodically check the G-lists on my SCSI drives in an attempt
to give me a heads-up re: potential failures. Much the same way
that SMART tries to work on IDE drives.

Of course, SMART hasn't proven to be very *smart* so I suspect
my efforts are probably just "self-reassuring" :-/ (though,
I think, G-list additions *are* statistically significant as
predictors)
Of course that was in the days of, e.g., 20MB drives. I bet there's
already thousands of bad sectors in a 1TB drive the day it leaves the
factory!

Could be. OTOH, materials have improved, platters are smaller.
 
D Yuniskis

Hi Grant,
Grab a brand new Seagate drive, hook into a Linux box, run smartctl -a,
walk away for a couple hours than check the numbers again -- all by itself
the drive busily does its runtime calibration and stuff, lots of soft errors
in the first few hours power on time.

Most drives do this. A/V drives skip it (or dramatically scale
it back) as it impacts throughput when you are operating the
drive in a near continuous fashion.
 
Grant

Hi Grant,


Most drives do this. A/V drives skip it (or dramatically scale
it back) as it impacts throughput when you are operating the
drive in a near continuous fashion.

Quite likely, but given the pickiness of some here, I thought I'd stick
to recounting my own experience :)

Maybe the A/V drives have been 'run in' at the factory? My impression is that
the drives are not finely calibrated before shipment, they self calibrate
in use. Probably because the things need to compensate for temperature
variations and mechanical wear in normal use anyway?

Grant.
 
Archimedes' Lever

I'm wondering what factors drive disk geometries (and, thus,
capacities). I.e., what makes certain sizes common and
others less common (e.g., you never? saw 7GB disks).


Because, as with cannon, for decades only one or two companies drove new
HD design and development; chiefly, that was IBM. When it was huge,
multi-disc, 5.25" form factor stuff, platter capacity drove it. Now that
areal density is so high, reliability, thermal, and power issues drive it
into smaller form factors.

They were always breaking new records in areal density, and that allowed
for the reduced platter diameters which raised MTBF and lowered heat, so
everyone was 'buying' IBM's designs for MR recording technology, and now
on into perpendicular-recording heads. IBM has since ended its foray and
sold that division to Hitachi. Now they and Seagate have the best
drives. The rest are mass-volume OEM players (e.g., WD).

So mainly smaller is better because of heat and power concerns. The
15k RPM drives use something like 1.2" and 1.5" 'platters'.

SAS (Serial Attached SCSI) is the next wave, happening now.

We will be carrying drives around soon enough. Stop buying memory
sticks and keep the mini hard drive alive!
 
D Yuniskis

Hi Grant,
Quite likely, but given the pickiness of some here, I thought I'd stick
to recounting my own experience :)

Maybe the A/V drives been 'run in' at the factory? My impression is that
the drives are not finely calibrated before shipment, they self calibrate
in use. Probably because the things need to compensate for temperature
variations and mechanical wear in normal use anyway?

Sorry, I wasn't as complete in my explanation as I should have
been. <:-(

The A/V drives "postpone" the "recalibration" that drives
perform periodically. This is to accommodate variations
brought about by thermal issues (i.e., things "stretch"
as they get warm).

As a crude analogy... the tracks (cylinders) effectively
"increase in diameter" as the platters heat up. So, the
drive, all by itself, moves the head and watches to see
where it ends up. Then, uses this to calibrate (one of
the) gain(s) in its head-positioning servo. Obviously,
while the drive is playing around like this, it can't
get you the data that you *want* (assuming your needs
are truly random and can't be satisfied by the read-ahead
cache) so the access time suddenly increases dramatically
(until the calibration "digression" is completed)

For typical disk accesses, you never notice this (though if
the computer is completely idle, you can hear the heads
occasionally being bounced AS IF the disk was accessing real
data). But, for applications (A/V) that expect continuous
uninterrupted data to/from the drive, these blips were
problems, as throughput would fall remarkably during the
calibration cycle.

Newer drives embed the servo controls in the actual data
tracks to minimize the need for this Draconian activity.
And, they have faster access times, larger read/write caches,
etc. -- so "A/V" is an unneeded kludge.
 
Archimedes' Lever

But, disk platters are fixed sizes (?).


Our ability to record across more and more of that area, and ever more
tightly across the surface, determines the actual bit count, not
merely the area itself.

Heads have gotten better, electronics has gotten better at modulating
them, motor controllers have gotten better at moving them more precisely,
and spindle and head arm bearings have gotten better at moving without
'bumps' in their motion.

THEN they went and flipped the record axis 90 degrees and quadrupled
what they were already doing. (perpendicular recording)
 
Archimedes' Lever

There are no economies (?)
to be gained by shrinking platter sizes as your magnetics improve.


Sure there are: overall reliability, for one. The spindle bearings on
a 1.5-inch platter last years longer than those on a 5.25" platter stack,
due to Coriolis effects alone.

We can also have a RAID stack of 9 1.5 inch drives that are hot
swappable when failing.

When a 5.25-inch drive with 9 platters fails on one platter, the whole
drive is shot, even though the data can be rebuilt after the new drive
gets put in and formatted. A single failed RAID drive also rebuilds
faster after a failure.

Hell, I can see an 18-drive RAID stack where each of 9 RAID elements is
mirrored, so even a drive failure would be recovered from immediately.

Little drives are where it is at because they are cheap to make. One
head, one platter... it is all about cost of manufacture for maximized
return, in this case drive capacity.

I would not want a 5.25 inch 20TB drive where a single failure could
take down my entire archive, and even if it is backed up, the drive is
hugely expensive at that point.

Gimmie ten little ones that watch each other's backs any day. Call 'em
'marines'.
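
The arithmetic behind that preference can be sketched with made-up numbers (the 3%/year failure rate and one-day rebuild window are assumptions for illustration, not data):

```python
# Back-of-envelope: chance of losing data in a year with one big drive
# vs. nine mirrored pairs, assuming independent failures and a fixed
# rebuild window. All rates here are assumed, purely illustrative.
p_fail_year = 0.03      # assumed annual failure probability per drive
rebuild_days = 1.0      # assumed time to rebuild onto a fresh mirror
p_partner_dies_in_rebuild = p_fail_year * (rebuild_days / 365)

single_drive_loss = p_fail_year                          # any failure loses data
pair_loss = 2 * p_fail_year * p_partner_dies_in_rebuild  # both halves must die
nine_pairs_loss = 1 - (1 - pair_loss) ** 9               # any of nine pairs lost

print(single_drive_loss, pair_loss, nine_pairs_loss)
```

Even with nine times the pair count, the mirrored stack's annual loss probability comes out orders of magnitude below the single drive's, which is the whole argument for the 'marines'.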
 
Archimedes' Lever

If you look at the actual amount of usable space on disk, it is quite
different from model to model and even from version to version, although
the "rated" capacity is the same.

VLV

Some drives had hidden partitions and operating systems loaded onto
them that ran on the drive itself. CP/M was common on ESDI drives.
 
Archimedes' Lever

Hi Vladimir,



But those changes seem to be "in the noise". E.g., as if an
extra cylinder or two were added.

In the PC market, you start to see 50% increases being common
(1TB, 1.5TB, 2TB, etc.) instead of 2X. But, nothing finer
grained than this (and, marketing to The Unwashed Masses, you
would think there would be a push to pitch a 1.1TB disk over
a competitor's 1.0TB drive -- perhaps this is what drives the
"50%" number?)

I.e., do *all* the disk fabs use the same magnetics, coatings,
etc. One would think there would be more "natural variation"
in product offerings (?)


I went from 750GB to 1TB to 1.5TB to 2TB, so the jumps are getting
bigger.

Right about the time they start to top out the 3.5" form factor, they
drop the whole line and have us all using 2.5" drives and SAS interfaces.
 
Archimedes' Lever

When I was working with PC's I found that we could optimize some disks
beyond the rated capacity just by fiddling with the prime factors of
the number of actual addressable sectors on the drive.

Perhaps you merely remember the old early MFM trick of setting the
drive interleave right.

Early drives had some hard sector flags in them and you were not able
to do what you describe... at all.

Though you may also be remembering early IDE drives and virtual
cylinders, etc.

Kind of like the guy telling Vizzini that the word "inconceivable" was
not a word.

You gained no additional drive space, unless you were performing the
first mentioned method.
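
That interleave trick is easy to model: logical sectors are laid k physical slots apart so a slow controller can catch the next sector without waiting a full revolution. A small sketch (17 sectors per track as on the old MFM drives; the `gap` controller-processing-time parameter is an assumption for illustration):

```python
# Lay out logical sectors k physical slots apart (k:1 interleave).
def interleave_map(n_sectors, k):
    layout = [None] * n_sectors   # physical slot -> logical sector
    slot = 0
    for logical in range(n_sectors):
        while layout[slot] is not None:
            slot = (slot + 1) % n_sectors
        layout[slot] = logical
        slot = (slot + k) % n_sectors
    return layout

# Revolutions to read a whole track sequentially, if the controller
# needs `gap` sector-times after each sector before it can read again.
def revolutions_to_read(layout, gap):
    n = len(layout)
    pos_of = {log: phys for phys, log in enumerate(layout)}
    ticks, pos = 0, 0
    for logical in range(n):
        target = pos_of[logical]
        wait = (target - pos) % n
        if logical and wait < gap:    # missed it: wait a full extra rev
            wait += n
        ticks += wait + 1
        pos = (target + 1) % n
    return ticks / n

# A controller needing 2 sector-times: 1:1 interleave costs a whole
# revolution per sector, while 3:1 reads the track in under 3 revs.
print(revolutions_to_read(interleave_map(17, 1), 2))
print(revolutions_to_read(interleave_map(17, 3), 2))
```

So "setting the interleave right" never added capacity; it just stopped the drive from spinning uselessly between sectors.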
 