Maker Pro

1000 year data storage for autonomous robotic facility

  • Thread starter Bernhard Kuemel

Rod Speed


Essentially because there are very few human systems that
have ever been organised to keep doing the same thing like
that for 1000 years when doing that requires quite a bit of
human effort to do.

Even the world's great religions can't be relied on to
keep going, a hell of a lot of them have in fact imploded
over time. Some in a lot less time than 1000 years.

It's just the nature of human activity.
I'm constantly replacing dried out electrolytic caps from antique radios.

Sure, but we have seen some other approaches to
preservation last fine for much longer than 1000 years.
Anything that moves (pots, controls, rheostats, variable
caps, speaker cones, dial cord, etc) are constant sources
of maintenance problems. Any switch or relay without
hermetically sealed contacts eventually oxidizes, pits,
arcs, or melts.

Sure, but we have seen some other approaches to
preservation last fine for much longer than 1000 years.
My maintenance free battery is really a throw away battery.
I've had some experience working on process controllers for
the food canning business. It's amazing how much rotting
muck can find its way into sealed NEMA enclosures.

Sure, but we have seen some other approaches
to sealing things do fine for well over 1000 years,
most obviously stuff sealed in glass.
I don't think it's possible to make an autonomous
anything that will work even 50 years,

There are plenty of examples of stuff that has done that.
much less 1000.

There are examples of stuff that has done
that for much longer than that too, most
obviously with the pyramids.
Actually, the biggest problem are the human operators.

Which is why it's better to do without those if that's feasible.
The Three Mile Island and Chernobyl reactor meltdowns come to mind,
where the humans involved made things worse by their attempts to fix
things.

But there are plenty more examples where attempting
to fix things worked fine. You don't have to have such an
unstable system where fucking things up results in disaster.

Plenty of ancient churches and mosques etc have lasted
much longer than 1000 years with humans fixing things
that go wrong. The main problem is setting up a system
where the humans want to bother for more than 1000 years.
Yeah, maybe autonomous would be better than human maintained.

No maybe about it if it's feasible.
I once worked on a cost plus project, which
is essentially an unlimited cost system.

Nothing is ever an unlimited cost system.
They pulled the plug before we even got started
because we had exceeded some unstated limit.

So it wasn't in fact an unlimited cost system at all.
There's no such thing as "cost is no object".

That's clearly true of the world's great religions.
Repair how and using what materials?

The same materials that were used to make it in the first place.

ALL you need is a situation where those are everywhere.
Like I said before, do you have a CK722 transistor handy
to fix my ancient 6 transistor AM portable antique radio?

It's obviously possible to make more the
same way that the one that failed was made.
I was lucky and found one that was made in the early 1960's.
Ok, that's about 50 years. In another 50 years, such replacement
devices will only be found in museums and landfills.
<http://ck722museum.com>

But we can still make more the same way the original was made
if we have enough of a clue to document how it was made.

That is in fact done with plenty of medieval stuff, even
when it was not documented how it was made then.
You could make a plug-in work-alike replacement using Si-Ge technology.

And that's all you need when you want to keep it going for 1000 years.
However, that would require that you upgrade (evolve) your
spare semiconductor fab production line to switch from your
original technology, to the latest technology, which didn't exist
when the original was made. Or, you could keep cranking out
Ge replacement parts, until your supply of Ge runs out.

Or you can just work out what the failure rate is likely
to be, multiply that by 10, make that many and just
keep using the replacements from stock as they fail.
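The stock-and-replace approach above is easy to sketch as a calculation. The MTBF figure here is an illustrative assumption, not from the thread; the ×10 safety factor is the one proposed above:

```python
# Sketch of the spares-stocking approach: estimate expected failures
# over the design life, multiply by a safety factor, stock that many.
# The 500,000 h MTBF is an illustrative assumption.

def spares_needed(mtbf_hours: float, design_life_years: float,
                  units_in_service: int, safety_factor: float = 10.0) -> int:
    """Expected failures over the design life, times a safety factor."""
    hours = design_life_years * 365.25 * 24
    expected_failures = units_in_service * hours / mtbf_hours
    return int(expected_failures * safety_factor)

# e.g. 1000 identical parts with a 500,000 hour MTBF over 1000 years
print(spares_needed(500_000, 1000, 1000))
```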
You gave the example of the termite and the alligator.
I provided links which demonstrate that both have
evolved and changed over the millennia, not just 1000 years.

But you did not show that what evolution had happened over
that time was NECESSARY for the survival of that species.
Many species, including man, have not evolved much in 1000 years.

So your claim that the system being discussed would have
to evolve to survive for 1000 years has blown up in your face
and covered you with black stuff very spectacularly indeed.
However, for every one that hasn't evolved,
there are literally thousands of insects, bacteria,
fish, birds, and other species that have changed.

ALL we need is examples of species that have survived
fine for 1000 years without any evolution that had
anything to do with their survival to prove that your claim
that evolution is crucial to survival is just plain wrong.
The list of extinct and endangered species
should offer a clue as to how it works.
<http://en.wikipedia.org/wiki/IUCN_Red_List>

All that shows is that a lot of evolution happens.

NOT that it's essential for survival over 1000 years.

It would be a hell of a lot more surprising if we had
not seen a massive amount of evolution given that
we ended up with something as sophisticated as
humans from what was once just pond slime.
If it were that reliable, we wouldn't need ECC (error correcting) memory.

We don't with data engraved on nickel plates.
Dynamic RAM and hard disk drive densities are now at
the point where the electronics literally has to make
a guess as to whether it's reading a zero or a one.

That utterly mangles the real story.

And there is no reason why you have to push the envelope that
hard with something you want to last for 1000 years anyway.
ECC is a big help, but all too often, the device guesses wrong.
Even cosmic rays and local radioactive sources can cause soft errors.
<http://en.wikipedia.org/wiki/Soft_error>

Yes, but soft errors are easily avoided.

And they just aren't a problem with data engraved on nickel plates etc.
I see these all too often in a rather weird way. When one of my
servers experiences an AC power line glitch, it often flips a bit.

Just a lousy design. Doesn't happen with mine.

And again, doesn't happen with data engraved on nickel plates.
The bit is usually not being used by the OS or by an application.
Several days later, the machine crashes, without any warning or
apparent cause, when it needs to read this bit, and finds it in an
unexpected state. I've also run memory error tests continuously
for several days on various machines (using MemTest86 and
MemTest86+) and found random errors every few days.

Again, just a system with a fault.
You can probably build something that is reliable and stable,

We know you can because even the Egyptians did that.
but it will involve low density, considerable redundancy,
and plenty of error checking and error correction.

Not if you store the data by engraving it on nickel plates.
Example please?

The data the Egyptians left behind.
1000 years ago, we were in the tail end of the dark ages.

And the data the Egyptians left behind was around MUCH earlier than that.
If the satellite business had a good financial reason for the birds to
last longer,

There isn't, because technology improves
so dramatically in even just 100 years.
I'm sure they would have done it.

Yes, the Egyptians clearly decided that they needed
that for various reasons and achieved it.
Right now, the lifetimes of LEO and MEO birds
are fairly well matched to their orbital decay life.

Because that approach makes sense.
A 1000 year lifetime on the electronics, won't make much
sense if the bird falls out of the sky at 20-30 years. There
are numerous orbital decay calculators online.
Name another approach that isn't a circular
definition, such as "making it more reliable".

Just use an approach known to last much longer
than that like storing the data by engraving it on
nickel plates that need no maintenance at all.
What design philosophy should be followed
in order to produce a 1000 year design that
does NOT evolve in some way?

See above.
Ok, lets see if that works.

Of course it works. That's how plant and animal species
survive for MUCH longer times than 1000 years.

There are plenty of trees that last for more than 1000 years.
The typical small signal transistor has an MTBF
of 300,000 to 600,000 hrs or 34 to 72 years.

Typical is irrelevant. You'd obviously use very long
lived technology if you want it to survive 1000 years.
I'll call it 50 years so I can do the math without finding my calculator.

I'll go for engraved nickel plates so I don't even need to calculate
anything.
MTBF (mean time between failures) does not predict the life of
the device, but merely predicts the interval at which failures might
be expected. So, over the 1000 year life of this device, a single
common signal transistor would be expected to blow up about 20 times.

So all you need is say 200 spares.

Assuming the robot has about 1000 such transistors,
you would need 20,000 spares to make this work.

So you have 200,000 and survive 1000 years fine.
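Working the quoted MTBF range through directly gives the expected failure count per part. This is a back-of-the-envelope sketch; the 8766 hours per year (365.25 days) is an assumption of mine, the MTBF figures are the ones quoted above:

```python
# Expected failures of one small-signal transistor over 1000 years,
# from the quoted MTBF range of 300,000 to 600,000 hours.
HOURS_PER_YEAR = 8766  # 365.25 days, assumed

def expected_failures(mtbf_hours: float, life_years: float) -> float:
    """Expected number of failures of one part over the design life."""
    return life_years * HOURS_PER_YEAR / mtbf_hours

for mtbf in (300_000, 600_000):
    n = expected_failures(mtbf, 1000)
    print(f"MTBF {mtbf} h -> about {n:.0f} failures in 1000 years")
```

That lands in the 15-29 range, so "about 20 failures per part" is the right order of magnitude.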
You can increase the MTBF using design methods
common in satellite work, but at best, you might
be able to increase it to a few million hours.

So there is no problem.
Great. You're going to seal an engineer inside the machine?

No, you use more than one to design the system in the first place.
You mean like the cloud storage servers that are erratically having
problems?

No, like the way the Egyptians chose to store what data they wanted
to store, which lasted fine for much longer than 1000 years.
Please provide a single server farm or data dumpster
that operated on a sealed building basis.

Look at how the Egyptians did theirs.
The larger systems take storage reliability quite seriously.
For example, Google's disk drive failure analysis:
<http://static.googleusercontent.com...ch.google.com/en/us/archive/disk_failures.pdf>

Irrelevant to how the Egyptians did theirs.
Geosynchronous satellites are unlikely to suffer from serious orbital
decay. However, they have been known to drift out of their assigned
orbital slot due to various failures. Unlike LEO and MEO, their
useful life is not dictated by orbital decay. So, why are they not
designed to last more than about 30 years?

Because the technology evolves so much over that time
that you don't care if they last longer than that, they are so
hopelessly obsolete that they are replaced for that reason.

We don't do it like that with stuff as
basic as books that we want to keep.
Please provide a few examples of devices that were
INTENTIONALLY designed to last more than 1000 years.

The stuff kept in Egyptian pyramids.
The 10,000 year clock is a good example. Got any more?

The stuff kept in Egyptian pyramids.
The methods that are used for satellite life extension
(reloadable firmware) are directly relevant to doing
the same on the ground in a sealed environment.
No.
At the risk of being repetitive, the reason that one needs
to improve firmware over a 1000 year time span is to allow
it to adapt to unpredictable and changing conditions.

The Egyptians didn't bother and theirs
survived for much longer than that.
True. However, not providing a means of improving or
adapting the system to changing conditions will relegate
this machine to the junk yard in a fairly short time.

It didn't with the machine the Egyptians made, the pyramids.
All it takes is one hiccup or environmental "leak", that
wasn't considered by the designers, and it's dead.

It wasn't with the machine the Egyptians made, the pyramids.
Stupid machines don't last and brute force is not
a long term survival trait. Ask the dinosaurs.

I'll look at the pyramids instead.
Various countries are doing a great job of making rare minerals both
difficult to obtain and expensive for political and financial reasons.
A commodity doesn't need to be scarce in order to be difficult to obtain.

They aren't in fact at all difficult to obtain, except for a
tiny subset that are potentially dangerous, like uranium.
Example please.

The Egyptian pyramids.
 
On 05/10/2013 07:44 AM, Jeff Liebermann wrote:

It's quite common that normal computer parts work 10 years. High
reliability parts probably last 50 years. Keep 100 spare parts of each
and they last 1000 years, if they don't deteriorate in storage.

Did you see the pictures of the Fukushima reactor control room ?

So 1970's :)

But generally, also in many other heavy industry sectors with the
actual industrial hardware being used for 50-200 years, you might
still keep over 30 years old sensors, actuators, field cables and I/O
cards, while upgrading higher level functions, such as control rooms,
to modern technology.

The geostationary satellite lifetime is limited by the amount of
station keeping fuel on board. The earth is not a perfect sphere and
hence, sooner or later, the satellite would be moving in a figure of
eight, as seen from earth.

If the figure is larger than the ground antenna beam width, active
satellite tracking is needed, which would be unacceptable for at least
home receiver antennas. For these reasons, the satellite position has
to be maintained within a degree or two in both E/W as well as N/S
direction, which requires station keeping fuel, ultimately determining
the usable lifetime of a geostationary satellite.
Because we evolve. We update TV systems, switch from analog to digital
etc.

Since satellite transponders are simple "bent pipes", switching from
analog (FM) to DVB-S might have required some backoff in the TWT.
Going from DVB-S to DVB-S2 might require some further backoff. This of
course drops out some of the smallest receiving antennas.

The ana/digi switchover might require some higher power TWTs and/or
narrower satellite beams, otherwise the A/D change is not that
dramatic.
 
Yeah, it's nothing like evolution in fact.

If we are using the evolutional model, several sites with different
technologies must be used.

Some of these sites are successful, some are not, but of course we do
not know in advance, which system will survive and which will fail.
 

Rod Speed

Bernhard Kuemel said:
I said: "Price is not a big issue, if necessary." I know it's gonna be
expensive and we certainly need custom designed parts, but a whole
semiconductor fab and developing radically new semiconductors are
probably beyond our limits.


Have the robots fetch a spare part from the storage and replace it.
Circuit boards, CPUs, connectors, cameras, motors, gears, galvanic
cells/membranes of the vanadium redox flow batteries, thermocouples,
etc. They need to be designed and arranged so the robots can replace them.


It's quite common that normal computer parts work 10 years. High
reliability parts probably last 50 years. Keep 100 spare parts of each
and they last 1000 years, if they don't deteriorate in storage.

Yeah, that should be doable.
Also robots are usually idle and only active when there's something to
replace. The power supply, LN2 generator and sensors are more active.
I wonder how reliable rails or overhead cranes
that carry robots and parts around are.

Those can certainly be designed to last 1000 years.
If replacing rails or overhead crane beams is necessary
and unfeasible, the robots will probably drive with wheels.

Yeah, I don't see any need for overhead crane beams.

If you can get the electronics that drives everything to last
1000 years by replacement of what fails, the mechanical stuff
they need to move parts around should be easy enough.

Obviously with multiple devices that move parts around
so when one fails you just stop using that one etc.
Because we evolve. We update TV systems, switch from analog to
digital etc. My cryo store just needs to do the same thing for a long time.

It doesn't actually. The approach the Egyptians took lasted fine,
even when the locals chose to strip off the best of the decoration
to use in their houses etc.

Of course it's unlikely that you could actually afford something that big
and hard to destroy.
Initially there will be humans verifying how the cryo store does
and improve soft/firmware and probably some hardware, too,
but there may well be a point where they are no longer available.
Then it shall continue autonomously.

That conflicts with your other proposal of a tomb-like thing
in the Australian desert. It's going to be hard to stop those
involved in checking its working from telling anyone about it.

There is going to be one hell of a temptation for
one of them to spill the beans to 60 Minutes etc.
Yes. We need to consider very thoroughly every failure
mode. And when something unexpected happens, the
cryo facility will call for help via radio/internet.

At which time you have just blown your disguise as a tomb.
I even thought of serving live video of the facility so it remains
popular and people might call the cops if someone tries to harm it.

It's more likely to just attract vandals who watch the video.
Volunteers could fix bugs or implement hard/software
for failure modes that weren't considered.

Or they might just point and laugh instead.
 

benj

Did you see the pictures of the Fukushima reactor control room ?

So 1970's :)

But generally, also in many other heavy industry sectors with the actual
industrial hardware being used for 50-200 years, you might still keep
over 30 years old sensors, actuators, field cables and I/O cards, while
upgrading higher level functions, such as control rooms, to modern
technology.

Oddly I've got some radios from the 1920s that are still working fine (one
Atwater Kent had the pot metal tuning mechanism disintegrate, but if you
tuned each capacitor by hand it still worked fine). But radios of
essentially the same technology from the 30s and 40s are all dead. Parts
like electrolytic capacitors do not have long life. The "improvement" of
tubes with cathode coatings also limited their useful life. Today, since
short lifetime parts are just too convenient to ignore, nobody builds for
any extended life. Electronic lifetimes just keep getting shorter and
shorter.

Some years ago I started a project of an electronic grandfather
"superclock". But the idea was not to simply build an accurate clock, but
to build one that several hundred years from now would still be running
as accurately. (Same idea as a mechanical grandfather clock...ever notice
the similarity of a tall grandfather clock to a relay rack... get the
picture)

But I soon discovered that building electronics with several hundred year
life is not so simple. Making sure all your capacitors are of materials
that don't degrade, that active parts have a decent life time and all the
rest takes some careful considerations even if the electronics ends up
shielded in air-tight containers. Sure you can pick out things like
ceramic and glass capacitors and other items that will work for hundreds
of years but using ONLY those items to build a complex device takes some
serious design thought.
 

Rod Speed

Jeff Liebermann said:
I'm thinking there may be a different way to do this. The basic
problem is that an electronic system can currently be built that
will last about 50 years before MTBF declares that problems
will begin. With redundancy and spares, this might be extended to 100
years.
The building will last somewhat longer, but probably no
more than 100 years before maintenance problems arrive.

That's just plain wrong when it's designed to last 1000
years in the first place without any maintenance.
Rather than replace all the individual components,
I suggest you consider replacing the entire building
and all the machinery every 50-100 years.

That's much harder to achieve with an autonomous
system with no humans involved.
Instead of one building, you build two buildings, in alternation.
When the first facility approaches the end of its designed life,
construction on a 2nd facility begins adjacent to the first facility.
It would be an all new design, building upon the lessons learned
from its predecessor, but also taking advantage of any
technological progress from the previous 100 years.

Impossible with an autonomous system with no humans involved.
Instead of perpetually cloning obsolete technology,
this method allows you to benefit from progress.

But that does necessarily involve keeping humans
involved in doing that for 1000 years,
just to keep your head. Good luck with that.
When the new facility is finished, the severed heads
are moved from the old facility to the new. The old
equipment can then be scrapped, and the building
torn down to await the next reconstruction in 100 years.

And how do you propose to recruit a new crew of humans
the next time you need to replace everything except the heads?
Note: The 100 year interval is arbitrary, my guess(tm), and probably
wrong. The MTBF may also increase with technical progress over time.
It's called a finite state machine.
<https://en.wikipedia.org/wiki/Finite-state_machine>
Every state, including failure modes, must have a clearly defined
output state, which in this case defines the appropriate action.
These are very efficient, quite reliable, but require that all possible
states be considered. That's not easy. A friend previously did
medical electronics and used finite state machines. Every possible
combination of front panel control and input was considered before the
machine's servo would move. Well, that was the plan, but some clueless
operator, who couldn't be bothered to read the instructions, found a
sequence of front panel button pushing that put the machine into an
undefined and out of control state. You'll have the same problem.

Not if there are no humans involved.
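The finite state machine idea described above is easy to sketch: every (state, event) pair, including failure events, maps to an explicitly defined successor, and anything not in the table is routed to a safe state rather than left undefined (which is exactly the hole the button-mashing operator found). The states and events here are hypothetical illustrations:

```python
# Minimal finite state machine sketch for a repair robot.
# Every (state, event) pair has an explicitly defined successor;
# undefined inputs fall through to a safe "alarm" state instead
# of leaving the machine in an out-of-control condition.
# States and events are hypothetical.

TRANSITIONS = {
    ("idle",         "part_failed"): "fetch_spare",
    ("fetch_spare",  "spare_found"): "replace_part",
    ("fetch_spare",  "stock_empty"): "alarm",
    ("replace_part", "done"):        "idle",
    ("replace_part", "jam"):         "alarm",
    ("alarm",        "reset"):       "idle",
}

def step(state: str, event: str) -> str:
    """Return the next state; anything not in the table goes to alarm."""
    return TRANSITIONS.get((state, event), "alarm")

print(step("idle", "part_failed"))   # fetch_spare
print(step("idle", "button_mash"))   # undefined input -> alarm
```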
 

Rod Speed

Mark F said:
Re 1000 year data storage:
Could Intel or some other company use modern equipment but old design
rules to make the integrated circuits have a much longer expected
lifetime?

Yes, but how much longer is less clear.
It seems like it might be possible that if dimensions of the
devices were made larger then things would last longer.

And particularly if the design was to minimise diffusion somehow.

I guess that since it's a cryo facility, one obvious way to get
a longer life is to run the ICs at that very low temp too etc.
I know that making flash memory just a few times larger and using
only single level cells increases the number of reliable life cycles
100's of times (1000 to hundreds of thousands) while at the same time
raising the data decay time from about a year to about
10 years. Refreshing every year would only require 1000's of
write cycles, well within the 100's of thousands possible.

You'd be better off with some form of ROM instead, life-wise.
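The refresh write budget quoted above checks out; here is the arithmetic as a sketch, where the 100,000-cycle SLC endurance figure is an order-of-magnitude assumption consistent with the "100's of thousands" quoted:

```python
# Refresh write budget for SLC flash over the design life:
# with ~10 year data retention, refreshing once a year is very
# conservative, and 1000 years of yearly refreshes uses only
# 1000 program/erase cycles per cell.
SLC_ENDURANCE = 100_000  # order-of-magnitude assumption

def refresh_writes(life_years: int, interval_years: int) -> int:
    """Program/erase cycles consumed by periodic refresh."""
    return life_years // interval_years

writes = refresh_writes(1000, 1)
print(writes, writes < SLC_ENDURANCE)  # 1000 True
```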
I think the functions besides memory storage last a couple of tens of years now,

Much longer than that with core.
but I don't know if making things a few times larger and
tuning the manufacturing process would get to a 1000 years.
(For example, I don't know if the memory cells would last 1000
years, but data decay would not be a problem since only 100's of
rewrites/cell would be needed for refresh and 100's of thousands are
possible. (Actually, millions of rewrite cycles are likely to be
possible.)

Like I said, ROM is more viable for very long term storage.
Changing the designed circuit speed, the actual clock rate,
and operating voltage can also improve expected lifetime.
A long term power source would still be an issue
unless things can be made to not need refresh.

Yes, that's the big advantage of ROM and core.
 
All this prompts the question of whether human culture will last, to
the point that anyone will care about decoding 1's and 0's in 1000yr.

The OP said nothing about humans (the *robots* use the software
during the 1000 yrs), or why the facility needed to be autonomous for
1000yr.
If it does, one might assume that there are times during that period
where interest is sufficient to copy to new or better media.

If the facility's tech can be modified per outside developments,
does it still qualify as autonomous?
I still have files that have survived five generations of media tech.

Did you keep the machinery to read them, too?


Mark L. Fergerson
 

Rod Speed

The OP said nothing about humans

He did however imply that there would be humans around in
the future to thaw him out and upload the contents of his head.

He wasn't proposing that his robots do that.
(the *robots* use the software during the 1000 yrs),
or why the facility needed to be autonomous for 1000yr.

He did say that later; essentially he believes that that's
the most likely way to ensure that his frozen head will
still be around in 1000 years for the humans that have
worked out how to upload the contents to do that.
If the facility's tech can be modified per outside
developments, does it still qualify as autonomous?

Yes, if it can operate by itself.
Did you keep the machinery to read them, too?

You don't need to if you have multiple generations, you
only need to keep the machinery for the latest generation.
 