[linux-elitists] git and a sysadmin book
Karsten M. Self
Wed Jan 14 14:37:06 PST 2009
on Wed, Jan 14, 2009 at 08:09:00AM -0600, Jeremy Hankins (email@example.com) wrote:
> "Karsten M. Self" <firstname.lastname@example.org> writes:
> > on Mon, Jan 12, 2009 at 10:23:15AM -0600, Jeremy Hankins (email@example.com)
> > wrote:
> >> But you are quite correct to point out that a system that can survive
> >> high entropy conditions is even more desirable, and you're more likely
> >> to find such systems in conditions of high entropy.
> > Um, sorry, that's not entirely clear, typo? Is the last phrase supposed
> > to be "high entropy" or "low entropy"?
> No typo. It's supposed to be an evolutionary point, which is what I
> took your original point to really be. Low entropy is not bad for its
> own sake (for its own sake it's good), but bad insofar as it tends
> toward situations where the system depends on low entropy for its
> survival. If you want to find systems that can survive high entropy,
> look in conditions of high entropy.
OK, I think I've somewhat failed in my mission.
The original statement ("Large, low-entropy pools are inherently
dangerous") wasn't meant to be a submission for the Nobel Prize in
Phizzix or some sort of Shannon award, though it's borrowing from both.
It *is* meant to provide a useful tool, with the additional context of
some exemplar instances of what are meant by "low-entropy pools",
for persons to incorporate into risk analysis. That risk, by the way,
is self-evidently relative to either the individual assessing that risk
or some construct or resource the person has taken a hankering to.
The Universe is very much like Catbert: it looks cute but doesn't care
if you live or die. So, in a strict physics sense, entropy (high or
low) doesn't have goodness or badness. Regarding a specific
human (or other sentient consciousness) with a value system and a
ranked hierarchy of preferred states, some states, described
(somewhat loosely) in terms of entropic value, may pose higher or lower
risk.
> But this explanation depends on distinguishing low entropy (i.e.,
> ordered and stable) from high potential. If high potential is part of
> your definition of low entropy, I'll need some other term to describe
> ordered systems that may or may not be high potential.
As an example of a resource with low entropy but little *energetic*
potential, may I direct your attention to the ordering of magnetic
fields on a local matrix of spinning plates or solid-state silicon
array. Data is core to a great many risk-management
scenarios, has little direct energy content (though properly configured
and utilized data can *direct* significant energy flows), and is defined
almost entirely by its randomness or non-randomness.
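To make the "data is defined by its non-randomness" point concrete, here's a
minimal sketch of Shannon entropy measured over bytes (the function name and
sample data are illustrative, not anything from the thread): a perfectly
ordered pool scores 0 bits per byte, a uniformly varied one scores the maximum
8.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte: 0.0 (fully ordered) up to 8.0."""
    if not data:
        return 0.0
    n = len(data)
    counts = Counter(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A maximally ordered "pool" of data: one repeated symbol.
ordered = b"A" * 1024
# Every byte value equally represented: maximal entropy.
varied = bytes(range(256)) * 4

print(shannon_entropy(ordered))  # 0.0
print(shannon_entropy(varied))   # 8.0
```

The same measurement applied to a disk image or a backup set is one crude way
to quantify how "low-entropy" a given data pool actually is.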
Similarly any system which doesn't contain or store energy but serves to
channel or control it. Valve systems. Dry riverbeds. Transportation
systems. Power distribution systems. Social and organizational
networks and hierarchies. Ordered, low-entropic. Not high-potential
(absent an energy flow to be channeled through the system).
Note also that I am *NOT* saying that low-entropic states are without
value. Almost entirely to the contrary. They are useful precisely
*because* they have the ability to induce changes in state. But that
usefulness can be tapped both for good (defined in a manner contextually
specific to both the system and observer in question) or bad, or can be
disrupted entirely (and lost, at least at a significant cost to
recover).
The question posed by the initial statement ("Large, low-entropy...")
then is how to:
- Preserve the value of such pools.
- Reduce the risks associated with such pools.
- Prevent the misappropriation of such pools.
- Prevent the destruction of such pools.
There's also the hope that given the option of choosing among systems in
which an equivalent benefit can be provided by a more or less risky
design, the less risky alternative is chosen.
> (Bringing this back to elitism, this is one reason one would expect, all
> else being equal, open development models to be more resilient than
> closed. Closed models are closed in part to maintain conditions of low
> entropy. Open models aren't better because disorganization,
> factionalism, and infighting are good, but because models that can
> survive these things are good. Systems that must prevent these things
> for the sake of their survival are not as resilient as those that can
> survive some level of such silliness.)
I think we're verging toward agreement here. Closed models tend to:
- Assume a less disruptive environment.
- Attain higher levels of internal complexity (spaghetti code).
- Require the existence of some exogenous industrial or other
institutional framework. E.g.: the vendor dies, the code dies.
For the most part, Free Software is developed in a more hostile
environment, providing it with (should it survive the environment)
greater resilience.
Karsten M. Self <firstname.lastname@example.org> http://linuxmafia.com/~karsten
Ceterum censeo, Caldera delenda est.