"The strong and clever will twist to their advantage any laws that are made; the law is a spider's web that catches the little flies and lets large creatures break through and escape."
Broadly, the idea that systems exist. Stuff in this category is stuff that drives ecologies: the ethos of a culture, the ethos inherent in a view or an act, or the aesthetics that drive an ecological consequence.
A thought for the morning: Over the past 24 hours, about 94% of the email my company received has been some kind of spam. That means that only a little over 6% (6.18%, to be precise) has been legitimate.
That's about normal for recent weeks.
The category breakdown looks like this:
It's interesting to note that medicine has so far outstripped sex. Though I have to wonder if penis enlargement is classified under "meds" or "adult."
All this having been said, I see no indication that email will go away. Most corporate environments will resort (as we have) to aggressive third-party spam filtering with whitelists. Draconian non-solutions like Serios, private email, replacing email with IM, and the like just aren't making any headway, because the value of free and open communication is so great that it easily outweighs the cost of spam mitigation. At the same time, companies like AppRiver have made the process of implementing third-party spam filters so seamless that even small businesses like ours can do it painlessly.
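For the curious, the core of whitelist-plus-scoring triage is simple enough to sketch in a few lines (a toy illustration only: the keywords, addresses, and one-hit threshold here are made up, and real filtering services do vastly more):

```python
# Toy sketch of whitelist-first spam triage. Illustrative only; the
# keyword list and threshold are invented for this example.
SPAM_KEYWORDS = {"viagra", "enlargement", "refinance", "winner"}

def classify(sender, body, whitelist):
    """Whitelisted senders bypass scoring; everyone else is keyword-scored."""
    if sender.lower() in whitelist:
        return "ham"
    words = set(body.lower().split())
    hits = len(words & SPAM_KEYWORDS)
    return "spam" if hits >= 1 else "ham"

whitelist = {"boss@example.com"}
print(classify("boss@example.com", "viagra sale", whitelist))   # ham (whitelisted)
print(classify("rnd@spam.biz", "cheap viagra winner", whitelist))  # spam
print(classify("friend@mail.net", "lunch tomorrow?", whitelist))   # ham
```

The point of the whitelist-first design is exactly the one above: known correspondents never get caught by the keyword heuristics, so the filter can afford to be aggressive about everything else.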
Those wacky NRO guys -- Jay Nordlinger has spent a weekend in Vermont, and now he Understands The Noble Working Man:
.... here’s how I understand it: Modestly off people — “real Vermonters,” as some people say — are voting for McCain and Palin. Comfortably off people, such as those who own ski chalets, are voting for Obama and Biden. And the following has been frequently noted about the city of my residence, New York: The rich are voting Democratic. And those who work for them — driving cars, cleaning rooms, and so on — are voting Republican.
(I guess we know what "the math" is, now.)
The Nordlinger Effect is when non-rich people tell the rich jerk asking who they're voting for that they're voting just like he is, so he'll shut up and leave them alone.
For his part, Yglesias himself has pointed (unnecessarily) to the work of Andrew Gelman to demonstrate that rich folks in poor states like Vermont do in fact tend to vote Republican, while poor folks in those same states tend to vote Democratic. Others in Yglesias's comment thread take the trouble to note (among other things) that you're not a real Vermonter unless you were born there (at least), that all the ski chalets are owned by "flatlanders" from Connecticut, Massachusetts, and New York (who won't be voting in Vermont anyway), and that all the actual data amply demonstrates that "Vermonters of modest means" will be voting overwhelmingly for Obama. (As for housekeeping staff voting Republican: if he believes that, I've got this bridge I'd love to sell him...)
And how the hell Jay Nordlinger can use the phrase "of modest means" without blushing, I don't know. Maybe it's an internal manifestation of the Nordlinger Effect: Jay telling Jay what Jay wants to hear, so his brain will leave him alone.
Freeman Dyson is one of the more dangerous scientists alive right now.
.... The wiggles in the [Keeling] graph show us that every carbon dioxide molecule in the atmosphere is incorporated in a plant within a time of the order of twelve years. Therefore, if we can control what the plants do with the carbon, the fate of the carbon in the atmosphere is in our hands. That is what Nordhaus meant when he mentioned "genetically engineered carbon-eating trees" as a low-cost backstop to global warming. The science and technology of genetic engineering are not yet ripe for large-scale use. We do not understand the language of the genome well enough to read and write it fluently. But the science is advancing rapidly, and the technology of reading and writing genomes is advancing even more rapidly. I consider it likely that we shall have "genetically engineered carbon-eating trees" within twenty years, and almost certainly within fifty years.
Carbon-eating trees could convert most of the carbon that they absorb from the atmosphere into some chemically stable form and bury it underground. Or they could convert the carbon into liquid fuels and other useful chemicals. Biotechnology is enormously powerful, capable of burying or transforming any molecule of carbon dioxide that comes into its grasp. Keeling's wiggles prove that a big fraction of the carbon dioxide in the atmosphere comes within the grasp of biotechnology every decade. If one quarter of the world's forests were replanted with carbon-eating varieties of the same species, the forests would be preserved as ecological resources and as habitats for wildlife, and the carbon dioxide in the atmosphere would be reduced by half in about fifty years.
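It's worth running the arithmetic behind that claim. Here's a back-of-envelope sketch (my simplifying assumptions, not Dyson's actual calculation: treat the 12-year turnover as the whole atmospheric pool cycling through plants, and suppose engineered trees permanently lock away a fraction f of that annual flux, so the pool decays as C(t) = C0 · exp(−f·t/12)):

```python
import math

# Back-of-envelope check of the "halve CO2 in fifty years" claim.
# Assumptions (mine, for illustration): atmospheric CO2 cycles through
# plants on a ~12-year turnover, and engineered trees permanently
# sequester a fraction f of that annual flux.
turnover_years = 12.0
target_years = 50.0

# Fraction of the flux that must be locked away to halve the pool in 50 y:
# solve exp(-f * 50 / 12) = 1/2  =>  f = 12 * ln(2) / 50
f = turnover_years * math.log(2) / target_years
print(f"required captured fraction of annual flux: {f:.2f}")  # ~0.17

# Sanity check: pool remaining after 50 years at that capture rate.
remaining = math.exp(-f * target_years / turnover_years)
print(f"pool remaining after 50 y: {remaining:.2f}")  # 0.50
```

On these (generous) assumptions, sequestering roughly a sixth of the annual carbon flux through vegetation would do it, which is at least the same order of magnitude as Dyson's "one quarter of the world's forests."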
That's just science fiction, of course -- not the scary part at all. This is the scary part:
It is likely that biotechnology will dominate our lives and our economic activities during the second half of the twenty-first century, just as computer technology dominated our lives and our economy during the second half of the twentieth. Biotechnology could be a great equalizer, spreading wealth over the world wherever there is land and air and water and sunlight. This has nothing to do with the misguided efforts that are now being made to reduce carbon emissions by growing corn and converting it into ethanol fuel. The ethanol program fails to reduce emissions and incidentally hurts poor people all over the world by raising the price of food. After we have mastered biotechnology, the rules of the climate game will be radically changed. In a world economy based on biotechnology, some low-cost and environmentally benign backstop to carbon emissions is likely to become a reality.
Translation: "We don't need to do anything now, because we'll invent our way out of the problem when the time comes."
I suppose I should be grateful that he's no longer appointing himself global diagnostician. At least now he admits that there might be a problem.
I've been told by people I respect that Dyson is a very good physicist. But I'm hard put to recall anything outside of his domain that wasn't just plain stupid once you got past the "oh, neato" moment. I mean, Dyson Spheres are a cool idea, but also a really dumb one if you think about them just a tiny bit. They're a triumph of the broadly logically possible: We can imagine it, therefore it must be feasible. We can imagine going Niven & Pournelle one better and building a sphere around a small star (or arranging otherwise to intercept all of the star's energy). We can imagine nesting matrioshka layers one inside the other, to overlap and trap the inevitable leakage. All we have to do is solve this list of several thousand technical problems. We've solved every other technical problem we've ever been presented with; we'll clearly be able to solve these. What is conceivable, is feasible.
We can imagine magic carbon-sequestering trees, therefore they must be feasible. We can imagine a quarter of the world's trees being replaced by these magic inventions, therefore we should count on it happening (when the alternative is essentially the collapse of civilization).
All of these speculations commit an obvious and really, really troubling error: They assume that certain important things, like the rate of technological innovation, the rate of increase in energy use, etc., are essentially laws of nature: that not only will they not change, but that their not changing is a righteous thing. Moore's Law will go on forever; we'll keep increasing our need for energy at a predictable and increasing rate; we'll keep inventing new ways to solve all of our problems; better living through chemistry.
This kind of thinking is usually based on a detailed look at only a very short span of human history, and a very high-level gloss of anything beyond the past three or four hundred years.
It's disturbingly short sighted, in other words, even as it pretends to vision.
This is why I don't respect Dyson: He pretends to vision, but is blind to his own short-sightedness.
Or at least he thinks he is:
.... It is at least a possibility to be seriously considered, that China could become rich by burning coal, while the United States could become environmentally virtuous by accumulating topsoil, with transport of carbon from mine in China to soil in America provided free of charge by the atmosphere, and the inventory of carbon in the atmosphere remaining constant. We should take such possibilities into account when we listen to predictions about climate change and fossil fuels. If biotechnology takes over the planet in the next fifty years, as computer technology has taken it over in the last fifty years, the rules of the climate game will be radically changed.
When I listen to the public debates about climate change, I am impressed by the enormous gaps in our knowledge, the sparseness of our observations and the superficiality of our theories. Many of the basic processes of planetary ecology are poorly understood. They must be better understood before we can reach an accurate diagnosis of the present condition of our planet. When we are trying to take care of a planet, just as when we are taking care of a human patient, diseases must be diagnosed before they can be cured. We need to observe and measure what is going on in the biosphere, rather than relying on computer models.
Such vision! Who knew it was that simple: China burns the coal, we sequester their windblown carbon as topsoil. Mirabile dictu! Dyson ex machina.
And who knew that Dyson had such a complete grasp of the processes of planetary ecology? He must, since he feels so ready to propose that we replace all of the current thinking by climate scientists and ecologists with a suggestion by a physicist that we just give up on climate modeling and replace it with a holistic, diagnostician model.
It's convenient to be so brilliant that one doesn't feel the need to apply the same criteria to his own theories as he does to others'.
Freeman Dyson recently wrote:
In his "New Biology" article, [Carl Woese] is postulating a golden age of pre-Darwinian life, when horizontal gene transfer was universal and separate species did not yet exist. Life was then a community of cells of various kinds, sharing their genetic information so that clever chemical tricks and catalytic processes invented by one creature could be inherited by all of them. Evolution was a communal affair, the whole community advancing in metabolic and reproductive efficiency as the genes of the most efficient cells were shared. Evolution could be rapid, as new chemical devices could be evolved simultaneously by cells of different kinds working in parallel and then reassembled in a single cell by horizontal gene transfer.
But then, one evil day, a cell resembling a primitive bacterium happened to find itself one jump ahead of its neighbors in efficiency. That cell, anticipating Bill Gates by three billion years, separated itself from the community and refused to share. Its offspring became the first species of bacteria—and the first species of any kind—reserving their intellectual property for their own private use. With their superior efficiency, the bacteria continued to prosper and to evolve separately, while the rest of the community continued its communal life. Some millions of years later, another cell separated itself from the community and became the ancestor of the archea. Some time after that, a third cell separated itself and became the ancestor of the eukaryotes. And so it went on, until nothing was left of the community and all life was divided into species. The Darwinian interlude had begun.
The Darwinian interlude has lasted for two or three billion years. It probably slowed down the pace of evolution considerably. The basic biochemical machinery of life had evolved rapidly during the few hundreds of millions of years of the pre-Darwinian era, and changed very little in the next two billion years of microbial evolution. Darwinian evolution is slow because individual species, once established, evolve very little. With rare exceptions, Darwinian evolution requires established species to become extinct so that new species can replace them.
Now, after three billion years, the Darwinian interlude is over. It was an interlude between two periods of horizontal gene transfer. The epoch of Darwinian evolution based on competition between species ended about ten thousand years ago, when a single species, Homo sapiens, began to dominate and reorganize the biosphere. Since that time, cultural evolution has replaced biological evolution as the main driving force of change. Cultural evolution is not Darwinian. Cultures spread by horizontal transfer of ideas more than by genetic inheritance. Cultural evolution is running a thousand times faster than Darwinian evolution, taking us into a new era of cultural interdependence which we call globalization.
It's difficult to tell what Dyson wants to communicate. He argues against "reductionist biology" and floats a lot of pretty images of synergism and vaguely Taoist ideas about the resilience of life. But his own understanding of the complexity of life is clearly quite limited, or he wouldn't be so quick to idealize "non-Darwinian evolution" (a "golden age"?) and predict a rosy outcome from unrestricted biotech game-playing. History much more readily supports a skeptical view of the effects of biotech than it supports Dyson's positivist version. The reality will almost certainly be more of the same mixed bag we've got now: High-yield crops help feed more people and strain the land to a greater extent, which hurts crop yields, which demands still higher-tech farming technologies, and so on ad infinitum. It's not a sustainable cycle, and one would like to think someone with such a reputation for cleverness would get that. (The fact that he doesn't is, to me, another indication that he was over-rated to begin with.)
Dyson's thought seems to me to be fundamentally adolescent, in the sense that he always wants more and always thinks that things are simpler than the experts do.
Darwinian evolution may indeed have slowed evolution down considerably; but it may also have stabilized it. I suspect it was Darwinian evolution that made multi-cellular life truly feasible by making it possible to rely on large support structures generation over generation. In a diverse non-Darwinian framework, that reliance just wouldn't be possible. "Designs" that are stable in one generation could change fundamentally in the next, or even before the generation propagated, leaving no basis for reproduction. What Dyson casts in clearly pejorative language ("one evil day", "refused to share", "anticipating Bill Gates") was most likely the very change that made it ultimately possible for him to make these observations.
The analogy to culture is clear: Cultural evolution is rapid and destructive. It wipes out what came before without regard, and it has no mechanism to prevent the willy-nilly propagation of cultural "genetic" material. What we end up with, then, is a bunch of unstable structures that collapse quickly and harm their constituent people in the process.
The common response is that evolutionary processes will yield stronger and more stable structures through natural selection. But what if that's not possible without some kind of constraint on what kind of "genetic material" gets incorporated?
There's also an analogy to be drawn to information theory. Dyson is a cross-pollinator. He believes that the only real change comes via cross-pollination of ideas. He doesn't want to believe that it's necessary or, I think, even very important to create systems of thought. He thinks every wild idea needs to be considered. (With special attention to his, of course.) (What Dyson's thought on the scientific establishment boils down to, when you analyze the language, is essentially that he's smarter than they are, so they should listen to him more than they do. But I digress.)
But what if it turns out that it's necessary to constrain information in order to get use out of it? That much has seemed intuitively clear to me for many years. It's the lack of such constraints that characterizes many mental illnesses, such as schizophrenia and mania.
Of course, there are plenty of people -- Dyson might be among them -- who are more than willing to idealize mental illness in the same way. I'd like to say that those are people without the experience of talking with people suffering from such mental illnesses. I'd like to say that, but I've heard too many of them illustrate their cases with allusions to their interactions with the mentally ill. Rather, I suspect that they are people more in love with their theory than with the people they hope to explain by it.
The iPhone is the partial realization of the web-based thin client dream. In typical Apple fashion, though, they've gone just far enough to make money, and not so far that it might actually enable people to communicate more freely. They could have done that, but it would have meant leaving consumers' money on the table.
Apple's recent commercial makes this abundantly clear. In it, a user watches a clip of the Kraken from Pirates II, has a craving for calamari, and rotates his iPhone 90 degrees to search out seafood restaurants in his area.
Aside from the gee-whiz UI tricks that his iPhone enables, he's basically doing a Google Maps search. In fact it looks a lot like screenshots I've seen of Zimbra Zimlets for geo-locating addresses in the Zimbra web client. Nifty stuff. But there's no particular reason that it couldn't (or won't) be done on other phones. Hell, it's probably done on other phones now, if you want to pay for the service.
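Underneath the gee-whiz UI, a "seafood near me" query is mostly a nearest-neighbor search over coordinates. A minimal sketch, with fictional listings and no real mapping API:

```python
import math

# What the "restaurants near me" query boils down to underneath the UI:
# a nearest-neighbor search over lat/lon pairs. Listings are made up.
def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

restaurants = [  # (name, lat, lon) -- fictional
    ("Kraken Calamari", 37.7790, -122.4190),
    ("Squid Row", 37.8044, -122.2712),
]
here = (37.7749, -122.4194)  # the user's location
nearest = min(restaurants, key=lambda r: haversine_km(*here, r[1], r[2]))
print(nearest[0])  # Kraken Calamari
```

Which is the point: the hard parts (the listings database, the geocoding, the network) all live server-side, and any phone with a browser and a data connection can do the same search.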
Which brings me to the Palm Foleo. I hadn't heard of the Foleo before Charlie Stross wrote an analysis explaining just why he didn't think it was such a terrible idea. Basically, after looking at the fact that it's really completely independent of phones in every important way, and can connect to WiFi networks all on its own, he thinks that it was intended to be a Web 2.0 terminal. A thin client, as we used to say back when everybody who thought things through thought that was a bad idea for a business plan. Things have changed, now, though: Broadband really is ubiquitous, if you're willing to pay for the access, and good quality high-resolution displays and mass storage are cheap, and battery technology is improving radically, so that the phone and its proprietary network have to do less and less that's customized.
So the iPhone (and any other post-Blackberry phone that wants to be successful) is really a Web 2.0 Terminal. Sometimes they'll have cached data, but by and large they'll do everything they can through the airwaves. The differentiator will be in the user interface.
Apple understands that, of course. They have a late-mover advantage in this field, in that Nokia, Samsung, Symbian, MS, et al. have been so focused on solving the UI problem under now-outmoded constraints that they're having a hard time getting used to the freedom of new user interaction hardware.
It still comes down to paying for service, of course -- unless you're on WiFi, and can attach to the myriad of free nodes that are finally becoming common in our urban landscape. Like you can with the Foleo, or any one of a half dozen (non-Verizon) smart-phones I looked at earlier this week.
But not on an iPhone. You need the extra service to do that on an iPhone.
If there's one thing Apple never forgets to design in, it's making you pay.
Courtesy of the Peoria Chronicle's blog, here are links to a lecture on "New Urbanism" given by Andres Duany in Houston. It's on YouTube in 9 parts of 10 minutes each, and the first several have been posted on the Peoria Chronicle's blog. I'll be working my way through them bite by bite, as I hardly have 90 minutes to spare for anything that requires both visual and auditory attention, simultaneously. I may yet find something objectionable in it, but the basic presentation is quicker than reading Death and Life of Great American Cities.
One comment from the Chronicle blog is interesting:
“New urbanism” is just a facade being used by developers to pack as many people into the smallest footprint as possible, to increase their profits.
In San Diego, older neighborhoods are being transformed into jam packed, noisy, traffic infested cesspools, by billionaires who live on 10 acre estates in Rancho Santa Fe (SD’s Bel Aire).
The 40 year old, 10 unit, low income apt building next to me was converted to $400k “condos” last year. It’s been pure hell, with 15 rude, loudmouthed, morons moving in, several of whom are already about to default on their loans. Several units are now being rented, at 3 times the monthly rent as before. Who wins? A handful of guys sitting around dreaming up their next scheme.
That he misses the point of New Urbanism completely isn't the interesting part -- it's that he's so willing to conflate New Urbanism with a newspeak co-optation of its ideals. He's not necessarily wrong to do so. Like many idealistic movements, it has some foolishness and romanticism baked into it and is vulnerable to abuse. There are plenty of people who jump into idealistic movements with a partial understanding of the situation and then end up taking them in whole new, highly rationalized directions.
That's one of my objections to "emotional design": When you choose, as Don Norman, Bruce Tognazzini, et al. seem to have chosen, to make your evaluation of a design's quality hinge upon its gut, emotional appeal, you're basically opening up the door to tossing out real design and replacing it with pandering. Machines become good if they look cool. By that metric, the AMC Javelin would be one of the coolest, hottest cars ever manufactured. The nigh-indisputable fact that it was a piece of crap would be irrelevant: It had great "emotional design."
Similarly, the fact that PowerBooks are screwed together using 36 (or more) tiny screws of five to six different sizes and head-types, but also force-fit using spring clips, becomes irrelevant: The design feels great, looks great. Never mind that it could cost less to manufacture, cost less to repair and upgrade, and be just as solid, just as sound, if it were designed better. It's still great "emotional design."
What's the effect of this kind of life? No doubt the people who brain-farted the idea for this kind of a system in the first place would respond at this point that they are putting eyes on the street, they're addressing "lifestyle crime" (littering, loitering, miscellaneous minor malfeasance), and that the net effect is to get, through technology, what Jacobs asked for in the 1960s. But an honest appraisal would have to recognize that response as disingenuous. The voice is detached, judgmental, and doesn't brook response -- doesn't even afford it, since there are no pickups (that the security company is admitting to) on the cameras. It can't possibly work to provide the kind of human-scale, person-to-person interaction that happens in the relatively messy but relatively safe neighborhoods of the real world.
From John Robb, who seems to have coined the term "open source warfare":
[The Iraqi] insurgency isn't a fragile hierarchical organization but rather a resilient network made up of small, autonomous groups. This means that the insurgency is virtually immune to attrition and decapitation. It will combine and recombine to form a viable network despite high rates of attrition. Body counts - and the military should already know this - aren't a good predictor of success.
Given this landscape, let's look at alternative strategies. First, out-innovating the insurgency will most likely prove unsuccessful. The insurgency uses an open-source community approach (similar to the decentralized development process now prevalent in the software industry) to warfare that is extremely quick and innovative. New technologies and tactics move rapidly from one end of the insurgency to the other, aided by Iraq's relatively advanced communications and transportation grid - demonstrated by the rapid increases in the sophistication of the insurgents' homemade bombs. This implies that the insurgency's innovation cycles are faster than the American military's slower bureaucratic processes (for example: its inability to deliver sufficient body and vehicle armor to our troops in Iraq).
Second, there are few visible fault lines in the insurgency that can be exploited. Like software developers in the open-source community, the insurgents have subordinated their individual goals to the common goal of the movement. This has been borne out by the relatively low levels of infighting we have seen between insurgent groups. As a result, the military is not going to find a way to chop off parts of the insurgency through political means - particularly if former Ba'athists are systematically excluded from participation in the new Iraqi state by the new Constitution.
Third, the United States can try to diminish the insurgency by letting it win. The disparate groups in an open-source effort are held together by a common goal. Once the goal is reached, the community often falls apart. In Iraq, the original goal for the insurgency was the withdrawal of the occupying forces. If foreign troops pull out quickly, the insurgency may fall apart. This is the same solution that was presented to Congress last month by our generals in Iraq, George Casey and John Abizaid.
Unfortunately, this solution arrived too late. There are signs that the insurgency's goal is shifting from a withdrawal of the United States military to the collapse of the Iraqi government. So, even if American troops withdraw now, violence will probably continue to escalate.
What's left? It's possible, as Microsoft has found, that there is no good monopolistic solution to a mature open-source effort. In that case, the United States might be better off adopting I.B.M.'s embrace of open source. This solution would require renouncing the state's monopoly on violence by using Shiite and Kurdish militias as a counterinsurgency. This is similar to the strategy used to halt the insurgencies in El Salvador in the 1980's and Colombia in the 1990's. In those cases, these militias used local knowledge, unconstrained tactics and high levels of motivation to defeat insurgents (this is in contrast to the ineffectiveness of Iraq's paycheck military). This option will probably work in Iraq too.
In fact, it appears the American military is embracing it. In recent campaigns in Sunni areas, hastily uniformed peshmerga and Badr militia supplemented American troops; and in Basra, Shiite militias are the de facto military power.
If an open-source counterinsurgency is the only strategic option left, it is a depressing one. The militias will probably create a situation of controlled chaos that will allow the administration to claim victory and exit the country. They will, however, exact a horrible toll on Iraq and may persist for decades. This is a far cry from spreading democracy in the Middle East. Advocates of refashioning the American military for top-down nation-building, the current flavor of the month, should recognize it as a fatal test of the concept.
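Robb's structural claim -- that a hierarchy dies by decapitation while a mesh of small autonomous groups routes around damage -- is easy to illustrate with a toy graph experiment (made-up graphs, purely illustrative; nothing here is data about any real organization):

```python
from collections import deque

# Toy illustration of fragile-hierarchy vs. resilient-network.
def connected_after_removal(adj, removed):
    """Is the graph still connected once `removed` nodes are deleted?"""
    nodes = [n for n in adj if n not in removed]
    if not nodes:
        return False
    seen, queue = {nodes[0]}, deque([nodes[0]])
    while queue:  # breadth-first search over the surviving nodes
        for nb in adj[queue.popleft()]:
            if nb not in removed and nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return len(seen) == len(nodes)

# Hierarchy: everyone reports to node 0, the decapitation target.
hierarchy = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
# Mesh: small cells with redundant peer links.
mesh = {0: [1, 2], 1: [0, 3], 2: [0, 4], 3: [1, 4], 4: [2, 3]}

print(connected_after_removal(hierarchy, {0}))  # False: the hierarchy shatters
print(connected_after_removal(mesh, {0}))       # True: the mesh routes around
```

Kill the root of the star and every remaining node is isolated; kill any single node of the mesh and the survivors are still one connected component. That, in five nodes, is why "body counts aren't a good predictor of success."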
For me, this is as interesting for its flat assertions about the nature of the Open Source ("F/OSS") movement as it is for his clarification of the term as it applies to warfare. There's some very interesting -- perhaps revealing -- language here. I can remember reading John Robb a few years back, but I don't remember anything in particular that made him stand out from the other tech-bloggers I was reading at the time. Here, he's saying some things that are different, that not everyone else (in the tech-blogging "community," at least) is saying.
For example, he's acknowledging the success of IBM, and how they got it: by 'letting the enemy win,' or more precisely, by buying the enemy their uniforms. IBM spends a ton of money on Open Source development. No other company, with the arguable exception of Google, has as strong a claim in Open Source councils.
Another example: While he seems to praise with one hand, he does something very interesting by tossing IBM into the same metaphorical stew with the right-wing Salvadoran and Colombian militias, trained to do the nastiest kinds of dirty work by our own CIA at our own School of the Americas. Folks at IBM who get the allusion might well be pissed off by it; I expect it's intended not as an insult, but rather as a precise analogy. The analogy bears expansion, though, because most Americans are woefully ignorant of their own history -- especially the small and dirty parts of it like what the Salvadoran militias (and, hell, their regular military) actually did to their own people, with our help and encouragement. If Robb is right, we're in the process of doing something very similar, again, and this time on a far larger scale.
Over at The Danger Room, they've posted a reminder of a thin whispering voice from the 1980 zeitgeist -- a very funky live rendition of "Life During Wartime":
My first thought was that I'd forgotten how hot that song was. My second was that this could be Baghdad they're singing about. Or Gaza. Or Beirut.
Heard of a van that is loaded with weapons
Packed up and ready to go
Heard of some gravesites, out by the highway
A place where nobody knows
The sound of gunfire, off in the distance
I'm getting used to it now
Lived in a brownstone, lived in the ghetto
I've lived all over this town
This ain't no party, this ain't no disco
This ain't no fooling around
No time for dancing, or lovey dovey
I ain't got time for that now
Why stay in college? Why go to night school?
Gonna be different this time?
Can't write a letter, can't send a postcard
I can't write nothing at all
This ain't no party, this ain't no disco
This ain't no fooling around
I'd love you hold you, I'd like to kiss you
I ain't got no time for that now
Trouble in transit, got through the roadblock
We blended in with the crowd
There was something in the air, or the water, or the synchronistic ether in the late '70s. In the summer of 1980, I began the process of fleshing out an idea for a science fiction novel that would be set in a ruined, riot-torn city. A mysterious agent would enter the city as everyone else fled, set on a mission that he could not fully know. I like to think I know, generally, where I got the ideas. It emerged from a melange of influences, including (but far from limited to) Graham Greene's The Confidential Agent, riots in England (particularly Brixton) in 1981, and this song. I know I had the general idea as early as sometime in 1980; I don't think it achieved anything like final form until the fall of 1985.
I've drawn scenarios involving the collapse of urban civil societies in notes and sketches for many projects between about 1979 and the present, and encountered a great many more in fiction. What's impressed me about the real world in those 27 years is how ready it is to snap back to the norms. The general rule seems to be that when there's trouble, people will help one another out, to the extent that they know how or that they believe they can, without harming their own. England did not disintegrate after the riots of '81 (as I imagined it might, from the naive perspective of a 17-year-old American conservative). Beirut eventually settled to a relative stability. I had begun to feel that order was the rule, in human society, not the exception.
But of course there are ways to make the tendency go the other direction, and the first and most important condition for a descent into chaos after disaster is the weakening of what, for lack of a better term, I will refer to as civil society. It's not sufficient that there be poverty or that there be a disconnect between people and their government (be it local, state, or federal) -- there has to have been some kind of basic collapse of the ordinary day-to-day organizing structures of life. There's probably nothing specific, either, that needs to collapse. In Iraq, we can see many small things that combine to make life unstable; we can see the tacit encouragement to develop on-the-ground, ad hoc civil institutions to deal with issues like insecurity and shortage. Some of those institutions will be deeply cultural; those will be the ones that excite the most devotion, as they become the means by which people define "their own."
John Robb points out continuously that the "perpetual collapse of Iraq" is in direct relation to the failure of the Iraqi state. I would take that a step further, and say that it is in direct relation to the failure of civil society in Iraq. Robb points to Maslow to make his argument that there are some basic needs that need to be met before you can have security; I would point out that even if those needs are met, who meets them and how (i.e., the ideology that informs the new structures that stand in for the state) becomes a critical factor in what it's like to live in those states.
Put another way: Stability is a matter of perspective. Being Sunni or Shi'ite has a completely different bearing on the degree to which your Maslovian needs are being met, depending on whether you live in Mosul or Baghdad, on whether you side with or against the Sadrists, and so on. I'm not implying that's lost on Robb. He focuses on the economics and the gross factors, and he's right to do so. By doing so, he can arrive at what he calls the "humpty-dumpty principle":
States are increasingly finding themselves in perpetual disruption or complete failure. One driver of this is globalization. Globalization has diminished state power across the board ("it melts the map"). So, if we want to build a peaceful (and profitable) system that obeys a new rule set (to borrow a phrase from Thomas Barnett), the limits of state power must be a critical factor in its development.
The phrasing is interestingly precise: "the limits of state power must be a critical factor...." Not "Limiting state power," nor "bolstering state power"; instead, what the limits are. Because what they are will have a different result, depending on where you are.
But I digress, as usual. This whole set of ruminations started as a meditation on a song lyric, and there were certain parts of that lyric that haunted me back then more than the rest of them. They haunt me now not so much because I think they might come true, but rather more because they make me ponder what would have to happen to make them come true.
Heard of a van that is loaded with weapons
Packed up and ready to go
Heard of some gravesites, out by the highway
A place where nobody knows
The sound of gunfire, off in the distance
I'm getting used to it now
Lived in a brownstone, lived in the ghetto
I've lived all over this town
This ain't no party, this ain't no disco
This ain't no fooling around
This ain't no mudd club, or c. b. g. b.
I ain't got time for that now
Heard about houston? heard about detroit?
Heard about pittsburgh, p. a.?
You oughta know not to stand by the window
Somebody might see you up there
Burned all my notebooks, what good are notebooks?
They won't help me survive
My chest is aching, burns like a furnace
The burning keeps me alive
Try to stay healthy, physical fitness
Don't want to catch no disease
Try to be careful, don't take no chances
You better watch what you say
In America, I expect it would be a gradual process. We are too big to fail that quickly, and large parts of the country would retain or define their own stability. How they do that, though, is far from pre-determined. It's more likely to take the form of organization through the manipulation of power by a few than through the distribution of power to the many.
Sitting in the pondering place, I pondered this: Where does vegetable oil come from?
The answer, of course, is that plants make it.
We have an oil-based economy, and we're running out of oil. But that's just the "mineral" petroleum, the stuff that's prehistoric. What about the stuff that the plants make?
Sure, plants can't make enough. It would be just like some nay-sayer somewhere to point out the number of acres we'd have to plant in Canola in order to make enough oil to fuel a single fleet of city buses. They'd probably say it's not cost effective, and they'd probably be right. But what about bio-engineering? How does the Canola plant make it? Or the Hemp plant, or the Olive tree, or any other plant? And what's to stop us from bio-engineering an organism to do just that?
Plenty of things, I'm sure, but most of them are moral, or entail exercising foresight, and western capitalism doesn't have much history of respecting moral reasons. Or of thinking beyond the end of the depreciation cycle.
In any case, it's true that plants are very good at processing natural materials into more complex and very different natural materials. For example, they can make oil from organic waste. Or from cellulose. But plants are clearly not efficient enough. To even begin to feed the demand for fuel and synthetic plastics, we would need to operate at fairly high levels of efficiency. Fields of canola, regardless of how verdant, would not cut it.
But foetid swamps full of bacteria just might. To get the volumes we need, we would need to use open spaces, like swamps. We could digest whole forests, whole biomes, of cellulose, turn them into swamps, to get the hydrocarbons we want.
Gaseous hydrocarbons or light alcohols would probably be better for generation purposes, to drive our fuel cells, but we'd still need long-chain petrochemicals to make plastic. So I could envision different "crops," including even some semi-refined plastics.
Some of those crops would be quite hostile to life. The biological processes would most likely generate some rather toxic byproducts. And at the point where this type of production becomes necessary, I have to wonder whether the people who did it would care. These would, after all, be people arrogant enough to farm oil in an open swamp. If the global climate is sufficiently broken, all care might be thrown to the hot, dry winds. Or the fuming, damp winds, as the case may be, as we loose our hydrocarbon-synthesizing organisms onto the world and let them digest its organic waste matter into fuels.
I could envision great, sealed cities on the edge of seething hydrocarbon swamps habitable only by the most adaptable of organisms, and tended by fleets of fragmentarily sentient fuel-cell powered robots. Eventually, the robots might form their own cities (or be organized into them by a retreating humanity), existing only to tend (and perhaps contain) their swamps.
These robot cultures would evolve; they would not remain static. Evolution would apply to them as it does to us. This is where the admonitions of the Singularitarians would apply, because eventually our machines, once we are no longer an active influence upon them, will have to find their own reasons for living.
This morning on Weekend Edition, The Singularity rears its ugly head in the persons of Vernor Vinge (who coined the concept) and Cory Doctorow. It's another manifestation of our increasing dread in the face of technological change, and the increasing degree to which we approach that change in irrational ways: in the Vingean scenario, as a rescuing parent; in the Doctorovian vision, as a superseding successor.
Doctorow posits the scenario of a modern human interacting with a pre-literate human: That they would be "in a sense, different species." That they and we would have "nothing to talk about." Maybe he was clearer in the un-aired portions about what's meant by "literate", but unless it means "without language" (and one would expect the word chosen for that to be "pre-linguistic"), he's clearly overstating his case. We can easily talk with "pre-literate" or even "illiterate" people, because there remain between us basic human consistencies that will not be removed by any extropian enhancements which we can plausibly predict.
It's a badly chosen analogy, to be sure, and surely one can be forgiven for choosing analogies badly, no? No. Because the craft of science fiction as gedankenexperiment is all about precision -- or at least, insight -- in your analogies. We need to remember that the beings making the singularity are humans. The aspects of the singularity that are truly, deeply a-human, are not likely to persist in that form. They're likely to get reshaped, recrafted, in some kind of human image.
I think Doctorow's analogy illustrates the most fundamental problem with Singularity Theory, in that it is often a failure of a certain kind of imagination: Empathy.
Vinge posits a more traditional scenario, in a way, as a revisitation of the Jack Williamson nightmare -- but with Williamson's logical problems fixed. Vinge's singularity-intelligence is more of a savior than a successor. A lost parent, restored, if you will. Clarke's technological god. Maybe it can save us from global warming.
Doctorow's singularity-beings are replacements, successors. They are what we are not -- they overcome our weaknesses, and supersede us. There's a sense of mingled dread and fascination in the prospect. I'm still trying to understand how to talk about the impulse. I feel it, myself, to be sure, but I don't have a pat name for it.
Sterling's critique still seems sound. (See his short essay in Wired; longer talk at the Long Now Foundation, as MP3 or OGG or as a summary.) He points out (among other things) that the singularity-being will not come about entirely by accident. It will come about through our choices, and some of those choices will tend to constrain the singularity-being.
Someone has finally noticed [via SmartMobs] that any PocketPC or Palm OS 5 PDA has the power to become a VOIP phone. But have they noticed yet what the consequences are? I think they have, and they're just keeping quiet about it because they're hoping that their competitors won't figure it out first and out-maneuver them.
But let's play this out. Let's say I go into my local coffee shop with free WiFi, whip out my PDA, fire up the softphone, and start talking. I'm not paying anybody for anything, except my coffee refills.
So something's gotta give:
Ultimately I'm thinking we see a flattening of offerings; everything being done via IP (or its equivalent). Phones only actually use "phone" technology in areas where it's not cost-effective to switch over. Phones become a flexible concept in this scenario, so something would have to be done about that. (The beauty of the phone as a communications medium is the individualized, static "Phone Number": You want someone, you call their Phone Number. Elegant. Simple. Took generations to evolve to its current form and market dominance, and is likely to be the driving metaphor for whatever replaces it.)
(I just posted a version of the following over at Drupal.org, in their "Drupal Core" forum. I doubt it will make much of an impact, but I had to try...)
I propose that there is a problem with the way program function URLs are formed in Drupal, one that makes Drupal a disproportionate target for trackback and comment spammers.
The problem with comment and trackback spam in Drupal is this: It's too easy to guess the URL for comments and trackbacks.
In Drupal, the link for a node has the form "/node/x", where x is the node id. In fact, you can formulate a lot of Drupal URLs that way; for example, to track-back to x, the URI would be "/trackback/x"; to post a comment to x would be "/node/comment/reply/x". So you can see that it would be a trivially easy task to write a script that just walked the node table from top to bottom, trying to post comments.
Which is pretty much what spammers do to my site: They fire up a 'bot to walk my node tree, looking for nodes that are open to comment or accepting trackbacks. I have some evidence that it's different groups of spammers trying to do each thing -- one group seems to be switching IPs after a small number of attempts, and the other tends to use the same IP until I block it, and then takes a day or so to begin again -- but that hardly matters. What does matter is that computational horsepower and network bandwidth cost these guys so little that they don't even bother to stop trying after six or seven hundred failures -- they just keep on going, like the god damned energizer bunny. For the first sixteen days of August this year, I got well over 100,000 page views, of which over 95% were my 404 error page. The "not found" URL in over 90% of those cases was some variant on a standard Drupal trackback or comment-posting URL.
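To make the economics concrete, here's a minimal sketch in Python of how little work such a bot actually has to do. The base URL is a hypothetical placeholder; the path patterns are the stock Drupal ones described above.

```python
# Sketch: enumerating default Drupal URLs requires nothing but counting.
# BASE is a hypothetical example site, not a real target.
BASE = "http://example.com"

def candidate_urls(node_id):
    """Return the predictable spam-target URLs for a sequential node id."""
    return [
        f"{BASE}/node/{node_id}",                # the node itself
        f"{BASE}/trackback/{node_id}",           # trackback endpoint
        f"{BASE}/node/comment/reply/{node_id}",  # comment-posting form
    ]

# The entire "discovery" phase is just walking the integers upward.
for nid in range(1, 4):
    for url in candidate_urls(nid):
        print(url)
```

No scraping, no link discovery, no parsing: the bot's cost per guess is one cheap HTTP request, which is why six or seven hundred failures don't even register as a deterrent.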
One way to address this would be to use something other than a sequential integer as the node ID. This is effectively what happens with tools like Movable Type and Wordform/WordPress because they use real words to form the path elements in their URIs -- for example, /archives/2005/07/05/wordform-metadata-for-wordpress/, which links to an article on Shelley Powers's site. Whether those real words correspond to real directories or not is kind of immaterial; the important point is that they're impractically difficult to crank out iteratively with a simple scripted 'bot. Having to discover the links would probably increase the 'bot's time per transaction by a factor of five or six. Better to focus on vulnerable tools, like Drupal.
But the solution doesn't need to be that literal. What if, instead of a sequential integer, Drupal assigned a Unix timestamp value as a node ID? That would introduce an order of complexity to the node naming scheme that isn't quite as dramatic as that found in MT or WordPress, but is still much, much greater than what we've got now. Unless you post at a ridiculous frequency, it would guarantee unique node IDs. And all at little cost in human readability (since I don't see any evidence that humans address articles or taxonomy terms by ID number, anyway).
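As a rough back-of-the-envelope illustration (a sketch, not a benchmark; the node count and site age are invented for the example), compare the guess space a bot faces under the two schemes:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600

def sequential_guess_space(num_nodes):
    # With sequential IDs, node IDs densely fill 1..N:
    # every single guess in that range hits a real node.
    return num_nodes

def timestamp_guess_space(years_of_posting):
    # With Unix-timestamp IDs, valid IDs are scattered across
    # every second of the site's posting history.
    return years_of_posting * SECONDS_PER_YEAR

nodes = 2000                    # hypothetical five-year-old site
dense = sequential_guess_space(nodes)
sparse = timestamp_guess_space(5)
print(dense)                    # 2000 guesses, all of them hits
print(sparse)                   # 157,680,000 possible IDs
print(sparse // nodes)          # ~78,840 guesses per real node, on average
```

The scheme doesn't make guessing impossible, just economically unattractive: the hit rate drops by several orders of magnitude while the spammer's cost per guess stays the same.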
Some people will immediately respond that this is "security through obscurity", and that it's therefore bad. I'd argue that they're wrong on two counts: First, it's not security through obscurity so much as security through economic disincentive; second, it's not bad, because even knowing exactly how it works doesn't help you very much to game it. The problem with security through obscurity, see, is that it's gameable. Once you know that the path to the images directory is "/roger/jessica/rabbit/", then you can get the images whenever you want; even if you know that the URL to post a comment is "/node/timestamp/reply/comment/", you're not a heck of a lot closer to getting a valid trackback URL than you were before you knew that.
... The queue was perhaps 20 feet long and right in the middle was this 10-foot gap. I was in no hurry, I thought. That gap was not going to cause me to get to the teller more than a second or so later than I might if the gap was closed. No problem.
Only it WAS a problem. As the minutes passed that gap started to drive me insane. Finally I asked the kid to move forward.
"It was making you crazy, right?" he asked, clearly enjoying the moment.
(Ah, yes, the joys of being a self-important little putz... but I editorialize....)
Google has something over $6B -- that's six billion dollars -- in cash on hand right now. That's cash -- not credit, not valuation, but real money that people have paid them. And everyone wants to know what they'll do with it.
The day when six billion could buy three Nimitz-class aircraft carriers has long passed, but you can still make a pretty good splash with that much cash. So, as Cringely points out, all the gorillas in technology are sitting around waiting to see exactly what it is that they'll do. Which gives them an amazing amount of power -- as long as they don't actually do anything.
Putting things in perspective, Google has been really really super good at exactly one thing: Self-promotion. Sure, some of their technology is pretty good, but there's really no evidence that their algorithms are really any better than, say, Teoma's. What they do have is more power. There's a saying among marketing folks: "Go big or go home." Google went big, more or less from day ten or so. "Day ten" because they had to get the money to go big with, first. And that's where self-promotion came in.
I distinguish between "marketing" and "self-promotion" here because I think it's important. Google, at the root, has always rooted its mystique in the cult of personality that's coalesced around these mythical beasts "Sergey" and "Larry". That's suffered a little, no doubt, as a result of Eric Schmidt's incredible childishness in response to CNet feeding him a half teaspoon of his own company's medicine. Nevertheless, Google still builds its reputation in large part out of the sheepskins of its PhD-filthy workforce.
As Cringely points out (and as I've pointed out for years, myself), though, and much like Microsoft, Google's technical solutions are seldom really cutting edge, but because of their market dominance people more or less have to use them. What Google have done well is mobilize the good will of geeks; which is to say, what they've done well is to work the cult of personality for all its guerilla marketing mojo.
And now, all they have to do is twitch -- or even hint at twitching -- to make gorillas jump. Rumors abound: Google is buying up dark fiber, so they can run their own internet; Google is building a vast new data center, so large that it will need a major hydroelectric plant to power it; Google is producing their own desktop OS. Sometimes they're even true: Google is in the process of rolling out its own "desktop", a search/chat/email client that will allow them to entrench even more deeply and even more richly enhance their vast database of geographically-linked internet behaviors.
That database is the elephant in the room in any discussion about Google, though of course it's useless without the market-muscle to deploy (and grow) it. In military terms, Google's market mojo pairs up with its database like big satellite-guided bombs pair up with the geographical databases that tell you where the targets are. It's their market position that lets them get the database; the database is what's going to guarantee their market position for years to come.
This past weekend I was mulling over Thomas Friedman's question from his Friday New York Times column:
So I have a question: If I am rooting for General Motors to go bankrupt and be bought out by Toyota, does that make me a bad person?
True, Toyota is a well-managed company and has engineered hybrid energy technology, but no, GM, though late in the game, is not still "scoffing at hybrid technology."
In his Sunday article, Daniel Howes of The Detroit News offered not one, but three questions in rebuttal.
I'm all for energy independence and I think the competitive pressure that Toyota's hybrid successes are putting on Detroit is a good thing. But I have a question: What about American industrial independence?
Detroit's automakers are fighting for their lives because competition from Toyota, among others, exposed their weaknesses and forced change -- on the quality of their cars and trucks, on their efficiency and, yes, on their slow response to Toyota's push into hybrid cars and SUVs.
Ford can't build its Escape hybrid SUVs fast enough. Coming, too, are Mercury Mariner and Mazda Tribute versions. There will be hybrid Fusion and Milan sedans -- five full gas-electric hybrids within the next three years, and more are planned.
GM plans to offer gas-electric hybrid versions of its Chevy Tahoe and GMC Yukon soon after the launch of its next-generation full-size SUVs early next year.
In fact, current GM production vehicles include buses and full-size pickups using hybrid technology.
Howes also questioned Toyota's fuel efficiency in comparison to GM. I'm not sure he was comparing apples with apples, though. I don't have enough information.
I do know that regardless of corporate trials and tribulations or any debate about Toyota vs. GM, I strongly feel that hybrids are very important -- but not necessarily as a final solution. They still require the use of gasoline. Period.
And whether or not you believe in synchronicity, this past weekend Lynne also just happened to graciously email me some articles about the "drilling for oil and natural gas in the Great Lakes" debate. In addition to ecological concerns, tourism and jobs are specified as potentially being threatened, according to opponents of drilling.
Ideally, finding alternate fuel sources for vehicles should not be abandoned, regardless of Thomas Friedman's comment about "sci-fi hydrogen fuel cells." The crux of the problem is expecting discovery of fully workable liquid, gaseous, or solid-state storage to happen immediately. Reputable, responsible research often takes time. It could take a decade or more.
And that's where hybrid technology comes in. Right now, it's a way to lessen our dependence on gas guzzling. Either that or, I guess, four or more dollars per gallon for gasoline possibly could help reduce consumption.
And what about nuclear-produced electricity to recharge those new hybrid or electric cars at night? But then, even mentioning this, I'm likely opening another can of worms, so I'll end my observations for now...
well, except to say that Thomas Friedman's comment that "the Bush team has been M.I.A. on energy since 9/11" rings so true.
As grim and depressing as I can find the automation of spam and the proliferation of bot networks, I like to think I have some perspective on the matter. For example, I recognize that there's a real danger of incredible, profound disruption from bot networks like the one that's driving the spread of the Sober.x worm[s].
But that disruption won't come from "hacking" -- most particularly, it won't come from using the bot networks to crack encryption. As usual, Bruce Schneier has cut through a lot of the nonsense that passes for wisdom on the subject.
The very idea that the main threat from bot networks is cracking is ridiculous -- it displays a basic misunderstanding not only of how end to end security systems are designed, but also some very peculiar and extremely fuzzy thinking about how to defeat those systems. You defeat the systems by gaming them, not by cracking encryption. Sure, you may want to crack encryption at some point to get through some particular locked door -- but the hard part is finding that door in the first place. And more often than not, if you're clever and you know how to game systems, you'll find that you don't need to crack encryption: You can get someone to just give you the key, or even (figuratively) open the door wide and usher you through.
Of course, it is possible, and even likely, that computers will be (or, even as I write this, are) being used to game security systems more effectively than humans can. Some clever bloke somewhere might even be writing bots that crack systems. But bot networks -- "herds" of dumb, zombified PCs, even if harnessed into a computational grid -- are more or less irrelevant to that.
Heuristics like that aren't helped by brute force. Anyone who calls himself a security expert ought to know that.
The greatest threat from bot-driven disruption is not hacking or cracking, but denial of service. The person or persons controlling the Sober zombie network alone could, should they so choose, have a significant impact on the operation of the open, civilian internet. It would be easy. It would be pointless, but it would be easy.
But again: it wouldn't be the end of civilization. We'd get by. That's what we do.
And that's the ultimate lesson of security: Unless the system is severely broken (as in Iraq after the fall of Saddam or in Rwanda in '96), people will generally act to preserve structures of civilization (as we see again and again after natural disasters throughout the world).
I can foresee a day when we're nostalgic about commercially-motivated spammers and mass-mailing-worms.
I get jaded about virus and worm stories. Each day seems to bring a new watershed in rate of infection, purpose, or technique. Sober is the worm du jour: It appeared sometime during the week of May 2, spread widely and rapidly, and this week started to download updates to itself. The latest variant, Sober.Q, is being used to spread "hate speech."
So let's count the milestones: Rapid spread; remote control; use for propaganda. None all that impressive anymore, on their own. But put together, they're like seeing someone walk down the street wearing sandals with black socks: It's just another sign of the end times. It's depressing.
But Seriously, Folks: Using mass-mailing worms to spread propaganda really is something to take notice of. It's a truism that spam is just about too cheap to meter; it's not cost-effective for a spammer even to care whether most of his messages get through, much less whether he's trying to sell Cialis to a woman. It was only a matter of time before the marketers of ideology grokked the significance of that fact and started using it to virtualize their lamppost handbills.
Self-updating zombie mass-mailing worms are the computing equivalent of a bio-weapon: (mind-bogglingly) cheap, effective, and devilishly hard to kill. Previously, they've been used for a rationally-accessible goal: Making money. Now, they're being used for goals that are comprehensible only in terms of the ideologies that drive their purveyors.
Still more proof, as though we needed it, that markets are dangerously deficient metaphors for understanding human social behavior.
Over the past few days I have seen many descriptions of Marla, including those likening her to an angel or a saint. Neither of those words do her justice. She was driven by a passion I have never encountered before, and she had a boundless heart. But she was also consumed by extreme lows as well as highs, tears along with laughter. In discussing plans for a book, she wanted to be depicted as the rich and complex woman that she was. But she would quickly remind me that the families' stories were most important. So, she wasn't a saint, but she possessed saintlike qualities.
I'll bet you a magnet Support Our Troops sign that the Tillman story will continue to have legs far longer than Ruzicka's.
[Neologian on MeFi]
Somehow I doubt it.
I'm sure that Neologian hopes for better, of course, and he'd have good cause to. Marla's story is the kind of thing that deeply inspires people who are willing to commit their entire lives without the possibility of external reward. Pat Tillman arguably did the same thing, but there's a different quality to his commitment. Marla could have gotten out at any time -- she just had to go to the airport and go home. She never gave up, though. Her legacy (like Tillman's, for that matter) should be that effort and sacrifice are not pointless.
At the very least, Marla's memory has a better chance of remaining true to "Marla" than Pat Tillman's does of remaining true to "Pat". Both have been or will be remade into whatever their admirers want/need them to be. But where Tillman's personality was exposed to small groups of a fairly limited nature (his family, the men in his unit), Marla forcefully projected hers across strata of society, across cultural boundaries, across domains of experience -- and, not insignificantly, across airwaves. All without apparent loss of commitment.
Footnote, for now: I woke up to Ivan Watson's story about Marla on NPR on Monday morning. I've been thinking about it off and on ever since.
At first, I thought it must have been some kind of a joke, but it seems to be true: Adobe and Macromedia have agreed to a friendly takeover, at a price of about $3.4B. So the question is, does this save Adobe or destroy Macromedia? And is there any conceivable way that merging two 800 pound gorillas could be good for web developers or end users?
Macromedia and Adobe have presented as competitors for years, but they actually compete head to head in very few areas. Even in places where they seem to butt up against one another, as in the case of ImageReady versus Fireworks, or FlashPaper versus PDF, the truth is more complex: In the first case, most design shops just buy their people one of each, and in the second, the formats, while presented as directly competitive, really aren't. PDF is almost zealously print-centric; FlashPaper is really an effort to make Flash more print-friendly, and in fact ends up incorporating PDF into its own standards stack. Both have more usability warts than most people on either side like to admit.
It's hard to see how this helps consumers. Adobe have become enormously complacent in recent years. They're effectively the only game in town for professional image editing, and they know it. In the graphics professions, the price of a copy of Creative Suite is simply a part of the cost of setting up a new designer or graphic artist. Even heavily Macromedia-focused web shops use Adobe software at some stage in their workflow, thanks to Adobe's strong lock on certain color technologies. But they've never bothered to develop anything like Flash, and have never worked very hard to overcome the profound weaknesses of PDF as a format.
Macromedia are somewhat hungrier, somewhat more innovative -- but they, too, have a market lock. Professional web design shops either work with Macromedia StudioMX (or possibly just Dreamweaver), or they most likely do inferior work. I know of a few good web designers who stick with Creative Suite for everything, but they're old pros with lots of experience dealing with Adobe's deeply idiosyncratic conventions and techniques. Macromedia's workflow for web production is far, far superior to Adobe's in every regard except for color management and high-end image/graphic editing. Their "round-trip" code management is on an entirely different plane from Adobe's understanding of how to deal with HTML.
If I have to predict the shakeout, I'd predict that the final product lineup from the merged entity will include Dreamweaver and Flash from Macromedia, Acrobat, Photoshop, Illustrator, and InDesign from Adobe, and will probably include both ImageReady and Fireworks until they figure out which one is harder to de-integrate. My guess would be that the good bits of ImageReady would be incorporated into Fireworks, which has much, much stronger HTML generation capabilities. (That said, its file format may prove difficult to integrate with Photoshop and Illustrator.) Acrobat and Flash will have a relationship analogous to that between Flash and Director: Flash will be a mezzanine for rendering and delivering PDF, and Acrobat itself will continue as a separate product.
And, of course, Macromedia's server-side products will remain intact, because they're what Adobe really wants. Adobe is digital graphics, basically; but they aren't positioned to continue to grow in a post-Web world. Specifically, they are vulnerable to being obsolesced as technology moves beyond them. Macromedia, by contrast, has spent the last several years experimenting with web-focused (not merely web-based) workflows.
ADDENDUM: After reviewing the MeFi thread, I'm no longer so sure that Adobe will be humble enough to keep Macromedia's very empirically-grown software development stack. And I see that some of my assumptions regarding the smartest choice of components may be too optimistic. One thing's for sure: Our web dev apps will sure be a lot more expensive...
Or bad enough, depending on your point of view. And it's most fun if you can fool yourself while you're at it. The tutor points out this morning that most Americans are pretty profoundly confused about what's good for them:
We live in a democracy where most of those on the verge of bankruptcy are more concerned to repeal the Death Tax on estates above $2 mil, than they are with preserving their own home when their credit card debt catches up with them. This is a testimony to the relative power of marketing versus education. Who can blame Congress for making an honest buck off the passing of bills? Meanwhile, the media look more and more like the WB Studio Productions, what in the trade actors call "Industrials."
My point of view is from the bottom. Or down below, at least, if not on the rocks. I made a bunch of money last year; but I've made hardly any this year, and that's much more typical. I'll freely admit that if I got badly sick, I'd be pretty screwed.
The really fun and interesting thing about all of this opprobrium about deadbeat consumers who are ruining America is that it's the culture of over-consumption that these people exemplify that keeps America going. Responsible consumption would destroy the American way of life faster and more certainly than any market crash. So the forces of the Right are really fooling themselves, too, if they actually think that this is at all about helping the economy. Personal bankruptcy is the expansion grid on the American economy.
So clearly, that's not what it's really about. It's about a long-range re-solidification of the American economic class structure. The class structure broke apart in the 20th century, and (excepting the 1920s) especially since 1950. It became possible for working class families to reliably place their children into the middle and upper-middle classes; now, those at the upper end of that spectrum would like to solidify their hold on the higher strata of neo-Calvinist blessedness by setting skid-traps for the underclass: Below a certain threshold, any wrong step can take you all the way back down. And once you're down, those new bankruptcy laws will make damn sure you don't get out.
But this is America. And in America, anything is possible. The longer the odds, the bigger and sweeter seems the dream.
For about 12 hours, I've been getting hit heavily by texas-holdem spam. This, coming just two days after "texas-holdem.rohkalby.net" "spam-whacked" (to coin a phrase) its way to a high position in the Daypop Top 40, one of the key indicators of memetic activity in the blogosphere. It didn't stay there more than a day, but it was there long enough for my 12-hour aggregation cycle on Daypop Top 40 to pick it up.
This wave of comment spam here (all caught by my filter, after the initial training) is conventional comment spam. But my hunch is that the "rohkalby.net" Daypop-whack was done with trackback. I just can't imagine it happening rapidly enough and in a widespread enough form to do so without the assistance of trackback auto-discovery.
BTW, I haven't found anybody actually mentioning this incident, which is very interesting to me. It means, I think, that they either didn't notice, didn't understand the importance, or didn't want to admit the importance. Which is huge, because this would demonstrate two things -- one very important, the other merely interesting:
We can say safely that SixApart are responsible, by the way, because they initially invented trackback as a manual means of "giving juice" to someone else, and then failed to understand that it needed to stay manual. It was intended to be initiated by human action, not automated. But then they proceeded to automate it; that made trackback geometrically more attractive as a target for spam: It meant that spammers could potentially place themselves into the various automatically-compiled "top"-lists in a completely automated fashion (i.e., at cost to the spammer approaching zero). And with no legal consequences, to boot: They couldn't be prosecuted under email laws, because it wasn't email; they couldn't be charged with theft of service or hacking because -- and this is key -- the spamming was being carried out as a normal designed feature of the "exploited" systems, using their resources.
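For anyone who never looked under the hood: a trackback ping was never anything more than an unauthenticated HTTP POST of a few form fields to the target post's trackback endpoint -- which is exactly why automating it made spamming nearly free. A minimal sketch in Python; the field names follow the published TrackBack specification, but the example values and the helper function are mine, for illustration only:

```python
import urllib.parse

def build_trackback_payload(post_url, title, excerpt, blog_name):
    """Form-encode the fields a TrackBack ping POSTs to the
    target post's trackback endpoint (per the TrackBack spec)."""
    return urllib.parse.urlencode({
        "url": post_url,        # the page claiming to link to the target
        "title": title,
        "excerpt": excerpt,
        "blog_name": blog_name,
    }).encode("utf-8")

# Sending is then a single anonymous POST, e.g. (not executed here):
#
#   import urllib.request
#   req = urllib.request.Request(
#       trackback_url, data=build_trackback_payload(...),
#       headers={"Content-Type": "application/x-www-form-urlencoded"})
#   urllib.request.urlopen(req)  # spec: XML response, <error>0</error> on success
```

Note that nothing in the protocol verifies that the claimed `url` actually links to the target -- the receiving blog just takes the ping's word for it, and auto-discovery told the spammer exactly where to send it.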
The great mystery isn't that it happened, but that it took so long to happen.
Shelley et al.'s "tagback" concept might provide a "fix" for this, of a very limited sort, but it still leaves us without trackback. Trackback was a very useful concept; it allowed people to create implicit webs of interest, one connection at a time and without central planning, and -- and this is really important -- without the mediation of a third party like Google or Technorati. And we all know that spammers will find a way to parasitize themselves onto tagback, as well.
And anyway, reliance on third parties for integration gives them power they should not be allowed to have. It's a bad design principle. Trackback, pre-autodiscovery, was a good simple piece of enabling technology. But it was mis-applied, quickly, with the encouragement of people who should have known better. And now it will be forgotten. Which is really, deeply stupid, when instead it could simply be re-invented without auto-discovery.
In all the fuss over President Bush's "plan" for Social Security privatization, a simple fact keeps getting ignored: It can't possibly work, because it's based on a mistaken premise. And the error is so obvious that I find it hard to understand why people don't see that the clear goal is to eliminate Social Security altogether.
The error, to me, is the assumption that Social Security funds invested in the stock market would actually accrue enough interest to "save" the system. It's painfully clear that's almost certainly wrong, even if we consider two simple data points:
Really, this should be obvious to everyone, and especially to "fiscally conservative" Republicans. Of course, "fiscally conservative Republican" is an oxymoron, so what should I expect? To the American Conservative mind, "social security" is a mistaken concept. There is no quicker way to marginalize yourself in American political discourse than to take seriously the concept of a commons (i.e., to treat community as an actual community). Ecological viewpoints -- the very idea of considering second-order effects in reckoning what Might Be -- are frowned upon. That requires subtle thinking. And that's not something we go for over here.
We live in the Era of Air Freight.
My new Mac Mini shipped early this morning from Shenzhen, China, via FedEx. From there it will probably fly non-stop to Nashville on a FedEx 747-400, 777, or 767, and thence be routed here. I can track the movement of the package online, and see by implication how it's travelling: It hit FedEx at 8:51pm (local time) on 1/18/2005 ("Package received after FedEx cutoff time"). It left the FedEx ramp at 7:09pm. By my reckoning it will be in the air about 12-13 hours, based on the distance from Nashville to Shenzhen. So I should be able to browse to the FedEx site and see the Arrival Scan by about 9pm EST today. I'll be able to follow it hop by hop until it goes out on the delivery truck, which will be either Friday or Monday, depending on how seriously Apple takes their delivery-date promises.
We live in the Era of Air Freight. This ecological fact is in many ways the most important practical implication of advancing technology: Computing and networking technology makes the coordination of global logistics possible, and efficient long-haul cargo aircraft from Boeing and Airbus make it cost-effective to distribute directly from a factory in China to a doorstep in western NY state. And all of this allows economies to pump capital more quickly -- allows the concrete manifestations of ideas and desire to move across the globe at 700 miles per hour. Thinking of it all in terms of goods and capital seems trivial, but this kind of point-to-point distribution is really the engine that drives the global marketplace, which in turn is what drives global society, for good or ill. We can blame the idea on Sears and Ward. The transit of goods in turn subsidizes the rest of our long-distance mass transportation network, as the big widebodies pack the extra space in their bellies with cargo, the complex spoke-end to spoke-end routing enabled by efficiently networked logistics systems.
And yet, all we see moving are the people. We are blind to the goods in the cargo hold on all the big planes; we taxi by the big, windowless cargo-haulers, logoed-up for DHL, UPS or FedEx, and most of us probably just have a quickly-forgotten moment of "Oh, so that's how they do it..." We only think about the people.
When the World Trade Center was attacked on 9/11, British Airways lost about 40 Concorde frequent-flyers. The impact went much deeper, though, than just the loss of 40 reliable fares. Many of those 40 were senior decision-makers at their companies. They were the people who could approve the expensive Concorde tickets, either formally or tacitly. The Concorde relied almost entirely on human passengers to pay its way, and so from "Golden Eagle", the Concorde returned to its early-'80s status as a money-burner. So we can add the Concorde to the list of things that can be said to have been killed by 9/11.
Amidst all the hoopla surrounding the formal unveiling of the Airbus A380 "super-jumbo", many asked where Boeing's competitive product was. Boeing's answer: The A380 is a "big plane for a small market." The same could have been said of the Concorde: Its market was so small that losing 40 passengers upset its fare-ecology sufficiently to make the plane commercially non-viable. But the Super Jumbo won't suffer the same fate. I heard it said more than once in the news coverage that half the orders were "from Asia" -- which means, they're for air freight.
Public reaction (and amongst the "public" I include most media business analysts) to the A380 under-reports a very important point: While hub-to-hub people-hauling is important, the 580-seat luxury model and even the as-yet unbuilt 800-seat steerage special versions of the A380 are really almost red herrings. The real target market for these aircraft is not passenger hauling, but air freight. There's big money to be saved by increasing the weight and range of the planes, even just between major hubs. This is a plane designed to fly non-stop from Yokohama to Louisville, Shenzhen to Nashville, Taipei to LA, São Paulo to London, with a really big payload of shoes and consumer electronics. Airbus's bread and butter customers for the A380 are outfits like FedEx, UPS, DHL, that won't stop using the hub-and-spoke model for the bulk of their traffic for a long, long time.
It's easy to see this as a triumph for economic models of understanding. But that would be a mistake. While all of this can be seen in economic terms, its effect is human, social, and the field on which the economic facts are cast is fundamentally ecological. And that's the reason that economists (Marx first and chief among them) fail to predict accurately: They fail to understand that economics is only ecology writ fine, and hence divorced from the larger picture. And from the fine-writ bits from other aspects of the big picture. Capital -- money -- is fuel in an ecology of commerce. But it is not, yet, the reason. For the reason, we can still, at least, look to such intangibles as desire.
Thomas P. M. Barnett, of the Naval War College, on Getting What You Wish For: "... Rumsfeld's answer was that 'sometimes you go to the war with the army that you have, not the army that you want' -- not exactly. You go to the war with the army you've been wanting." Because that's the army you've got: "we've been wanting for the past 15 years an army that doesn't do peacekeeping, doesn't do nation building, doesn't do post-conflict stabilization." [Thomas P. M. Barnett, "The Pentagon's New Map", on ME 2005-01-18] [as RealAudio] So what we got was an army on the ground in Iraq that couldn't handle "winning the peace", just as General Eric Shinseki predicted.
Alas, we didn't wish for Iraq to become the world's new terrorist training ground. But we got it. What do these things have in common? That they were both predicted by anyone who bothered to think about it. That they were both obvious consequences of the way we chose to do things. That people responsible for seeing this stuff in advance, did see it, and did raise warnings. That people responsible for listening to those warnings did not heed them (and perhaps did not even listen).
But common sense has never traded well in American politics. If it did, we'd have taken the common sense approach that Barnett advocates and opened trade with Cuba. That would have brought Cuba into economic contact with the rest of the world. Barnett argues that when nations are brought into economic contact with the world -- when they're given a stake in the world community, as it were -- they don't cause problems. Ergo, if we wanted to make Cuba play well with the rest of the world, we should have practiced a little [ahem!] constructive engagement. (If it's good enough for China, why not for Cuba? Oh, wait, that's logic...)
Since this is basically consonant with orthodox market theory, with conservative theories of personal responsibility, and also plays well with the "liberal" comprehension of the fact that we've gotten ourselves far up the excrement race with no apparent means of propulsion, I'll be curious to see what kind of traction these ideas get.
Barnett's a military historian, so there's a military angle to all of this, too, of course: He seems to suggest basically that if we're to become colonial lords, we should make our army fit that bill. We should procure and plan and train for a force and force structure that is suited to peacekeeping, is suited to nation building, is suited to post-conflict stabilization. Common sense. Though I'm afraid having such a force might make us more likely to meddle in the business of other countries (I'd rather not see us as colonial lords), perhaps we'd get into less trouble if our army weren't well-suited for large-scale invasions. I'll be curious to see his reviews.
I'm thrilled that we're pouring hundreds of millions of dollars into the relief effort, but the tsunami was only a blip in third-world mortality. Mosquitoes kill 20 times more people each year than the tsunami did, and in the long war between humans and mosquitoes it looks as if mosquitoes are winning.
One reason is that the U.S. and other rich countries are siding with the mosquitoes against the world's poor - by opposing the use of DDT.
"It's a colossal tragedy," says Donald Roberts, a professor of tropical public health at Uniformed Services University of the Health Sciences. "And it's embroiled in environmental politics and incompetent bureaucracies."
In the 1950's, 60's and early 70's, DDT was used to reduce malaria around the world, even eliminating it in places like Taiwan. But then the growing recognition of the harm DDT can cause in the environment - threatening the extinction of the bald eagle, for example - led DDT to be banned in the West and stigmatized worldwide. Ever since, malaria has been on the rise.
Thus speaks Nicholas D. Kristof of The New York Times today.
It might behoove those interested in reducing malaria to check out facts and politics surrounding the original DDT debate. There are proponents of using DDT who challenge whether DDT is harmful to the environment, and specifically the bald eagle.
DDT was being used when I was young. I'm not aware of any injurious effects due to my ingestion of low levels of DDT. Proper use of the chemical seems paramount in assuring safety.
It seems ironic that an administration that is presently being vilified for not caring about the environment likewise might be turning a blind eye when supporting questionable environmental precautions.
SixApart have announced they're acquiring LiveJournal in a friendly takeover. This is actually bigger news at a cultural level than Microsoft breaking in with "MSN Spaces" or even than Google acquiring Pyra.
Whether the merger can be successful at all will hinge largely on how seriously the "bloggers" at SixApart take the "LiveJournalers", but there are powerful synergies to be achieved here that I'm not sure either SixApart or LiveJournal really understand. There are significant cultural differences between the two "communities" that are commonly parsed as socioeconomic (by the LiveJournalers) and generational (by the MoveableType-focused bloggers). There are lots of dimensions to the cultural split, and of course it's often an error to speak of statistical humans, but the more salient long-range divide is really hands-dirty versus hands-clean: Do you open the hood, or do you rely on your mechanic? Do you mod your vehicle (or PC case or backpack), or do you leave it as-is? And when you mod, are you picking from a menu, or thinking up ideas on your own?
And that's the dimension on which the new, merged SixApart-LiveJournal entity will attain success or not: The continuum from commodity to customization -- from people who are content to buy and use off-the-shelf to the country of the hard-core modifiers. LiveJournal is off the shelf, with essentially menu-driven site customizations that are still very branded as "LiveJournal" sites. MoveableType, and TypePad to a lesser extent, are under-the-hood affairs, which are capable of driving rich visual and functional customization. They're right that they don't need to merge the products or the codebases -- the merger of the two organizations will succeed at a basic level if they can overcome cultural biases. But if they can learn to move fluidly (and cost-effectively) along that continuum from commodity to customization, they will morph into a truly powerful challenger to established players, and maybe even a cultural force to be reckoned with.
This is more than mass-customization redux; it's really the first true coming of a model that was heralded by Saturn in the '90s, but it goes beyond the product delivery to the customer's desire to make the "product" their own. Penn Jillette sang an early paean to this desire back in 1990, and Toyota recently started a whole division based on the idea that what you might really want to do is plug stuff in after the fact. But hey, they'll be happy to let the dealer do some value-adds for you, too...
But back to the merger. Technical issues are certainly important. Mena Trott plays up LiveJournal's experience with scalability, and that's important for SixApart: TypePad is probably as scalable as MoveableType could be made in the relevant timeframe, but my sense is that it doesn't achieve the economies of scale they'll need to accommodate 30 million new bloggers a year, and I'm sure this will have occurred to Ben Trott. They'll need to be cautious, though, about taking an overly-architectural tack; considering recent advances in automation and system virtualization, it's probably more cost effective (and almost certainly quicker-to-market) to build a big, comprehensive automation and virtualization infrastructure than it is to re-architect MoveableType for scalability. (Incidentally, that approach would also give them better traction while moving back and forth on that critical commodity-customization continuum.)
All this having been said, I think it's an even bet whether or not SixApart will "get it" enough to really synergize their merger. They're really good with feedback, as their quick response to last May's license fiasco demonstrates. But they also have a history of making exactly the mistake that precipitated that problem: They try to retain too much control over their user base. I would have been a big fan of MoveableType in its early days, except for one little detail: Their license forbade any licensee from charging for customization services. "That's our business," they explained. "We make money doing that." I saw that as short-sighted, and time proved me right: There are now no such restrictions, and part of the reason is that people went out and went nuts modifying MoveableType, and probably in many cases in violation of those license terms.
My point is that even though they corrected, they did make that same mistake twice, and now they're saying things that lead me to believe they're missing some crucial points. So the real bottom line on the success of this merger might be whether people of more expansive vision will be guiding the course of the company, or whether they'll still be taking protectionist gut-checks at every step.
The Business Blogging Boot Camp (@ Windsor Media) provides a more bottom-line perspective on the growth of blogging, driven by Fortune's 1/10/2005 feature story on technology trends; their observations came to me as part of an email thread related to the BBC story I mentioned last night. They stress the importance of blogging for business, and furthermore the importance of blogging earnestly. They cite the Kryptonite affair and moves toward blog-monitoring by Bacon's Information -- the latter characterized as tentative, "inane", 'Not Getting It.' (I'm usually leery when a huge quant-marketing shop is characterized as Not Getting It. Often it's true, yes; but as far as I can see they often understand a lot more than they bother to explain to us proles. But I digress.)
There are two things I feel compelled to point out before going further: First, blogs are qualitatively analogous to specialist newsletters, which are nothing new to savvy marketers. As with specialty newsletters, the influence of a blog hinges on a subtle balance between the publisher's access to information, their (perceived) personal integrity, and the volume (direct or indirect) of their readership. What's new is the speed of blogging. I'm leery of pointing out emergent qualities, but it's hard to argue that a ten-day cycle time doesn't indicate that (a lack of) quantity may indeed, in this case, have a quality all its own.
The second thing I feel compelled to point out -- and this is both much more and much less important than it sounds -- is that the Kryptonite business not only didn't start on blogs, but didn't get its first traction there. It started on the cycling boards, and that's where it was hashed out, refined, debugged, and researched, and where the first instructional videos were posted, before it was ever reported on a blogospherically-integrated weblog. Some of these bicycling boards are almost as old as the web, and most have many members who trace their net-cred back to Usenet days. My point being that anyone focusing only on blogs as such is setting themselves up for obsolescence. Blogs as they are, are almost certainly not blogs as they will be.
Anyway, Windsor Media's take is largely blogospheric orthodoxy. And in practical terms, it's probably right: The important thing for businesses to do right now is to make it part of some people's jobs to go out, and read and post like humans. But there's a second thing that not only needs to happen, but will happen, and what's more will be enabled by the first: Smart businesses will take steps to understand how the blogosphere works, and how it can be gamed, and then they will go forth and game it. And it will work. The knowledge required will come from a few main sources: From big outfits like Bacon and free-range old-school marketing pundits (who will keep it to themselves and share out bits of wisdom to key clients); and from less old-school marketing pundits like Darren Barefoot and BL Ochman, and from product evangelism folks at big companies (who as a group will tend to share it on their blogs, undercutting Bacon et al's old-school attempts to make money off consulting). And, perhaps most important of all, it will come from research in social network analysis. More on that another time.
Blogging will be gamed by corporate and business interests, make no mistake about that. Because it can be, and is being, gamed. It happens every day. And, contrary to the blogospheric orthodoxy, the broader the cross-section of people who get involved in blogging, the easier it gets to game the system without looking like a weasel. And if the golden rule of capitalist systems is that money wants to make more money (and I'm pretty sure it's something like that), and if blogging has an impact on the growth and flow of money, then money will drive blogging, and blogging will get gamed.
Now I'm getting into blogging heterodoxy. The conventional wisdom on the blogging ethos is very cluetrain, and in fact, the Kryptonite affair does indeed show a real "cluetrain" cause-effect loop. I missed it at the time because I just didn't tune in to the story, but the folks at Fortune and Windsor Media are right about that: The ten-day problem-to-product recall cycle at Kryptonite was characterized by all the corporate communications failures criticized in the Cluetrain Manifesto. It just took a lot longer for this first clear case to emerge than either they or, frankly, I thought it would.
The orthodox position is that the more people get involved in blogging, the harder it gets to game the system. It's a variation on the open-source golden rule of debugging ("Given enough eyeballs, all bugs are shallow"): "Given enough eyes, all misinformation will be found." But open-source debugging works (when it works, which it often doesn't, but that's another story) because the "eyes on the code" belong to people who know how to spot a problem, and have the capability to affect it more or less directly. In blogging, the "eyes on the information" are often people with little or no real expertise. Much of what they spout is nonsense.
And yet, it's effective.
The blogosphere shifts like a body of water: Fast, and irresistibly. Part of the reason that happens is that the blogging community is composed largely of small communities with large enough membership to make an impact, and what's more, those communities overlap: PoliBloggers are tight with techbloggers who are tight with lifestyle bloggers who are tight with polibloggers.... So when the loop has looped a few times, we find that a relatively small group of people can pretty reliably and rapidly shift the character of the blogosphere. But as the blogosphere becomes larger, it grows more statistically homogeneous, and small communities of movers will not have the same kind of predictable results anymore. Then it will seem less like water, and more like mud.
But I digress, again. I started this to talk about gaming the blogosphere, and that this will happen, I do not doubt for an instant. There's a lot of money riding on this, after all. Some people will figure out how to game the blogosphere -- to game the cluetrain. It will be a painful process with lots of false starts, but we are well beyond the beginning of the process. It started long before the Kryptonite affair; if I had to pick a point in time, I'd pick the consolidation of successful blogs like Wonkette, Gizmodo and ... under the Gawker Media banner. Gawker sells lots of ads, gets lots and lots of daily eyeballs, and their more overtly commercial blogs (like Gizmodo and Jalopnik) have pull with the product managers by virtue of the fact that they can say things like:
What consumers want -- an out-of-box way to share and transmit files between different storage media and computers (and users) -- is exactly what manufacturers don't want to give them, but they'll tease us a little. So, if you're really rich, DigitalDeck Entertainment Network is busting out an in-home network PC to gear to DVD sharing system that costs $4000 - $5000. It probably consists of a bunch of cables and a universal remote that your geeked-out younger brother could hack together himself.
And so, we've come back around again to the specialist newsletter: I take Gizmodo seriously (and I confess, I do read it more or less every day) because I see things like this that indicate to me that they bother to think a bit about what they're reviewing. They have credibility for me because they speak not merely in a human voice, but in one that says credible things. And they have the benefit of comprehensiveness because somebody (namely, Gawker Media) is paying them to do nothing but that.
And by the way, at some point does it stop being "blogging" and start being journalism? Open question, IMO.
It's just that relatively few people have realized it, yet. As I so often say: When there's big money involved, the alternate modalities will be co-opted. (Or crushed.) Even more than information wants to be free, money wants to make more money. We're now sitting in that fragile cusp (oh, hell, we may be past it) where the "winners" of the next gold-rush will be decided. It's not a huge gold rush -- not yet -- but in its own way, it will be just as hokum-driven as the dotcom boom.
I know this because I bothered to do some simple math with numbers in a news story about American blogging habits. From Britain, of all places. A friend pointed me to the BBC's obligatory popular rundown on what a blog is and why their readers should care, combined with a little bit of exoticism regarding us cousins. The article relies heavily on a report from the Pew Internet and American Life Project; it's thin on details, but they do provide us with a helpful bullet list in their sidebar:
- Blog readership has shot up by 58% in 2004
- Eight million have created a blog
- 27% of online Americans have read a blog
- 5% use RSS aggregators to get news and other information
- 12% of online Americans have posted comments on blogs
- Only 38% of online Americans have heard about blogs
By implication (according to the sidebar), of Americans who've heard of blogs (38% of online Americans), 71% have read them (27% of online Americans -- 27/38=.71); and a bit less than half of those have gone on to post comments (12% of online Americans -- 12/27=.44). (Less interesting, but more impressive: about 30% of people who've heard of blogs have posted comments...) Interesting. If taken at face value (which wouldn't be a good idea), that means almost half of people who've read blogs have posted comments to them. Before we even start to think about commercial applications, that may well represent a radical increase in the population of people participating in online forums.
But here's the real meat: When they saw those numbers in the sidebar, direct marketing people in the reading audience (who eat, sleep and breathe much more complex math than that) were drooling on their keyboards. Consider that a direct mail campaign is regarded as doing very, very well at 5% response. These are not numbers to swing elections as a constituency; but they are well into "thought-leader" territory. These blog readers are high-throughput nodes. They're the folks who spread Jib-Jab movies and forwarded the Kick Osama Butt song. At least, that's how the consultants will spin it.
Also quite interesting: Almost a fifth of people who've "read a blog" (5% of online Americans) use RSS readers to aggregate blog content. RSS readers by their definition identify regular readers, so nearly 20% of blog-readers are regular blog-readers. And the stream of drool intensifies.
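The arithmetic behind those derived proportions is nothing more than dividing the sidebar's figures by each other; a few lines of Python make the whole back-of-envelope exercise explicit (the four inputs are the Pew sidebar numbers, everything else is division):

```python
# Pew sidebar figures, as fractions of online Americans
heard   = 0.38  # have heard about blogs
read    = 0.27  # have read a blog
comment = 0.12  # have posted comments on blogs
rss     = 0.05  # use RSS aggregators

# Conditional proportions derived in the text
read_given_heard   = read / heard      # share of the blog-aware who've read one
comment_given_read = comment / read    # share of blog-readers who've commented
rss_given_read     = rss / read        # share of blog-readers using RSS

print(round(read_given_heard, 2),      # 0.71
      round(comment_given_read, 2),    # 0.44
      round(rss_given_read, 2))        # 0.19
```

Which is the whole point: none of this requires anything beyond long division, yet neither the BBC nor Pew's sidebar bothered to do it.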
You have to actually do some math to sort all that out, mind; I think they're probably better at it over there, but I wonder if they weren't actively hiding those numbers by declining to crunch them. (In America, I'd just go for ignorance -- I don't have much faith that our reporters have the math skills to calculate a proportion.)
I can honestly say that I never thought blogging was a fad. But I will go out on a limb (not that I have to go very far) and say that "podcasting" was dead before it started. Or, at least, the meaning of the term will change. "Podcasting" will come to be the audio equivalent of "TiVo", as we start to see those forthcoming gizmos that let folks TiVo-ize satellite-radio broadcasts. They'll start as special attachments for iPods. (Perhaps even as an iPod itself -- though I don't think Apple will go that far. It would hurt iTunes sales.) Then they'll spread to other music players ("there are music players besides the iPod?!"). Podcasting as we currently know it will die a quick and inglorious death, mourned only by the people who hoped to have their name forever attached to the term.
Blogging has previously never really been at an equivalent risk. The technical barriers to entry are low: A decent secondary education and enough disposable income to afford $10/mo or less in hosting fees. They face very little competition. (Well, except for newspaper columnists. What are those? Well, um, they're these folks who'd regularly get their "blogs" printed in newspapers. See, these newspapers, they're printed on really big paper, so everything is in columns, and a columnist would get one column out of six on the page... ... Newspapers. They print them, on paper, and sell them to people so they can carry them around and read them.... How do they know how many to print? They don't. A lot get wasted. Yes, I know that's a waste...)
The latest leading-edge thinking in traffic-calming is that we should remove traffic controls, not add them. Passive controls, that is, like signage; active controls, hard controls, like traffic circles (rotaries, roundabouts), merge lanes -- those can stay. But Yield signs at the traffic circle entrance, "lane ending" indicators, even curbs, stop signs and traffic lights: Those should go.
The thinking is that without them, we think more. With them, we give over our control over our fates to the signage. At the same time, we can do things that, superficially, make a road more dangerous: We allow parking where we'd previously barred it; we make the road-beds narrower instead of wider; we remove turn lanes and traffic lights; we remove explicit barriers between people and traffic. (Note that this doesn't mean eliminating sidewalks altogether: "Instead of a raised curb, sidewalks are denoted by texture and color.")
Results are counter-intuitive: Traffic moves more slowly, and yet trip times are reduced. It's the kind of result that a simplistic understanding of systems can't predict, but that an ecological understanding can.
I have to admit that I was resistant to the idea when I first read it. It reminded me of a trip to Seattle in February of 2000, when I noted the conspicuous absence of stop signs at intersections in many residential neighborhoods. But as I reflect on it, it strikes me that, at the least, bad signage really is worse than no signage. Signage, after all, plays to our conscious, rational mind, which is easily stymied by contradiction and inconsistency in ways that our sub-conscious, a-rational mind is not. And I recall that, when I approached those intersections, I stopped and looked very carefully. I paid attention to what I was doing (driving) instead of to other things.
As I think through it further, I find myself thinking of at least three other ideas: The human factors design concept of affordance; Jane Jacobs's "eyes on the street"; and the zen/taoist/buddhist tightrope of mindfulness:mindlessness. The common thread is that they all tap into aspects of humanity that are essentially sub-conscious, in the sense of being as tied to our animal nature as to our human nature. They are rational in the sense that sense can be made of them; they are also a-rational, in the sense that we seldom bother to try. (And also in the sense that when we do bother to try, we often screw it up.) Most importantly, they are ecological, not based on a simplistic, modernist understanding of systems theory.
We still need to be able to inculcate awareness of self-interest at a low level of consciousness. We can only rely on our natural accident-avoidance to carry us so far, especially with as many distractions as the world affords.