The truth will set you free. But not until it is finished with you.
Naomi Klein, shilling her new book (The Shock Doctrine: The Rise of Disaster Capitalism) in the Guardian, observes that:
If [a suspect] is a victim of the CIA's "extraordinary rendition" programme, kidnapped off the streets of Milan or while changing planes at a US airport, then whisked to a so-called black site somewhere in the CIA's archipelago of secret prisons, the hooded prisoner will likely fly in a Boeing 737, designed as a deluxe executive jet, retrofitted for this purpose. According to the New Yorker, Boeing has been acting as the "CIA's travel agent" - blocking out flightplans for as many as 1,245 rendition voyages, arranging ground crews and even booking hotels. A Spanish police report explains that the work was done by Jeppesen International Trip Planning, a Boeing subsidiary in San Jose. In May 2007, the American Civil Liberties Union launched a lawsuit against the Boeing subsidiary; the company has refused to confirm or deny the allegations. [links added]
But there's more to it than constitutional niceties. To get why this matters, it's necessary to understand that the state of permanent war is highly lucrative. It's not just that Pinkertons or Blackwater[er]s are beholden to no one but their clients (well, really, their management, but that's splitting hairs) -- it's that all these Pinkertons and the companies that make their guns are getting paid a shitload of money. Where money flows, its flow can be tapped for power in myriad ways: jobs for constituents, favors for powerful men, personal prestige, campaign contributions, tacit promises to participate in financial shenanigans down the road, and so on. A whole system of political and economic ecosystems draws its energy from this flow of private war money.
And as the wars are kept "private", they become divorced from political will. John Robb, as usual, puts it succinctly and clearly:
If you think the wars in Iraq and Afghanistan will end with this US presidency, think again. These wars will likely outlast the next several Presidents. The old Vietnam era formulas don't apply anymore. The reason is that the moral weaknesses that have traditionally limited the state's ability to fight long guerrilla wars have dissipated, and modern states may now have the ability and the desire to wage this type of war indefinitely. Here's what changed:
- A radical improvement in marketing war. The US military learned from Vietnam that it needed to be much better at marketing wars to domestic audiences in order to prevent moral collapse. It has gotten better at this, and that information operations/strategic communications capability has reached a new level of effectiveness with General Petraeus. Despite this improvement, the military and its civilian leadership still don't have the ability to garner wide domestic support for guerrilla wars beyond the initial phases. However, they do have the ability to maintain support within a small but vocal base -- as seen in the use of weblogs to generate grassroots support for war -- and the capability to trump those that call for withdrawal (by keeping the faintest glimmer of potential success alive and using fear/uncertainty/doubt (FUD) to magnify the consequences of defeat). In our factional political system, that is sufficient to prevent withdrawal.
- The threat that justifies the state and the perpetual war that codifies it. The ongoing threat of terrorism has become the primary justification for the existence of a strong nation-state (and its greatest instrument of power, the military) at the very moment it finds itself in decline due to globalization (or more accurately: irrelevance). The militarization of "the war against terrorism" reverses this process of dissipation, since it can be used to make the case for the acquisition of new powers, money, and legitimacy (regardless of party affiliation) -- for example, everything from increases in conventional military spending to the application of technical reconnaissance on domestic targets. Of course, this desire for war at the political level is complemented by the huge number of contractors (and their phalanxes of lobbyists) attracted by the potential of Midas-level profits from the privatization of warfare. The current degree of corporate participation in warfare makes the old "military industrial complex" look tame in comparison.
- The privatization of conflict. This is likely the critical factor that makes perpetual warfare possible. For all intents and purposes, the US isn't at war. The use of a professional military in combination with corporate partners has pushed warfare to the margins of political/social life. A war's initiation and continuation is now merely a function of our willingness/ability to finance it. Further, since privatization mutes moral opposition to war (i.e. "our son isn't forced to go to war to die") the real damage at the ballot box is more likely to impact those that wish to end its financing. To wit: every major presidential candidate in the field today now gives his/her full support to the continuation of these wars.
The estimable General Smedley Butler clued us in to this as long ago as 1935, when he penned his infamous and too-poorly-remembered screed War Is A Racket: "A few profit – and the many pay. But there is a way to stop it. You can't end it by disarmament conferences. You can't eliminate it by peace parleys at Geneva. Well-meaning but impractical groups can't wipe it out by resolutions. It can be smashed effectively only by taking the profit out of war."
It's kind of amazing that we need to keep re-learning this lesson, but we do, and part of the reason is that the capitalists are so ingenious at marketing the tiny differences in approach: Instead of sending in the Marines, we send in Blackwater or Executive Outcomes. (But is that really all that different from how we ran things in the Philippines during the 30s?) Then it was manifest destiny; now it's the fight against islamofascism.
Freeman Dyson has a reputation as a visionary. Or at least he thinks he is one:
.... It is at least a possibility to be seriously considered, that China could become rich by burning coal, while the United States could become environmentally virtuous by accumulating topsoil, with transport of carbon from mine in China to soil in America provided free of charge by the atmosphere, and the inventory of carbon in the atmosphere remaining constant. We should take such possibilities into account when we listen to predictions about climate change and fossil fuels. If biotechnology takes over the planet in the next fifty years, as computer technology has taken it over in the last fifty years, the rules of the climate game will be radically changed.
When I listen to the public debates about climate change, I am impressed by the enormous gaps in our knowledge, the sparseness of our observations and the superficiality of our theories. Many of the basic processes of planetary ecology are poorly understood. They must be better understood before we can reach an accurate diagnosis of the present condition of our planet. When we are trying to take care of a planet, just as when we are taking care of a human patient, diseases must be diagnosed before they can be cured. We need to observe and measure what is going on in the biosphere, rather than relying on computer models.
Such vision! Who knew it was that simple: China burns the coal, we sequester their windblown carbon as topsoil. Mirabile dictu! Dyson ex machina.
And who knew that Dyson had such a complete grasp of the processes of planetary ecology? He must, since he feels ready to propose that we throw out all the current thinking of climate scientists and ecologists in favor of a physicist's suggestion: give up on climate modeling and replace it with a holistic, diagnostician's model.
It's convenient to be so brilliant that one doesn't feel the need to apply the same criteria to one's own theories that one applies to everyone else's.
Freeman Dyson recently wrote:
In his "New Biology" article, [Carl Woese] is postulating a golden age of pre-Darwinian life, when horizontal gene transfer was universal and separate species did not yet exist. Life was then a community of cells of various kinds, sharing their genetic information so that clever chemical tricks and catalytic processes invented by one creature could be inherited by all of them. Evolution was a communal affair, the whole community advancing in metabolic and reproductive efficiency as the genes of the most efficient cells were shared. Evolution could be rapid, as new chemical devices could be evolved simultaneously by cells of different kinds working in parallel and then reassembled in a single cell by horizontal gene transfer.
But then, one evil day, a cell resembling a primitive bacterium happened to find itself one jump ahead of its neighbors in efficiency. That cell, anticipating Bill Gates by three billion years, separated itself from the community and refused to share. Its offspring became the first species of bacteria—and the first species of any kind—reserving their intellectual property for their own private use. With their superior efficiency, the bacteria continued to prosper and to evolve separately, while the rest of the community continued its communal life. Some millions of years later, another cell separated itself from the community and became the ancestor of the archea. Some time after that, a third cell separated itself and became the ancestor of the eukaryotes. And so it went on, until nothing was left of the community and all life was divided into species. The Darwinian interlude had begun.
The Darwinian interlude has lasted for two or three billion years. It probably slowed down the pace of evolution considerably. The basic biochemical machinery of life had evolved rapidly during the few hundreds of millions of years of the pre-Darwinian era, and changed very little in the next two billion years of microbial evolution. Darwinian evolution is slow because individual species, once established, evolve very little. With rare exceptions, Darwinian evolution requires established species to become extinct so that new species can replace them.
Now, after three billion years, the Darwinian interlude is over. It was an interlude between two periods of horizontal gene transfer. The epoch of Darwinian evolution based on competition between species ended about ten thousand years ago, when a single species, Homo sapiens, began to dominate and reorganize the biosphere. Since that time, cultural evolution has replaced biological evolution as the main driving force of change. Cultural evolution is not Darwinian. Cultures spread by horizontal transfer of ideas more than by genetic inheritance. Cultural evolution is running a thousand times faster than Darwinian evolution, taking us into a new era of cultural interdependence which we call globalization.
It's difficult to tell what Dyson wants to communicate. He argues against "reductionist biology" and floats a lot of pretty images of synergism and vaguely Taoist ideas about the resilience of life. But his own understanding of the complexity of life is clearly quite limited, or he wouldn't be so quick to idealize "non-Darwinian evolution" (a "golden age"?) and predict a rosy outcome from unrestricted biotech game-playing. History much more readily supports a skeptical view on the effects of biotech than it supports Dyson's positivist version. The reality will almost certainly be more of the same mixed bag we've got now: High-yield crops help feed more people and strain the land to a greater extent, which hurts crop yields, which demands still higher-tech farming technologies, and so on ad infinitum. It's not a sustainable cycle, and one would like to think someone with such a reputation for cleverness would get that. (The fact that he doesn't is, to me, another indication that he was overrated to begin with.)
Dyson's thought seems to me to be fundamentally adolescent, in the sense that he always wants more and always thinks that things are simpler than the experts do.
Darwinian evolution may indeed have slowed evolution down considerably; but it may also have stabilized it. I suspect it was Darwinian evolution that made multi-cellular life truly feasible by making it possible to rely on large support structures generation over generation. In a diverse non-Darwinian framework, that reliance just wouldn't be possible. "Designs" that are stable in one generation could change fundamentally in the next, or even before the generation propagated, leaving no basis for reproduction. What Dyson casts in clearly pejorative language ("one evil day", "refused to share", "anticipating Bill Gates") was most likely the very change that made it ultimately possible for him to make these observations.
The analogy to culture is clear: Cultural evolution is rapid and destructive. It wipes out what came before without regard, and it has no mechanism to prevent the willy-nilly propagation of cultural "genetic" material. What we end up with, then, is a bunch of unstable structures that collapse quickly and harm their constituent people in the process.
The common response is that evolutionary processes will yield stronger and more stable structures through natural selection. But what if that's not possible without some kind of constraint on what kind of "genetic material" gets incorporated?
There's also an analogy to be drawn to information theory. Dyson is a cross-pollinator. He believes that the only real change comes via cross-pollination of ideas. He doesn't want to believe that it's necessary or, I think, even very important to create systems of thought. He thinks every wild idea needs to be considered. (With special attention to his, of course.) (What Dyson's thought on the scientific establishment boils down to, when you analyze the language, is essentially that he's smarter than they are so they should listen to him more than they do. But I digress.)
But what if it turns out that it's necessary to constrain information in order to get use out of it? That much has seemed intuitively clear to me for many years. It's the lack of such constraints that characterizes many mental illnesses, such as schizophrenia and mania.
Of course, there are plenty of people -- Dyson might be among them -- who are more than willing to idealize mental illness in the same way. I'd like to say that those are people without the experience of talking with people suffering from such mental illnesses. I'd like to say that, but I've heard too many of them illustrate their cases with allusions to their interactions with the mentally ill. Rather, I suspect that they are people more in love with their theory than with the people they hope to explain by it.
The Palm Foleo is catching a lot of heat. Some of it is well deserved. (Just what the hell is this device supposed to "assist" a smartphone with? Shouldn't it be the other way around?) But most of it is feeding-frenzy pileon by people who got burned in the first try at thin clients, ten years ago.
Which is to say that AFAIAC most of the most strident critics of the Foleo don't want to admit that they've gotten the point -- they pretend not to understand what the device really is, which is plainly and simply a thin client for web 2.0 applications. But it's a thin client that could actually work: It's got a real web browser to access the real web applications that have sprung up in the interim via the near-ubiquitous broadband that we weren't even close to having the last time around.
Sour grapes like this prevent people from seeing the two real reasons that it will fail: it's not fast enough, and it's being sold by idiots. Really, again, that whole "smartphone assistant" thing: the phone should be (and would more plausibly be) "assisting" the Foleo, rather than vice versa. The Foleo is the thing with the network connectivity, not the phone. It's the thing with the USB connection, not the phone.
Semi-surprisingly, Jakob Nielsen has joined in the fray with a decidedly mainstream take on the specs:
Much too big and fat for a mobile device. At that size, you might as well buy a small laptop, which would run all the software you are already used to. For example Sony's Vaio TZ90 is 10% lighter and thinner...
... and 150% more expensive than the Foleo. Though it does have similar battery life. But that's still kind of a pathetic excuse for a pile-on. Criticize it for something real, why don't you, like, say, what you could do with it, instead of demanding that the device embrace all the weaknesses it's clearly designed to overcome:
- weight: 2.5 pounds (1.1 kg)
- thickness: 1 inch (2.4 cm)
- size: 11x6 inches (28x15 cm) - estimated from photo
Much too big and fat for a mobile device. At that size, you might as well buy a small laptop, which would run all the software you are already used to. For example Sony's Vaio TZ90 is 10% lighter and thinner than the Foleo.
A mobile information appliance should be thinner than 1 cm (0.4 in), weigh less than 1 pound (.45 kg), and be about 6x4 inches (15x10 cm) big. Something with these specs makes sense because it would fit the ecological niche between the laptop and the phone.
So let's get this straight: A mobile device should be too small to easily read on, too small to type on, but still too big to fit easily in slacks pockets? Where's the text entry? Where's the user interface? Seems like a rather strange set of requirements. Let's restate them so they make more sense, in functional terms. A mobile device must:
Conspicuously missing, but important:
In addition, we can make some other generalizations about what a device in the Foleo's class should do:
So the specs that Nielsen (and so many others) have seen as so ripe for criticism are not at all the ones that are important. The ones that are important, and the ones that will end up being technically critical for this device, are:
So at a technical level, I'm actually positive it fails on only one point, and that's run-time. Nielsen does raise a valid point, though:
Palm seems ashamed of its own specs since they are nowhere to be found on the product pages.
This is a blatant violation of all guidelines for e-commerce. I can't believe even the worst designer would suggest making a site where you can't find out how big the product is (especially for a mobile device). It must be a deliberate decision to hide the facts.
I think he's actually right about that. I think the product managers and marketers at Palm were so gun-shy about identifying Foleo as a thin client that they invented this whole "smart-phone companion" nonsense to cover it up. They basically threw the game -- decided the product was a bust before they even started, and concocted a marketing plan that, while it couldn't possibly succeed, at least had good graphics.
But come on -- a "smart phone accessory" that's ten times the size of the phone? Idiots.
The iPhone is the partial realization of the web-based thin client dream. In typical Apple fashion, though, they've gone just far enough to make money, and not so far that it might actually enable people to communicate more freely. They could have done that, but it would have meant leaving consumers' money on the table.
Apple's recent commercial makes this abundantly clear. In it, a user watches a clip of the Kraken from Pirates II, has a craving for calamari, and rotates his iPhone 90 degrees to search out seafood restaurants in his area.
Aside from the gee-whiz UI tricks that his iPhone enables, he's basically doing a Google Maps search. In fact it looks a lot like screenshots I've seen of Zimbra Zimlets for geo-locating addresses in the Zimbra web client. Nifty stuff. But there's no particular reason that it couldn't (or won't) be done on other phones. Hell, it's probably done on other phones now, if you want to pay for the service.
Which brings me to the Palm Foleo. I hadn't heard of the Foleo before Charlie Stross wrote an analysis explaining just why he didn't think it was such a terrible idea. Basically, after looking at the fact that it's really completely independent of phones in every important way, and can connect to WiFi networks all on its own, he thinks that it was intended to be a Web 2.0 terminal. A thin client, as we used to say back when everybody who thought things through thought that was a bad idea for a business plan. Things have changed, now, though: Broadband really is ubiquitous, if you're willing to pay for the access, and good quality high-resolution displays and mass storage are cheap, and battery technology is improving radically, so that the phone and its proprietary network have to do less and less that's customized.
So the iPhone (and any other post-Blackberry phone that wants to be successful) is really a Web 2.0 Terminal. Sometimes they'll have cached data, but by and large they'll do everything they can through the airwaves. The differentiator will be in the user interface.
Apple understands that, of course. They have a late-mover advantage in this field, in that Nokia, Samsung, Symbian, MS, et al. have been so focused on solving the UI problem under now-outmoded constraints that they're having a hard time getting used to the freedom of new user interaction hardware.
It still comes down to paying for service, of course -- unless you're on WiFi, and can attach to the myriad of free nodes that are finally becoming common in our urban landscape. Like you can with the Foleo, or any one of a half dozen (non-Verizon) smart-phones I looked at earlier this week.
But not on an iPhone. You need the extra service to do that on an iPhone.
If there's one thing Apple never forgets to design in, it's making you pay.
Courtesy of the Peoria Chronicle's blog, here are links to a lecture on "New Urbanism" given by Andres Duany in Houston. It's on YouTube in 9 parts of 10 minutes each, and the first several have been posted on the Peoria Chronicle's blog. I'll be working my way through them bite by bite, as I hardly have 90 minutes to spare for anything that requires both visual and auditory attention, simultaneously. I may yet find something objectionable in it, but the basic presentation is quicker than reading Death and Life of Great American Cities.
One comment from the Chronicle blog is interesting:
“New urbanism” is just a facade being used by developers to pack as many people into the smallest footprint as possible, to increase their profits.
In San Diego, older neighborhoods are being transformed into jam packed, noisy, traffic infested cesspools, by billionaires who live on 10 acre estates in Rancho Santa Fe (SD’s Bel Aire).
The 40 year old, 10 unit, low income apt building next to me was converted to $400k “condos” last year. It’s been pure hell, with 15 rude, loudmouthed, morons moving in, several of whom are already about to default on their loans. Several units are now being rented, at 3 times the monthly rent as before. Who wins? A handful of guys sitting around dreaming up their next scheme.
That he misses the point of New Urbanism completely isn't the interesting part -- it's that he's so willing to conflate New Urbanism with a newspeak co-optation of its ideals. He's not necessarily wrong to do so. Like many idealistic movements, it has some foolishness and romanticism baked into it and is vulnerable to abuse. There are plenty of people who jump into idealistic movements with a partial understanding of the situation and then end up taking them in whole new, highly rationalized directions.
That's one of my objections to "emotional design": When you choose, as Don Norman, Bruce Tognazzini et al seem to have chosen, to make your evaluation of a design's quality hinge upon its gut, emotional appeal, you're basically opening up the door to tossing out real design and replacing it with pandering. Machines become good if they look cool. By that metric, the AMC Javelin would be one of the coolest, hottest cars ever manufactured. The nigh-indisputable fact that it was a piece of crap would be irrelevant: It had great "emotional design."
Similarly, the fact that PowerBooks are screwed together using 36 (or more) tiny screws of five to six different sizes and head-types, but also force-fit using spring clips, becomes irrelevant: The design feels great, looks great. Never mind that it could cost less to manufacture, cost less to repair and upgrade, and be just as solid, just as sound, if it were designed better. It's still great "emotional design."
What's the effect of this kind of life? No doubt the people who brain-farted the idea for this kind of a system in the first place would respond at this point that they are putting eyes on the street, they're addressing "lifestyle crime" (littering, loitering, miscellaneous minor malfeasance), and that the net effect is to get, through technology, what Jacobs asked for in the 1960s. But an honest appraisal would have to recognize that response as disingenuous. The voice is detached, judgmental, and doesn't brook response -- doesn't even afford it, since there are no pickups (that the security company is admitting to) on the cameras. It can't possibly work to provide the kind of human-scale, person-to-person interaction that happens in the relatively messy but relatively safe neighborhoods of the real world.
In 1992, Thaler shocked the world with bizarre experiments in which the neurons within artificial neural networks were randomly destroyed. Guess what? The nets first relived all of their experiences (i.e., life review) and then, within advanced stages of destruction, generated novel experience. With this very compelling model of near-death experience (NDE) hopes for a supernatural or mystical explanation of this much celebrated phenomena were forever dashed.
Pop quiz -- does this passage describe the present, or the future?
You sit immersed in a wireless cloud, navigating your way through the folders on your hard drive. It is a floating forest of branching tree directories anchored to a root folder buried somewhere deep inside the machine. You are listening to streaming audio whilst a torrent of more music flows into your MP3 player. While it downloads, your system is organising your music library into fields within a database and generating a feed direct to your homepage. Via your Flock browser you twitter to your friends about the latest item on the newsriver then post a few paragraphs to your blog, where they join the complex trail of links and paths going in and out of your site. While you surf, it's easy to forget that beneath you lies a creepy invisible underworld populated by spiders, bugs, crawlers, worms, and microscopic viruses, whilst above ground your transactions are hungrily devoured by sheep that shit grass before being aggregated into the Long Tail. That data trail you're leaving behind stimulates the synapses of the global brain, which is in turn pulled towards the gravitational core of the Web 2.0 solar system...
Answer: It's the present, of course.
I've been doing a lot of video blogging on BEYOND THE BEYOND lately, which must be annoying to readers who don't have broadband. But look: outside the crass duopoly of the USA's pitifully inadequate broadband, digital video is gushing right through the cracks. There's just no getting away from it. There is so much broadband, so cheap and so widespread, that the video pirates are going out of business. I used to walk around Belgrade and there wasn't a street-corner where some guy wasn't hawking pirated plastic disks. Those crooks and hucksters are going away, their customers are all on YouTube or LimeWire...
Broadband isn't the problem. Bruce makes his living being a visionary. I make my living doing work for other people. It's truly not the visionaries who actually change things -- it's the people who buy (into) their visions, and those people just don't have the time to look and listen at the same time to continuous "bites" of see-hear content.
Podcasts are bad enough -- I have to listen for as long as someone speaks in order to get their point, I can't really skim ahead or scan around with my eyes. I've got to buy into their narrative construction. And I'm paying for that purchase with my time and attention.
This also goes to Cory Doctorow's point about text bites. He's grazing around, taking in small chunks of text at a go, and the web is fine for that, that's his message. Great. Fine. But text can be time- and space-shifted far more effectively than audio, which in turn can be time-/space-shifted far more effectively than video.
What's really needful, as I've noted before, is a way to mode-shift text into audio without human intervention. Or video, for that matter, if you want to get visionary about it. But I'm not going to worry about video right now, because audio is something that some basement hacker could actually pull off with an evening's work, and refine with the labor of just a few weeks. Or so it seems to me. On my Mac, right now, I can select text and have the machine speak it to me, complete with sentence and paragraph pauses. The Speech service is AppleScript-able, so (if I actually knew AppleScript) I could script it to pick up blog posts and pump them into audio files, which in turn could be pumped onto my audio player for listening in the gym or on the road. If I spent that much time in the gym or on the road. Which I don't.
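You don't even need AppleScript for the basement-hacker version. Here's a minimal sketch of the pipeline in Python: the HTML-stripping part is generic, while the speech step shells out to macOS's built-in `say` command (which can write an audio file with `-o`) and is skipped on other systems. The sample HTML and the output filename are made up for illustration, not taken from any real blog.

```python
# Sketch: turn a blog post's HTML into plain text, then into an audio file.
# The speech step uses macOS `say`; on other platforms it silently does nothing.
import html
import re
import shutil
import subprocess

def post_to_plain_text(post_html: str) -> str:
    """Strip tags and tidy whitespace so the TTS engine gets clean prose."""
    text = re.sub(r"(?s)<(script|style).*?</\1>", " ", post_html)  # drop scripts/styles
    text = re.sub(r"</p\s*>", "\n\n", text, flags=re.I)            # keep paragraph pauses
    text = re.sub(r"<[^>]+>", " ", text)                           # drop remaining tags
    text = html.unescape(text)                                     # &amp; -> &, etc.
    text = re.sub(r"[ \t]+", " ", text)                            # collapse runs of spaces
    text = re.sub(r"\s+([.,;:!?])", r"\1", text)                   # no space before punctuation
    text = re.sub(r"\n ", "\n", text)                              # trim leading spaces on lines
    return text.strip()

def speak_to_file(text: str, out_path: str = "post.aiff") -> None:
    """Render text to an audio file with macOS `say`; no-op where `say` is absent."""
    if shutil.which("say"):
        subprocess.run(["say", "-o", out_path, text], check=True)

if __name__ == "__main__":
    sample = "<p>Hello, <b>world</b>.</p><p>Second &amp; final paragraph.</p>"
    speak_to_file(post_to_plain_text(sample))
```

Wire that up to a feed fetcher and a nightly cron job and you'd have the gym-ready audio files described above; the remaining work is exactly the "few weeks of refinement" part.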
I have seen the infamous "Hillary 1984" video, and I am profoundly unimpressed. Presumably the creator thought he was doing something profound, or clever, or both, but he's not really saying anything to anybody who hasn't already swallowed the "Hillary is the Anti-Christ" koolaid. Are we supposed to see Hillary Clinton as "Big Sister"? Are we supposed to hear her words as Newspeak, just because we see them juxtaposed with elements from Ridley Scott's bombastic vision-for-hire?
To cut to the chase: Does something become profound as soon as you mash it up with sacred (or at least iconic) (commercial) content? Ridley Scott rubbing off on Phil De Vellis, just by virtue of De Vellis getting his grubby mitts on Scott's footage?
My first feeling on viewing the mashup was disgust. I'm not quite the farthest thing from a Hillary Clinton supporter, but I'm not far off from that. She's more or less unelectable, as far as I'm concerned, and I do strongly suspect that she's got some control issues, as the therapists like to put it.
But this is just sophomoric. If I were Barack Obama, I'd be embarrassed to have supporters like that. Good thing I'm not Barack Obama, of course, because to get elected he's going to need a lot of supporters like that, and he can't afford to let them know they embarrass him...
My second thought was that you could pretty effectively cut the legs out from under Phil De Vellis's juvenile pseudo-intellectualism by just taking the same bombastic content and splicing in somebody else. Like, oh, I don't know, maybe...Barack Obama?
And so now I see that I'm not the only person who finds the whole thing kind of silly and puerile. Though honestly, I had something more like Everybody Loves Raymond in mind. That might actually border on profound.
And it's not only church-state watchdogs and atheists who are skeptical about whether teachers can pull off the non-devotional tightrope walk. "My own sense," says Mark Noll, an acclaimed historian at Notre Dame who is an evangelical Christian, "is that the Bible is a pretty explosive book. If students read it carefully, they'd be changed in a way that public schools couldn't handle -- and appropriately so."
I agree. But probably not about what the change would be. Unless by "read it carefully," he means 'read it under the guidance of a qualified, believing, religious professional.' And not, say, a cynical camp counselor. Or, for that matter, on their own. If so, they might well be changed in a way that the churches couldn't handle.
Because at the end of the day, the Bible is still an old book full of bloody stories and finicky, contradictory aphorisms. In the words of the Reverend Tim Lovejoy: "Have you actually read this thing? Technically we're not allowed to go to the bathroom."
From John Robb, who seems to have coined the term "open source warfare":
[The Iraqi] insurgency isn't a fragile hierarchical organization but rather a resilient network made up of small, autonomous groups. This means that the insurgency is virtually immune to attrition and decapitation. It will combine and recombine to form a viable network despite high rates of attrition. Body counts - and the military should already know this - aren't a good predictor of success.
Given this landscape, let's look at alternative strategies. First, out-innovating the insurgency will most likely prove unsuccessful. The insurgency uses an open-source community approach (similar to the decentralized development process now prevalent in the software industry) to warfare that is extremely quick and innovative. New technologies and tactics move rapidly from one end of the insurgency to the other, aided by Iraq's relatively advanced communications and transportation grid - demonstrated by the rapid increases in the sophistication of the insurgents' homemade bombs. This implies that the insurgency's innovation cycles are faster than the American military's slower bureaucratic processes (for example: its inability to deliver sufficient body and vehicle armor to our troops in Iraq).
Second, there are few visible fault lines in the insurgency that can be exploited. Like software developers in the open-source community, the insurgents have subordinated their individual goals to the common goal of the movement. This has been borne out by the relatively low levels of infighting we have seen between insurgent groups. As a result, the military is not going to find a way to chop off parts of the insurgency through political means - particularly if former Ba'athists are systematically excluded from participation in the new Iraqi state by the new Constitution.
Third, the United States can try to diminish the insurgency by letting it win. The disparate groups in an open-source effort are held together by a common goal. Once the goal is reached, the community often falls apart. In Iraq, the original goal for the insurgency was the withdrawal of the occupying forces. If foreign troops pull out quickly, the insurgency may fall apart. This is the same solution that was presented to Congress last month by our generals in Iraq, George Casey and John Abizaid.
Unfortunately, this solution arrived too late. There are signs that the insurgency's goal is shifting from a withdrawal of the United States military to the collapse of the Iraqi government. So, even if American troops withdraw now, violence will probably continue to escalate.
What's left? It's possible, as Microsoft has found, that there is no good monopolistic solution to a mature open-source effort. In that case, the United States might be better off adopting I.B.M.'s embrace of open source. This solution would require renouncing the state's monopoly on violence by using Shiite and Kurdish militias as a counterinsurgency. This is similar to the strategy used to halt the insurgencies in El Salvador in the 1980's and Colombia in the 1990's. In those cases, these militias used local knowledge, unconstrained tactics and high levels of motivation to defeat insurgents (this is in contrast to the ineffectiveness of Iraq's paycheck military). This option will probably work in Iraq too.
In fact, it appears the American military is embracing it. In recent campaigns in Sunni areas, hastily uniformed peshmerga and Badr militia supplemented American troops; and in Basra, Shiite militias are the de facto military power.
If an open-source counterinsurgency is the only strategic option left, it is a depressing one. The militias will probably create a situation of controlled chaos that will allow the administration to claim victory and exit the country. They will, however, exact a horrible toll on Iraq and may persist for decades. This is a far cry from spreading democracy in the Middle East. Advocates of refashioning the American military for top-down nation-building, the current flavor of the month, should recognize it as a fatal test of the concept.
For me, this is as interesting for its flat assertions about the nature of the Open Source ("F/OSS") movement as it is for his clarification of the term as it applies to warfare. There's some very interesting -- perhaps revealing -- language here. I can remember reading John Robb a few years back, but I don't remember anything in particular that made him stand out from the other tech-bloggers I was reading at the time. Here, he's saying some things that are different, that not everyone else (in the tech-blogging "community", at least) is saying.
For example, he's acknowledging the success of IBM, and how they got it: By 'letting the enemy win,' or more precisely, by buying the enemy their uniforms. IBM spends a ton of money on Open Source development. No other company with the arguable exception of Google has as strong a claim in Open Source councils.
Another example: While he seems to praise with one hand, he does something very interesting by tossing IBM into the same metaphorical stew with the right-wing Salvadoran and Colombian militias, trained to do the nastiest kinds of dirty work by our own CIA at our own School of the Americas. Folks at IBM who get the allusion might well be pissed off by it; I expect it's intended not as an insult, but rather as a precise analogy. The analogy bears expansion, though, because most Americans are woefully ignorant of their own history -- especially the small and dirty parts of it like what the Salvadoran militias (and, hell, their regular military) actually did to their own people, with our help and encouragement. If Robb is right, we're in the process of doing something very similar, again, and this time on a far larger scale.
Over at The Danger Room, they've posted a reminder of a thin whispering voice from the 1980 zeitgeist -- a very funky live rendition of "Life During Wartime":
My first thought was that I'd forgotten how hot that song was. My second was that this could be Baghdad they're singing about. Or Gaza. Or Beirut.
Heard of a van that is loaded with weapons
Packed up and ready to go
Heard of some gravesites, out by the highway
A place where nobody knows
The sound of gunfire, off in the distance
I'm getting used to it now
Lived in a brownstone, lived in the ghetto
I've lived all over this town
This ain't no party, this ain't no disco
This ain't no fooling around
No time for dancing, or lovey dovey
I ain't got time for that now
Why stay in college? why go to night school?
Gonna be different this time?
Can't write a letter, can't send a postcard
I can't write nothing at all
This ain't no party, this ain't no disco
This ain't no fooling around
I'd love to hold you, I'd like to kiss you
I ain't got no time for that now
Trouble in transit, got through the roadblock
We blended in with the crowd
There was something in the air, or the water, or the synchronistic ether in the late '70s. In the summer of 1980, I began the process of fleshing out an idea for a science fiction novel that would be set in a ruined, riot-torn city. A mysterious agent would enter the city as everyone else fled, set on a mission that he could not fully know. I like to think I know, generally, where I got the ideas. It emerged from a melange of influences, including (but far from limited to) Graham Greene's The Confidential Agent, the riots in England (particularly Brixton) in 1981, and this song. I know I had the general idea as early as sometime in 1980; I don't think it achieved anything like final form until the fall of 1985.
I've drawn scenarios involving the collapse of urban civil societies in notes and sketches for many projects between about 1979 and the present, and encountered a great many more in fiction. What's impressed me about the real world in those 27 years is how ready it is to snap back to its norms. The general rule seems to be that when there's trouble, people will help one another out, to the extent that they know how or that they believe they can without harming their own. England did not disintegrate after the riots of '81 (as I imagined it might, from the naive perspective of a 17-year-old American conservative). Beirut eventually settled into a relative stability. I had begun to feel that order was the rule in human society, not the exception.
But of course there are ways to make the tendency go the other direction, and the first and most important condition for a descent into chaos after disaster is the weakening of what, for lack of a better term, I'll call civil society. It's not sufficient that there be poverty, or a disconnect between people and their government (be it local, state, or federal) -- there has to have been some kind of basic collapse of the ordinary day-to-day organizing structures of life. There's probably nothing specific, either, that needs to collapse. In Iraq, we can see many small things that combine to make life unstable; we can see the tacit encouragement to develop on-the-ground, ad hoc civil institutions to deal with issues like insecurity and shortage. Some of those institutions will be deeply cultural; those will be the ones that excite the most devotion, as they become the means by which people define "their own."
John Robb points out repeatedly that the "perpetual collapse of Iraq" is in direct relation to the failure of the Iraqi state. I would take that a step further, and say that it is in direct relation to the failure of civil society in Iraq. Robb points to Maslow to make his argument that there are some basic needs that must be met before you can have security; I would point out that even if those needs are met, who meets them and how (i.e., the ideology that informs the new structures that stand in for the state) becomes a critical factor in what it's like to live in those states.
Put another way: Stability is a matter of perspective. Being Sunni or Shi'ite has a completely different bearing on the degree to which your Maslovian needs are being met, depending on whether you live in Mosul or Baghdad, on whether you side with or against the Sadrists, and so on. I'm not implying that's lost on Robb. He focuses on the economics and the gross factors, and he's right to do so. By doing so, he can arrive at what he calls the "humpty-dumpty principle":
States are increasingly finding themselves in perpetual disruption or complete failure. One driver of this is globalization. Globalization has diminished state power across the board ("it melts the map"). So, if we want to build a peaceful (and profitable) system that obeys a new rule set (to borrow a phrase from Thomas Barnett), the limits of state power must be a critical factor in its development.
The phrasing is interestingly precise: "the limits of state power must be a critical factor...." Not "Limiting state power," nor "bolstering state power"; instead, what the limits are. Because what they are will have a different result, depending on where you are.
But I digress, as usual. This whole set of ruminations started as a meditation on a song lyric, and there were certain parts of that lyric that haunted me back then more than the rest of them. They haunt me now not so much because I think they might come true, but rather more because they make me ponder what would have to happen to make them come true.
Heard of a van that is loaded with weapons
Packed up and ready to go
Heard of some gravesites, out by the highway
A place where nobody knows
The sound of gunfire, off in the distance
I'm getting used to it now
Lived in a brownstone, lived in the ghetto
I've lived all over this town
This ain't no party, this ain't no disco
This ain't no fooling around
This ain't no Mudd Club, or C.B.G.B.
I ain't got time for that now
Heard about Houston? Heard about Detroit?
Heard about Pittsburgh, P.A.?
You oughta know not to stand by the window
Somebody might see you up there
Burned all my notebooks, what good are notebooks?
They won't help me survive
My chest is aching, burns like a furnace
The burning keeps me alive
Try to stay healthy, physical fitness
Don't want to catch no disease
Try to be careful, don't take no chances
You better watch what you say
In America, I expect it would be a gradual process. We are too big to fail that quickly, and large parts of the country would retain or define their own stability. How they do that, though, is far from pre-determined. It's more likely to take the form of organization through the manipulation of power by a few than through the distribution of power to the many.
The standard cliché is that if you want to break an organization, you chop off its head. People who've made their livings as underlings often dispute that. Network theorists have been implicitly supporting them for years. This just in from a researcher at West Point, via War Is Boring:
Assuming your resources for attacking a network are limited — and in the real world, they always are — who do you hit? Graham asked. Using his own department as an example, he advocated killing just three of the dozens of members. Surprisingly, none were examples of density or centrality, since those were all situated in the meaty middle of the network. The network had enough redundant connections to quickly repair itself after their demise. What Graham wanted to do was hit the network where there were no redundancies, so all of his targets were boundary spanners. By taking out three spanners, Graham showed how you could isolate relatively homogenous chunks of the network, rendering it stupider and less adaptive than before.
Funny thing is, the spanners in Graham’s department’s network were mostly low-ranking members such as cadets. Just goes to show, when attacking networks, the most obvious targets aren’t always the most important.
Again, this is nothing new. I'll add more links later, but the idea that to really hurt an organization you remove the capability to backfill is standard wisdom in any management school that's worth a crap. Which doesn't include Harvard, for sure.
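Graham's point is easy to demonstrate with a toy graph: two densely connected clusters joined by one low-degree "spanner." The layout and numbers below are my own illustration (not Graham's actual data), sketched in plain Python:

```python
from collections import deque

def components(adj, removed=frozenset()):
    """Count connected components after deleting the `removed` nodes (plain BFS)."""
    seen, count = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        count += 1
        queue = deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
    return count

def make_graph(edges):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return adj

# Two five-node cliques (the "meaty middle") bridged by lone spanner node 10.
clique_a = [(i, j) for i in range(5) for j in range(i + 1, 5)]
clique_b = [(i, j) for i in range(5, 10) for j in range(i + 1, 10)]
spanner = [(4, 10), (10, 5)]
g = make_graph(clique_a + clique_b + spanner)

print(components(g, removed={1}))   # still one piece: redundancy routes around an insider
print(components(g, removed={10}))  # two pieces: the low-ranking spanner splits the net
```

Note that node 10 has the lowest degree in the whole graph, which is exactly the "low-ranking cadet" effect: the obvious high-degree targets are the most replaceable.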
This isn't the first time creative entrepreneurs have gone through one of these transitions. Vaudeville performers had to transition to radio, an abrupt shift from having perfect control over who could hear a performance (if they don't buy a ticket, you throw them out) to no control whatsoever (any family whose 12-year-old could build a crystal set, the day's equivalent of installing file-sharing software, could tune in). There were business models for radio, but predicting them a priori wasn't easy. Who could have foreseen that radio's great fortunes would be had through creating a blanket license, securing a Congressional consent decree, chartering a collecting society and inventing a new form of statistical mathematics to fund it?
Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.--John Perry Barlow, 1996
Yet another example of the happy horseshit approach to social activism: Put an absurd stake in the ground and hope that it makes people come that much closer to what you want.
Of course, Barlow never got what he said he wanted, but there are enough new Web 2.0 toys floating around that let people do superficially cool collaborative things that Barlow's probably pretty assuaged, most of the time. Meanwhile a new post-industrial market has co-opted Barlow's Cyberspace (though, since they've been paying for it, maybe just plain "bought" is a better word), and governments like that of China have been doing a good-enough job of exercising sovereignty where its cyber-citizens gather.
"You asked me once," said O'Brien, "what was in Room 101. I told you that you know the answer already. Everybody knows. The thing in Room 101 is the worst thing in the world." -- George Orwell, 1984
In the coverage at Wired of the Air Force's new Active Denial System for crowd control, I didn't see any mention of the agonizer. And yet, that's what it is, more or less: A device that induces searing, burning pain that's so intense, subjects cannot help but struggle to get away from it.
It works via millimeter-wave radiation. Wired (courtesy of the Sunshine Project) has thoughtfully provided a rundown of publicly available documentation. On a quick scan, it's hard to tell whether the pain is caused by heating in the skin or by some other interaction between pain-nerves and millimeter-wave radiation. But prolonged exposure to the beam can cause second-degree burns, so heating definitely does occur.
And there's also no mention in Wired's coverage of the applications for torture, which are painfully [sic] obvious to me. An uncreative sadist would leave a victim with second-degree burns after leaving the beam focused for too long in one spot. A creative sadist would hack together something like Gene Roddenberry's agony booth, to move the focus of radiation around to different sets of nerve endings, in order to reduce the effect of neurotransmitter depletion. After an hour or so, I'm quite sure just about anybody would be willing to tell us whatever we wanted to hear as long as it makes the pain stop. In room 101, the man who works the latch on the rat cage is your god.
A vehicle-mounted version is apparently being tested in Iraq right now. I'm very, very curious to know what Iraqis will make of it. I think they'll get the torture angle right away. And since the technology is pretty easy to replicate, I can envision disreputable dictatorships throughout the world deploying copycat devices in the near future.
Most comedy is bullshit, at some level. That is, it doesn't matter whether it's true, so long as it's funny. "If it ain't the truth, it oughta be." The point of comedy isn't to be honest -- the point of comedy is to make people laugh (at you).
Or, in the words of Peter "Wait, Wait, Don't Tell Me" Sagal, some stories are just "too good to check." Like the one about how women talk twice as fast and three times as much as men, and men think about sex 300 to 1000 times as often as women.* It's only funny if we don't point out that, as far as any actual evidence has ever shown, it's just not true. And it really stops being funny as soon as we point out that the main reason people think it's so funny is that it's a convenient reinforcement of existing stereotypes.
And it really, really stops being funny when you put those two facts together and come up with the realization our stereotypes aren't actually based on evidence. How inconvenient. And we were having such fun with this new wave of reactionary "innate differences" nonsense.
I guess when all is said and done, it really is "just a story, dude." And it's not as though anybody ever did anything bad by making up stories. Right? Right? But still, I was fairly disappointed when I learned that Peter Sagal and his crew had swallowed the load of crap that Dr. Brizendine is dishing out in her new-ish book The Female Brain. I guess it was a bit much to expect, that they might, you know, have a view that wasn't a lockstep endorsement of the same old bullshit.
But then, that probably wouldn't be funny.
*According to Dr. Brizendine, men think about sex about every 52 seconds, while women think about sex one to three times per day; I extrapolated based on a 16 hour day, assuming that at least some men don't dream about sex every 52 seconds....
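For what it's worth, the extrapolation in that footnote checks out; here's the back-of-the-envelope arithmetic spelled out (the 16-hour waking day is my assumption, not Brizendine's):

```python
SECONDS_PER_THOUGHT = 52       # Brizendine's reported figure for men
WAKING_SECONDS = 16 * 3600     # assumed 16-hour waking day

mens_daily = WAKING_SECONDS // SECONDS_PER_THOUGHT  # about 1,107 times a day
low = mens_daily / 3                                # vs. women at 3 per day
high = mens_daily / 1                               # vs. women at 1 per day
print(mens_daily, round(low), round(high))          # → 1107 369 1107
```

So "300 to 1000 times as often" is, if anything, a conservative rounding of the absurdity.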
Sitting in the pondering place, I pondered this: Where does vegetable oil come from?
The answer, of course, is that plants make it.
We have an oil-based economy, and we're running out of oil. But that's just the "mineral" petroleum, the stuff that's prehistoric. What about the stuff that the plants make?
Sure, plants can't make enough. It would be just like some nay-sayer somewhere to point out the number of acres we'd have to plant in Canola in order to make enough oil to fuel a single fleet of city buses. They'd probably say it's not cost effective, and they'd probably be right. But what about bio-engineering? How does the Canola plant make it? Or the Hemp plant, or the Olive tree, or any other plant? And what's to stop us from bio-engineering an organism to do just that?
Plenty of things, I'm sure, but most of them are moral or entail engaging foresight, and western capitalism doesn't have much history of respecting moral reasons. Or of thinking beyond the end of the depreciation cycle.
In any case, it's true that plants are very good at processing natural materials into more complex and very different natural materials. For example, they can make oil from organic waste. Or from cellulose. But plants are clearly not efficient enough. To even begin to feed the demand for fuel and synthetic plastics, we would need to operate at fairly high levels of efficiency. Fields of canola, regardless of how verdant, would not cut it.
But foetid swamps full of bacteria just might. To get the volumes we need, we would need to use open spaces, like swamps. We could digest whole forests, whole biomes, of cellulose, turn them into swamps, to get the hydrocarbons we want.
Gaseous hydrocarbons or light alcohols would probably be better for generation purposes, to drive our fuel cells, but we'd still need long-chain petrochemicals to make plastic. So I could envision different "crops," including even some semi-refined plastics.
Some of those crops would be quite hostile to life. The biological processes would most likely generate some rather toxic byproducts. And at the point where this type of production becomes necessary, I have to wonder whether the people who did it would care. These would, after all, be people arrogant enough to farm oil in an open swamp. If the global climate is sufficiently broken, all care might be thrown to the hot, dry winds. Or the fuming, damp winds, as the case may be, as we loose our hydrocarbon-synthesizing organisms onto the world and let them digest its organic waste matter into fuels.
I could envision great, sealed cities on the edge of seething hydrocarbon swamps habitable only by the most adaptable of organisms, and tended by fleets of fragmentarily sentient fuel-cell powered robots. Eventually, the robots might form their own cities (or be organized into them by a retreating humanity), existing only to tend (and perhaps contain) their swamps.
These robot cultures would evolve; they would not remain static. Evolution would apply to them as it does to us. This is where the admonitions of the Singularitarians would apply, because eventually our machines, once we are no longer an active influence upon them, will have to find their own reasons for living.
This morning on Weekend Edition, The Singularity rears its ugly head in the persons of Vernor Vinge (who coined the concept) and Cory Doctorow. It's another manifestation of our increasing dread in the face of technological change, and the increasing degree to which we approach that change in irrational ways: in the Vingean scenario, as a rescuing parent; in the Doctorovian vision, as a superseding successor.
Doctorow posits the scenario of a modern human interacting with a pre-literate human: That they would be "in a sense, different species." That they and we would have "nothing to talk about." Maybe he was clearer in the un-aired portions about what's meant by "literate", but unless it means "without language" (and one would expect the word chosen for that to be "pre-linguistic"), he's clearly overstating his case. We can easily talk with "pre-literate" or even "illiterate" people, because there remain between us basic human consistencies that will not be removed by any extropian enhancements which we can plausibly predict.
It's a badly chosen analogy, to be sure, and surely one can be forgiven for choosing analogies badly, no? No. Because the craft of science fiction as gedankenexperiment is all about precision -- or at least, insight -- in your analogies. We need to remember that the beings making the singularity are humans. The aspects of the singularity that are truly, deeply a-human, are not likely to persist in that form. They're likely to get reshaped, recrafted, in some kind of human image.
I think Doctorow's analogy illustrates the most fundamental problem with Singularity Theory, in that it is often a failure of a certain kind of imagination: Empathy.
Vinge posits a more traditional scenario, in a way, as a revisitation of the Jack Williamson nightmare -- but with Williamson's logical problems fixed. Vinge's singularity-intelligence is more of a savior than a successor. A lost parent, restored, if you will. Clarke's technological god. Maybe it can save us from global warming.
Doctorow's singularity-beings are replacements, successors. They are what we are not -- they overcome our weaknesses, and supersede us. There's a sense of mingled dread and fascination in the prospect. I'm still trying to understand how to talk about the impulse. I feel it, myself, to be sure, but I don't have a pat name for it.
Sterling's critique still seems sound. (See his short essay in Wired; longer talk at the Long Now Foundation, as MP3 or OGG or as a summary.) He points out (among other things) that the singularity-being will not come about entirely by accident. It will come about through our choices, and some of those choices will tend to constrain the singularity-being.
As a phoenix once said: It's not the damage that hurts so much as knowing you'll go through it again.
So, here's the site. Again. I took it down in February because of problems with my hosting provider, and immediately got so busy with my new job that there wasn't time to put it back up again.
Since then, there have been major security flaws revealed in Drupal 4.6; so I knew that I'd have to upgrade the codebase. Fine, that's fine, I thought. No problem.
Well, I've been hacking away off and on at this for about two weeks. Just yesterday morning, I finally got a clean database update after many and various failures. I'm still missing a few things (like my old theme and the quotes module), but it shouldn't be too difficult to get those working. The trick will be to do it without screwing up the database. But more on that another time...
Originally written yesterday morning, about 9am Icelandic time.
Comedy as a medium for social commentary is grossly over-rated. There is a reason for that fact. Permit me to offer an example.
News stories cite (but, curiously, never by name) the 'cry wolf' syndrome with respect to the failure to evacuate the Florida Keys in advance of Hurricane Wilma. The storm is relatively mild (only 100 kt at last report as of this writing), but that's a recent change, and in any case it could still generate surges over 13 feet. Considering that the Keys are 16 feet above sea level at their highest point, that means they go under. So of course, it could be a disaster. Of course, a lot of people could die. A lot of property could be damaged.
And if it is, there will be some insightful comedian(s) railing about American stupidity and American hubris; he'll say that we're so dumb we can't learn lessons when cities get drowned and a thousand people die, that we keep living below sea level and in flood plains and on top of fault lines because we're stupid, arrogant, moronic people. That we have no sense of personal responsibility. That we deserve what we get. He'll get more laughs than I will, of course, because that's what he gets paid for. People will laugh, they will feel superior, and they will move on.
On to the next character in the lineup, who will rant about American bullishness and our can-do attitude. He'll say in effect that the fact we're not scared of a little storm (or flood or earthquake) means we're tough. He'll say that's what makes us great. He'll get more laughs than I will, of course, because that's what he gets paid for. People will laugh, they will feel superior, and they will move on.
And the same people will be laughing both times. And they won't stop to think about the fact that they just contradicted themselves.
They won't do that because comedy works on a type of reductio ad absurdum process. It works by taking us outside of the logical realm. It takes us to a realm where analytics don't really hold, though the more skilled practitioners of the comic arts can make them appear to. Really, though, comedy is not typically good for much beyond the initial mind-stretch.
Someone has finally noticed [via SmartMobs] that any PocketPC or Palm OS 5 PDA has the power to become a VOIP phone. But have they noticed yet what the consequences are? I think they have, and they're just keeping quiet about it because they're hoping that their competitors won't figure it out first and out-maneuver them.
But let's play this out. Let's say I go into my local coffee shop with free WiFi, whip out my PDA, fire up the softphone, and start talking. I'm not paying anybody for anything, except my coffee refills.
So something's gotta give:
Ultimately I'm thinking we see a flattening of offerings; everything being done via IP (or its equivalent). Phones only actually use "phone" technology in areas where it's not cost-effective to switch over. Phones become a flexible concept in this scenario, so something would have to be done about that. (The beauty of the phone as a communications medium is the individualized, static "Phone Number": You want someone, you call their Phone Number. Elegant. Simple. Took generations to evolve to its current form and market dominance, and is likely to be the driving metaphor for whatever replaces it.)
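That "static identifier, movable endpoint" quality is, incidentally, exactly what SIP-style registration gives a VOIP softphone: the number stays put while the device behind it changes. A toy sketch of the idea (the class, numbers, and addresses are all invented for illustration):

```python
class Registrar:
    """Toy SIP-style registrar: a stable identifier maps to whatever
    endpoint its owner most recently registered from."""

    def __init__(self):
        self.bindings = {}

    def register(self, number, endpoint):
        # Re-registering simply overwrites the old binding.
        self.bindings[number] = endpoint

    def resolve(self, number):
        return self.bindings.get(number)

reg = Registrar()
# The PDA softphone registers from the coffee shop's free WiFi...
reg.register("+1-555-0100", "sip:pda@10.0.0.7:5060")
print(reg.resolve("+1-555-0100"))
# ...then the same number follows its owner to a desk phone at home.
reg.register("+1-555-0100", "sip:desk@192.168.1.20")
print(reg.resolve("+1-555-0100"))
```

Callers only ever dial the Phone Number; the network quietly handles the "where are you right now" part, which is why the metaphor survives the underlying technology.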
Anyone in the corporate world knows that keeping up with email (or voicemail) can be a problem. I'm not talking about spam; I'm talking about ordinary business-related emails. During peak times like product implementations, I've occasionally gotten hundreds of non-trivial emails per day for up to several weeks at a time. In situations like that, sometimes, things get ignored. But if you are any good at your job at all, you find ways to prioritize those emails to ensure that the genuinely important ones don't get ignored.
That job gets much easier if someone summarizes all the really important stuff for you into one email. As someone did for Michael Chertoff and Michael Brown. Every day.
Some people don't get it. In emailed responses to NPR's interview with FEMA official Leo Bosner, several writers complain about the unsuitability of email as a means to communicate vital information. Email boxes get filled up with junk, they reason; Leo Bosner should have picked up the phone. One correspondent even argued that Bosner is the one who should be blamed, for his own dereliction of duty in relegating something so important to a mere email.
They're right about this much: Email can be a poor medium for reliably communicating vital information on an ad hoc basis (though certainly no worse than voice mail). But they also betray a profound lack of understanding of institutional processes and chains of responsibility.
And they make some basic assumptions about the email that they should not be making. This wasn't ad hoc. It was standard procedure. This is the way it was supposed to work.
And here's the really important thing: This wasn't just any email. It was an email that Chertoff and Brown got every day, and that they needed to read, and to understand, every day. That email was their job, writ fine: Know the danger, and be prepared to act. The danger was there; they knew about it; they did not act.
What this might in fact reveal is that there's a poor prioritization in practice. As a friend is fond of saying, "If everything is top priority, then nothing is." So it might reveal that there were too many top priority things in that memo.
But more likely, it reveals a criminal lack of attentiveness to job responsibilities on the part of Chertoff and Brown, as suggested by an earlier report. Critical places like FEMA are not places for political functionaries on the lookout for résumé padding. They're places for serious people who are willing to wear their pagers to bed and never ever turn off their Blackberries. That's what their subordinates -- people like Leo Bosner -- would do.
There was a point in last night's speech that struck me, that I haven't heard anyone else mention -- the part where the president said this:
I also want to know all the facts about the government response to Hurricane Katrina. The storm involved a massive flood, a major supply and security operation, and an evacuation order affecting more than a million people. It was not a normal hurricane -- and the normal disaster relief system was not equal to it. Many of the men and women of the Coast Guard, the Federal Emergency Management Agency, the United States military, the National Guard, Homeland Security, and state and local governments performed skillfully under the worst conditions. Yet the system, at every level of government, was not well-coordinated, and was overwhelmed in the first few days. It is now clear that a challenge on this scale requires greater federal authority and a broader role for the armed forces -- the institution of our government most capable of massive logistical operations on a moment's notice. [emphasis added]
This is interesting in two ways:
What I see happening already is Bush reframing the issue in terms of rigid, top-down corporate hierarchies like the ones he learned about in Harvard's b-school in the 1970s. In the current context, that means "to solve the problem, send in the Army", because the 1970s Harvard b-school model for corporate organization is based on military-style hierarchies.
Ironically, the US military establishment would most likely no longer support that analysis. It has spent much of the past thirty years trying to open up its command structures (to the degree that's feasible given its mission) and re-instill the sense of initiative and dedication that is necessary for morale in tough going.
The relationship of business to the military, at least in America, is curious and interesting. In the run-up to the Second World War, the Army was particularly ill-prepared: Poor discipline, entrenched corruption in the enlisted ranks, poor organization. Business models and metaphors contributed to the reorganization. Through the course of the war, men like Robert McNamara introduced systems theory and other b-school concepts into the management hierarchy. The result was the Vietnam-era Army that failed its soldiers and its nation in no small part through hubris and arrogance and lack of attention to detail.
Meanwhile, American business was being reshaped on a more hierarchical, more military model. In other words, the two trends were mutually reinforcing. As the American military system failed in the 1960s and 1970s, so the American systems of business organization failed in the 1970s and 1980s.
George W. Bush was out of the loop both times. His dabblings in business in the '70s and early '80s were mostly in non-representative areas like oil wildcatting; his "successes" were in non-representative areas like real estate and professional sports. He has no actual experience with business success; he doesn't know that successful businesses don't work by enforcing rigid top-down hierarchies.
When you start certain Apple applications (such as iTunes and Safari), they check to see if they have a shortcut in the Dock. If they don't, they automatically make one. If Microsoft did that, it would be regarded as incredibly rude; if Apple does it, it's "friendly."
Similarly, the Finder comes configured by default to favor Apple applications, like iLife, iTunes, and FinalCut, by virtue of the fact that it defaults to creating "libraries" of media types that are tailored to those applications.
In case the rationale isn't clear: iTunes makes Apple money. Wherever there is a way to "monetize" the uses to which a personal computer is put, Apple will take every opportunity to put themselves in the front of the queue. iPhoto has hooks to pay services; FinalCut is an expensive piece of software that Apple hopes to sell as an upgrade to home-videographers; and iTunes, of course, is making millions of dollars for Apple by linking Mac users directly to the Apple music store.
So why is it again that people see Microsoft as megalomaniacal, but don't see Apple that way?
Some design-geek at Frog Design thinks that iPods are "universally" described as "clean" because the iPod "references bathroom materials." It's kind of a silly little think-piece, not least in that it makes a point and then throws out a lot of unrelated argument in an attempt to hide the fact that it doesn't really make much of a case for what might otherwise be an interesting assertion. But that's not what I'm writing about.
A comment in-thread led me to this insight: Being a "Mac Person" is a little like being a Mason.
Which is to say, to be a "Mac Person" is to feel that you belong to something, while at the same time feeling yourself to be different from other (lesser) people. If you belong to a secret society of some kind, you feel both privileged to belong, and empowered by your connection to that society.
Membership in the secret society comes with a cost: Dues, expenses for robes or other paraphernalia (as Stetson Kennedy wrote in his book about infiltrating the Klan), and any opportunity cost associated with providing expected assistance to other members. Any extra costs are obviously assumed to be at least offset by benefits, by "believers" in the secret society. Those costs are their "dues"; they're what they pay for the privilege of being made special by the organization.
Committing to the Apple Way has similar costs: Software is more expensive and less plentiful; hardware is often proprietary (as with iPod peripherals), or hardware options more limited (if you don't believe it, try to buy a webcam off the shelf at a mainstream store); software conventions are different, and require retraining. Apple users (rationally) presume there to be offsetting benefits, typically cast in terms of usability. My own experience using and supporting Macs tells me that those benefits are illusory, but that's beside the point: Mac users assume them to exist, and act on that assumption.
But they also gain a sense of superiority from it, and they get that reinforced every time they pay more for something, every time they have a document interchange problem with a Windows-using compatriot, every time they have a problem figuring out what to do when they sit down at a non-Mac microcomputer.
The extra cost is understood as an investment. They are paying dues. Being a Mac Person is, in that way, a little like being a Mason. Or at least, a little like what we might imagine it's like to be a Mason, since most of us have never actually met one.
"No-one cares about disasters until they happen. That is a political fact of life... "
-- David McEntire, who teaches emergency management at the University of North Texas, quoted in a Reuters story on Yahoo News
Guns matter much more than butter to the Bushites.
For its size, I suspect there are few government agencies that have been more effective than FEMA. They moved fast, and when they said "boo!" people jumped. They were so effective that they even get a footnote in the conspiracy literature (with a little help from Wired Magazine and Chris Carter).
Well, make that "were more effective." That was then, of course -- before the most conservative US administration in over a century got a blank check to reorganize government agencies in the light of their own perceived priorities. And those priorities were much more focused on human-driven risks than on natural ones. (And anyway, the only "natural" cause of anything is God. Right?)
Now, this week in New Orleans, we see how little attention anyone really pays to FEMA now that they're several steps down in the Federal Homeland "Security" bureaucracy. But (of course) I think there's more to it; I think it boils down to the fact that schoolyard bullies don't get nearly as much enjoyment out of helping people recover from disasters as they do out of blowing people up and shooting them. That could just be why, now that there are lots of looters in the streets, the President's got the Army all over it.
Question: How do you tell if someone is going to get help after hurricane Katrina?
Answer: Find out if they were actually in the hurricane. If so, probably not.
I've been thinking about something. There's a very important and simple difference between the people who are getting help and the people who are not: The ones who are getting help were able to drive to safety; the ones who are not were stuck in harm's way.
Put another way: If you're middle class, the probability is that you're getting help; if you're poor, the probability is much higher that you're not.
Very soon after Katrina hit on Monday morning, there were hundreds of Army and National Guard trucks en route to the Gulf Coast, loaded with MREs and fresh water. Where did they go? Why, they went where the refugees were: Places like Baton Rouge.
But they didn't go to New Orleans. Obviously, it would have been harder to get into New Orleans, but you would think they'd be prepared to mobilize a few Blackhawks and Chinooks to airlift a few pallets of drinking water and MREs to those highway flyovers poking up above the floodwaters. (That is, if those Blackhawks and Chinooks weren't half a world away enforcing a schoolyard-bully foreign policy.) But no: Instead, they went to places that already had a functioning infrastructure, where, though it would have meant some hardship, locals would have doubtless chipped in to help.
The semiotics of this aren't that simple, of course. There's already an undercurrent of discontent at the idea that people who made the "choice" to stay in a place like New Orleans need to be taught a "hard lesson". (You don't need to look to the web for this -- just keep your ears open.) And then of course there's the symbolism of washing away "Sin City South" in a deluge. Anyone still there, must be part of the mess that God wanted to wash away.