"One should always speak good of the dead. Joan is dead. Good."
The "headless iMac" is the "Mac Mini." (Close-follower branding from Apple? Or synergy from their cooperative projects with BMW -- er, I mean, Cooper? But I digress, as usual...) Of course, they'll sell millions of the buggers. That's what they do: Create cute things that people want to buy, regardless of what it is or really does. But I swear, I'm different: I swear, I actually care what it does.
But is it an earth-shattering device? Even without wireless, as it is, it could be, but in and of itself -- no. Everyone I know who's ever thought of getting a Mac wants one -- hell, I want one -- and yet, I don't think it will take over the low-end market the way it could if the price point were, say, $100 lower, or the base RAM were 256MB bigger.
But in another way, it will be revolutionary. Consider the size of the thing: It will now no longer be acceptable for PCs to be as big as they have traditionally been. Ultra-small variations on the ATX form factor, which are common now only among hobbyists and "gear fetishists", will become standard PC form factors, and will at the same time cease to command a premium price. They will drive devices the same size as (or smaller than) a Mac Mini, and aren't inherently much more expensive to manufacture than the larger boards; since Intel and AMD chips clock higher, they'll be faster; and they'll become radically cheaper as demand soars from people who've seen the Mac Mini, but still can't afford the extrapolated $800-$1000 price tag for a really capable, obsolescence-resistant Mac Mini.
It's interesting to see where the rumors went wrong. The "iHome" branding turned out to be a red herring; it would be interesting to find out where it came from, because it so effectively skewed the speculative field in the days just before the presentation that it seems as though no one even tried to get spy shots of a Mac Mini. It's a lot smaller than the hoaxed pictures. The hoaxer dubbed it 'iHome', and various rumor mills reported with confidence that it would be "branded" as an iMac; neither turned out to be true. It was said to include WiFi in its base configuration; WiFi ("Airport Extreme") is an add-on, as is Bluetooth. Performance numbers were more or less right, though the rumors missed the fact that there'd be two base processor speeds. And to illustrate just how far off the original rumor was, the "headless iMac" was said to share the 1.5-inch (1U, or "one rack unit") height of the latest Apple G5 server; it's actually 2 inches tall. A picky detail, but it demonstrates how completely off-mark we all were.
It's tempting to speculate (as I'm sure someone has) that Apple planted rumors to throw people off the scent. But I don't think they need to. For what other PC brand would people bother to create physical hoax models? Whatever the explanation, the community of Mac users has a hardened core of Macintosh and Apple fetishists. In fact, I think they don't really try, for the most part, to get real rumors; they just make stuff up, because it's more satisfying than the truth. Anyway, true wisdom, to the Mac zealot, is received wisdom: It issues forth every January from the Dark Steve, from a well-lit stage at the MacWorld keynote address...
How can I remark on digital convergence without remarking on the forthcoming "headless iMac"?
More to the point, what the hell does a "headless Mac" have to do with digital convergence?
I'll explain. Gizmodo facilitated the leak of a bunch of really convincing (to me) product unpacking shots of a device called "iHome", which has a buttload of ports on the back and a CD-ROM slot on the front. Alas, there's lots of smoke and steam on the Apple rumor forums to the effect that these must be fake, because the box is just so ugly. Apple's legendary industrial design staff surely couldn't have produced something so "fugly". (Um...right. Something about this presentation really offends Mac-heads, as is clear from the Engadget comments, but I'm not sure what.) But consider that any box unveiled now is most likely not a production version, and might well be camouflaged the way Detroit camouflages its long-range test models.
Be that as it may, and leaving aside the authenticity of the photos, the name would tell us volumes about how Apple sees the market-positioning of this device, and I believe they do not see it the way that 'Bob Cringely' sees it:
.... The price for that box is supposed to be $499, which would give customers a box with processor, disk, memory, and OS into which you plug your current display, keyboard, and mouse. Given that this sounds a lot like AMD's new Personal Internet Communicator, which will sell for $185, there is probably plenty of profit left for Apple in a $499 price. But what if they priced it at $399 or even $349? Now make it $249, where I calculate they'd be losing $100 per unit. At $100 per unit, how many little Macs could they sell if Jobs is willing to spend $1 billion? TEN MILLION and Apple suddenly becomes the world's number one PC company. Think of it as a non-mobile iPod with computing capability. Think of the music sales it could spawn. Think of the iPod sales it would hurt (zero, because of the lack of mobility). Think of the more expensive Mac sales it would hurt (zero, because a Mac loyalist would only be interested in using this box as an EXTRA computer they would otherwise not have bought). Think of the extra application sales it would generate and especially the OS upgrade sales, which alone could pay back that $100. Think of the impact it would have on Windows sales (minus 10 million units). And if it doesn't work, Steve will still have $5 billion in cash with no measurable negative impact on the company. I think he'll do it.
I see it different[ly].
Nobody's talking yet about what the iHome actually does have. Rumors abound, and they mostly assume it's basically an iBook without a display. I don't buy it.
The very name of the device indicates to me that iHome is not intended to be used as a general purpose computer in any really sophisticated way. It's intended as a media hub, and any other functions it fulfills are incidental, and what's more, Apple won't be enthusiastic about helping it fulfill those other uses: It will most likely be a mediocre platform for applications work. It will be somewhat more than a set-top box, only because it would cost more to dumb it down. (If I'm proven wrong, I'll certainly be taking a look at iHomes for my own use, but I don't think I'm wrong here. We'll see in a few days.)
I think it will be somehow substantially crippled, and I think I know how: It will have limited display capability, outputting by S-Video and composite only (and the latter through an extra-cost converter from S-Video); and it will not have expandable RAM. Both decisions will be defended on the basis of price, but they'll really have been taken to prevent cannibalizing iBook, iMac and eMac sales. By the way, I essentially agree with Cringely's analysis of the market impact of a fully-capable and cheap iHome, but I think he's applying a much too rational (and charitable) thought process to Apple's senior management.
I think Jobs doesn't know what to do with iTunes. It's a juggernaut he doesn't know how to stop; it's prompting people at his company to actually think about ideas that could shake up the personal computing marketplace, like, say, a genuinely cheap computer with a powerful OS and operating environment. Baseline Macs are built with remarkably inexpensive electronic components: Many still use relatively slow and old versions of the PowerPC chip (the "G4" generation), which by virtue of their vintage are dirt cheap; the "G5" models mostly use relatively slow versions of that chip, and below the most expensive levels, they all use graphics subsystems that are last year's news on PCs. Macs are cheap, cheap, cheap to build. And yet, they're hideously expensive on a bang:buck basis.
If Jobs wanted to really go big, he could have done it years ago. Opportunities like the one that Cringely describes are always there for Apple, all the time. And they never take them. Why? The only answer that's compelling to me is that Steve Jobs does not want Apple to be successful, because that would mean that Apple was no longer about him. Sure, the cult of personality would flourish for a while, but I think he understands that part of his bizarre public loveability is the fact that his exposure is limited. He'll never be as much of a self-caricature as Steve Ballmer or Larry Ellison, but the tarnish would settle pretty quickly and Apple would quickly become beset by the woes of any company that moves beyond a customer base comprised primarily of true believers.
So Cringely's right, I think, about the opportunity, and he's right about what iHome is, but I think he's wrong about what Apple will do with it. And though I predict that Jobs will be accused of not taking these steps out of greed, I think his motivation will be darker: Ego. Though I suppose the Dark Steve's flavor of ego could be cast as a kind of greed....
I'm sitting here in Spot Coffee looking out over the scene. I'm blogging from a coffee shop: I'm officially ... something. Not a geek, anymore, because convergence activities like logging on to the net though a coffee shop's hotspot are now officially mainstream and mundane, at least if you believe that TV reflects reality.
Which is my point, as I remind myself not to bury my lede: Convergences that actually lead somewhere tend to come not from planning toward goals, but from the accidental confluence of opportunity with desire. As a case in point, consider the Archos PMA 400.
This whole coffee-house laptop thing... how did I miss out? It was a matter of not having converged the right equipment. I've puttered at doing this for a long time -- my friend Pete's laundromat even has a hotspot -- but have tended to feel a little sour-grapish over the whole deal, since my equipment has made it a challenge: My laptop has a tiny keyboard (I've gotten used to that) and a small, dim screen; if I brightened the screen to make it readable, the battery life was relatively poor. Battery life already suffered because with only 128MB of memory, the laptop was constantly thrashing the hard drive to swap in and out of RAM. And I always seemed to have problems connecting to the WiFi hotspots.
Well, thanks to eBay I now have a bigger, stronger battery and another 128MB of RAM (both at a third or less of last year's price), with updated software for my WiFi card, and I'm blogging from a coffee shop. I've leapt squarely and soundly into 1999. Or something like that.
Which brings me, roundabout, to my point. This was really a convergence issue. It was always a high-status behavior, hooking up to the net from open hot spots, but like most high-status activities, not many people really did it. Which is, of course, what's made it a high-status behavior, at a certain level.
Well, now the barriers to entry are much lower: Most open networks don't charge for connections (at least not at the moment), which we can chalk up to the proliferation of cheap bandwidth. (That will change, but we've got tons of dark fiber out there still going unused.) Good quality portable computing hardware is cheaper and lighter, and the social acceptability of hauling out a laptop has increased; now it seems relatively benign next to loud mobile-phone conversations. Networked communication from a hotspot has become easy and cheap enough that lots of people can do it, but it still hasn't outgrown its chic-factor. (And it will be slow to do so, by the way, due to latent education factors -- but that's another story for another time.)
This is the crux of it, I think: Convergence will only happen below a certain fairly low price-point, and will be driven by desire, not by need. Blogging got big when it broke $10/month (or thereabouts), and nobody really needs to blog; WiFi got big when it got free and you didn't have to buy a card for your laptop. And of course, nobody really needs to network from a coffee shop.
Convergence devices like wireless handhelds will break through, too, and soon. It will happen when you can buy the device at little (or no) additional cost over what you would have spent anyway: It'll happen when you can get a thing that you wanted for some completely other purpose, and have it bring along wireless connectivity or email or word-processing as a bonus.
My thoughts turned to this train a few days ago when Gizmodo posted a note from CES about the new "convergence device" from Archos, their "Personal Media Assistant [PMA] 400" -- a Linux-based variation on their AV400 "pocket video recorder". It's a toy calculated to make geeks salivate, hitting almost all of the key requirements for a high-end PDA (color screen, built-in 802.11g wireless, browsing and email capability) along with one thing that no conventional PDA has, yet: a 30GB hard drive.
And the best part, from Archos's perspective, is that most of this capability would be there whether they wanted to make this thing into a PDA or not. Because it's not primarily a PDA. It's primarily a multimedia time-shifting device, a la TiVO, but without many of TiVO's restrictions. It includes WiFi because WiFi would make it easier to integrate into 802.11g-based home multimedia networks, not because it would make it a killer toy for the coffee-shop set. And yet, that's what it will be.
There have been lots of chances for convergence, and they've mostly appeared to founder on the cost of mass storage or on battery life. Well, mass storage is now absurdly cheap; and low-power components have met improved batteries halfway to more or less solve the power problem. And battery life shouldn't have been an inhibitor to convergence for the most likely candidates, the game platforms. Any NES or PS2 or GameCube has much more computing power than most PCs, at a much lower cost. Why not hook them up to hard drives and keyboards and have a computer? Why, indeed; it's a mystery. So, here we have a device (a multimedia time-shifter) that's basically a general purpose computer; and contrary to the usual trend, its makers decide to go the distance and make it, of all things, a general purpose computer. Why should this be different from the false starts from Sony or Nintendo?
Perhaps because this one is personal; perhaps because this one is "adult." Games are still socially marked as "juvenile", even though the majority of players are adult. But music, TV, movies: Those are adult pastimes.
There have been lots of attempts to make a "computer for the masses." They've ranged from the geeks-only Timex Sinclair 1000, back in the dawn of the personal computing era, to more recent efforts driven by Microsoft and others. Perhaps the most radical attempt was the Simputer, which re-thought not only the user interfaces but the form factors and the assumptions about use.
The first commercial Simputers are nice, elegant devices; but they're still too expensive, and don't come near addressing their designed audience. They're toys for well-off Indian technophiles, not the village computer they were designed to be. The PMA400 is in many ways much like a Simputer with a hard drive and with much less noble goals. This device isn't intended to bring computing to the masses; it's intended to bring this week's "Survivor" or "ER" or "Six Feet Under" to the departure lounge. It didn't come from any high and noble goals. Instead, it came from a desire to be entertained.
And yet, the PMA400 has everything, literally everything, that's needed in a basic -- and even a bit more than basic -- personal computer. It's networked; it's based on an open platform with standard and open APIs, so there's already a lot of software that will run on it; it's got (LOTS of) mass storage; it can take keyboard (and presumably mouse) input; it can accept removable mass storage. It can probably even be hooked up to a printer via USB.
I don't have any illusion that Archos will make a huge success out of this; that's just not in their corporate DNA. But this device can be the model for the true "people's PC" that IBM, Microsoft, and others have been jousting at for years. The question is whether a company as clever as, say, HP or Creative Labs or Nintendo can be clever enough to see the opportunity and seize it. Don't look to Apple or Sony or Microsoft for this device by the way: They have a vested interest in keeping the personal computing devices big, relatively costly, and relatively non-convergent.
SixApart have announced they're acquiring LiveJournal in a friendly takeover. This is actually bigger news at a cultural level than Microsoft breaking in with "MSN Spaces" or even than Google acquiring Pyra.
Whether the merger can be successful at all will hinge largely on how seriously the "bloggers" at SixApart take the "LiveJournalers", but there are powerful synergies to be achieved here that I'm not sure either SixApart or LiveJournal really understand. There are significant cultural differences between the two "communities" that are commonly parsed as socioeconomic (by the LiveJournalers) and generational (by the MoveableType-focused bloggers). There are lots of dimensions to the cultural split, and of course it's often an error to speak of statistical humans, but the more salient long-range divide is really hands-dirty versus hands-clean: Do you open the hood, or do you rely on your mechanic? Do you mod your vehicle (or PC case or backpack), or do you leave it as-is? And when you mod, are you picking from a menu, or thinking up ideas on your own?
And that's the dimension on which the new, merged SixApart-LiveJournal entity will attain success or not: The continuum from commodity to customization -- from people who are content to buy and use off-the-shelf to the country of the hard-core modifiers. LiveJournal is off the shelf, with essentially menu-driven site customizations that are still very branded as "LiveJournal" sites. MoveableType, and TypePad to a lesser extent, are under-the-hood affairs, which are capable of driving rich visual and functional customization. They're right that they don't need to merge the products or the codebases -- the merger of the two organizations will succeed at a basic level if they can overcome cultural biases. But if they can learn to move fluidly (and cost-effectively) along that continuum from commodity to customization, they will morph into a truly powerful challenger to established players, and maybe even a cultural force to be reckoned with.
This is more than mass-customization redux; it's really the first true-coming of a model that was heralded by Saturn in the '90s, but it goes beyond the product delivery to the customer's desire to make the "product" their own. Penn Jillette sang an early paean to this desire back in 1990, and Toyota recently started a whole division based on the idea that what you might really want to do is plug stuff in after the fact. But hey, they'll be happy to let the dealer do some value-adds for you, too...
But back to the merger. Technical issues are certainly important. Mena Trott plays up LiveJournal's experience with scalability, and that's important for SixApart: TypePad is probably as scalable as MoveableType could be made in the relevant timeframe, but my sense is that it doesn't achieve the economies of scale they'll need to accommodate 30 million new bloggers a year, and I'm sure this will have occurred to Ben Trott. They'll need to be cautious, though, about taking an overly-architectural tack; considering recent advances in automation and system virtualization, it's probably more cost effective (and almost certainly quicker-to-market) to build a big, comprehensive automation and virtualization infrastructure than it is to re-architect MoveableType for scalability. (Incidentally, that approach would also give them better traction while moving back and forth on that critical commodity-customization continuum.)
All this having been said, I think it's an even bet whether or not SixApart will "get it" enough to really synergize their merger. They're really good with feedback, as their quick response to last May's license fiasco demonstrates. But they also have a history of making exactly the mistake that precipitated that problem: They try to retain too much control over their user base. I would have been a big fan of MoveableType in its early days, except for one little detail: Their license forbade any licensee from charging for customization services. "That's our business," they explained. "We make money doing that." I saw that as short-sighted, and time proved me right: There are now no such restrictions, and part of the reason is that people went out and went nuts modifying MoveableType, and probably in many cases in violation of those license terms.
My point is that even though they corrected, they did make that same mistake twice, and now they're saying things that lead me to believe they're missing some crucial points. So the real bottom line on the success of this merger might be whether people of more expansive vision will be guiding the course of the company, or whether they'll still be taking protectionist gut-checks at every step.
Still another nerd quiz. This one seems slightly tougher on some axes -- for example, you need to be able to recognize a couple of old dead scientists based on their picture, and you're supposed to notice the pointed ears on a pretty girl before you notice her face -- so I thought I'd end up looking normal. Yet I'm still 73% nerdy by their reckoning.
Not that I'm complaining. Last time I checked, nerds for some reason are still regarded as cool -- the world of my childhood is still upside down in at least that regard.
Why does anyone still trust Apple? I suppose it could just be that they don't pay attention. Maybe it's that they love a bully, especially when the bully speaks and looks so fair. Apple is one of the great counter-arguments to the wisdom of the Cluetrain: They keep their customers in the dark and feed them nothing but cheap wine and communal wafers, and yet they're worshipped for it.
Last week, ThinkSecret fronted a rumor that Apple would be announcing a sub-$500 "headless" Macintosh at Macworld Expo on 1/11. They also slipped in a mention, which I somehow missed, that Apple was working on an office suite to compete with MS-Office for OS X.
So, naturally, Apple is suing them [via Gizmodo]. Said ThinkSecret was revealing "trade secrets". Seem to think that the stuff ThinkSecret is putting up on their website might somehow cause Apple harm. For example, maybe Microsoft doesn't already suspect that Apple is canoodling with KDE to produce an OS X-customized fork of KOffice. (KDE have already got the whole suite working natively under Aqua.) Maybe now that Microsoft knows, they'll conjure some nefarious plot to destroy Apple once and for all. Or not.
And as though suing ThinkSecret didn't just confirm at least one of the rumors.
Now, if the Cluetrain Manifesto told the whole story, Apple would be toast thanks to hijinks like these. Their hardware is expensive and slow, the software is more expensive and there's less of it. And on top of that, they treat their customers like marks to be manipulated and jerked around. On the other hand, Apple products come in whatever color you want. As long as it's white.
Friends and acquaintances know that I've considered buying a Mac for a while now, so I can move away from Windows while still having access to high-quality design and graphics tools like PageMaker and Flash. Much as I love the idea of scoring a slightly used PowerBook, a $599 desktop Mac would be a nearly ideal solution. But the Dark Steve just keeps making it hard for me to switch. At least Bill Gates and Steve Ballmer don't pass themselves off as nice guys.
Whence this mania for secrecy, I can only speculate. It's apparently new since Jobs rejoined in the late 90s, and since Apple more or less exists for the sole purpose of making Steve Jobs feel like a big man, my first guess would be that it sources back to him. In any case, it's a brilliant piece of crazy-making. They have to grok very deeply that their true believers will love them even more for this, and that once a convert has drunk their koolaid, going over to Windows is unthinkable. (Why, that would mean feeling uncool....)
The Business Blogging Boot Camp (@ Windsor Media) provides a more bottom-line perspective on the growth of blogging, driven by Fortune's 1/10/2005 feature story on technology trends; their observations came to me as part of an email thread related to the BBC story I mentioned last night. They stress the importance of blogging for business, and furthermore the importance of blogging earnestly. They cite the Kryptonite affair and moves toward blog-monitoring by Bacon's Information -- the latter characterized as tentative, "inane", 'Not Getting It.' (I'm usually leery when a huge quant-marketing shop is characterized as Not Getting It. Often it's true, yes; but as far as I can see they often understand a lot more than they bother to explain to us proles. But I digress.)
There are two things I feel compelled to point out before going further: First, blogs are qualitatively analogous to specialist newsletters, which are nothing new to savvy marketers. As with specialty newsletters, the influence of a blog hinges on a subtle balance between the publisher's access to information, their (perceived) personal integrity, and the volume (direct or indirect) of their readership. What's new is the speed of blogging. I'm leery of pointing out emergent qualities, but it's hard to argue that a ten-day cycle time doesn't indicate that (a lack of) quantity may indeed, in this case, have a quality all its own.
The second thing I feel compelled to point out -- and this is both much more and much less important than it sounds -- is that the Kryptonite business not only didn't start on blogs, but didn't get its first traction there. It started on the cycling boards, and that's where it was hashed out, refined, debugged, and researched, and where the first instructional videos were posted, before it was ever reported on a blogospherically-integrated weblog. Some of these bicycling boards are almost as old as the web, and most have many members who trace their net-cred back to Usenet days. My point being that anyone focusing only on blogs as such is setting themselves up for obsolescence. Blogs as they are, are almost certainly not blogs as they will be.
Anyway, Windsor Media's take is largely blogospheric orthodoxy. And in practical terms, it's probably right: The important thing for businesses to do right now is to make it part of some people's jobs to go out, and read and post like humans. But there's a second thing that not only needs to happen, but will happen, and what's more will be enabled by the first: Smart businesses will take steps to understand how the blogosphere works, and how it can be gamed, and then they will go forth and game it. And it will work. The knowledge required will come from a few main sources: From big outfits like Bacon and free-range old-school marketing pundits (who will keep it to themselves and share out bits of wisdom to key clients); and from less old-school marketing pundits like Darren Barefoot and BL Ochman, and from product evangelism folks at big companies (who as a group will tend to share it on their blogs, undercutting Bacon et al's old-school attempts to make money off consulting). And, perhaps most important of all, it will come from research in social network analysis. More on that another time.
Blogging will be gamed by corporate and business interests, make no mistake about that. Because it can be, and is being, gamed. It happens every day. And, contrary to the blogospheric orthodoxy, the broader the cross-section of people who get involved in blogging, the easier it gets to game the system without looking like a weasel. And if the golden rule of capitalist systems is that money wants to make more money (and I'm pretty sure it's something like that), and if blogging has an impact on the growth and flow of money, then money will drive blogging, and blogging will get gamed.
Now I'm getting into blogging heterodoxy. The conventional wisdom on the blogging ethos is very cluetrain, and in fact, the Kryptonite affair does indeed show a real "cluetrain" cause-effect loop. I missed it at the time because I just didn't tune in to the story, but the folks at Fortune and Windsor Media are right about that: The ten-day problem-to-product recall cycle at Kryptonite was characterized by all the corporate communications failures criticized in the Cluetrain Manifesto. It just took a lot longer for this first clear case to emerge than either they or, frankly, I thought it would.
The orthodox position is that the more people get involved in blogging, the harder it gets to game the system. It's a variation on the open-source golden rule of debugging ("Given enough eyeballs, all bugs are shallow"): "Given enough eyes, all misinformation will be found." But open-source debugging works (when it works, which it often doesn't, but that's another story) because the "eyes on the code" belong to people who know how to spot a problem, and have the capability to affect it more or less directly. In blogging, the "eyes on the information" are often people with little or no real expertise. Much of what they spout is nonsense.
And yet, it's effective.
The blogosphere shifts like a body of water: Fast, and irresistibly. Part of the reason that happens is that the blogging community is comprised largely of small communities with large enough membership to make an impact, and what's more, those communities overlap: PoliBloggers are tight with techbloggers who are tight with lifestyle bloggers who are tight with polibloggers.... So when the loop has looped a few times, we find that a relatively small group of people can pretty reliably and rapidly shift the character of the blogosphere. But as the blogosphere becomes larger, it grows more statistically homogeneous, and small communities of movers will not have the same kind of predictable results anymore. Then it will seem less like water, and more like mud.
But I digress, again. I started this to talk about gaming the blogosphere, and that this will happen, I do not doubt for an instant. There's a lot of money riding on this, after all. Some people will figure out how to game the blogosphere -- to game the cluetrain. It will be a painful process with lots of false starts, but we are well beyond the beginning of the process. It started long before the Kryptonite affair; if I had to pick a point in time, I'd pick the consolidation of successful blogs like Wonkette, Gizmodo and ... under the Gawker Media banner. Gawker sells lots of ads, gets lots and lots of daily eyeballs, and their more overtly commercial blogs (like Gizmodo and Jalopnik) have pull with the product managers by virtue of the fact that they can say things like:
What consumers want -- an out-of-box way to share and transmit files between different storage media and computers (and users) -- is exactly what manufacturers don't want to give them, but they'll tease us a little. So, if you're really rich, DigitalDeck Entertainment Network is busting out an in-home network PC-to-gear-to-DVD sharing system that costs $4000-$5000. It probably consists of a bunch of cables and a universal remote that your geeked-out younger brother could hack together himself.
And so, we've come back around again to the specialist newsletter: I take Gizmodo seriously (and I confess, I do read it more or less every day) because I see things like this that indicate to me that they bother to think a bit about what they're reviewing. They have credibility for me because they speak not merely in a human voice, but in one that says credible things. And they have the benefit of comprehensiveness because somebody (namely, Gawker Media) is paying them to do nothing but that.
And by the way, at some point does it stop being "blogging" and start being journalism? Open question, IMO.
It's just that relatively few people have realized it, yet. As I so often say: When there's big money involved, the alternate modalities will be co-opted. (Or crushed.) Even more than information wants to be free, money wants to make more money. We're now sitting in that fragile cusp (oh, hell, we may be past it) where the "winners" of the next gold-rush will be decided. It's not a huge gold rush -- not yet -- but in its own way, it will be just as hokum-driven as the dotcom boom.
I know this because I bothered to do some simple math with numbers in a news story about American blogging habits. From Britain, of all places. A friend pointed me to the BBC's obligatory popular rundown on what a blog is and why their readers should care, combined with a little bit of exoticism regarding us cousins. The article relies heavily on a report from Pew Internet and American Life Project; it's thin on details, but they do provide us with a helpful bullet list in their sidebar:
- Blog readership has shot up by 58% in 2004
- Eight million have created a blog
- 27% of online Americans have read a blog
- 5% use RSS aggregators to get news and other information
- 12% of online Americans have posted comments on blogs
- Only 38% of online Americans have heard about blogs
By implication (according to the sidebar), of Americans who've heard of blogs (38% of online Americans), 71% have read them (27% of online Americans -- 27/38=.71); and a bit less than half of those have gone on to post comments (12% of online Americans -- 12/27=.44). (Less interesting, but more impressive: about 30% of people who've heard of blogs have posted comments...) Interesting. If taken at face value (which wouldn't be a good idea), that means almost half of people who've read blogs have posted comments to them. Before we even start to think about commercial applications, that may well represent a radical increase in the population of people participating in online forums.
But here's the real meat: When they saw those numbers in the sidebar, direct marketing people in the reading audience (who eat, sleep and breathe much more complex math than that) were drooling on their keyboards. Consider that a direct mail campaign is regarded as doing very, very well at 5% response. These are not numbers to swing elections as a constituency; but they are well into "thought-leader" territory. These blog readers are high-throughput nodes. They're the folks who spread Jib-Jab movies and forwarded the Kick Osama Butt song. At least, that's how the consultants will spin it.
Also quite interesting: Almost a fifth of people who've "read a blog" (5% of online Americans) use RSS readers to aggregate blog content. RSS readers by definition identify regular readers, so something approaching 20% of blog-readers are regular blog-readers. And the stream of drool intensifies.
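For anyone who wants to check my work, the whole derivation is a few lines of division. Here's a back-of-the-envelope sketch in plain Python, using the sidebar's percentages (the variable names are mine, not Pew's):

```python
# Pew figures from the BBC sidebar, as percentages of online Americans
heard_of_blogs = 38.0   # have heard of blogs
read_blogs = 27.0       # have read a blog
post_comments = 12.0    # have posted comments on blogs
use_rss = 5.0           # use RSS aggregators

# Of those who've heard of blogs, how many have read one?
readers_among_aware = read_blogs / heard_of_blogs        # ~0.71
# Of blog readers, how many have posted comments?
commenters_among_readers = post_comments / read_blogs    # ~0.44
# Of those who've heard of blogs, how many have commented?
commenters_among_aware = post_comments / heard_of_blogs  # ~0.32
# Of blog readers, how many use RSS aggregators?
rss_among_readers = use_rss / read_blogs                 # ~0.19

for label, value in [("read, given aware", readers_among_aware),
                     ("comment, given read", commenters_among_readers),
                     ("comment, given aware", commenters_among_aware),
                     ("RSS, given read", rss_among_readers)]:
    print(f"{label}: {value:.0%}")
```

The usual caveat applies: these are ratios of survey percentages, so they assume Pew's categories nest cleanly (every commenter is a reader, every reader has "heard of blogs"), which the sidebar implies but doesn't state.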
You have to actually do some math to sort all that out, mind; I think they're probably better at it over there, but I wonder if they weren't actively hiding those numbers by declining to crunch them. (In America, I'd just go for ignorance -- I don't have much faith that our reporters have the math skills to calculate a proportion.)
I can honestly say that I never thought blogging was a fad. But I will go out on a limb (not that I have to go very far) and say that "podcasting" was dead before it started. Or, at least, the meaning of the term will change. "Podcasting" will come to be the audio equivalent of "TiVo", as we start to see those forthcoming gizmos that let folks TiVo-ize satellite-radio broadcasts. They'll start as special attachments for iPods. (Perhaps even as an iPod itself -- though I don't think Apple will go that far. It would hurt iTunes sales.) Then they'll spread to other music players ("there are music players besides the iPod?!"). Podcasting as we currently know it will die a quick and inglorious death, mourned only by the people who hoped to have their name forever attached to the term.
Blogging has previously never really been at an equivalent risk. The technical barriers to entry are low: A decent secondary education and enough disposable income to afford $10/mo or less in hosting fees. Bloggers face very little competition. (Well, except for newspaper columnists. What are those? Well, um, they're these folks who'd regularly get their "blogs" printed in newspapers. See, these newspapers, they're printed on really big paper, so everything is in columns, and a columnist would get one column out of six on the page... ... Newspapers. They print them, on paper, and sell them to people so they can carry them around and read them.... How do they know how many to print? They don't. A lot get wasted. Yes, I know that's a waste...)
Edge.org have posed an interesting question [courtesy MeFi] to a collection of "scientists and science-minded thinkers": "WHAT DO YOU BELIEVE IS TRUE EVEN THOUGH YOU CANNOT PROVE IT?" (It's just the latest in a series of annual questions.) Many of the answers are thought-provoking, or instructive (even though most are simply restatements of that thinker's area of interest in the form of an "unprovable" "assertion"). The zeitgeist implicit in their answers is interesting, too. John Brockman writes:
This year there's a focus on consciousness, on knowing, on ideas of truth and proof. If pushed to generalize, I would say it is a commentary on how we are dealing with the idea of certainty.
We are in the age of "searchculture", in which Google and other search engines are leading us into a future rich with an abundance of correct answers along with an accompanying naïve sense of certainty. In the future, we will be able to answer the question, but will we be bright enough to ask it?
This is an alternative path. It may be that it's okay not to be certain, but to have a hunch, and to perceive on that basis.
Maybe it says that. Maybe it says that this is how science actually works: Having hunches, then trying to prove them, which is really what most of the answers are about. Some of them get more fundamental, as when Richard Dawkins answers:
I believe, but I cannot prove, that all life, all intelligence, all creativity and all 'design' anywhere in the universe, is the direct or indirect product of Darwinian natural selection. It follows that design comes late in the universe, after a period of Darwinian evolution. Design cannot precede evolution and therefore cannot underlie the universe.
... which is a remarkably blunt and honest thing for him to say, since it faces head-on the core weakness of his anti-ID positions. I personally think ID is a load of horse-hockey, but I don't think it can be countered with "proof" that it can't work any more than we can solve the first-mover conundrum. I'm glad Dawkins doesn't shy away from that. I'm not always crazy about the way he formulates ideas ("selfish gene" theory still seems too simplistically reactionary to me, nearly 20 years after I first heard of it), but he is nevertheless one of the most able and vigorous opponents of ID, so it behooves me to pay attention to what he's saying out there.
In any case, while the Q&A is intriguing, in many cases (and as I've noted) it's largely a matter of researchers restating their research-focus as though it were a controversial idea. [bonehead @ MeFi observes, "... scratch post-docs or hungry assistant profs for real wild-eyed speculation. Of course, most of them will be wrong (entertainingly so), but that's where the future Nobels are too."] And I don't think Brockman is really giving credit to scientific process: Believing something you can't prove is usually how anything valuable and previously unknown gets to be learned. Call it a hunch, call it belief; the process whereby that belief is substantiated (though hardly ever "proved" in a strict logicalist sense) is what we know as science. And I'm not altogether sure that Brockman groks that.
Brockman also seems to think there's a new way of being an intellectual:
... There is also evidence here that the[se] scientists are thinking beyond their individual fields. Yes, they are engaged in the science of their own areas of research, but more importantly they are also thinking deeply about creating new understandings about the limits of science, of seeing science not just as a question of knowing things, but as a means of tuning into the deeper questions of who we are and how we know.
It may sound as if I am referring to a group of intellectuals, and not scientists. In fact, I refer to both. In 1991, I suggested the idea of a third culture, which "consists of those scientists and other thinkers in the empirical world who, through their work and expository writing, are taking the place of the traditional intellectual in rendering visible the deeper meanings of our lives, redefining who and what we are."
I believe that the scientists of the third culture are the pre-eminent intellectuals of our time. But I can't prove it.
This idea of "Third Culture" scientists is worth exploring, but it's a topic for another time. Suffice for now to say that I don't see anything sufficiently new that a new organizing principle is required; in fact, I think a concept like "third culture" has more potential to alienate thinkers from cross-pollination than it does to encourage them. A bit like "brights" in that regard.
But that's an issue I haven't got time to take on right now....
Despite such a dazzling career, the business world has always been ambivalent toward Britain's best-known entrepreneur. He launches trendy companies the way Trump builds casinos. But a farsighted innovator like Steve Jobs or Jeff Bezos or even Southwest Airlines' Herb Kelleher he is not. Branson traffics in opportunism. He spots a stodgy, old-line industry, rolls out the Virgin logo, sprinkles some camera-catching glitter, and poof - another moneymaker. While that formula has kept him in champagne and headlines, no Virgin business has ever changed the world.
It's a murky window, though: Leaving aside the fact that Trump is in debt to his eyeballs and Branson wallowing in lucre, I'm not getting the distinction between Branson's "opportunism" and the "innovation" of Reiss's exemplars. What's the qualitative difference between Herb Kelleher's epiphany that there was room at the bottom of the air travel market and Branson's epiphany that there was room at the top? Or between Jobs's gut instinct about what would make pseudo-counterculturals perk up and drool and Branson's feel for the bleeding edge of music? Or between the fortuitous combination of timing and investor-expectations-management that allowed Jeff Bezos to make a success of Amazon, and the fortuitous combination of timing and investor-expectations-management that allowed Richard Branson to start a full-service transatlantic airline from scratch at the height of the discount boom, with prior portfolio in nothing more mission-critical than music retail?
The thing is, Virgin businesses have changed important bits of the world. Virgin Records (later Virgin Entertainment), for example, started as an independent label before the first wave of Punk, and after early flirtation with the Sex Pistols hitched their cart to the artsier, less purely rebellious side of new music (think XTC, Human League, Simple Minds) with great commercial success. In so doing, they paved the way for countercultural co-optation efforts like Warner's "independent" IRS label. Virgin Air challenged the conventional analysis on how to make money in the airline business (cut operating cost to the bone, get more butts on the plane even if you have to make the seats smaller to do it, and so on). Virgin Mobile found a way to bring European pay-as-you-go models to the less technically advanced American market, and in so doing forced American mobile phone companies to rethink their contract-lockin approach to the business. It's arguable that Branson has had a bigger impact on the American mobile telephony market than even the recent, much-ballyhooed phone number mobility legislation.
As an aside, I don't actually have anything in particular against Herb Kelleher or Jeff Bezos, but describing Jobs as a "farsighted innovator" really does gall. As someone who's actually familiar with the history of computing, I'm unclear on exactly how Apple (a company that barely ever cracked 2 percent of the personal computing market) "changed the world" prior to the launch of iTunes in 2003. After all, the truly synergic applications for windowed operating environments -- Excel and Word -- both came from Microsoft, and Apple was neither the first, last, nor (objectively judged) the best. What they were was optimally positioned, and that had little if anything to do with Jobs per se. I defy anyone to name an original idea from Steve Jobs that became successful; despite his reputation for micromanagement (he penciled in changes to comps of the original iMac's design and signed off on the colors in the original lineup), his real genius lies in understanding and manipulating human vanity. That the success of virtually every successful Apple product since the launch of the Lisa is due to appeal to vanity is something that shouldn't be seriously challenged; that Apple is and always has been a niche player can't be seriously debated; that the cultural impact of a niche player seems out of proportion with their market cap shouldn't be surprising to anyone who's ever considered wearing paisley (the common term for this phenomenon is "fashion"). Above the hard technical level (which was more the other Steve's purview, in any case) Steve Jobs and Apple's impact on the world resembles that of Ralph Lauren more than that of even, say, Richard Stallman and the FSF.
Part of the problem may be that, at the core, Branson's "vision" is simply not technophilic enough for a Wired feature writer. His admission to the "vision club", after all, comes via Virgin Galactic:
Until now. Mojave Airport isn't just where aging jets wait to die; it's where the dusty dream of commercial space travel is finally coming alive. Last summer, a tiny winged wonder called SpaceShipOne spiked 62 miles into the desert sky on its way to nailing the $10 million X Prize for the first sustainable civilian suborbital flight. The world's stuffed-shirt airline chiefs took one look and went back to worrying about fuel prices. Branson took one look at the gleaming white carbon-fiber spaceship and said, Beam me up.
The music business isn't about gadgets, after all, and Virgin Mobile's phones were always technically on the low-end. (How else could they afford to make them cheap without requiring a contract?) Too, Virgin Galactic just smells of "big vision", though the real scope is small: Like Apple, it will directly affect only a very small population of users. To be sure, Branson will almost certainly make money on it, but its primary impact will be more a matter of fashion -- and inspiration -- than of actual market effects. In business terms, its direct impact will be much smaller than that of Virgin Atlantic Airways or probably even of Branson's new V2 music label.
The latest leading-edge thinking in traffic-calming is that we should remove traffic controls, not add them. Passive controls, that is, like signage; active controls, hard controls, like traffic circles (rotaries, roundabouts), merge lanes -- those can stay. But Yield signs at the traffic circle entrance, "lane ending" indicators, even curbs, stop signs and traffic lights: Those should go.
The thinking is that without them, we think more. With them, we give over our control over our fates to the signage. At the same time, we can do things that, superficially, make a road more dangerous: We allow parking where we'd previously barred it; we make the road-beds narrower instead of wider; we remove turn lanes and traffic lights; we remove explicit barriers between people and traffic. (Note that this doesn't mean eliminating sidewalks altogether: "Instead of a raised curb, sidewalks are denoted by texture and color.")
Results are counter-intuitive: Traffic moves more slowly, and yet trip times are reduced. It's the kind of result that a simplistic understanding of systems can't predict, but that an ecological understanding can.
I have to admit that I was resistant to the idea when I first read it. It reminded me of a trip to Seattle in February of 2000, when I noted the conspicuous absence of stop signs at intersections in many residential neighborhoods. But as I reflect on it, it strikes me that, at the least, bad signage really is worse than no signage. Signage, after all, plays to our conscious, rational mind, which is easily stymied by contradiction and inconsistency in ways that our sub-conscious, a-rational mind is not. And I recall that, when I approached those intersections, I stopped and looked very carefully. I paid attention to what I was doing (driving) instead of to other things.
As I think through it further, I find myself thinking of at least three other ideas: The human factors design concept of affordance; Jane Jacobs's "eyes on the street"; and the zen/taoist/buddhist tightrope of mindfulness:mindlessness. The common thread is that they all tap into aspects of humanity that are essentially sub-conscious, in the sense of being as tied to our animal nature as to our human nature. They are rational in the sense that sense can be made of them; they are also a-rational, in the sense that we seldom bother to try. (And also in the sense that when we do bother to try, we often screw it up.) Most importantly, they are ecological, not based on a simplistic, modernist understanding of systems theory.
We still need to be able to inculcate awareness of self-interest at a low level of consciousness. We can only rely on our natural accident-avoidance to carry us so far, especially with as many distractions as the world affords.
Mindfulness may be our least zen state. Or it may be our most.
We are not mindful, most of us. And when we try to be, we tend to mess it up. That's what zen teaches us, as I see it: Not mindfulness, as such, but how to do it right. How to be aware of what we're doing, without letting our awareness get in the way. How to be here, now, without forgetting that we were here (or there) yesterday, and will be there (or here) tomorrow.
How to choose, without letting the choice own us.
How to be mindful of our circumstances in the moment, without becoming creatures of the moment.
How to be animal and human at the same time.
It's an unsafe world. Inherently dangerous. If I don't act to avoid danger, I'll die from one of any number of causes: Hypothermia, poisoning, infection, getting hit by a car. And the end of my jaw is always only ever a whim away from someone else's fist.
But these things don't happen. I don't walk into traffic, I don't drink spoiled milk. I don't go out without a coat on in the winter or pick fights with strangers.
And I hardly ever think about any of this. I hardly ever have to.
The 'Net is quietly abuzz with chatter about Santy. It's a worm -- an old-school worm, that travels server-to-server, running a single exploit against one of the server's exposed services. But there's something "new" that scares people about Santy, Santy.B, and all its forthcoming incarnations: It can "discover" likely targets using internet search engines. Santy's success [re-]proves two points that have been made over and over again over the years: The more services you expose, the more vulnerable you are; and as you make it easier to code software, you also make it easier to code malware.
Security in computing, as in the non-virtual world, ends up being largely a matter of how many ways there are to get in and out: If you've got lots of doors and windows, you have poorer security. You can qualify the analogy somewhat, but that's pretty much how it works. Santy works because there are not only lots of doors and windows, but also because some of them aren't as well secured as they should be -- and because some of them advertise too much about themselves. (But that's another topic for another time. And none of this should be taken as excuses for following the "one strong door" model.)
Santy's first manifestation used Google to locate likely targets; now it also uses AOL and Yahoo search interfaces. It finds its way onto a system, patches itself into vulnerable code in certain versions of phpBB, and then proceeds to run Google searches for certain strings that appear in those vulnerable versions of phpBB. This works because Google has a stable and easy to use API -- really, from the perspective of Santy's author, just a standard format for input and output -- and it doesn't make any distinction between clients that have a person looking at the results and clients that have a machine looking at the results. As much as I'm wary of Google in general, this is a perfectly right and proper way for their software interfaces to operate, and they're not any different in this regard from any other search engine. Or any open bookmark repository, for that matter.
I repeat, this is old-school, albeit updated for new times: In 1994, someone might have written something analogous to this to use Gopher or WAIS or Archie. They'd have had to be smarter coders, and because some techniques hadn't been pioneered yet, the codebase would be larger and clumsier. But it could have been done, and probably was. I knew guys who thought that way, back in The Day. But the community of people who'd have been impressed was smaller, and frankly, servers were inherently more secure by virtue of simply having fewer exposed services.
The main thing that's actually changed is that it's now a lot easier to code this stuff. A mediocre scripter could hack together Santy in a week or less of spare time; a good coder in just a few hours. Back in the day it would have taken much more skill and time. The target would have been more esoteric, and the task would have taken much more technical expertise. There was no PHP or phpBB; there were no web bulletin board systems to exploit. But it would have been possible to do something analogous to Santy in 1994. Not to slight the effort: What's needed in both cases is to have had the ideas about how to design it, which is a non-trivial point.
There has been a call for Google to "shut down" this vulnerability -- for it to block Santy's searches. That call was, frankly, ignorant: Yes, it would be possible for Google to play a game of cat and mouse with Santy coders (and it appears as though they may have been bullied into doing just that), but that would be bad for two reasons: First, because it would create an "artificial selection" environment in which Santy-coders would be forced to evolve things like source IP masking, metamorphic user-agent strings, and Bayesian pattern-matching for target identification thus indirectly causing the other side to get better; and second, because it's an unwinnable battle, and anyone smart enough to get hired at Google already knows that.
To be sure, poor security practices are also partly to blame. If I read the advisories correctly (and I may not), you have to configure phpBB in a fairly non-secure way to be vulnerable: Namely, you have to make your comment-posting forms world-accessible. That's common practice on boards that allow anonymous posting; there are obvious downsides to dis-allowing anonymous posting. It's a baby:bathwater conundrum, in that you throw away some of the liveliness and spontaneity of your board if you don't allow anonymous posting. (BTW, to forestall concern, Antikoan.net is not vulnerable to the exploit used by Santy, since our hosting provider has already installed the relevant PHP security patches.)
And the commodification of hosting plays a huge role in exposing vulnerabilities, though it can also help alleviate them. From the mid-'90s through the present, hardware and software commodification have conspired with consumer expectation and economies of scale to thin the margins on hosting to a hair's breadth; if a hosting provider spends two minutes a day of his own energy on a single customer, that's two minutes too long for him to be making a profit. So corners get cut, processes get automated using home-brewed scripts that aren't debugged. Best-practices for security get ignored because they make things harder to manage. The same commodification, though, has driven the development of highly automated hosting maintenance systems that bake-in things like security best-practices and software version control. There's distance to go, but it's getting better very rapidly.
Final aside: Santy is, in a way, the very worst kind of robotic exploit tool, because it's scripted, not compiled. Its instructions are available to any reasonably competent sysadmin who happens to get infected. And since it's Perl exploiting PHP, the development and execution environments are nearly ubiquitous, and the exploitable platforms still richly plentiful. phpBB may be the first exploit target, but it won't be the last; my quick research indicates that the most common family of OSS CMS systems may be vulnerable at its core, and almost certainly is in some of its more popular third-party modules. (The more frightening prospect is that one of those modules is now without a maintainer, and so if exploited, would remain exploitable. I'll leave my conjectures vague for the time being because they're still just conjectures, and anyway I wouldn't want to give anyone ideas.)
"And if I don't ever get married or have a baby, what -- I get bupkes? Think about it: If you are single, after graduation, there isn't one occasion where people celebrate you. I am talking about the single gal. I mean, Hallmark doesn't make a 'Congratulations, you didn't marry the wrong guy' card." -- 'Carrie Bradshaw', Sex and the City
"I started to get notes the next week that said that single women were starting to register, at stores, for their birthdays. And I thought, 'That's great, because we put something out there.'" -- Jenny Bicks on Morning Edition [listen]
Yeh. Right. You put something out there, alright: Another quasi-official reason to spend, and spend in a store-register-validated, label-appropriate manner. Sex and the City is really all about social activism and culture-jamming, after all.
As far as I'm concerned, the time is well-overdue to re-examine the idea that human existence is solely for procreation -- if there's one thing that Humans consistently do that other animals don't, it's make their own rules about what their existence is for -- but relating that to Carrie Bradshaw's sense of loss over her $400 shoes really, truly advances that particular cause not one whit, and it's insultingly disingenuous for the script's author to argue otherwise.
In 1993-94, Ron Avitzur and Greg Robbins exploited poor plant security and bureaucratic buck-passing to sneak into the Apple offices in Cupertino day after day, month after month, producing a product the company didn't want, for no pay. By the end, they were working 16 hour days, seven days a week, and had been assigned engineering, QA and Human Factors resources. The product they built, Graphing Calculator, was subsequently included as a standard applet up through Mac OS 9.
Apple at that time had a strong tradition of skunkworks projects, in which engineers continued to work on canceled projects in hopes of producing demos that would inspire management to revive them. On occasion, they succeeded. One project, appropriately code-named Spectre, was canceled and restarted no fewer than five times. Engineers worked after hours on their skunkworks, in addition to working full time on their assigned projects. Greg and I, as nonemployees who had no daytime responsibilities, were merely extending this tradition to the next level.
Why did Greg and I do something so ludicrous as sneaking into an eight-billion-dollar corporation to do volunteer work? Apple was having financial troubles then, so we joked that we were volunteering for a nonprofit organization. In reality, our motivation was complex. Partly, the PowerPC was an awesome machine, and we wanted to show off what could be done with it; in the Spinal Tap idiom, we said, "OK, this one goes to eleven." Partly, we were thinking of the storytelling value. Partly, it was a macho computer guy thing - we had never shipped a million copies of software before. Mostly, Greg and I felt that creating quality educational software was a public service. We were doing it to help kids learn math. Public schools are too poor to buy software, so the most effective way to deliver it is to install it at the factory.
Beyond this lies another set of questions, both psychological and political. Was I doing this out of bitterness that my project had been canceled? Was I subversively coopting the resources of a multinational corporation for my own ends? Or was I naive, manipulated by the system into working incredibly hard for its benefit? Was I a loose cannon, driven by arrogance and ego, or was I just devoted to furthering the cause of education?
I view the events as an experiment in subverting power structures. I had none of the traditional power over others that is inherent to the structure of corporations and bureaucracies. I had neither budget nor headcount. I answered to no one, and no one had to do anything I asked. Dozens of people collaborated spontaneously, motivated by loyalty, friendship, or the love of craftsmanship. We were hackers, creating something for the sheer joy of making it work.
.... On March 11, 1994, the front page of the Times business section contained an article on the alliance among Apple, IBM, and Motorola, picturing Greg and me in my front yard with a view of the Santa Cruz Mountains. Someone I knew in Apple Public Relations was livid. I had asked if she wanted to send someone for the interview, but she had said that engineers are not allowed to talk with the press. It's hard to enforce that kind of thing with people who can't be fired. It was positive press for Apple, though, and our parents were pleased.
We wanted to release a Windows version as part of Windows 98, but sadly, Microsoft has effective building security.
Personally, I think it's a dessert topping and a floor wax: At no point were Ron and Greg doing anything out of fear for their livelihoods; yet they've surely done their small part to further Apple's "Think Different" mythology, so it can be argued that Apple had value from their efforts for no measurable return. And the squad of supporters that accreted to them over the months of their effort were effectively stealing cycles from other Apple processes, to debatable end. See, I would tend to side with a business analyst if s/he decided that Graphing Calculator really didn't contribute much to the Apple bottom line.
But I don't work for Apple, and most likely never will. I can't stop thinking it's great. And it's damn near an American myth.
There's a darker side, of course: This isn't far different from the maverick spirit that fueled Ollie North's blood-money circle-jerk; the same initiative and gung-ho spirit can drive both dreams and nightmares.
As always, the only real resort is to calls it as we sees it....
.... One myth that I find interesting, but which has nothing to do with Linux or even the IT sector in particular, is the myth of how a single person or even a single company makes a huge difference in the market. It's the belief that things happen because somebody was visionary and "planned" it that way. Sometimes the people themselves seem to believe it, and then the myth becomes hubris.
I have to continually try to explain to people that no, I don't "control" what happens in Linux. It's about having an environment that is conducive to development, not so much about any particular leader. And I think that is true in most cases, be it the "great sport coach" or the "great spiritual leader."
Sometimes, the primary driver in social change is the spread of technology. And the primary driver in the spread of technology is usually falling prices (not falling costs, that's another matter altogether).
And the primary driver in falling prices is usually theft.
If you're a business user, US$50 might get you just what you need in a desktop computing environment: A strong productivity suite, with Outlook/Exchange-like email and scheduling and centralized administration (very important in controlling the IT costs). Sun and Novell currently have enterprise-level offerings in that price range, and that's generally being taken as a sign that Microsoft is in trouble. But Dave Berlind at ZDNet argues that at that level of commodification, the underlying infrastructure doesn't really matter:
When software delivers a specific utility, that utility or "layer of value" is often referred to as "the contract." Like a real contract, a software contract sets the expectations of the external entities that will interface with the software. Those entities can be other systems or software, or they can be humans. ... If software interacts with users, then the rubber meets the road at the user-interface level where users feed something in and get something out in whatever format they want it (think documents and communications like instant messaging).
In the case of desktop Linux, the contract is in the user interface (which includes the applications). After all, a lot of the attraction to desktop Linux is due to the fact that it does things out of the box that Windows does not. For example, there's no need to run out and buy a productivity suite or install an instant messaging client. Most distributions of desktop Linux include fairly robust software for each. This model is remarkably similar to that of PDAs. As with PocketPC or PalmOS-based devices, the targeted users of JDS, NLD, and whatever Red Hat comes up with next will mostly interact with the applications and not with the operating system, which in turn reduces the OS to a mostly embedded and, not coincidentally, rather trivial commodity status. [emph added]
A minor point that Berlind misses: Commodity productivity only works as long as interoperability is rock solid. Ten years of dominance by MS Office have gotten us hooked on being able to freely trade editable documents with anybody, anywhere, anytime, with no format translation necessary. Not that I think Berlind really misses this point; it would probably just have confused the issue, but it does end up being important, nonetheless.
But Berlind's main point is that this doesn't look as bad for Microsoft as we might think:
Anybody who thinks that Microsoft is just going to lie down and die as a result of this revolution in what $50 gets you is dreaming. If Novell, Sun, or any other company can turn a profit off of a $50 soup-to-nuts desktop offering, there's no reason Microsoft can't do it, too. It's just that the result may not be Windows and Office as we know them in their entirety. For example, Microsoft already has plans to offer a $36 Windows XP Starter Kit in India and will be offering copies of Office to certain schools at $2.50 per copy.
Berlind's right to say that MS wasn't driven to these tactics by Linux, per se. Linux has played a big role, particularly in emerging nations like India, and to a lesser extent in the EU. In South America especially, new offerings to governments often have to be Linux or nothing, more or less. And Berlind is right that hardware commodification and per-seat pricing pressures in corporate IT are prime proximate drivers for this kind of offering.
The real key driver, though, as I see it, is piracy.
These cheap Windows packagings that Microsoft is hawking aren't really intended to compete with Linux distros. Linux really isn't competition for this market. Much as places like Russia, the Ukraine, India, China, and Brazil are hotbeds of software innovation (and they are), the bulk of users in those places are still "home" and "office" users: They're even more unsophisticated, in other words, than the home users in the US market. It pains me to say this, but Linux is simply not a viable option for them. (Through no fault of the OS, let's be clear. It's a packaging and UI issue. Period.)
The alternatives aren't "buy MS" vs. "install Linux for free"; they're "buy MS" or "steal MS". MS has understood this for years, and has taken localized stabs at it for a long time. What they seem to be realizing now is that their strategy needs to be global. After all, government purchasers in Brazil and Hyderabad can now easily communicate and compare notes on what they're hearing from their MS reps. Again, to be fair, this is probably not something that Berlind missed, so much as something that didn't fit in.
But in de-emphasizing the primary root cause (piracy) and over-focusing on the proximate cause (price wars in conjunction with hardware commodification), Berlind misses a very interesting point about information flow in the new digital world order.
"We should take care, in inculcating patriotism into our boys and girls, that it is a patriotism above the narrow sentiment which usually stops at one's country, and thus inspires jealousy and enmity in dealing with others... Our patriotism should be of the wider, nobler kind which recognizes justice and reasonableness in the claims of others and which lead our country into comradeship with...the other nations of the world. The first step to this end is to develop peace and goodwill within our borders, by training our youth of both sexes to its practice as their habit of life, so that the jealousies of town against town, class against class and sect against sect no longer exist; and then to extend this good feeling beyond our frontiers towards our neighbors." -- Robert Baden-Powell
[courtesy Rickie Lee Jones / FurnitureForThePeople]
Man, the psychic landscape in those distant Red States is looking a little weirder every day.
"Yeah folks, our simple, cornpone sincerity is so heartfelt and transparent that we have to arrange a glass-roots [sic] crypto-military PR campaign to get some market traction."
What you will hear in the song, The Bumper Of My SUV, is the absolute truth. No exaggerations, no poetic license, and truly how it made me feel. I had no intention of ever playing this song for anyone.
-- Chely Wright
It's old and big news by now, but Chely Wright wrote a song [clip] that's apparently a big indy hit, about the time a Woman In A Mini-Van gave her the finger because of the Marine Corps bumper sticker.
I'll be blunt: I have to wonder if it actually happened.
I'm willing to believe that Chely Wright believes it happened as she tells it, but I can't say I do. After all, her accounts of the incident have gotten more dramatic as she goes along, to the point where as of mid-December, many months after first telling a shorter version of the story as a stage-rap, the Woman In The Mini-Van is now honking and weaving and damn near foaming at the mouth. My Search-Fu hasn't uncovered an instance of this enhanced version of the story prior to 12/13; the CNN and Reuters pieces, and all the other instances I've found so far, source back to a Billboard profile that I can't track down directly; and the Tennesseean, in its account of Ms. Wright's recent woes, goes with the earlier, less-rabid version of the Woman In The Mini-Van.
But let's say that every word of the later version of her story is true. This song is still the most insidious kind of jingoism, in that it more or less amounts to saying "once the shooting starts, you don't get to protest unless you're part of it." Or your family is part of it. Or you're willing to lie about your family being part of it. The song itself can be read as innocuous. Certainly there's lip service paid to balance ("I don't have all the answers I need", "But that doesn't mean that I want war / I'm not Republican or Democrat"). But it's interspersed with none-too-subtle coding of Ms. Minivan as a Liberal Elitist ("So I hope that lady in her mini-van / Turns on her radio and hears this from me / As she picks up her kids from their private school...").
"Does that lady know what I stand for / Or the things that I believe?" Well, if we believe this story: Yes. Yes she did. Perhaps by accident, but even if so, it sounds like she got Chely Wright, right.
Songwriters, poets, novelists, artists have always imputed technically unwarranted importance to small things. It's part of the troubadour's art to embellish and to generalize to a larger context. But the best of them admit it; or, better yet, admit that they're not sure it means what they think it does. And even when they let passion grip them, they don't insult us by pretending to be fair. Bruce Cockburn doesn't pretend to be balanced when he wishes for a rocket launcher. Pete Seeger has painted "This machine surrounds hate and forces it to surrender" around the edge of his banjo-skins for decades. Ani DiFranco has never minced words. Chely Wright should follow those good examples and come out and say, or at least figure out, what the hell she means.
"You go to war with the army you have, not with the army you would like to have." -- Donald Rumsfeld
Richard Perle thinks that Don Rumsfeld's gotten a raw deal [listen]. He says that Rummy was in the middle of his "transformation" campaign, and had to go to war with the 'army he had': "Shinseki's armor, Bill Clinton's military establishment...." And thus, criticisms for his lack of preparation are "not only wrong, but perversely wrong."
Perle's criticisms are not entirely wrong, but he nevertheless exhibits the core flavorings of Vulcan thought: Wishful thinking, salted liberally with arrogance. Which is to say, too much faith in what ought to be true, and too little attention to what is.
It's hard to avoid the conclusion that Don Rumsfeld has gotten a raw deal in some ways. Aspects of American military doctrine did need to be re-thought, and he was right to push for that re-thinking. American force structures did, and do, need to be altered to deal with a post cold war environment. But that doesn't change the fact that, with the army we have, there are certain obvious things that would have to be done if you chose to invade and pacify a fairly large country with a fractious population. And it doesn't change the fact that we spend a lot of money and time training these senior uniformed officers to understand the parameters of conflict.
"I think it's important to understand what planning means," says Perle. Based on the resources available, you do what you can to prepare for the situation. But isn't that what the civilian Pentagon failed to do? Or, more accurately: What they barred the uniformed Pentagon from doing.
So, Rummy went to war with the army he had, and not the one he wanted: But he also went to war as though he had the military he wanted. He went to war with a plan built for the military he ought to have had -- not the one he actually did have.
Which is to say, Rummy went to war with the army he wished he had. He treated his ideas about what's right and proper as reality; he executed to plan, instead of adapting to the situation. He forgot that the most important thing about a plan isn't the plan itself, but the discipline of planning it out.
There's a split in the Neo-Con/Vulcan ranks on this point. The hard core of the "Vulcans" are sorting themselves out, so that even as William Kristol turns away, and Armitage floats between loyalty to Powell and duty as a soldier, the loyal core -- actors like Perle, Condoleezza Rice, Paul Wolfowitz -- continue to support Rumsfeld, as though understanding that to acknowledge any weakness in their war-planning rationale is to challenge the very premise of the war. They should follow Kristol's lead: He's smarter than they are, or at least cleverer. He understands (learned from Bill Buckley, no doubt) that he can win debates before they start by speaking softly. He mouths categorical dismissals of entire schools of thought -- but he does it quietly and without apparent malice, as though they're just accepted truths.
Interesting aside: Wikipedia seems to have removed all references to the "Vulcans" as a group of neoconservative thinkers and operatives who advised George W. Bush in the early days of his first presidential campaign. This illustrates a weakness in their version-control system: there's no automatic way to trace the changes and find out when and where, or even whether, it happened.
The onslaught of comment-spam has stopped, for now; it could pick up again tomorrow. It continued without abating for about five and a half hours, with what looks to be an average of about one message every three and a half minutes. And after the first fifteen or so -- which is to say, after I'd had a chance to mark a single one as spam -- not one single message made it in front of a site user other than myself, and all without any further intervention on my part.
One more ironic fact about this: The site being advertised by all of this comment spam is actually not accessible at the URL given in the comments. The server doesn't respond. That means either: their spamming was so effective that the servers have failed under the load; or they comment spammed the wrong folks, and earned a Denial of Service attack in return. One can only hope...
... and still, it managed to slip my mind what I started to write about in the first place.
When Drupal's spam module sends me notifications, it tells me what the text of the message was. Here are a few samples:
Injustice, poverty, slavery, ignorance - these may be cured by reform or revolution. But men do not live only by fighting evils. They live by positive goals, individual and collective, a vast variety of them, seldom predictable, at times incompatible.
Women are systematically degraded by receiving the trivial attentions which men think it manly to pay to the sex, when, in fact, men are insultingly supporting their own superiority.
The most radical revolutionary will become a conservative on the day after the revolution.
Time heals what reason cannot.
They're all in the same vein: garden-variety profundities harvested from some encyclopedia of received wisdom. If I believed in hell, I'd believe there was a special place there for people who shit on other people's front lawns while dispensing nauseating bromides. It smacks of the used car salesman who pushes the swanky beast through insincere appeals to your fears for the safety of your family....
I wasn't getting any spam right after I deleted that last bunch. But this afternoon, it started flooding back in again. Well, trickling, I guess, though one every four minutes (like clockwork) is still a flood to a flesh and blood human with other things to do.
But I got my spam filter installed yesterday, and was eager to try it out. And about a half hour ago, Peggy sent me a note to let me know that the Online Casino boys were back.
So I hopped over to the comments management page, and discovered that there were about ten "Online Casino" comments; and that's when I discovered that the spam filter didn't have bulk management controls. I understood, then, why I needed to use the comments module patch that the spam module author had so thoughtfully included.
So, ten minutes more spent digging around looking for a patch utility that would work under Win2K, then upload, and I suddenly had the capability to select and mark messages as spam in large numbers. Which I promptly did.
Shortly, I started receiving email from the spam filter: Now that the Bayesian engine had samples to analyze, it was happily flagging each of the Online Casino messages as they came in. Every four minutes. Like clockwork. If history is any judge, there will be four hundred of them before the bot is done with me.
And not one of them will do the spammer a damn bit of good.
It makes me feel warm and tingly: I've managed to consume their resources, returning a "success" code to their bot, while at the same time effectively subverting their purpose.
... enough about Drupal to write a spam-bot for it. I just deleted about four hundred spam-comments, directly through the database -- all advertisements for an online casino. All hammered in within a very short period of time. I have no idea whether it will affect site function that I did it through MySQL, but what the hell: The alternative was to sit here and click away for hours...
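For the record, the deletion itself is a one-liner once you're willing to go straight at the database. The sketch below uses SQLite as a stand-in for MySQL, and the `comments` table and `comment` column names are my guesses at the Drupal 4.x schema, not verified -- check yours (and take a backup) before trying anything similar:

```python
import sqlite3

# SQLite stands in for the site's MySQL database here. The table and
# column names ("comments", "comment") are assumptions about the
# Drupal 4.x schema -- on a live site you'd verify them and back up
# the database before running any DELETE like this.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE comments (cid INTEGER PRIMARY KEY, comment TEXT)")
db.executemany(
    "INSERT INTO comments (comment) VALUES (?)",
    [
        ("Win big at our ONLINE CASINO! http://spam.example",),
        ("Great post, thanks.",),
        ("online casino bonus inside http://spam.example",),
    ],
)

# One parameterized DELETE clears the whole batch in a single statement
# (SQLite's LIKE is case-insensitive for ASCII, so both spellings match).
cur = db.execute("DELETE FROM comments WHERE comment LIKE ?", ("%casino%",))
deleted = cur.rowcount  # number of comments removed
```

Hours of clicking, replaced by one statement.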
And I wasn't the only one or even nearly the first to get hammered like this, either. Someone must have stumbled onto a directory of Drupal sites. Which highlights the problems of distributed login networks like IXPS and the Drupal distributed login system. But I digress....
So I've installed Drupal's spam module. It adds a Bayesian filter, configured to weight URLs more heavily than plain text. It also checks to see if the poster is coming from a known email relay and scans URLs to see if they've appeared in a previous spam message.
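Just to make the mechanics concrete -- and this is a toy sketch, not the spam module's actual code; the class name and the URL weight of 3 are invented for illustration -- Bayesian filtering of this sort boils down to comparing per-token spam and ham frequencies, with URL tokens counted extra:

```python
import math
import re

def tokenize(text):
    # Split into tokens; tag URLs so they can be weighted separately.
    tokens = []
    for word in re.findall(r"\S+", text.lower()):
        if word.startswith("http://") or word.startswith("www."):
            tokens.append(("url", word))
        else:
            tokens.append(("word", word))
    return tokens

class NaiveBayesFilter:
    def __init__(self, url_weight=3):
        self.spam_counts = {}
        self.ham_counts = {}
        self.spam_total = 0
        self.ham_total = 0
        self.url_weight = url_weight  # URL tokens count extra

    def train(self, text, is_spam):
        counts = self.spam_counts if is_spam else self.ham_counts
        for kind, tok in tokenize(text):
            weight = self.url_weight if kind == "url" else 1
            counts[tok] = counts.get(tok, 0) + weight
            if is_spam:
                self.spam_total += weight
            else:
                self.ham_total += weight

    def spam_score(self, text):
        # Log-likelihood ratio with add-one smoothing; a score above
        # zero means "looks more like spam than ham".
        score = 0.0
        vocab = len(set(self.spam_counts) | set(self.ham_counts)) + 1
        for kind, tok in tokenize(text):
            weight = self.url_weight if kind == "url" else 1
            p_spam = (self.spam_counts.get(tok, 0) + 1) / (self.spam_total + vocab)
            p_ham = (self.ham_counts.get(tok, 0) + 1) / (self.ham_total + vocab)
            score += weight * math.log(p_spam / p_ham)
        return score
```

The point of the URL weighting is that comment spam exists solely to plant links: the text around the link varies, but the link itself keeps showing up, so making it count for more lets the filter converge after just a few samples.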
All very gameable checks, but I won't complain that the heuristics aren't any good because I can't think of better ones. But I can think of a sure-fire way to stop comment spam dead in its tracks: A captcha. Captchas are those graphical codes that you have to enter on Yahoo and PayPal and some other secure sites; the letters and numbers are often warped or mangled to make them machine-unreadable. Inserting a captcha into the comment-posting interaction would make Bayesian spam filtering redundant. At least, for now.
And I'm not the first person to think of it. Apparently, though, there are some technical issues with the comments API that make it difficult to insert captchas using the existing comments module. For now, at least, I'll be able to take pleasure in tagging the new comments as spam.
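The server-side handshake behind a captcha is simple to sketch, even leaving aside the hard part (rendering the code as a warped, machine-unreadable image). Everything below is a made-up illustration, not how any particular comments module does it: issue a random code, store only a hash of it, and compare hashes when the comment is submitted:

```python
import hashlib
import secrets
import string

def new_challenge(length=6):
    # Generate the code the user would see (in a real captcha, drawn as
    # a distorted image) and the hash the server keeps in the session.
    code = "".join(secrets.choice(string.ascii_uppercase + string.digits)
                   for _ in range(length))
    token = hashlib.sha256(code.encode()).hexdigest()
    return code, token

def verify(user_input, token):
    # Hash what the user typed and compare against the stored token;
    # case-insensitive, since warped glyphs make case hard to read.
    digest = hashlib.sha256(user_input.strip().upper().encode()).hexdigest()
    return secrets.compare_digest(digest, token)
```

The bot never sees the code in machine-readable form, so there's nothing for it to echo back -- which is why this stops automated comment spam dead, at least until the spammers start paying humans (or better OCR) to solve the challenges.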
The site will be a bit disorganized -- for a few days, probably -- while I'm working out the kinks in the Drupal 4.5 upgrade. We'll be missing the nifty display of taxonomy terms until I can rework my old themes for compatibility with the new environment. Until then, please enjoy the simplicity of the generic "Chameleon" theme...
Update: Quotes are back, and I slid in a logo -- maybe I'll keep this theme for a while. Grey was getting a bit heavy, anyway...
Update 2004-12-12: I discovered that Fantastico's version upgrade feature deleted most of the content of my website, including all my old blog posts and a bunch of other stuff I had on the site. (Like my '99 April Fools gag, which is a little too funny and takes a few too many liberties with certain trademarks and copyrights for me to post a link in the clear -- suffice to say that experiments with adding "/af99/index.htm" might yield a few moments of nostalgic amusement...) All of that is back, now; not sure how I'll deal with that in the future. It's a damn good thing I took a backup before I upgraded....
Andrei Codrescu is thinking of taking up golf.
"I used to be philosophically opposed to golf, and said mean things about it. I said that the homeless of the world could be housed on America's golf courses. I proposed moving Calcutta to Palm Springs. That was wrong. ... It's just that, at the time, in the late '90s of the last century, I didn't understand the game: It's played in the daytime, with your pals, not at night with loose women."
-- Andrei Codrescu (on All Things Considered, 2004-12-02) (Listen via RealAudio)
I respect a man who can change his mind. Especially when I know he also appreciates games played at night, with loose women. And hey, after all, he's considering the Kabul Golf Club for his first game.
On Morning Edition this morning, Ken Turan describes Closer as lacking the emotional depth it seems bound to portray; I wonder both if he's not missing the point, and if those who would accuse him of missing the point aren't missing one of their own.
Turan describes characters whose "adult problems [torrid and tortuous confessions of adultery, intercepted emails, eavesdrop't IM sessions, etc.] are fake adult problems" -- but one could argue (and I'm sure some will) that all such problems are "fake adult problems" in any number of senses: In that they're not really "adult"; in that the (implicit) distinction between "adult problems" and "childish problems" is faulty; and most to the point here, in that there's a good likelihood that "real world" problems of this sort are just as fake as these, in their own way.
It seems to me that the most profound truth about humans is that we keep going on -- we keep "living through it". The horror of the moment will pass. The great love that we'd die without....we live without. Life is not as Young Werther would have experienced it -- but Young Werther had the great fortune of being a fictional character. The rest of us have to live with the failed expectation of getting over our one true love, of waking up in the morning and not thinking first of our absent loved ones; of wanting things we ought not want.
The rest of us, in short, have to live with living through it. And of then not feeling guilty about it.
Or not. One way that people manage to avoid that guilt is to never get "through" it. Creating and perpetuating new dramas all the time, to keep life interesting. Drumming up a sense of guilt for feeling better.
It can be seen as adaptive: Without the drama, without the interest, some people just lose interest in continuing.
Problem: You're a school administrator who's being judged on your graduation rate; at the same time, you're having your resources depleted, and you're losing the capability to improve that graduation rate. What do you do?
Solution: Persuade the bad students to drop out. That seems to be the method of choice in Orange County, Florida, at least according to a special report [streaming video available at transcript page] on Tuesday's News Hour. Here's how it works:
What really matters, of course, is whether this works for the kids. That's not as easy to tell as one might hope: Apparently there aren't any ready statistics on how many kids who are "referred to" GED programs actually enroll in them, and since attendance isn't mandatory, there's no way to tell how many of the kids who do enroll actually go. The best evidence available, though, indicates that most who are referred don't enroll; and of those who enroll, most don't attend classes.
In the neo-Calvinist ethos that drives programs like No Child Left Behind, there's no problem here: The kids "referred" to the GED program are exercising free choice, and paying the consequences for their actions. In my experience, though, it's a very rare 16- or 17-year-old who's fully qualified to assess the pros and cons of stepping off the main rails of The System and onto the free and open road. If someone is there telling them a happy tale designed to serve their own ends -- whether it's a guidance counselor, recruiting officer, MLM huckster, what have you -- we can't be surprised if they fall for it.
Our judgement, our understanding, are not adequate. We kill people, unnecessarily. Wilson said, "We won the war to end all wars." I'm not so naïve or simplistic to believe we can eliminate war. We're not going to change human nature any time soon. It isn't that we aren't rational -- we are rational -- but reason has limits. There's a quote from T.S. Eliot that I just love: "We will not cease exploring. And at the end of our explorations, we will return to where we started, and know the place for the first time." And that's in a sense where I'm beginning to be.
-- Robert Strange McNamara, The Fog of War: Eleven Lessons from the Life of Robert S. McNamara