
SocioTech

Viridianism, technology & social change -- tech as it affects human life, etc.

Feral Residents of the Panopticon

What's it like to live inside the panopticon? Here's what I just wrote at the 'bot.

What's the effect of this kind of life? No doubt the people who brain-farted the idea for this kind of a system in the first place would respond at this point that they are putting eyes on the street, they're addressing "lifestyle crime" (littering, loitering, miscellaneous minor malfeasance), and that the net effect is to get, through technology, what Jacobs asked for in the 1960s. But an honest appraisal would have to recognize that response as disingenuous. The voice is detached, judgmental, and doesn't brook response -- doesn't even afford it, since there are no pickups (that the security company is admitting to) on the cameras. It can't possibly work to provide the kind of human-scale, person-to-person interaction that happens in the relatively messy but relatively safe neighborhoods of the real world.

Residents of the Panopticon | FeralRobots.net

What Is Open Source Warfare?

From John Robb, who seems to have coined the term "open source warfare":

[The Iraqi] insurgency isn't a fragile hierarchical organization but rather a resilient network made up of small, autonomous groups. This means that the insurgency is virtually immune to attrition and decapitation. It will combine and recombine to form a viable network despite high rates of attrition. Body counts - and the military should already know this - aren't a good predictor of success.

Given this landscape, let's look at alternative strategies. First, out-innovating the insurgency will most likely prove unsuccessful. The insurgency uses an open-source community approach (similar to the decentralized development process now prevalent in the software industry) to warfare that is extremely quick and innovative. New technologies and tactics move rapidly from one end of the insurgency to the other, aided by Iraq's relatively advanced communications and transportation grid - demonstrated by the rapid increases in the sophistication of the insurgents' homemade bombs. This implies that the insurgency's innovation cycles are faster than the American military's slower bureaucratic processes (for example: its inability to deliver sufficient body and vehicle armor to our troops in Iraq).

Second, there are few visible fault lines in the insurgency that can be exploited. Like software developers in the open-source community, the insurgents have subordinated their individual goals to the common goal of the movement. This has been borne out by the relatively low levels of infighting we have seen between insurgent groups. As a result, the military is not going to find a way to chop off parts of the insurgency through political means - particularly if former Ba'athists are systematically excluded from participation in the new Iraqi state by the new Constitution.

Third, the United States can try to diminish the insurgency by letting it win. The disparate groups in an open-source effort are held together by a common goal. Once the goal is reached, the community often falls apart. In Iraq, the original goal for the insurgency was the withdrawal of the occupying forces. If foreign troops pull out quickly, the insurgency may fall apart. This is the same solution that was presented to Congress last month by our generals in Iraq, George Casey and John Abizaid.

Unfortunately, this solution arrived too late. There are signs that the insurgency's goal is shifting from a withdrawal of the United States military to the collapse of the Iraqi government. So, even if American troops withdraw now, violence will probably continue to escalate.

What's left? It's possible, as Microsoft has found, that there is no good monopolistic solution to a mature open-source effort. In that case, the United States might be better off adopting I.B.M.'s embrace of open source. This solution would require renouncing the state's monopoly on violence by using Shiite and Kurdish militias as a counterinsurgency. This is similar to the strategy used to halt the insurgencies in El Salvador in the 1980's and Colombia in the 1990's. In those cases, these militias used local knowledge, unconstrained tactics and high levels of motivation to defeat insurgents (this is in contrast to the ineffectiveness of Iraq's paycheck military). This option will probably work in Iraq too.

In fact, it appears the American military is embracing it. In recent campaigns in Sunni areas, hastily uniformed peshmerga and Badr militia supplemented American troops; and in Basra, Shiite militias are the de facto military power.

If an open-source counterinsurgency is the only strategic option left, it is a depressing one. The militias will probably create a situation of controlled chaos that will allow the administration to claim victory and exit the country. They will, however, exact a horrible toll on Iraq and may persist for decades. This is a far cry from spreading democracy in the Middle East. Advocates of refashioning the American military for top-down nation-building, the current flavor of the month, should recognize it as a fatal test of the concept.

The Open-Source War By JOHN ROBB

For me, this is as interesting for its flat assertions about the nature of the Open Source ("[F/]OSS") movement as it is for Robb's clarification of the term as it applies to warfare. There's some very interesting -- perhaps revealing -- language here. I can remember reading John Robb a few years back, but I don't remember anything in particular that made him stand out from the other tech-bloggers I was reading at the time. Here, he's saying some things that are different, that not everyone else (in the tech-blogging "community", at least) is saying.

For example, he's acknowledging the success of IBM, and how they got it: By 'letting the enemy win,' or more precisely, by buying the enemy their uniforms. IBM spends a ton of money on Open Source development. No other company, with the arguable exception of Google, has as strong a claim in Open Source councils.

Another example: While he seems to praise with one hand, he does something very interesting by tossing IBM into the same metaphorical stew with the right-wing Salvadoran and Colombian militias, trained to do the nastiest kinds of dirty work by our own CIA at our own School of the Americas. Folks at IBM who get the allusion might well be pissed off by it; I expect it's intended not as an insult, but rather as a precise analogy. The analogy bears expansion, though, because most Americans are woefully ignorant of their own history -- especially the small and dirty parts of it like what the Salvadoran militias (and, hell, their regular military) actually did to their own people, with our help and encouragement. If Robb is right, we're in the process of doing something very similar, again, and this time on a far larger scale.

Cyberspace as Woodstock Nation

Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather. -- John Perry Barlow, 1996

In-thread at Web 2.0 ... The Machine is Us/ing Us | MetaFilter, at 12:17 PM

Yet another example of the happy horseshit approach to social activism: Put an absurd stake in the ground and hope that it makes people come that much closer to what you want.

Of course, Barlow never got what he said he wanted, but there are enough new Web 2.0 toys floating around that let people do superficially cool collaborative things that Barlow's probably pretty assuaged, most of the time. Meanwhile a new post-industrial market has co-opted Barlow's Cyberspace (though, since they've been paying for it, maybe just plain "bought" is a better word), and governments like that of China have been doing a good-enough job of exercising sovereignty where its cyber-citizens gather.

Behold the Agonizer

"You asked me once," said O'Brien, "what was in Room 101. I told you that you know the answer already. Everybody knows. The thing in Room 101 is the worst thing in the world." -- George Orwell, 1984

In the coverage at Wired of the Air Force's new Active Denial System for crowd control, I didn't see any mention of the agonizer. And yet, that's what it is, more or less: A device that induces searing, burning pain that's so intense, subjects cannot help but struggle to get away from it.

It works via millimeter-wave radiation. Wired (courtesy of the Sunshine Project) has thoughtfully provided a rundown of publicly available documentation. On a quick scan, it's hard to tell whether the pain is caused by heating in the skin or by some other interaction between pain-nerves and millimeter-wave radiation. But prolonged exposure to the beam can cause second-degree burns, so heating definitely does occur.
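For a rough sense of why the effect concentrates at the skin: millimeter waves attenuate exponentially in tissue, dumping nearly all of their energy in the first fraction of a millimeter -- right where the pain receptors sit. A back-of-envelope sketch in Python (the ~0.4 mm penetration depth for the system's reported ~95 GHz beam is the commonly published figure; it's an assumption here, not something out of Wired's document dump):

    import math

    # Rough illustration of why a millimeter-wave beam heats skin rather
    # than penetrating the body. Assumes the widely reported figures for
    # the system: ~95 GHz, with a 1/e penetration depth of ~0.4 mm in skin.
    PENETRATION_DEPTH_MM = 0.4  # assumed; published estimates vary

    def fraction_absorbed(depth_mm):
        """Fraction of incident power deposited above depth_mm, using
        simple exponential attenuation: I(x) = I0 * exp(-x / delta)."""
        return 1.0 - math.exp(-depth_mm / PENETRATION_DEPTH_MM)

    print("absorbed in top 0.5 mm: %.0f%%" % (100 * fraction_absorbed(0.5)))  # ~71%
    print("absorbed in top 1.0 mm: %.0f%%" % (100 * fraction_absorbed(1.0)))  # ~92%

If those numbers are anywhere near right, essentially all of the beam's energy lands in the layer of skin where the nerve endings are, which squares with both the pain response and the burn risk.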

And there's also no mention in Wired's coverage of the applications for torture, which are painfully [sic] obvious to me. An uncreative sadist would give a victim second-degree burns by leaving the beam focused too long in one spot. A creative sadist would hack together something like Gene Roddenberry's agony booth, to move the focus of radiation around to different sets of nerve endings, in order to reduce the effect of neurotransmitter depletion. After an hour or so, I'm quite sure just about anybody would be willing to tell us whatever we wanted to hear as long as it makes the pain stop. In Room 101, the man who works the latch on the rat cage is your god.

A vehicle-mounted version is apparently being tested in Iraq right now. I'm very, very curious to know what Iraqis will make of it. I think they'll get the torture angle right away. And since the technology is pretty easy to replicate, I can envision disreputable dictatorships throughout the world deploying copycat devices in the near future.

Robots Of The Oil Swamp

Sitting in the pondering place, I pondered this: Where does vegetable oil come from?

The answer, of course, is that plants make it.

We have an oil-based economy, and we're running out of oil. But that's just the "mineral" petroleum, the stuff that's prehistoric. What about the stuff that the plants make?

Sure, plants can't make enough. It would be just like some nay-sayer somewhere to point out the number of acres we'd have to plant in Canola in order to make enough oil to fuel a single fleet of city buses. They'd probably say it's not cost effective, and they'd probably be right. But what about bio-engineering? How does the Canola plant make it? Or the Hemp plant, or the Olive tree, or any other plant? And what's to stop us from bio-engineering an organism to do just that?

Plenty of things, I'm sure, but most of them are moral or entail engaging foresight, and western capitalism doesn't have much history of respecting moral reasons. Or of thinking beyond the end of the depreciation cycle.

In any case, it's true that plants are very good at processing natural materials into more complex and very different natural materials. For example, they can make oil from organic waste. Or from cellulose. But plants are clearly not efficient enough. To even begin to feed the demand for fuel and synthetic plastics, we would need to operate at fairly high levels of efficiency. Fields of canola, regardless of how verdant, would not cut it.

But foetid swamps full of bacteria just might. To get the volumes we need, we would need to use open spaces, like swamps. We could digest whole forests, whole biomes, of cellulose, turn them into swamps, to get the hydrocarbons we want.

Gaseous hydrocarbons or light alcohols would probably be better for generation purposes, to drive our fuel cells, but we'd still need long-chain petrochemicals to make plastic. So I could envision different "crops," including even some semi-refined plastics.

Some of those crops would be quite hostile to life. The biological processes would most likely generate some rather toxic byproducts. And at the point where this type of production becomes necessary, I have to wonder whether the people who did it would care. These would, after all, be people arrogant enough to farm oil in an open swamp. If the global climate is sufficiently broken, all care might be thrown to the hot, dry winds. Or the fuming, damp winds, as the case may be, as we loose our hydrocarbon-synthesizing organisms onto the world and let them digest its organic waste matter into fuels.

I could envision great, sealed cities on the edge of seething hydrocarbon swamps habitable only by the most adaptable of organisms, and tended by fleets of fragmentarily sentient fuel-cell powered robots. Eventually, the robots might form their own cities (or be organized into them by a retreating humanity), existing only to tend (and perhaps contain) their swamps.

These robot cultures would evolve; they would not remain static. Evolution would apply to them as it does to us. This is where the admonitions of the Singularitarians would apply, because eventually our machines, once we are no longer an active influence upon them, will have to find their own reasons for living.

WESun 0: Mainstreaming Singularitarianism

This morning on Weekend Edition, The Singularity rears its ugly head in the persons of Vernor Vinge (who coined the concept) and Cory Doctorow. It's another manifestation of our increasing dread in the face of technological change, and the increasing degree to which we approach that change in irrational ways: in the Vingean scenario, as a rescuing parent; in the Doctorovian vision, as a superseding successor.

Doctorow posits the scenario of a modern human interacting with a pre-literate human: That they would be "in a sense, different species." That they and we would have "nothing to talk about." Maybe he was clearer in the un-aired portions about what's meant by "literate", but unless it means "without language" (and one would expect the word chosen for that to be "pre-linguistic"), he's clearly overstating his case. We can easily talk with "pre-literate" or even "illiterate" people, because there remain between us basic human consistencies that will not be removed by any extropian enhancements which we can plausibly predict.

It's a badly chosen analogy, to be sure, and surely one can be forgiven for choosing analogies badly, no? No. Because the craft of science fiction as gedankenexperiment is all about precision -- or at least, insight -- in your analogies. We need to remember that the beings making the singularity are humans. The aspects of the singularity that are truly, deeply a-human, are not likely to persist in that form. They're likely to get reshaped, recrafted, in some kind of human image.

I think Doctorow's analogy illustrates the most fundamental problem with Singularity Theory, in that it is often a failure of a certain kind of imagination: Empathy.

Vinge posits a more traditional scenario, in a way, as a revisitation of the Jack Williamson nightmare -- but with Williamson's logical problems fixed. Vinge's singularity-intelligence is more of a savior than a successor. A lost parent, restored, if you will. Clarke's technological god. Maybe it can save us from global warming.

Doctorow's singularity-beings are replacements, successors. They are what we are not -- they overcome our weaknesses, and supersede us. There's a sense of mingled dread and fascination in the prospect. I'm still trying to understand how to talk about the impulse. I feel it, myself, to be sure, but I don't have a pat name for it.

Sterling's critique still seems sound. (See his short essay in Wired; longer talk at the Long Now Foundation, as MP3 or OGG or as a summary.) He points out (among other things) that the singularity-being will not come about entirely by accident. It will come about through our choices, and some of those choices will tend to constrain the singularity-being.

Rise of the soft-phone?

Someone has finally noticed [via SmartMobs] that any PocketPC or Palm OS 5 PDA has the power to become a VOIP phone. But have they noticed yet what the consequences are? I think they have, and they're just keeping quiet about it because they're hoping that their competitors won't figure it out first and out-maneuver them.

But let's play this out. Let's say I go into my local coffee shop with free WiFi, whip out my PDA, fire up the softphone, and start talking. I'm not paying anybody for anything, except my coffee refills.

So something's gotta give:

  • The activity will drive up traffic on the coffee shop's connection. They pay more; maybe they need to switch to a paid service. This induces a competitive spiral with other coffee shops that leaves only Starbucks standing.
  • Carriers will raise rates on their bandwidth. Yes, it's a plentiful commodity right now, but if everyone uses VOIP instead of their mobile carrier's network, broadband traffic balloons all of a sudden and bandwidth isn't plentiful anymore. Price goes up.
  • Softphone networks like Skype add such a load to the net that something has to be done to curb them, or to recoup losses. New fees arise.
  • Mobile phone vendors switch to softphones to preserve market share, and we finally see that shift to high-speed wireless we've been promised.
  • Municipal WiFi networks currently in development gain a whole new significance.
All of this is terribly destabilizing. The softphone, while itself a minor innovation, will drive structural changes across the entire infrastructure that are far out of proportion to its direct impact.

Ultimately I'm thinking we see a flattening of offerings; everything being done via IP (or its equivalent). Phones only actually use "phone" technology in areas where it's not cost-effective to switch over. Phones become a flexible concept in this scenario, so something would have to be done about that. (The beauty of the phone as a communications medium is the individualized, static "Phone Number": You want someone, you call their Phone Number. Elegant. Simple. Took generations to evolve to its current form and market dominance, and is likely to be the driving metaphor for whatever replaces it.)

Airplane Chatter

SF Gate's Mark Morford is one columnist who can make me chuckle even before reading his column. Sometimes just reading the title of his article will make me laugh out loud.

Today's gem was "Hang Up Or Get Off The Plane, Using cell phones on flights: Great idea, or the last, horrible gasp of human decency?" Add to that two advertisements flanking his article, one for cell phones and one selling great deals on airline tickets, and I needed more than a moment to compose myself before reading the article.

I can imagine many people relate to Morford's words. In fact, per the survey on another page also surrounded by cell phone ads, at the moment 80% of respondents feel that phones on planes are obnoxious.

It would be interesting to have another poll to see how many people agree with Mark Morford that "we as a species are just so wondrously, incredibly -- what do you call it? Oh yes: boring."


Keeping America Safe From British Novelists

There's a perfectly good explanation for why US Customs refused Ian McEwan entry to the country. It wasn't because he was dangerous; nor was it because he was deemed to pose some kind of terror threat. Nor was it because someone thought he might be a journalist instead of a tourist. It apparently wasn't even because he disagrees with US climate policy and doesn't mind saying so in public.

It was because he was going to make too much money. It seems the honoraria for his series of Seattle-area speaking engagements totaled a wee bit too much. So he needed to have a work visa, not a tourist visa.

So they stamped his passport "Refused Entry." "Once that stamp gets in a passport, it's difficult to get it out," said Britain's Consul General for Vancouver, James Rawlinson. "The process of reversing that is not merely a matter of crossing that out. Reversing that requires referrals to Washington, D.C., and the headquarters of the State Department and Homeland Security. It gets rather heavy."

Oh, well -- at least when you're a well-known novelist like Ian McEwan, people [who matter] might miss you.

[via Bruce Sterling's Viridian Note 00440]

Convergence Through Desire

I'm sitting here in Spot Coffee looking out over the scene. I'm blogging from a coffee shop: I'm officially ... something. Not a geek, anymore, because convergence activities like logging on to the net through a coffee shop's hotspot are now officially mainstream and mundane, at least if you believe that TV reflects reality.

Which is my point, as I remind myself not to bury my lede: Convergences that actually lead somewhere tend to come not from planning toward goals, but from the accidental confluence of opportunity with desire. As a case in point, consider the Archos PMA 400.

This whole coffee-house laptop thing... how did I miss out? It was a matter of not having converged the right equipment. I've puttered at doing this for a long time -- my friend Pete's laundromat even has a hotspot -- but have tended to feel a little sour-grapish over the whole deal, since my equipment has made it a challenge: My laptop has a tiny keyboard (I've gotten used to that) and a small, dim screen; if I brightened the screen to make it readable, the battery life was relatively poor. Battery life already suffered because with only 128MB of memory, the laptop was constantly thrashing the hard drive to swap in and out of RAM. And I always seemed to have problems connecting to the WiFi hotspots.

Well, thanks to eBay I now have a bigger, stronger battery and another 128MB of RAM (both a third or less of last year's price), with updated software for my WiFi card, and I'm blogging from a coffee shop. I've leapt squarely and soundly into 1999. Or something like that.

Which brings me, roundabout, to my point. This was really a convergence issue. It was always a high-status behavior, hooking up to the net from open hot spots, but like most high-status activities, not many people really did it. Which is, of course, what's made it a high-status behavior, at a certain level.

Well, now the barriers to entry are much lower: Most open networks don't charge for connections (at least not at the moment), which we can chalk up to the proliferation of cheap bandwidth. (That will change, but we've got tons of dark fiber out there still going unused.) Good quality portable computing hardware is cheaper and lighter, and the social acceptability of hauling out a laptop has increased; now it seems relatively benign next to loud mobile-phone conversations. Networked communication from a hotspot has become easy and cheap enough that lots of people can do it, but it still hasn't outgrown its chic-factor. (And it will be slow to do so, by the way, due to latent education factors -- but that's another story for another time.)

This is the crux of it, I think: Convergence will only happen below a certain fairly low price-point, and will be driven by desire, not by need. Blogging got big when it broke $10/month (or thereabouts), and nobody really needs to blog; WiFi got big when it got free and you didn't have to buy a card for your laptop. And of course, nobody really needs to network from a coffee shop.

Convergence devices like wireless handhelds will break through, too, and soon. It will happen when you can buy the device at little (or no) additional cost over what you would have spent anyway: It will happen when you can get a thing that you wanted for some completely other purpose, and have it bring along wireless connectivity or email or word-processing as a bonus.

My thoughts turned to this train a few days ago when Gizmodo posted a note from CES about the new "convergence device" from Archos, their "Personal Media Assistant [PMA] 400" -- a Linux-based variation on their AV400 "pocket video recorder". It's a toy calculated to make geeks salivate, hitting almost all of the key requirements for a high-end PDA (color screen, built-in 802.11g wireless, browsing and email capability) along with one thing that no conventional PDA has, yet: a 30GB hard drive.

And the best part, from Archos's perspective, is that most of this capability would be there whether they wanted to make this thing into a PDA or not. Because it's not primarily a PDA. It's primarily a multimedia time-shifting device, a la TiVO, but without many of TiVO's restrictions. It includes WiFi because WiFi would make it easier to integrate into 802.11g-based home multimedia networks, not because it would make it a killer toy for the coffee-shop set. And yet, that's what it will be.

There have been lots of chances for convergence, and they've mostly appeared to founder on the cost of mass storage or on battery life. Well, mass storage is now absurdly cheap; and low-power components have met improved batteries halfway to more or less solve the power problem. And battery life shouldn't have been an inhibitor to convergence for the most likely candidates, the game platforms. A PS2 or GameCube has plenty of computing power for the job, at a much lower cost than most PCs. Why not hook them up to hard drives and keyboards and have a computer? Why, indeed; it's a mystery. So, here we have a device (a multimedia time-shifter) that's basically a general purpose computer; and contrary to the usual trend, its makers decide to go the distance and make it, of all things, a general purpose computer. Why should this be different from the false starts from Sony or Nintendo?

Perhaps because this one is personal; perhaps because this one is "adult." Games are still socially marked as "juvenile", even though the majority of players are adult. But music, TV, movies: Those are adult pastimes.

There have been lots of attempts to make a "computer for the masses." They've ranged from the geeks-only Timex Sinclair 1000, back in the dawn of the personal computing era, to more recent efforts driven by Microsoft and others. Perhaps the most radical attempt was the Simputer, which re-thought not only the user interfaces but the form factors and the assumptions about use.

The first commercial Simputers are nice, elegant devices; but they're still too expensive, and don't come near addressing their designed audience. They're toys for well-off Indian technophiles, not the village computer they were designed to be. The PMA400 is in many ways much like a Simputer with a hard drive and much less noble goals. This device isn't intended to bring computing to the masses; it's intended to bring this week's "Survivor" or "ER" or "Six Feet Under" to the departure lounge. It didn't come from any high and noble goals. Instead, it came from a desire to be entertained.

And yet, the PMA400 has everything, literally everything, that's needed in a basic -- and even a bit more than basic -- personal computer. It's networked; it's based on an open platform with standard and open APIs, so there's already a lot of software that will run on it; it's got (LOTS of) mass storage; it can take keyboard (and presumably mouse) input; it can accept removable mass storage. It can probably even be hooked up to a printer via USB.

I don't have any illusion that Archos will make a huge success out of this; that's just not in their corporate DNA. But this device can be the model for the true "people's PC" that IBM, Microsoft, and others have been jousting at for years. The question is whether a company as clever as, say, HP or Creative Labs or Nintendo can be clever enough to see the opportunity and seize it. Don't look to Apple or Sony or Microsoft for this device by the way: They have a vested interest in keeping the personal computing devices big, relatively costly, and relatively non-convergent.

SixApart Plus LiveJournal: The New Elephant In The Room

SixApart have announced they're acquiring LiveJournal in a friendly takeover. This is actually bigger news at a cultural level than Microsoft breaking in with "MSN Spaces" or even than Google acquiring Pyra.

Whether the merger can be successful at all will hinge largely on how seriously the "bloggers" at SixApart take the "LiveJournalers", but there are powerful synergies to be achieved here that I'm not sure either SixApart or LiveJournal really understands. There are significant cultural differences between the two "communities" that are commonly parsed as socioeconomic (by the LiveJournalers) and generational (by the Movable Type-focused bloggers). There are lots of dimensions to the cultural split, and of course it's often an error to speak of statistical humans, but the more salient long-range divide is really hands-dirty versus hands-clean: Do you open the hood, or do you rely on your mechanic? Do you mod your vehicle (or PC case or backpack), or do you leave it as-is? And when you mod, are you picking from a menu, or thinking up ideas on your own?

And that's the dimension on which the new, merged SixApart-LiveJournal entity will attain success or not: The continuum from commodity to customization -- from people who are content to buy and use off-the-shelf to the country of the hard-core modifiers. LiveJournal is off the shelf, with essentially menu-driven site customizations that are still strongly branded as "LiveJournal" sites. Movable Type, and TypePad to a lesser extent, are under-the-hood affairs, capable of driving rich visual and functional customization. They're right that they don't need to merge the products or the codebases -- the merger of the two organizations will succeed at a basic level if they can overcome cultural biases. But if they can learn to move fluidly (and cost-effectively) along that continuum from commodity to customization, they will morph into a truly powerful challenger to established players, and maybe even a cultural force to be reckoned with.

This is more than mass-customization redux; it's really the first true-coming of a model that was heralded by Saturn in the '90s, but it goes beyond the product delivery to the customer's desire to make the "product" their own. Penn Jillette sang an early paean to this desire back in 1990, and Toyota recently started a whole division based on the idea that what you might really want to do is plug stuff in after the fact. But hey, they'll be happy to let the dealer do some value-adds for you, too...

But back to the merger. Technical issues are certainly important. Mena Trott plays up LiveJournal's experience with scalability, and that's important for SixApart: TypePad is probably as scalable as Movable Type could be made in the relevant timeframe, but my sense is that it doesn't achieve the economies of scale they'll need to accommodate 30 million new bloggers a year, and I'm sure this will have occurred to Ben Trott. They'll need to be cautious, though, about taking an overly architectural tack; considering recent advances in automation and system virtualization, it's probably more cost-effective (and almost certainly quicker to market) to build a big, comprehensive automation and virtualization infrastructure than it is to re-architect Movable Type for scalability. (Incidentally, that approach would also give them better traction while moving back and forth on that critical commodity-customization continuum.)

All this having been said, I think it's an even bet whether or not SixApart will "get it" enough to really synergize their merger. They're really good with feedback, as their quick response to last May's license fiasco demonstrates. But they also have a history of making exactly the mistake that precipitated that problem: They try to retain too much control over their user base. I would have been a big fan of Movable Type in its early days, except for one little detail: Their license forbade any licensee from charging for customization services. "That's our business," they explained. "We make money doing that." I saw that as short-sighted, and time proved me right: There are now no such restrictions, and part of the reason is that people went out and went nuts modifying Movable Type, probably in many cases in violation of those license terms.

My point is that even though they corrected course, they did make that same mistake twice, and now they're saying things that lead me to believe they're missing some crucial points. So the real bottom line on the success of this merger might be whether people of more expansive vision will be guiding the course of the company, or whether they'll still be taking protectionist gut-checks at every step.

Office IM As A License To Bully

Workplace IM is one of those ideas that just won't die. It's made the Red Herring, and it's officially made it into my corporate workplace, so I'm afraid we're not going to see this one just fade away like the fad it should have been.

"The real questions will be whether supervisors seek to employ IM as a monitoring tool," said Jonathan Zittrain, professor and co-director of the Berkman Center for Internet and Society at Harvard Law School.

With IM, employers would be "able to ping employees at any moment, with a very low threshold since no one has to pick up a phone or wander over to a desk," he said. "Employees who would object immediately to a camera monitoring their desk feel IM is far less intrusive."

[Red Herring, "I work, therefore, IM"] [via SoulSoup]

I work for a Major Staffing Services Company, and some folks around here use IM all the time. And they do it for one simple reason: It lets them make people jump. That's why people like it: It gives them power over others. So IM is really just another manifestation of the schoolyard-bully meme that's becoming so prevalent in modern American business. Previously, IM use has been limited to the population of people servicing one particular large client who has an expectation that we use their IM software; now, Corporate IT has pushed MS Messenger out to all of the Corporate-imaged computers, so I know there will be an expectation that we start using it. (Fortunately, my Corporate-imaged computer is so piss-poor that I don't use it, so I get a pass, for a while, at least.)

I had to interact with that sub-population a lot during a recent launch phase, and I can tell you with great confidence that getting the message to me via IM did not improve their chances of a quality resolution. In fact, it arguably decreased them, because I would continually have to shift focus to deal with new problems.

On average, it takes something like seven minutes to recover context and return to the previous state on a complex task after you take an unexpected phone call; the numbers for IM can't be a hell of a lot better. So IM is terribly disruptive and time-wasting.
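To put rough numbers on it (the interruption counts below are hypothetical; the seven-minute recovery figure is the one cited above):

    # Back-of-envelope cost of IM interruptions, using the roughly
    # seven-minute context-recovery figure cited above. The interruption
    # counts are hypothetical, not measurements.
    RECOVERY_MINUTES = 7

    def hours_lost(interruptions_per_day):
        """Hours per day spent just recovering context."""
        return interruptions_per_day * RECOVERY_MINUTES / 60.0

    for n in (5, 10, 20):
        print("%2d IMs/day -> %.1f hours lost to context recovery" % (n, hours_lost(n)))

Ten pings a day is already more than an hour of pure recovery time, and that's before you count the interruptions themselves.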

Put succinctly: IM is a thoughtfulness-killer. There truly are very, very few business needs so urgent that you have to IM someone, yet not important enough to warrant picking up the phone. Yes, the phone takes more prep and concentration. That's a good thing; it means you use it less.

Since it's so commonly a power-trip, IM also drives the workplace further in the direction of being a war of all against all. That's not how work gets done. Work gets done through a combination of cooperation, and letting people get work done.

Wikipedia: The Latest Threat To American Civilization

It's fashionable in many circles to trash on Internet information resources. And worst is any information resource that's driven by "community." Take the recent story from the Syracuse Post-Standard by would-be technopundit Al Fasoldt.

Wikipedia, [Liverpool High School Librarian Susan Stagnitta] explains, takes the idea of open source one step too far for most of us.

"Anyone can change the content of an article in the Wikipedia, and there is no editorial review of the content. I use this Web site as a learning experience for my students. Many of them have used it in the past for research and were very surprised when we investigated the authority of the site."

"I was amazed at how little I knew about Wikipedia," Fasoldt continues. I'm amazed at how little he still does. For example, he doesn't correct Ms. Stagnitta's fallacious assertion that there's "no editorial review". In fact, Wikipedia articles do, absolutely, receive editorial review. All the time. Twenty-four-by-seven.

That's how Wikipedia works.

The research required to correct this misapprehension wouldn't be difficult: Fasoldt (or Stagnitta) could start by scanning the Wikipedia Community Portal, look at the Wikipedia Village Pump for discussions of policy questions, or look at their Policies and guidelines entry. If he wanted to be really adventurous, and really interested in testing how reliable Wikipedia is, he could experiment by trying to hack the system and drive an inaccurate edit; if he did that, he'd discover that there is, in fact, editorial review -- it's just not performed by an anointed editor, but rather by people who might have some kind of actual knowledge of the subject. (Mike at Techdirt.com suggested such an experiment, and was rebuffed.)
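The review stream itself is easy to see: it's the Recent Changes page, patrolled around the clock. A minimal sketch of pulling it programmatically through the standard MediaWiki query API (the endpoint and parameters here are the stock MediaWiki ones on a current install, nothing specific to this story):

    import json
    import urllib.request

    # Pull the last few entries from Wikipedia's "recent changes" feed --
    # the stream that volunteer editors patrol continuously.
    URL = ("https://en.wikipedia.org/w/api.php"
           "?action=query&list=recentchanges"
           "&rcprop=title%7Cuser%7Ctimestamp&rclimit=10&format=json")

    req = urllib.request.Request(URL, headers={"User-Agent": "rc-demo/0.1"})
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)

    for change in data["query"]["recentchanges"]:
        print(change["timestamp"], change["title"], "-- edited by", change["user"])

Every one of those edits is visible to, and revertible by, anyone watching. That's the editorial review.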

But there's more at play here than sloppy research. In correspondence with Mike at Techdirt.com, Fasoldt used terms like "repugnant" and "outrageous" -- terms which are clearly driven by fear or anger (the latter in any case usually being driven by fear). So I have to sit here and ask myself: What is it about Wikipedia that inspires such fear and rage? And I think I know what it is. It's the very idea that information not sanctioned by some kind of official authority could be taken as reliable.

Because, after all, if information is "free", then information gate-keepers have empty rice-bowls.

Let's look for a moment at who's complaining: A high school librarian (well, we assume she's a librarian; Fasoldt's piece actually doesn't identify her as such), and a would-be pundit with a penchant for John Stossel-ish ranting. These are both people in eroding professions, most likely looking to avoid challenges from "authorities" who aren't designated as "authoritative" by membership in their guild. Heaven forbid that some student should rely on a Wikipedia article that's the collective work of three or four entomology graduate students at different universities, rather than one from Britannica that was written by one grad student and then signed by his advisor. Such things will certainly and truly cause the end of civilization as we know it.

This is another one of those false dichotomies that frightened practitioners of marginal professions use as leverage to get their heads screwed still deeper into the sand. Wikipedia is a good thing. It's not a good thing because community-driven content is an inherently good thing (though that last is almost true); it's a good thing because they do it well. That's partly a function of size and critical mass; but it's also partly a function of rigor in management. The rules get enforced, and editorial quality stays generally good, because like most successful "open source" projects, there's really a fairly high degree of central control in the areas that really matter.

It's easy to see why Wikipedia would be very, very threatening to a public school librarian; it's also easy to see why it could suddenly seem very threatening -- or, at least, like a blood-spotted chicken -- to someone who's set himself up to be a mediator for technical information. In the more "elite" echelons of librarianship and technical journalism (visit the reference desk at a good-sized college or public library for examples of the former, or read Dan Gillmor or ... for examples of the latter), the practitioners for the most part have a deep understanding that they are not gate-keepers, but guides. In the margins, that sense seems to get lost. Whether that's primarily due to the general noise of trying to make a living, or due to more petty fear of the future, is hard to tell -- and in any case, the two are probably not so often mutually exclusive.

All that said, and as a final word, the free and open creation and maintenance of public information resources by the public that uses them is an inherently good thing, provided the quality of the information remains high. In that sense, Wikipedia could and probably should be a poster child for the proper and proportional application of [American] Libertarian and Anarchist ideas. It's an example of the "direct action" of many participants aggregating into an objectively good result.

One final point: Curiously enough, the quality of information never actually seems to be at issue for Stagnitta and Fasoldt. You'd think that if they're so concerned about reliability of the information, they'd want to actually test the information. But they seem more focused on explaining why it couldn't possibly be reliable, versus testing whether it actually is. Well, I guess I can't expect them to be scientists.

ADDENDUM: I got some of the links wrong, herein. The original story lead was via BoingBoing, and that's where the terms "repugnant", "dangerous", and "outrageous" appeared.

Lesson For the Day: Make Sure They Want It Before You Try To Sell It To Them

When you highlight a community website, it's a good idea to check and make sure that it does actually have a community involved with it.

Case in point: Dan Gillmor heaped praise upon GoSkokie.com [alternate link] as a great example of "hyperlocal online journalism". As The Register UK points out, as of Gillmor's blog entry there hadn't been a posting in three weeks.

He'd have been better served to note a site I mentioned a while back in a comment on "Open Source Journalism", iBrattleboro. They're active, and they use the site for real community news. Of course, we have no idea how much of iBrattleboro's news might be the product of one feverishly detail-oriented brain, but the point remains that it could actually be useful to someone, where GoSkokie won't be. It clearly has no critical mass. (That Gillmor could cite this as a 'done-right' example without it having critical mass is pretty strange -- that should be the most obvious requirement for a successful community site.)

To me, the key and obvious difference between these two efforts is that the one that has traffic and posting activity was actually created by real, bona fide members of the community -- not by students at a journalism school working from a grand plan [1.5MB pdf]. At risk of seeming anti-intellectual: If you're not from there, it's incumbent upon you to explain why the locals should give a damn what you think.

Addendum: Dan Gillmor pointed out that he's featured iBrattleboro before; now that I think about it, he may have been where I heard of it... I'd say my memory isn't what it used to be, but I fear it never was.

Open Source Journalism

Sometimes I have to turn off my cynicism filter and take things at something like face value. The new experiment in "open source journalism" at Bakersfield's The Northwest Voice is a good example. Starting three months ago (May 2004), they began deriving their news content directly from community members, contributed via the web.

Northwest Voice describes itself as a "community newspaper", but since they're "carrier delivered" to 22,000 homes, they're clearly really a shopper. I.e., their emphasis is on the ads, with actual content only a sweetener to get people to actually leaf through. "Community newspaper" is more commonly applied to takeaway-distributed newspapers like our own City Newspaper, the Ithaca Times, or the venerable Boston Phoenix.

What a move like this does is allow them to easily and inexpensively move upscale from "shopper" territory to the realm of more sophisticated "community newspaper", without the cost of hiring reporters. Editors are more cost-effective, because they can handle many more stories in a day than could a reporter, even if they're doing some rudimentary fact-checking. They're not the only ones to have this idea -- look to Belfast, Camden & Rockland, Maine's Village Soup for a more traditional (i.e., harder to use) rendition of a similar idea.

That cynicism filter sees this as being all about money and business -- and for that matter, the publishers are quite willing to spell that out. They're clearly in this to improve their position and their financials. They've paid good money to buy an integrated content workflow management system (albeit something that appears to use appropriately simple technical solutions).

But at the end of the day, what really matters is that people are being brought back into the news process. This is a move that makes commercial sense for the Northwest Voice, but as they succeed, they can give implicit aid and comfort to non-commercial and less-commercial ventures like Brattleboro, Vermont's Geeklog-based iBrattleboro -- based on a sparsely-configured implementation of commodity, open-source content management software. Aesthetically, the Voice seems closer to iBrattleboro than to Village Soup, and that's a good thing. It will make them more interesting to their customers, for sure, and if they can find a shared win between community involvement and commercial success, it's got to be a good thing.

Another thought: It's important to note that this is not blogging. This is edited news, that happens to be provided by the public. Clearly, it's inspired by blogging, but it illustrates something that many boosters of Bloggism have not been willing to accept: That it cannot at any point claim to be an end-form; that it cannot, in fact, ever certainly claim to be anything but a transitional, enabling form. Blogs will certainly exist in a year or two or three; but the things they spawn will not look like them, and will not care what the community standards of "Blogistan" are. Nor should they.

The Latest Installment of "The Web as Meritocracy"

One of the few lessons I've learned since I was a young boy is that the commercial marketplace is largely a meritocracy, but not a technical one. It's a marketing meritocracy....

[Anil Dash]

Note to self to add this to my list of dangerous memes: "The Web as Meritocracy." Call it the "nigritude ultramarine" meme.

Furthermore, Dash maintained, his victory proves one thing: That the Web is a meritocracy.

"A page that's read by people instead of robots is going to do better," he said.

[Wired News, via BoingBoing]

There are some really good, basic, honest techniques for getting placement, but they take work. What Anil Dash is talking about is one of those techniques, and in his narrow slice of the web it's the best single technique. It's not in the least surprising to me that this worked, especially given the "insanely generous" weighting that Google gives to blog pages; this is the tactic that I outline for people whenever they ask me about how to get Google placement.

And that this kind of technique works does tend to foster something that looks like a meritocracy. But it is not, in fact, a meritocracy at all: It's merely a measure of popularity. And that something is popular does not mean it's true.

I've found it's important to explain the distinction I'm drawing, because there seems to be a really quite strange tendency on the part of many technophiles to believe that appearance is essentially identical with reality. ("If it resembles a duck, it might as well be one.") I think one big reason for this is that in the limited frame of relevance comprised of what's relevant to a software or data interface, appearance is in fact reality. It's fair to say that a deep and conscious grokking of this fact is one of the most essential characteristics of a good net-hacker.

To be fair (and with due reference to the first quote), I'd be surprised if Anil Dash doesn't understand that. Or Doctorow, for that matter. Though sometimes I wonder if people lose appreciation for the finer distinctions after being beaten incessantly over the head with the "Virtual Is Real" squeak-hammer day after day after week after year... but I digress, as usual.

To Dash's point, you could construe the web as a "marketing meritocracy", but that's really just a way of exposing the ramifications of Metcalfe's Law. The "merit" at hand isn't Anil Dash's personal merit, nor even his technical merit: It's the weight of his reputation, which is a function of how the brand known as "Anil Dash" has been marketed.
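For a concrete sense of what Metcalfe's Law implies here, a toy calculation (the audience sizes are invented for illustration): the number of potential connections in a network -- and so, roughly, the weight a well-marketed name can bring to bear -- grows as the square of the audience:

    # Metcalfe's Law in miniature: potential pairwise connections grow as
    # n * (n - 1) / 2, so reputation/network effects compound superlinearly
    # with audience size. Audience figures here are invented.
    def connections(n):
        return n * (n - 1) // 2

    for n in (10, 100, 1000, 10000):
        print("%6d readers -> %11d potential connections" % (n, connections(n)))

Ten times the audience buys roughly a hundred times the network effect, which is why "merit" measured this way is really a measure of marketing reach.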

Anil Dash didn't receive his "winning" ranking by merit in a personal sense, or even in a real technical sense. Rather, he won it by gaming the system, so if the results of this competition demonstrate anything, in fact, it's that the web is not a meritocracy -- unless by "meritocracy", you are restricting the judgement of merit to social engineering skills.

One stock response to all of this would be: "So what? Systems get gamed. It's all subjective." Which brings us back around to Lysenkoism and intellectual relativism. It seems to me that to argue that reality is the result achieved by the best gamesman is to give up on the idea of knowledge, in a sense.

I can't have it? OK, now I really have to have it...

From the "I don't care to belong to a club that accepts people like me as members" department, it's interesting to note that the hottest tickets in technologically-enabled "personal networking", Orkut and GMail, are invitation only.

"Evil" or not, it's brilliand marketing. The rationale for Orkut could ring true; after all, the highest-quality networking contacts are made by drawing-in to the network, rather than by reaching-in.

The GMail maneuver is clearly a pure marketing ploy, though the real aim is obscured. Google understands the importance of image ("don't be evil") perhaps better than any major player in technology. Their feel-good image has bought them fierce loyalty from many of their followers; challenging the integrity of Google (even with sound arguments) is a sure way to get oneself roundly excoriated. At least until the privacy and legal issues can be worked out, it's as beneficial to Google to have GMail available only on a limited basis as it would be to have it available to all.

Belay that: It's actually better, since current legal challenges are such that Google's ability to advertise on GMail is, at present, severely constrained.

That Pernicious "Search Is King" Meme

There's an ever-waxing meme out there which basically boils down to this: "Forget about organizing information by subject -- let a full-text search do everything for you." The chief rationale is that such searching will help increase serendipity by locating things across subject boundaries.

Here's the problem: It's a load of crap. It throws the baby out with the bathwater, by discarding one time-honored, effective way of organizing for serendipity in exchange for another, inferior (but sexier) one.

This morning, via Wired News:

"We all have a million file folders and you can't find anything," Jobs said during his keynote speech introducing Tiger, the next iteration of Mac OS X, due next year.

"It's easier to find something from among a billion Web pages with Google than it is to find something on your hard disk," he added.

... which is bullshit, incidentally. At least, it is on my hard drive...

The solution, Jobs said, is a system-wide search engine, Spotlight, which can find information across files and applications, whether it be an e-mail message or a copyright notice attached to a movie clip. "We think it's going to revolutionize the way you use your system," Jobs declared.

In Jobs' scheme, the hierarchy of files and folders is a dreary, outdated metaphor inspired by office filing. In today's communications era, categorized by the daily barrage of new e-mails, websites, pictures and movies, who wants to file when you can simply search? What does it matter where a file is stored, as long as you can find it?

Ah, I see -- the idea of hierarchically organizing data is bad because it's "dreary" and "outdated" -- that is, of course, so quintessentially Jobsian a dismissal that we can be pretty sure the reporter took his words from The Steve, Himself.

But this highlights something important: That this is not a new issue for Jobs, or for a lot of people. Jobs was an early champion (though, let's be clear, not an "innovator") in the cause of shifting to a "document-centric paradigm". The idea was that one ought not have to think about the applications one uses to create documents -- one just ought to create documents, and then make them whatever kind of document one needs. Which, to me, seems a little like not having to care what kind of vehicle you want, when you decide to drive to the night club or go haul manure.

But I digress. This is supposed to be how Macs work, but it's actually not: Macs are just exactly as application-centric as anything else, though it doesn't appear that way at first. The few attempts at removing the application from the paradigm, like ClarisWorks and the early versions of StarOffice (now downstream from OpenOffice), merely emphasized the application-centricity even more: While word processors and spreadsheet software could generally translate single-type documents without much data loss, there was no way that they were going to be able to translate a multi-mode (i.e. word processor plus presentation plus spreadsheet) document from one format to another without significant data loss or mangling.

Take for example, Rael Dornfest, who has stopped sorting his e-mail. Instead of cataloging e-mail messages into neat mailboxes, Dornfest allows his correspondence to accumulate into one giant, unsorted inbox. Whenever Dornfest, an editor at tech publisher O'Reilly and Associates, needs to find something, he simply searches for it.

Again, a problem: It doesn't work. I do the same thing (though I do actually organize into folders -- large single-file email repositories are a data meltdown just waiting to happen). This is a good paradigmatic case, so let's think it through: I want to find out about a business trip to Paris that was being considered a year and a half ago. I search for "trip" and "paris". If my spam folder's blocked, and assuming we're still just talking about email, I'm probably not going to get a lot of hits on Simple Life 2 or the meta-tags for some other Paris Hilton <ahem!> documentary footage. In fact, unless the office was in Paris, and the emails explicitly used the term "trip", which they may well not, I probably won't find the right emails at all. Or I'll only find part of the thread, and since no email system currently in wide use threads messages, I won't have a good way of linking on from there to ensure that I've checked all messages on-topic. (And that could lead into another rant about interaction protocols in business email, but I'll stop for now.)

By contrast, if I've organized my email by project, and I remember when the trip was, I can go directly to the folder where I keep that information and scan messages for the date range in question.
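The difference is easy to demonstrate. A minimal sketch of the two strategies (the mailbox and search terms are invented, echoing the Paris example above): the keyword search comes up empty because no relevant message happens to contain both terms, while the folder-plus-date-range scan turns up the whole thread:

    from datetime import date

    # Toy mailbox: (date, folder, subject). All invented for illustration.
    messages = [
        (date(2003, 1, 10), "projects/paris-office", "Travel budget Q1"),
        (date(2003, 1, 12), "projects/paris-office", "Flights and hotel options"),
        (date(2003, 1, 15), "projects/paris-office", "Re: itinerary"),
        (date(2004, 6, 1), "personal", "Paris photos from Jen"),
    ]

    # Strategy 1: keyword search. Finds nothing, because none of the
    # relevant subjects contain both "trip" and "paris".
    hits = [m for m in messages
            if all(term in m[2].lower() for term in ("trip", "paris"))]
    print("search hits:", hits)  # []

    # Strategy 2: organization. Scan the project folder for the date range.
    filed = [m[2] for m in messages
             if m[1] == "projects/paris-office"
             and date(2003, 1, 1) <= m[0] <= date(2003, 3, 31)]
    print("folder scan:", filed)  # all three relevant messages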

The key problem here is that search makes you work, whereas with organization, you just have to follow a path. I used to train students on internet searching. This was back in the days when search engines actually let you input Boolean searches (i.e., when you could actually get precise results that hadn't been Googlewhacked into irrelevance). Invariably, students could get useful results faster by using the Yahoo-style directory drill-down, or a combination of directory search and drill-down, than they could through search.

If they wanted to get unexpected results, they were better off searching (at least, with the directory systems we had then and have now -- these aren't library catalogs, after all). And real research is all about looking for unexpected results, after all.

And that leads me to meta data.

Library catalogs achieve serendipity through thesauri and cross-referencing. (Though in the 1980s, the Library of Congress apparently deprecated cross-referencing for reasons of administrative load.)

The only way a system like Spotlight works to achieve serendipitous searching -- and it does, by the accounts I've read -- is through cataloged meta-data. That is, when a file is created, there's a meta-information section of the file that contains things like subject, keywords, copyright statement, ownership, authorship, etc. Which almost nobody ever fills out. Trust me, I'm not making this up: from my own experience, and that of others, I know that people think meta-data is a nuisance. Some software is capable of generating its own meta-data from a document, but such schemes have two obvious problems:

  1. They only include the terms in the document -- no synonyms or antonyms or related subjects, and no obvious way of mapping ownership or institutional positioning -- so they're no real help to search (see the sketch below this list).
  2. They only apply to that software, and then only going forward, and then only if people actually use them.
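
Here's that sketch -- the schema, the document, and the thesaurus are all invented, and this is emphatically not how Spotlight actually stores its index; it's just the general shape of cataloged versus extracted meta-data:

    import re

    document = "Booked flights to CDG for the March meetings."

    # Auto-extraction gets you only the terms already in the document:
    extracted = set(re.findall(r"[a-z]+", document.lower()))
    print("paris" in extracted)    # -> False: no synonyms, no related subjects

    # A cataloger (or a diligent author) supplies terms the text never uses,
    # plus ownership and institutional position, which can't be inferred
    # from content at all:
    cataloged = {
        "subject":   ["business travel", "Paris", "Acme EU project"],
        "keywords":  ["trip", "CDG", "Charles de Gaulle"],
        "author":    "me",
        "owner":     "Acme Corp / EU accounts group",
        "copyright": "Acme Corp, 2002",
    }

    # A thesaurus can widen the query -- but it only pays off against
    # cataloged terms:
    thesaurus = {"trip": {"trip", "travel", "journey"}}
    query = thesaurus["trip"] | {"paris"}
    terms = {t.lower() for v in cataloged.values()
             for t in (v if isinstance(v, list) else [v])}
    print(bool(query & terms))     # -> True: findable via the catalog, not the text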

Now, a lot of this is wasted cycles if I take the position that filesystems aren't going away and this really all amounts to marketer wanking. But it's not wasted cycles if I consider that the words of The Steve, dropped from On High, tend to be taken as the words of God by a community of technorati/digerati who think he's actually an innovator instead of a slick-operating second-mover with a gift for self-promotion and good taste in clothes.

This kind of thinking, in other words, can cause damage. Because people will think it's true, and they'll design things based on the idea that it's true. And since "thought leaders" like Jobs say it's important, people will use these deficient new designs, and I'll be stuck with them.

But there's little that anyone can do about it, really, except stay the course. Keep organizing your files (because otherwise you're going to lose things; trust me on this -- I know a little about these things). The "true way" to effective knowledge management (if there is one) will always involve a combination of effective search systems (from which I exclude systems like Google's, which rely entirely on predictive weighting) with organization and meta-data (yes, I do believe in meta-data, for certain things like automated resource discovery).

Funny, who would have thunk it: The "true way" is balance, as things almost always seem to come out, anyway. You can achieve motion through imbalance, but you cannot achieve progress unless your motions are in harmony -- in dynamic balance, as it were. What a strange concept...

Sometimes, Technology Is The Problem

Terrorists with leverage are scary, but I'm much more scared of nutty, cocksure attempts to build "technology" that supposedly keeps us safe. Terrorists get tired, give up, or shoot each other over the spoils, but once the hardware's installed, a lousy technology is harder to kill off than a cockroach.
[Bruce Sterling, speaking with Bruce Schneier]

Via Bruce Schneier's June 2004 Cryptogram, a "discussion" between the Bruces Schneier and Sterling that, though it consists mostly of one-paragraph positionings, does get in a few bon mots.

Cryptogram is worth looking at, too, if only for its revealing analysis of the effect of the superficially unspectacular Witty Worm. Witty was nearly unique in the degree of technical competence exhibited by its creators: If they'd chosen a different target, we could have lost the whole net in 45 minutes, instead of just 12,000 nodes.

Unsung Development of the Moment: Wikipedia Reinvents

Wikipedia is probably the most significant website on the net right here in May/June 2004. It's the signal success we can point to for bazaar-style projects, and the great white hope for the persistence of free, non-corporate-sponsored information on the web. Not to disregard Wikipedia's smaller cousin, WikInfo; it's just not big enough to be a great white hope, yet.

So Wikipedia has now done something intriguing: You can talk about any article, or view its previous versions. These appear to be benefits of the upgrade to version 1.3 of MediaWiki, the hyper-extended Wiki implementation that Wikipedia develops and uses to drive the site.

Tired terms like "community portal" don't do this justice. I don't think the great mass of the digerati really have any clue how important Wikipedia (and WikInfo) are. This kind of move, once they notice it, could blow Wikipedia wide open.

My great fear is that it could literally blow it wide open: How will they be able to handle the loads? Will their community software be able to cope with input from every Tom, Dick and Harry with an opinion?

The upside, of course, is that with a project of this broad scope, we'll finally get that "online experiment" that other "communities" have been claiming to be for years.

Addendum: I've posted this on Mefi; let's see if anybody cares.

Second addendum: Mefites assure me that it's always had that functionality, though it wasn't as obvious as it is now. I wonder if they've made changes that will let them handle the greater load, and have decided to front-and-center those features?

Plogs Will Set Us Free

The Happy Tutor points to jonh @ Wirearchy offering thoughts on project blogging ("plogging" [daypop | google | teoma]).

jonh gets part of the way -- the same part of the way that Jeffersonian-tinged net.libertarians usually get: The tech has the power; the tech will cause changes that can cause changes.

I'll bet that in about five years ... by 2010 ... the use of blogs in the workplace will be widespread. This will require the continued spread of "transparency" in the dynamics of networked organizations, and so will continue to create pressure on core issues like leadership, structure and the processes by which people are managed in an interconnected information-based environment.

Just look at the pressures being faced by Donald Rumsfeld and you'll see an early signal - will leaders be able to lie their way through competitive challenges or major change in organizations?

Powerful ideas, to be sure.

But as usual for the more optimistic heirs of Thomas J, he doesn't close the loop. The Tutor points out an obvious response:

Well, just look at Karl Rove. Yes, they will lie bigger and lie better. And nets will be the Terrorist tool of choice, demonized. Will the guards at GITMO blog when they return home, traumatized? Or will they take Prozac and wave flags? Did the SS write memoires? The story strong enough to extinguish evidence, leaving only the snow, the trees, and one lonely owl against the night. When the truth is ugly, the mind small, bet on lies. Unless our poets get off their postmodern ass. Where is our Mandelstam or Brodsky?

I would add (and add, ad nauseam, as often as I have to) that there are precious few tools that aid freedom which cannot be used to suppress it.

One error here is mistaking transparency for a technology; transparency is merely enabled by the technology.

Transparency can be shut off -- or, more ominously, controlled. Transparency need not be total, or even nearly total, in order to reap its benefits. The real cluetrain will run on rails paid for by people with lots of money or government influence, and those people will be placing restrictions on the passenger manifest: No bolsheviks allowed.

[Imperial] Clothing and the Digerati

Gerrit at SmartMobs agrees with Blue Arnaud: Privacy is a lost cause.

We should forget about trying to keep "our data" private; we should make it public, and take care that it is under our control. "This profile can be the basis for the social networking services," Gerrit summarizes.

But he doesn't do it justice. In truth, for Blue Arnaud, it seems to be as much about commerce as about the humans we're profiling:

This user profile has value for companies. Companies can access this profile under a Personal Commons license in a standardised and legal way. Then they can adapt their interaction with a user accordingly. They might even give discount if an user profile is available, as it makes their live cheaper (less marketing cost). This profile can also be the basis of the various Social Networking Services, which can then focus on their business: networking. A user's wishlist and transaction trail is no longer available to just Amazon, but all book shops.

"So be in control again," Blue Arnaud admonishes, like a good libertarian-tinged digeratum:

A user should make this profile explicit, as some users are already doing in their weblog. Make sure that this profile represents yourself (or one of your personae) or otherwise the world might invent your profile and they might guess wrong. And publish this profile on your own website, weblog, whatever. The user becomes a writer and a publisher. This profile information could be published under some Personal Commons arrangement, i.e. personal information that is available to the world.
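
To make the proposal concrete: here's a minimal sketch of what such a self-published profile might look like. The field names, the license wording, and the URL are all invented for illustration; neither Gerrit nor Blue Arnaud specifies a format.

    import json

    profile = {
        "persona":   "reader-of-serious-nonfiction",   # one of possibly several personae
        "license":   "Personal Commons (hypothetical): commercial use with attribution",
        "wishlist":  ["Out of Control", "The Victorian Internet"],
        "purchases": [{"title": "Emergence", "year": 2003}],
        "published_at": "https://example.org/~me/profile.json",   # your server, not Amazon's
    }

    # The user as writer and publisher: serialize it and post it yourself.
    print(json.dumps(profile, indent=2))

Any bookshop could read that. So could anyone else -- which is rather the point of what follows.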

Beyond the detail, this is no new idea among the digerati; it's really just another variation on that ripe old technophilic anthropomorphism, "information wants to be free", which seems to get tossed around so glibly by people who utterly fail to understand its consequences. The barriers of the personal are eroding every day; that's a good thing, these folks seem to be saying.

They haven't thought it through.

They seem to believe that there will be some kind of real and fundamental transformation in the nature of the human animal -- forgetting, as always, that the human is animal, and thus evolved in the real and not metaphorical sense of the term. And that, barring a truly godlike capacity to restructure our very genome, biology will ultimately win out.

We forget the timescales of evolution at even greater peril than we risk by forgetting the lessons of history -- since evolution is, after all, the most fundamental history lesson of all.

They haven't really used their imaginations. It's a pity; their far-flung imaginings prove it's possible. Much like simplistic advocates of total sexual freedom, they have failed to really look inside themselves to ascertain what it would feel like for this world to be true.

Or perhaps they're just technofetishists.

How About An "Observer Effect" Meme?

"They got this guy, in Germany. Fritz Something-or-other. Or is it? Maybe it's Werner. Anyway, he's got this theory, you wanna test something, you know, scientifically - how the planets go round the sun, what sunspots are made of, why the water comes out of the tap - well, you gotta look at it. But sometimes you look at it, your looking changes it. Ya can't know the reality of what happened, or what would've happened if you hadn't-a stuck in your own goddamn schnozz. So there is no 'what happened'? Not in any sense that we can grasp, with our puny minds. Because our minds... our minds get in the way. Looking at something changes it. They call it the 'Uncertainty Principle'. Sure, it sounds screwy, but even Einstein says the guy's on to something." ['Freddy Riedenscheider', The Man Who Wasn't There]

Sam Arbesman's MemeSpread project aimed to chart the progress of a particular (albeit problematic) meme throughout the "blogosphere", given known sources. Though initially seeded on kottke.org, BoingBoing and Slashdot, it was picked up only by Kottke; it fared poorly until it hit MeFi, whence it boomed across the web like a summer thunderclap in the desert: sudden, loud, and evanescent.

A Wired News article summarizes the story (though it fails to link to Arbesman's own writeup [pdf]). Aside from a passing reference to the "Hawthorne Effect", though, it doesn't really deal with the difficulty of studying phenomena such as this. It reminds me of a similar project I pitched to my undergrad advisor in 1992, with the idea of pushing out memes via Usenet. (He was uncomfortable with the human subjects concerns -- my experimental design was constructed to avoid observer effects.)
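
For the flavor of the dynamic Arbesman was trying to observe, here's a toy independent-cascade simulation. The link graph, the infection probability, and the site names are all invented; this is a sketch of the general idea, not his model.

    import random

    # Who reads whom: edges point from a source to its audience.
    links = {
        "kottke":     ["blog_a", "blog_b"],
        "boingboing": ["blog_c"],
        "slashdot":   ["blog_d"],
        "blog_a":     ["mefi"],
        "mefi":       ["blog_%d" % i for i in range(5, 50)],   # the big amplifier
    }

    def cascade(seeds, p=0.3, seed=42):
        """Each newly 'infected' site gets one chance to infect each reader."""
        rng = random.Random(seed)
        infected, frontier = set(seeds), list(seeds)
        while frontier:
            node = frontier.pop()
            for reader in links.get(node, []):
                if reader not in infected and rng.random() < p:
                    infected.add(reader)
                    frontier.append(reader)
        return infected

    print(len(cascade(["kottke", "boingboing", "slashdot"])))
    # Most seedings stall at a handful of sites; the runs that happen to
    # reach "mefi" boom -- roughly the trajectory the real meme followed.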

"On Social Software Consultancy"

Man, the threads just never stop weaving...

An overview piece on "social software" and its high-level requirements, from the perspective of needing to deliver recommendations to a client: Matt Webb, "On Social Software Consultancy", INTERCONNECTED, courtesy Drupal.org.

Worth a detailed read-through; need to come back to this...

On Being A Good 'Security Consumer'

The invasion of Iraq, for example, is presented as an important move for national security. It may be true, but it's only half of the argument. Invading Iraq has cost the United States enormously. The monetary bill is more than $100 billion, and the cost is still rising. The cost in American lives is more than 600, and the number is still rising. The cost in world opinion is considerable. There's a question that needs to be addressed: "Was this the best way to spend all of that? As security consumers, did we get the most security we could have for that $100 billion, those lives, and those other things?"

If it was, then we did the right thing. But if it wasn't, then we made a mistake. Even though a free Iraq is a good thing in the abstract, we would have been smarter spending our money, and lives and good will, in the world elsewhere.

....

We need to bring the same analysis to bear when thinking about other security countermeasures. Is the added security from the CAPPS-II airline profiling system worth the billions of dollars it will cost, both in dollars and in the systematic stigmatization of certain classes of Americans? Would we be smarter to spend our money on hiring Arabic translators within the FBI and the CIA, or on emergency response capabilities in our cities and towns?

As security consumers, we get to make this choice. America doesn't have infinite money or freedoms. If we're going to spend them to get security, we should act like smart consumers and get the most security we can.

[ZDNet -- Bruce Schneier, "The security trade-off"]

Tacit Knowledge

It's often easy to characterize a thing accurately without being able to convey what it's like to experience it.

Much of the reading I did over the weekend connected knowledge management to stories. I was excited by that connection, but hadn't clearly articulated it. This morning it takes a sharper form in my mind. The only way I can package, codify, communicate and transfer my tacit knowledge is to tell my stories...the experiences through which I developed my tacit knowledge. [Karen McComas]

Put another way: Narrative evolved as a means of rich communication. We tell stories as a means of metaphor, as much as a means of communicating fact.

Prior to the ascendancy of technology, pure fact was not nearly as important as good-enough fact. Which is to say, things didn't have to be true, as long as they were good enough for current purposes.

Science and technology up the ante in two ways:

First, they require a degree of precision that makes "good enough" no longer quite good enough. (Why we're willing to pay for that is another matter for another time.)

Second, technology and science provide tools for enhanced precision. The ability to quantify, precisely, provides new degrees of predictability, among other benefits.

But narrative provides a different kind of truth. Largely, that's because knowledge conveyed in narratives isn't easily "...packaged, easily codified, communicable, [readily] transferable." [Applying Corporate Knowledge Management Practices in Higher Education] [pdf] Yet such characterizations ring hollow: they tell us accurately what narrative knowledge is not, while conveying nothing of what it's like to have it.

Google is the New Apple

It's often said that Google 'took the time to do it right' or that they 'don't abuse the customer'.

The important part is not that Google actually does these things, but that Google appears to do them -- that it appears not to abuse the customer, just as it appears to "get it right".

The likelihood that they'd get called to account on either is more or less inversely proportional to the degree to which "Larry and Sergey" are idolized or mythologized.

So, right now, it's looking like they never would....

Blue-Skinned Beast

ZDNet is breathlessly shilling -- er, informing us -- that "Big Blue is getting into social sciences."

It's really nothing new at all. IBM has been doing this kind of research for years. Decades, actually. Few other companies (perhaps none) have invested as much time and money in research on ergonomics, usability, and software development process. Whether and how they used the information was another matter altogether.

All this represents is a new way to package it and make it attractive to customers. And frankly, I might feel a little better if they stayed out. Because if they're in the space, it means they've figured out a way to make money off of it, which isn't likely to be good for us.

But to the point of the article: The hard part of computing development and implementation is almost never the technology per se, but how to use it effectively -- how to integrate it with business process. And that is not inherently a technology problem, regardless of the desperate desire of geeks to take control of their lives and do stuff better. Many of the worst excesses of the dotbomb were fine testimony to how well they'd do when put in charge...but I digress.

True, it is often possible to apply technology to the human problems. But:

  1. it's not always a very good idea, and
  2. unless you know what you're doing, you're more likely to make it worse than better.

The fact that the computing industries can't get past their own technofetishism is what opens them up to charges of "irrelevance". Which is wrong, of course; IT does "matter", and the fact that they're so bad at understanding the human dimension only makes it matter more.

Commodifying Trust

I'm still not touching it. It's more than subverting the trust relationship: It's commodifying it.

At base, that's what Friendster and its ilk are facilitating: The commodification of trust relationships. Once those relationships are commodified, trust itself becomes undermined. At least, any trust that digs deeper than the layer of humanity where all experience can be understood in terms of economic transactions.

That is, until some new basis for common trust emerges.

One place to find that is in "faith" -- i.e., in communities of religious practice.

Another is in these "high-tech tribes" that folks like Cory Doctorow talk about without end. But since the tokens for entry to those tribes are relatively cheap (blogrolling consistently over a period of time should do it), the "tribal" connections don't go very deep.

And at another level, it's more than a little insulting to even refer to them as "tribes". That's a term we've previously used to refer to very special communities of shared allegiance or experience or -- most often -- blood.

There is a tendency among people to minimize the complexity of a thing once it has been described. So Sterling or Stephenson can describe new tribes, or Howard Rheingold can write books about them, and they suddenly enter the vocabulary and the consciousness. But the form is grossly simplified, in the kind of way that lets us see the Amish as lovable relics. Anyone giving Rumspringa more than a moment's casual thought should be prepared to make a simple leap: These are not simpletons; they have not recoiled from the world, but rather made a conscious choice to live in their own world, and we have about as much chance of understanding why as we do of chatting with Wittgenstein's lion.

Closed Openness

As "social networking" merely now approaches over-hyped status, already we're warned of spam -- or rather, SNAM ("Social Network Spam"): Members of your networks, once, twice, or more removed, use the connection to market to you.

It's like CutCo or Tupperware, but somehow worse, in the way that spam is somehow worse than unwanted phone calls: You can screen the phone, and you just don't get as many calls, because each one costs the caller too much.

But worse yet, it subverts something that could be of terrific value -- that, indeed, we're going to desperately need in a virtual future. Which doesn't mean we're going to slow our progress toward that future, of course: It will merely look quite different from the way it could.

A reaction is coming. As the barriers between people come down, and in absence of social controls on action, we are bound to become more closed than before.

I don't know exactly what form this reaction will take, but I have a dark and abiding fear that religion will be involved.
