"We should always be disposed to believe that that which appears white is really black, if the hierarchy of the Church so decides."
escoles has resurrected antikoan.net with a thought-provoking post about The Singularity, the hypothesized technological development of superhuman intelligence. Vernor Vinge posits what might be called intelligence amplification (IA) rather than artificial intelligence (AI). He envisions the possibility of a more cooperative, benevolent world.
I ask myself whether humankind is capable of saving itself. What kinds of human flaws would invariably be translated into this technological ideal? If some proposed superior being’s intelligence would expand creativity and insight, how would these qualities be directed? To bring about a more cooperative society or to creatively discern more effective means of destruction?
Currently, intelligence doesn’t automatically stave off anger management problems or avarice or envy or hubris or various other foibles one would like to add to the list. Would greater intelligence afford more self-knowledge or ability to control irrational parts of humanness? Then again, we might be compelled to redefine what is human. One can only speculate.
There is more in the human package to think about than increasing intelligence. This was brought home to me upon reading an op-ed in today’s New York Times, “He Who Cast the First Stone Probably Didn’t” by Daniel Gilbert ~
Every action has a cause and a consequence: something that led to it and something that followed from it. But research shows that while people think of their own actions as the consequences of what came before, they think of other people’s actions as the causes of what came later.
What seems like a grossly self-serving pattern of remembering is actually the product of two innocent facts. First, because our senses point outward, we can observe other people’s actions but not our own. Second, because mental life is a private affair, we can observe our own thoughts but not the thoughts of others. Together, these facts suggest that our reasons for punching will always be more salient to us than the punches themselves — but that the opposite will be true of other people’s reasons and other people’s punches.
Research also has demonstrated that escalation of force occurs as...
... the natural byproduct of a neurological quirk that causes the pain we receive to seem more painful than the pain we produce, so we usually give more pain than we have received.
Research teaches us that our reasons and our pains are more palpable, more obvious and real, than are the reasons and pains of others. This leads to the escalation of mutual harm, to the illusion that others are solely responsible for it and to the belief that our actions are justifiable responses to theirs.
A physically induced, lopsided “eye for an eye” provoking escalation of violence?
Here’s the kicker:
Until we learn to stop trusting everything our brains tell us about others — and to start trusting others themselves — there will continue to be tears and recriminations in the wayback.
Trust people themselves, not only what your own brain tells you about them. Some people do try to counteract insularity through a process of self-knowledge, understanding their own motivations, and by way of empathy. But once a combative spiral starts and trust gets thrown out the window, who will turn the other cheek first? That’s a tough one sometimes, isn’t it? Would “IA” or “AI” proponents take this into consideration?
So this is a critical moment. We must do all we can to limit the civilizational fallout from this bombing. But this is not going to be easy. Why? Because unlike after 9/11, there is no obvious, easy target to retaliate against for bombings like those in London. There are no obvious terrorist headquarters and training camps in Afghanistan that we can hit with cruise missiles. The Al Qaeda threat has metastasized and become franchised. It is no longer vertical, something that we can punch in the face. It is now horizontal, flat and widely distributed, operating through the Internet and tiny cells.
Because there is no obvious target to retaliate against, and because there are not enough police to police every opening in an open society, either the Muslim world begins to really restrain, inhibit and denounce its own extremists - if it turns out that they are behind the London bombings - or the West is going to do it for them. And the West will do it in a rough, crude way - by simply shutting them out, denying them visas and making every Muslim in its midst guilty until proven innocent.
And because I think that would be a disaster, it is essential that the Muslim world wake up to the fact that it has a jihadist death cult in its midst. If it does not fight that death cult, that cancer, within its own body politic, it is going to infect Muslim-Western relations everywhere. Only the Muslim world can root out that death cult. It takes a village.
What do I mean? I mean that the greatest restraint on human behavior is never a policeman or a border guard. The greatest restraint on human behavior is what a culture and a religion deem shameful. It is what the village and its religious and political elders say is wrong or not allowed. Many people said Palestinian suicide bombing was the spontaneous reaction of frustrated Palestinian youth. But when Palestinians decided that it was in their interest to have a cease-fire with Israel, those bombings stopped cold. The village said enough was enough.
~ excerpt from "If It's a Muslim Problem, It Needs a Muslim Solution" by Thomas L. Friedman, The New York Times, July 8, 2005
Jakob Nielsen, among others, has remarked that "the network is the user experience." They're all wrong, and they're all right.
Browsing through UseIt.com yesterday left Nielsen's June 2000 predictions of sweeping change in the user experience loaded in my browser when I sat down at my desk this morning:
Since the late 1980s, hypertext theory has predicted the emergence of a navigation layer that would be the nexus of the user experience. Traditionally, we assumed that this would happen by integrating the browser with the operating system to create a unified interface for manipulating remote information and local files. It has always been silly to have some stuff treated specially because it happened to come in over a certain network. Browsers must die as independent applications.
It is counter-productive to have users suffer sub-standard user interfaces for applications that happen to run across the Internet as opposed to the local client-server environment. Application functionality requires more UI than document browsing: another reason browsers must die.
Silly, counter-productive: Sure. I've always thought so. But the tendency in the late 1990s was to assume that document browsing was exactly enough. And though the peculiar insanity of things like "Active Desktop" (which strove to make the Win95 desktop work just like the Web circa 1999) does seem to have passed, it remains true that the bias is toward the browser, not toward rich application-scope UIs.
Which is to say that Nielsen, in this old piece, is failing to heed his own advice. Users are inherently conservative: They continue to do what continues to work, which drives a feedback loop.
But more than that, he -- like almost everyone else I can think of, except myself -- is missing the single most important thing about modern computing life: People don't use the same computer all the time. Working from home, now, I frequently use two: My desktop, an OS X Mac, and my laptop, a Sony Picturebook running Windows 2000. In my most recent full-time job (where I sometimes spent 12-hour days on a routine basis), I used two more systems: A desktop running Windows NT and a laptop running Windows 2000. And that's not even counting the Windows 2000 desktop I still occasionally use at home. (And would use more if I had an easy way to synchronize it with my Mac and my Picturebook.)
And so it's interesting to look at each of Nielsen's predictions as of June 2000:
None of this is to say that I don't think the network is the user experience. He's sort of right about that -- or at least, he's right that it sort of should be, that things would work better if we made apps more network aware. After all, in the age of ubiquitous wireless, the network is spreading to places it's never been before. But what the 2005 situation reveals is that relatively low-impact solutions like using cell phone networks for instant messaging or logging in to websites have trumped high-impact solutions like re-architecting the user experience to eliminate the web. Instead of using the increasingly ubiquitous broadband services to synch all our stuff from a networked drive, we're carrying around USB keychain drives and using webmail. Instead of doing micropayments, we're still living in a world of aggregated vendors a la Amazon and charity (Wikipedia) or ad-/sales-supported services (IMDB, GraceNote).
At a more fundamental level, we have to be mindful that we don't define "the network" too narrowly. Consider the old school term "sneakernet": Putting files on floppies to carry them from one person to another. It was an ironism -- sneaker-carried "networking" wasn't "networking", right? -- but it revealed a deeper truth: "Networking" describes more than just the stuff that travels across TCP/IP networks. At a trivial level, it also includes (mobile) phone networks and their SMS/IM/picture-sharing components. But at a deeper level, it covers the human connections as well. In fact, the network of people is really at least as important as the network of machines.
Understood that way, "the network is the user experience" takes on a whole new meaning.
The Business Blogging Boot Camp (@ Windsor Media) provides a more bottom-line perspective on the growth of blogging, driven by Fortune's 1/10/2005 feature story on technology trends; their observations came to me as part of an email thread related to the BBC story I mentioned last night. They stress the importance of blogging for business, and furthermore the importance of blogging earnestly. They cite the Kryptonite affair and moves toward blog-monitoring by Bacon's Information -- the latter characterized as tentative, "inane," "Not Getting It." (I'm usually leery when a huge quant-marketing shop is characterized as Not Getting It. Often it's true, yes; but as far as I can see they often understand a lot more than they bother to explain to us proles. But I digress.)
There are two things I feel compelled to point out before going further: First, blogs are qualitatively analogous to specialist newsletters, which are nothing new to savvy marketers. As with specialty newsletters, the influence of a blog hinges on a subtle balance between the publisher's access to information, their (perceived) personal integrity, and the volume (direct or indirect) of their readership. What's new is the speed of blogging. I'm leery of pointing out emergent qualities, but it's hard to argue that a ten-day cycle time doesn't indicate that (a lack of) quantity may indeed, in this case, have a quality all its own.
The second thing I feel compelled to point out -- and this is both much more and much less important than it sounds -- is that the Kryptonite business not only didn't start on blogs, but didn't get its first traction there. It started on the cycling boards, and that's where it was hashed out, refined, debugged, and researched, and where the first instructional videos were posted, before it was ever reported on a blogospherically-integrated weblog. Some of these bicycling boards are almost as old as the web, and most have many members who trace their net-cred back to Usenet days. My point being that anyone focusing only on blogs as such is setting themselves up for obsolescence. Blogs as they are, are almost certainly not blogs as they will be.
Anyway, Windsor Media's take is largely blogospheric orthodoxy. And in practical terms, it's probably right: The important thing for businesses to do right now is to make it part of some people's jobs to go out, and read and post like humans. But there's a second thing that not only needs to happen, but will happen, and what's more will be enabled by the first: Smart businesses will take steps to understand how the blogosphere works, and how it can be gamed, and then they will go forth and game it. And it will work. The knowledge required will come from a few main sources: From big outfits like Bacon and free-range old-school marketing pundits (who will keep it to themselves and share out bits of wisdom to key clients); and from less old-school marketing pundits like Darren Barefoot and BL Ochman, and from product evangelism folks at big companies (who as a group will tend to share it on their blogs, undercutting Bacon et al's old-school attempts to make money off consulting). And, perhaps most important of all, it will come from research in social network analysis. More on that another time.
Blogging will be gamed by corporate and business interests, make no mistake about that. Because it can be, and is being, gamed. It happens every day. And, contrary to the blogospheric orthodoxy, the broader the cross-section of people who get involved in blogging, the easier it gets to game the system without looking like a weasel. And if the golden rule of capitalist systems is that money wants to make more money (and I'm pretty sure it's something like that), and if blogging has an impact on the growth and flow of money, then money will drive blogging, and blogging will get gamed.
Now I'm getting into blogging heterodoxy. The conventional wisdom on the blogging ethos is very cluetrain, and in fact, the Kryptonite affair does indeed show a real "cluetrain" cause-effect loop. I missed it at the time because I just didn't tune in to the story, but the folks at Fortune and Windsor Media are right about that: The ten-day problem-to-product recall cycle at Kryptonite was characterized by all the corporate communications failures criticized in the Cluetrain Manifesto. It just took a lot longer for this first clear case to emerge than either they or, frankly, I thought it would.
The orthodox position is that the more people get involved in blogging, the harder it gets to game the system. It's a variation on the open-source golden rule of debugging ("Given enough eyeballs, all bugs are shallow"): "Given enough eyes, all misinformation will be found." But open-source debugging works (when it works, which it often doesn't, but that's another story) because the "eyes on the code" belong to people who know how to spot a problem, and have the capability to affect it more or less directly. In blogging, the "eyes on the information" are often people with little or no real expertise. Much of what they spout is nonsense.
And yet, it's effective.
The blogosphere shifts like a body of water: Fast, and irresistibly. Part of the reason that happens is that the blogging community is composed largely of small communities with large enough membership to make an impact, and what's more, those communities overlap: PoliBloggers are tight with techbloggers who are tight with lifestyle bloggers who are tight with polibloggers.... So when the loop has looped a few times, we find that a relatively small group of people can pretty reliably and rapidly shift the character of the blogosphere. But as the blogosphere becomes larger, it grows more statistically homogeneous, and small communities of movers will not have the same kind of predictable results anymore. Then it will seem less like water, and more like mud.
But I digress, again. I started this to talk about gaming the blogosphere, and that this will happen, I do not doubt for an instant. There's a lot of money riding on this, after all. Some people will figure out how to game the blogosphere -- to game the cluetrain. It will be a painful process with lots of false starts, but we are well beyond the beginning of the process. It started long before the Kryptonite affair; if I had to pick a point in time, I'd pick the consolidation of successful blogs like Wonkette, Gizmodo and ... under the Gawker Media banner. Gawker sells lots of ads, gets lots and lots of daily eyeballs, and their more overtly commercial blogs (like Gizmodo and Jalopnik) have pull with the product managers by virtue of the fact that they can say things like:
What consumers want -- an out-of-box way to share and transmit files between different storage media and computers (and users) -- is exactly what manufacturers don't want to give them, but they'll tease us a little. So, if you're really rich, DigitalDeck Entertainment Network is busting out an in-home network PC to gear to DVD sharing system that costs $4000 - $5000. It probably consists of a bunch of cables and a universal remote that your geeked-out younger brother could hack together himself.
And so, we've come back around again to the specialist newsletter: I take Gizmodo seriously (and I confess, I do read it more or less every day) because I see things like this that indicate to me that they bother to think a bit about what they're reviewing. They have credibility for me because they speak not merely in a human voice, but in one that says credible things. And they have the benefit of comprehensiveness because somebody (namely, Gawker Media) is paying them to do nothing but that.
And by the way, at some point does it stop being "blogging" and start being journalism? Open question, IMO.
"La [société] Mexicaine de la Perforation" (roughly "the Diggers from La Mexicaine") are "...a clandestine cell of 'urban explorers' which claims its mission is to 'reclaim and transform disused city spaces for the creation of zones of expression for free and independent art'...." Earlier this summer, they ran a seven-week film series in an underground cinema complex (including restaurant and bar) of their own construction. [Guardian, "Paris's new slant on underground movies"]
And boy, were the gendarmes pissed.
Not that they've quite been able to figure out what they're pissed about. Paris police still don't know what to charge them with. After all, the group adheres strictly to its rule of leaving each "crime"-scene "cleaner, if anything, than when we found it".
"They freaked out completely," Lazar, their spokesman, said happily. "They called in the bomb squad, the sniffer dogs, army security, the anti-terrorist squad, the serious crimes unit. They said it was skinheads or subversives. They got it on to national TV news. They hadn't a clue."
[The cinema] was constructed in a series of interconnected caves totalling some 400 square metres beneath the Palais de Chaillot, across the Seine from the Eiffel tower. Former quarries, they were partly refurbished during the 1900 Universal Exposition when one of the galleries was clad with concrete to represent a future Channel tunnel and a wall was artfully terraced.
But the caves were sealed off for the last time at least 20 years ago and subsequently "ceased to exist officially", Lazar said. "We knew them well because we used them to get into the Palais de Chaillot every Bastille Day. The roof is the perfect place from which to watch the fireworks."
Indeed most of the LMDP's underground happenings are organised in places the city authorities are not aware of, he added. "There are so many underground networks - the quarries, the metro, the collective heating, the electricity, the sewers - and each is the responsibility of a different bureaucracy," he said.
"Urban explorers are the only people who, between us, know it all. We move between each network. We know where they link up - often, it's us who made the link. The authorities, the police, town hall, they don't know a hundredth, a thousandth, of what's down there."
There's something really appealing about all of this -- and it only gets more amusing when you learn that the police were so upset about it. It's hard for me to imagine a major police department in a U.S. city getting so upset about such a thing (at least, until some security mom pointed out that they could have been TERRORISTS).
Then again, it does give one pause: Maybe the London Underground really is a terrorist movement....
It's fashionable in many circles to trash on Internet information resources. And worst is any information resource that's driven by "community." Take the recent story from the Syracuse Post-Standard by would-be technopundit Al Fasoldt.
Wikipedia, [Liverpool High School Librarian Susan Stagnitta] explains, takes the idea of open source one step too far for most of us.
"Anyone can change the content of an article in the Wikipedia, and there is no editorial review of the content. I use this Web site as a learning experience for my students. Many of them have used it in the past for research and were very surprised when we investigated the authority of the site."
"I was amazed at how little I knew about Wikipedia," Fasoldt continues. I'm amazed at how little he still does. For example, he doesn't correct Ms. Stagnitta's fallacious assertion that there's "no editorial review". In fact, Wikipedia articles do, absolutely, receive editorial review. All the time. Twenty-four-by-seven.
The research required to correct this misapprehension wouldn't be difficult: Fasoldt (or Stagnitta) could start by scanning the Wikipedia Community Portal, look at the Wikipedia Village Pump for discussions of policy questions, or look at their Policies and guidelines entry. If he wanted to be really adventurous, and really interested in testing how reliable Wikipedia is, he could experiment by trying to hack the system and drive an inaccurate edit; if he did that, he'd discover that there is, in fact, editorial review -- it's just not performed by an anointed editor, but rather by people who might have some kind of actual knowledge on the subject. (Mike at Techdirt.com suggested such an experiment, and was rebuffed.)
But there's more at play here than sloppy research. In correspondence with Mike at Techdirt.com, Fasoldt used terms like "repugnant" and "outrageous" -- terms which are clearly driven by fear or anger (the latter in any case usually being driven by fear). So I have to sit here and ask myself: What is it about Wikipedia that inspires such fear and rage? And I think I know what it is. It's the very idea that information not sanctioned by some kind of official authority could be taken as reliable.
Because, after all, if information is "free", then information gate-keepers have empty rice-bowls.
Let's look for a moment at who's complaining: A high school librarian (well, we assume she's a librarian, Fasoldt's piece actually doesn't identify her as such), and a would-be pundit with a penchant for John Stossel-ish ranting. These are both people in eroding professions, most likely looking to avoid challenges from "authorities" who aren't designated as "authoritative" by membership in their guild. Heaven forbid that some student should rely on a Wikipedia article that's the collective work of three or four entomology graduate students in different universities, rather than one from Britannica that was written by one grad student and then signed by his advisor. Such things will certainly and truly cause the end of civilization as we know it.
This is another one of those false dichotomies that frightened practitioners of marginal professions use as leverage to get their heads screwed still deeper into the sand. Wikipedia is a good thing. It's not a good thing because community-driven content is an inherently good thing (though that last is almost true); it's a good thing because they do it well. That's partly a function of size and critical mass; but it's also partly a function of rigor in management. The rules get enforced, and editorial quality stays generally good, because like most successful "open source" projects, there's really a fairly high degree of central control in the areas that really matter.
It's easy to see why Wikipedia would be very, very threatening to a public school librarian; it's also easy to see why it could suddenly seem very threatening -- or, at least, like a blood-spotted chicken -- to someone who's set himself up to be a mediator for technical information. In the more "elite" echelons of librarianship and technical journalism (visit the reference desk at a good-sized college or public library for examples of the former, or read Dan Gillmor or ... for examples of the latter), the practitioners for the most part have a deep understanding that they are not gate-keepers, but guides. In the margins, that sense seems to get lost. Whether that's primarily due to the general noise of trying to make a living, or due to more petty fear of the future, is hard to tell -- and in any case, they're probably not so often mutually exclusive.
All that said, and as a final word, the free and open creation and maintenance of public information resources by the public that uses them is an inherently good thing, provided the quality of the information remains high. In that sense, Wikipedia could and probably should be a poster child for the proper and proportional application of [American] Libertarian and Anarchist ideas. It's an example of the "direct action" of many participants aggregating into an objectively good result.
One final point: Curiously enough, the quality of information never actually seems to be at issue for Stagnitta and Fasoldt. You'd think that if they're so concerned about reliability of the information, they'd want to actually test the information. But they seem more focused on explaining why it couldn't possibly be reliable, versus testing whether it actually is. Well, I guess I can't expect them to be scientists.
ADDENDUM: I got some of the links wrong, herein. The original story lead was via BoingBoing, and that's where the terms "repugnant", "dangerous", and "outrageous" appeared.
Another clue suggested that Layne was winking to techies: In addition to being Plain Layne's initials, ".pl" is the country domain extension of Polish websites (as in www.poland.pl). Those little enigmas signaled the tipping point, which, by the time it was over, led to a tale of internet intrigue.
The whole shooting match (kottke links much of the relevant stuff, with this excellent summary) is interesting, but to me it's more of a matter of admiration for what it must have taken to pull it off for so long. I recognize the seductiveness of the idea of becoming someone different, and I've known people who did it -- I once dated a woman who confessed to having multiple online personae ranging from a 13 year old sk8r chick to a 40-something (male) BYU psych prof. Hell, I've flirted with it myself.
I probably should be more sympathetic to the community that formed around "her" blog. I've been mulling this term "community" a lot lately, not the least reason being Dave Winer's megalomaniacal invocation of the term every time someone disagrees with him about anything (but that's another story). If I'd invested that much psychic capital, I'd at least want to not find out the person was a fiction. But to the points that many have made, this is really not too different in the details from what folks (other folks, not me) do in MUDs and MMORPGs and the like.
There is one big difference, of course: People thought Layne was real.
Dok, that's actually one of the most accurate sentiments I've yet read about the Plain Layne affair: "The 'people that live in Layne's comment box' found each other because of Layne, and apparently now continue on without her."
And that's a nice way to leave things.
As I say, I like to think I could be that generous.
jonh gets part of the way -- the same part of the way that Jeffersonian-tinged net.libertarians usually get: The tech has the power; the tech will cause changes that can cause changes.
I'll bet that in about five years ... by 2010 ... the use of blogs in the workplace will be widespread. This will require the continued spread of "transparency" in the dynamics of networked organizations, and so will continue to create pressure on core issues like leadership, structure and the processes by which people are managed in an interconnected information-based environment.
Just look at the pressures being faced by Donald Rumsfeld and you'll see an early signal - will leaders be able to lie their way through competitive challenges or major change in organizations?
Powerful ideas, to be sure.
But as usual for the more optimistic heirs of Thomas J, he doesn't close the loop. The Tutor points out an obvious response:
Well, just look at Karl Rove. Yes, they will lie bigger and lie better. And nets will be the Terrorist tool of choice, demonized. Will the guards at GITMO blog when they return home, traumatized? Or will they take Prozac and wave flags? Did the SS write memoires? The story strong enough to extinguish evidence, leaving only the snow, the trees, and one lonely owl against the night. When the truth is ugly, the mind small, bet on lies. Unless our poets get off their postmodern ass. Where is our Mandelstam or Brodsky?
One error here is mistaking transparency for a technology; transparency is merely enabled by the technology.
Transparency can be shut off -- or, more ominously, controlled. Transparency need not be total, or even nearly total, in order to reap its benefits. The real cluetrain will run on rails paid for by people with lots of money or government influence, and those people will be placing restrictions on the passenger manifest: No bolsheviks allowed.
"I'm a spook. I appreciate good work. This was good work..." -- Patrick Lang, in Newsday
"Iranian intelligence has been manipulating the United States through Chalabi by furnishing through his Information Collection Program information to provoke the United States into getting rid of Saddam Hussein," said an intelligence source Friday who was briefed on the Defense Intelligence Agency's conclusions, which were based on a review of thousands of internal documents.
So, let's make sure we understand this: We went to war in Iraq based largely on intelligence provided by an agent of a foreign power.
Conspiracy theories will abound, of course -- we can expect them to proliferate like mushrooms on a wet summer morning. A few I can anticipate:
For the record (and to not be a rhetorical bet-hedger like Christopher Hitchens), I think the story is probably pretty much as Newsday shows it: We got out-smarted by a bunch of guys with darker skins than us who worship in ways we're not comfortable with. Like any good con man, they played us -- well, they played the Rummy-Perle-Wolfie crowd. They recognized that zealots will believe anything that supports their desires.
In a nutshell: We got suckered.
We should forget about trying to keep "our data" private; we should make it public, and take care that it is under our control. "This profile can be the basis for the social networking services," Gerritt summarizes.
But he doesn't do it justice. In truth, for Blue Arnaud, it seems to be as much about commerce as about the humans we're profiling:
This user profile has value for companies. Companies can access this profile under a Personal Commons license in a standardised and legal way. Then they can adapt their interaction with a user accordingly. They might even give discount if an user profile is available, as it makes their live cheaper (less marketing cost). This profile can also be the basis of the various Social Networking Services, which can then focus on their business: networking. A user's wishlist and transaction trail is no longer available to just Amazon, but all book shops.
"So be in control again," Blue Arnaud admonishes, like a good libertarian-tinged digeratum:
A user should make this profile explicit, as some users are already doing in their weblog. Make sure that this profile represents yourself (or one of your personae) or otherwise the world might invent your profile and they might guess wrong. And publish this profile on your own website, weblog, whatever. The user becomes a writer and a publisher. This profile information could be published under some Personal Commons arrangement, i.e. personal information that is available to the world.
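It's easy to picture what such a self-published profile might look like concretely. A minimal sketch only: every field name, the "Personal-Commons-1.0" license string, and the `publish` helper below are hypothetical illustrations of the idea, not any real standard or API.

```python
import json

# A hypothetical self-published user profile, in the spirit of the
# "Personal Commons" idea: the user, not the vendor, owns the data.
# All field names here are illustrative, not drawn from any standard.
profile = {
    "persona": "escoles",                    # one of possibly several personae
    "license": "Personal-Commons-1.0",       # hypothetical license identifier
    "wishlist": ["The Cluetrain Manifesto", "A Deepness in the Sky"],
    "transaction_trail": [
        {"vendor": "any-bookshop", "item": "Snow Crash", "year": 2004},
    ],
}

def publish(profile: dict) -> str:
    """Serialize the profile for posting on one's own website or weblog."""
    return json.dumps(profile, indent=2, sort_keys=True)

print(publish(profile))
```

The point of the exercise is the ownership inversion Blue Arnaud describes: any bookshop, not just Amazon, could read the wishlist, and the user decides what goes into the file.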
Beyond the detail, this is no new idea among the digerati; it's really just another variation on that ripe old technophilic anthropomorphism, "information wants to be free", which seems to get tossed around so glibly by people who utterly fail to understand its consequences. The barriers of the personal are eroding every day; that's a good thing, these folks seem to be saying.
They haven't thought it through.
They seem to believe that there will be some kind of real and fundamental transformation in the nature of the human animal -- forgetting, as always, that the human is animal, and thus evolved in the real and not metaphorical sense of the term. And that barring truly godlike capacity to restructure our very genome, biology, ultimately, will win out.
We forget the timescales of evolution at even greater peril than we forget the lessons of history -- since evolution is, after all, the most fundamental history lesson of all.
They haven't really used their imaginations. It's a pity; their far-flung imaginings elsewhere prove they're capable of it. Much like simplistic advocates of total sexual freedom, they have failed to really look inside themselves and ask what it would feel like to live in such a world.
Or perhaps they're just technofetishists.
There's a core rationalization that's shared among many libertarians, hedonists, and a lot of neo-cons and dogmatic conservatives: Everyone is ultimately responsible for their own fate; ergo, I'm not responsible for anything that "happens to" anyone else, as long as it's something they could have avoided. It even shows up in new-agey chestnuts like "we are where we are, doing what we are meant to be doing..."
So the pricks at Enron aren't responsible for the folks who lost their pensions (they chose to take the risks), the U.S. as a country isn't responsible for the fact that people around the world hate us (they can choose to be for evil or for us), and employers aren't responsible for the fact that employees get paid less when their wages are cut (they're "free" to go find a better-paying job, if it be the will of The Market [praise be to The Market, may peace be upon It]).
By extension, the con man isn't responsible for the fact that people lose money in a con. I've backed many libertarians into that corner and they've all gone in smiling like good little sophists -- which is to say that yes, they really do believe that, just like they really believe that only stupid people get conned. (Look at a con-man's victims: It's the vic's vanity, their pride in their own intelligence, that fuels the really big cons. I know. I've been part of one.)
Personal Responsibility dogma (like most or all dogma) ends up being associated with a lot of collateral damage. Oh, well: Gotta break a few eggs to make a perfect world.
The same goes for sex, and for the rampant hypersexualization of modern society (a case I'd prefer to make elsewhere, 'cuz it gets long) -- and particularly for a certain type of hedonism which holds that if everyone were "truly honest", there'd be no relationship problems: that people should be free to sleep with whomever they want.
It's never happened to me. But I can read between the lines well enough to know that the allegedly-contented parties in these alternative relationships are not all on the same page. That's a fiction, a delusion, that they preserve to keep the ship afloat.
"Open" relationships and marriages and their collateral damage end up like "taking one for the team" wherever you are. The self-delusions that participants perpetuate to keep the ship afloat end up looking like sports team unity, end up looking like military unit cohesiveness, end up looking like unity in a congregation, end up looking like corporate unity, end up looking like cosa nostra....
In the end, the basic principle we're left with isn't "people are responsible for themselves" but "people use other people." It just looks a lot less noble when you put it that way.