"Freedom's just another word for nothin' left to lose."
Why does anybody still think that ridicule is a useful tool for achieving positive ends? And why is anyone still willing to accept the idea that people who claim to use ridicule for positive ends are doing anything other than bullying people to make themselves feel superior?
Design is a religion. Let's just be clear about that. It has so many of the salient characteristics of religion that I find it difficult to understand why people become so offended at the notion that what they're preaching is not objective truth, it's faith. After all, they've expended a great deal of effort, karma and usually money to get their design credentials, and then they have to live in a world that Doesn't Take Design Seriously. (Much like the world doesn't Love Poetry. But that's another subject for another time.)
I feel for them, but I can't quite reach them, as my dad used to say. Here's a hint: Preachers go to grad school, too. There's a difference between VoTech and science, and unless you're formulating and falsifying hypotheses, design students, you're basically in a jumped-up VoTech program. Just like preachers.
Design fascists like the Design Police start very quickly to sound like folks who see oppression of Christians in all aspects of American daily life. What they're really seeing is that their particular religious biases are not honored by everyone who doesn't share them. Designers see stuff they don't like and confuse it with "bad design" in much the same way that extreme religionists see attitudes and behaviors they don't like and confuse them with immorality.
Truly hard-core design fetishists have a wonderful and seemingly limitless capacity for arrogance. They can say stuff like "Comic Sans is Evil", can insist that proper kerning and ligatures are crucial to truly understanding the meaning of a text, and basically imply that the rest of the world is populated with design-illiterate idiots who are destroying civilization through sloth and ignorance, all with a straight face and all without realizing that they're basically the design-equivalent of Ann Coulter: endlessly blathering that people who don't "get" them just have no sense of humor. (And bad taste, of course, to boot. Because Helvetica on pink bubblegum is the height of design, doncha know. Wait, I forgot: Intention is what matters; they meant it to be ugly, it's a statement....)
Take the Design Police ("Bring bad design to justice"). (Please take them.) They're a couple of design students (ah, they're still in DESIGN SCHOOL, which goes a long way toward explaining their sophomoric arrogance). I got a link to this lovely little bit of high-concept hideousness ("it's ugly on purpose! that makes it clever!") from a designer in my company. She's easily offended and basically a nice person, not given to deep thought about the fact that her attitude basically implies that everyone else is an idiot, so I refrained from pointing out to her that this is actually pretty fucking offensive elitist bullshit. She works in advertising. She doesn't realize or doesn't accept that design is not as important as designers like to think it is, and why should she? Why would she? It would have a negative impact on her ability to do her work. Heaven forbid someone should point out that the high-concept design choice may not communicate as effectively as a simpler, more message-oriented choice.
Many designers seem to have been drilled in the facile mantra that "the medium is the message", without any real analysis of what that means. So they take a basically insightful concept like Emotional Design and turn it into a justification for the simple subordination of understanding to gut feeling. Most designers are what the President would call "libruhls", but the attitude is the same as his: The gut is king, the emotions rule over all, what I feel is much more important than anything you or I might know, and that's as it should be. That's not, of course, what Don Norman was arguing when he wrote Turn Signals Are The Facial Expressions Of Automobiles, and it's not what guys like Tognazzini profess to mean when they use the term "emotional design." But I've worked and interacted with a lot of designers, and it seems pretty clear to me that in the current design zeitgeist -- at least on the web -- emotional design means "to look good is much more important than to be good." Appearance becomes its own reality. A very neo-conservative attitude.
I've got no illusions about changing the viewpoint of designers any more than I have about changing the viewpoints of militant religionists or militant atheists. They'll believe what they believe. I would really just prefer that they stop wasting my attention and lots of people's energy and money with their bullying (pomo) blather about the importance of clearly marginal crap like the "unimaginative" choice of Helvetica.
The Palm Foleo is catching a lot of heat. Some of it is well deserved. (Just what the hell is this device supposed to "assist" a smartphone with? Shouldn't it be the other way around?) But most of it is a feeding-frenzy pile-on by people who got burned in the first try at thin clients, ten years ago.
Which is to say that AFAIAC most of the most strident critics of the Foleo don't want to admit that they've gotten the point -- they pretend not to understand what the device really is, which is plainly and simply a thin client for web 2.0 applications. But it's a thin client that could actually work: It's got a real web browser to access the real web applications that have sprung up in the interim via the near-ubiquitous broadband that we weren't even close to having the last time around.
Sour grapes like this prevent people from seeing the two real reasons that it will fail: It's not fast enough, and it's being sold by idiots. Really, again, that whole 'smartphone assistant' thing: The Foleo should be (and will more likely be) "assisting" the phone, rather than vice versa. It's the thing with the network connectivity, not the phone. It's the thing with the USB connection, not the phone.
Semi-surprisingly, Jakob Nielsen has joined in the fray with a decidedly mainstream take on the specs:
Much too big and fat for a mobile device. At that size, you might as well buy a small laptop, which would run all the software you are already used to. For example Sony's Vaio TZ90 is 10% lighter and thinner...
... and 150% more expensive than the Foleo. Though it does have similar battery life. But that's still kind of a pathetic excuse for a pile-on. Criticize it for something real, why don't you, like, say, what you could do with it, instead of demanding that the device embrace all the weaknesses it's clearly designed to overcome:
- weight: 2.5 pounds (1.1 kg)
- thickness: 1 inch (2.4 cm)
- size: 11x6 inches (28x15 cm) - estimated from photo
Much too big and fat for a mobile device. At that size, you might as well buy a small laptop, which would run all the software you are already used to. For example Sony's Vaio TZ90 is 10% lighter and thinner than the Foleo.
A mobile information appliance should be thinner than 1 cm (0.4 in), weigh less than 1 pound (.45 kg), and be about 6x4 inches (15x10 cm) big. Something with these specs makes sense because it would fit the ecological niche between the laptop and the phone.
So let's get this straight: A mobile device should be too small to easily read on, too small to type on, but still too big to fit easily in slacks pockets? Where's the text entry? Where's the user interface? Seems like a rather strange set of requirements. Let's restate them so they make more sense, in functional terms. A mobile device must:
Conspicuously missing, but important:
In addition, we can make some other generalizations about what a device in the Foleo's class should do:
So the specs that Nielsen (and so many others) have seen as so ripe for criticism are not at all the ones that are important. The ones that are important, and the ones that will end up being technically critical for this device, are:
So at a technical level, I'm actually positive it fails on only one point, and that's run-time. Nielsen does raise a valid point, though:
Palm seems ashamed of its own specs since they are nowhere to be found on the product pages.
This is a blatant violation of all guidelines for e-commerce. I can't believe even the worst designer would suggest making a site where you can't find out how big the product is (especially for a mobile device). It must be a deliberate decision to hide the facts.
I think he's actually right about that. I think the product managers and marketers at Palm were so gun-shy about identifying Foleo as a thin client that they invented this whole "smart-phone companion" nonsense to cover it up. They basically threw the game -- decided the product was a bust before they even started, and concocted a marketing plan that, while it couldn't possibly succeed, at least had good graphics.
But come on -- a "smart phone accessory" that's ten times the size of the phone? Idiots.
Courtesy of the Peoria Chronicle's blog, here are links to a lecture on "New Urbanism" given by Andres Duany in Houston. It's on YouTube in 9 parts of 10 minutes each, and the first several have been posted on the Peoria Chronicle's blog. I'll be working my way through them bite by bite, as I hardly have 90 minutes to spare for anything that requires both visual and auditory attention, simultaneously. I may yet find something objectionable in it, but the basic presentation is quicker than reading Death and Life of Great American Cities.
One comment from the Chronicle blog is interesting:
“New urbanism” is just a facade being used by developers to pack as many people into the smallest footprint as possible, to increase their profits.
In San Diego, older neighborhoods are being transformed into jam packed, noisy, traffic infested cesspools, by billionaires who live on 10 acre estates in Rancho Santa Fe (SD’s Bel Aire).
The 40 year old, 10 unit, low income apt building next to me was converted to $400k “condos” last year. It’s been pure hell, with 15 rude, loudmouthed, morons moving in, several of whom are already about to default on their loans. Several units are now being rented, at 3 times the monthly rent as before. Who wins? A handful of guys sitting around dreaming up their next scheme.
That he misses the point of New Urbanism completely isn't the interesting part -- it's that he's so willing to conflate New Urbanism with a newspeak co-optation of its ideals. He's not necessarily wrong to do so. Like many idealistic movements, it has some foolishness and romanticism baked into it and is vulnerable to abuse. There are plenty of people who jump into idealistic movements with a partial understanding of the situation and then end up taking them in whole new, highly rationalized directions.
That's one of my objections to "emotional design": When you choose, as Don Norman, Bruce Tognazzini et al seem to have chosen, to make your evaluation of a design's quality hinge upon its gut, emotional appeal, you're basically opening up the door to tossing out real design and replacing it with pandering. Machines become good if they look cool. By that metric, the AMC Javelin would be one of the coolest, hottest cars ever manufactured. The nigh-indisputable fact that it was a piece of crap would be irrelevant: It had great "emotional design."
Similarly, the fact that PowerBooks are screwed together using 36 (or more) tiny screws of five to six different sizes and head-types, but also force-fit using spring clips, becomes irrelevant: The design feels great, looks great. Never mind that it could cost less to manufacture, cost less to repair and upgrade, and be just as solid, just as sound, if it were designed better. It's still great "emotional design."
Pop quiz -- does this passage describe the present, or the future?
You sit immersed in a wireless cloud, navigating your way through the folders on your hard drive. It is a floating forest of branching tree directories anchored to a root folder buried somewhere deep inside the machine. You are listening to streaming audio whilst a torrent of more music flows into your MP3 player. While it downloads, your system is organising your music library into fields within a database and generating a feed direct to your homepage. Via your Flock browser you twitter to your friends about the latest item on the newsriver then post a few paragraphs to your blog, where they join the complex trail of links and paths going in and out of your site. While you surf, it's easy to forget that beneath you lies a creepy invisible underworld populated by spiders, bugs, crawlers, worms, and microscopic viruses, whilst above ground your transactions are hungrily devoured by sheep that shit grass before being aggregated into the Long Tail. That data trail you're leaving behind stimulates the synapses of the global brain, which is in turn pulled towards the gravitational core of the Web 2.0 solar system...
Answer: It's the present, of course.
Some design-geek at Frog Design thinks that iPods are "universally" described as "clean" because the iPod "references bathroom materials." It's kind of a silly little think-piece, not least in that it makes a point and then throws out a lot of unrelated argument in an attempt to hide the fact that it doesn't really make much of a case for what might otherwise be an interesting assertion. But that's not what I'm writing about.
A comment in-thread led me to this insight: Being a "Mac Person" is a little like being a Mason.
Which is to say, to be a "Mac Person" is to feel that you belong to something, while at the same time feeling yourself to be different from other (lesser) people. If you belong to a secret society of some kind, you feel both privileged to belong, and empowered by your connection to that society.
Membership in the secret society comes with a cost: Dues, expenses for robes or other paraphernalia (as Stetson Kennedy wrote in his book about infiltrating the Klan), and any opportunity cost associated with providing expected assistance to other members. Any extra costs are obviously assumed to be at least offset by benefits, by "believers" in the secret society. Those costs are their "dues"; they're what they pay for the privilege of being made special by the organization.
Committing to the Apple Way has similar costs: Software is more expensive and less plentiful; hardware is often proprietary (as with iPod peripherals), or hardware options more limited (if you don't believe it, try to buy a webcam off the shelf at a mainstream store); software conventions are different, and require retraining. Apple users (rationally) presume there to be offsetting benefits, typically cast in terms of usability. My own experience using and supporting Macs tells me that those benefits are illusory, but that's beside the point: Mac users assume them to exist, and act on that assumption.
But they also gain a sense of superiority from it, and they get that reinforced every time they pay more for something, every time they have a document interchange problem with a Windows-using compatriot, every time they have a problem figuring out what to do when they sit down at a non-Mac microcomputer.
The extra cost is understood as an investment. They are paying dues. Being a Mac Person is, in that way, a little like being a Mason. Or at least, a little like what we might imagine it's like to be a Mason, since most of us have never actually met one.
(I just posted a version of the following over at Drupal.org, in their "Drupal Core" forum. I doubt it will make much of an impact, but I had to try...)
I propose that there is a problem with the way program-function URLs are written in Drupal that causes Drupal to be a disproportionate target for trackback and comment spammers.
The problem with comment and trackback spam in Drupal is this: It's too easy to guess the URL for comments and trackbacks.
In Drupal, the link for a node has the form "/node/x", where x is the node ID. In fact, you can formulate a lot of Drupal URLs that way; for example, to send a trackback to x, the URI would be "/trackback/x"; to post a comment to x, it would be "/node/comment/reply/x". So you can see that it would be a trivially easy task to write a script that just walked the node table from top to bottom, trying to post comments.
Which is pretty much what spammers do to my site: They fire up a 'bot to walk my node tree, looking for nodes that are open to comment or accepting trackbacks. I have some evidence that it's different groups of spammers trying to do each thing -- one group seems to be switching IPs after a small number of attempts, and the other tends to use the same IP until I block it, and then takes a day or so to begin again -- but that hardly matters. What does matter is that computational horsepower and network bandwidth cost these guys so little that they don't even bother to stop trying after six or seven hundred failures -- they just keep on going, like the god damned energizer bunny. For the first sixteen days of August this year, I got well over 100,000 page views, of which over 95% were my 404 error page. The "not found" URL in over 90% of those cases was some variant on a standard Drupal trackback or comment-posting URL.
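The economics are easy to see in code. Here's a minimal sketch of the kind of 'bot that walks a node table; the hostname and ID range are hypothetical, and a real 'bot would POST spam to each URL rather than just collect them:

```python
# Sketch of a spammer's enumeration 'bot. Because Drupal node IDs are
# sequential integers, every candidate comment and trackback URL on a
# site can be generated without ever fetching a page. (example.com and
# the ID range are hypothetical.)
base = "http://example.com"
patterns = ("/node/comment/reply/{}", "/trackback/{}")

targets = []
for nid in range(1, 1001):          # walk the node table top to bottom
    for pattern in patterns:
        # a real 'bot would POST here; we just enumerate the targets
        targets.append(base + pattern.format(nid))

print(len(targets))  # 2000 candidate URLs, generated for free
```

That's the whole attack: no discovery step, no page parsing, just a counter.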
One way to address this would be to use something other than a sequential integer as the node ID. This is effectively what happens with tools like Movable Type and Wordform/WordPress, because they use real words to form the path elements in their URIs -- for example, /archives/2005/07/05/wordform-metadata-for-wordpress/, which links to an article on Shelley Powers's site. Whether those real words correspond to real directories or not is kind of immaterial; the important point is that they're impractically difficult to crank out iteratively with a simple scripted 'bot. Having to discover the links would probably increase the 'bot's time per transaction by a factor of five or six. Better to focus on vulnerable tools, like Drupal.
But the solution doesn't need to be that literal. What if, instead of a sequential integer, Drupal assigned a Unix timestamp value as a node ID? That would introduce an order of complexity to the node naming scheme that isn't quite as dramatic as that found in MT or WordPress, but is still much, much greater than what we've got now. Unless you post at a ridiculous frequency, it would guarantee unique node IDs. And all at little cost in human readability (since I don't see any evidence that humans address articles or taxonomy terms by ID number, anyway).
Some people will immediately respond that this is "security through obscurity", and that it's therefore bad. I'd argue that they're wrong on two counts: First, it's not security through obscurity so much as security through economic disincentive; second, it's not bad, because even knowing exactly how it works doesn't help you very much to game it. The problem with security through obscurity, see, is that it's gameable. Once you know that the path to the images directory is "/roger/jessica/rabbit/", then you can get the images whenever you want; even if you know that the URL to post a comment is "/node/timestamp/reply/comment/", you're not a heck of a lot closer to getting a valid trackback URL than you were before you knew that.
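A quick back-of-envelope calculation shows why the timestamp scheme changes the economics even though the scheme itself is public knowledge. The numbers here (1,000 nodes posted over one year) are illustrative assumptions, not measurements:

```python
# Illustrative comparison of the guessing economics for sequential
# versus timestamp node IDs. Assumes 1,000 nodes posted over one year.
nodes = 1000
seconds_per_year = 365 * 24 * 60 * 60   # ~31.5 million possible timestamp IDs

# Sequential IDs: every guess from 1..1000 lands on a real node.
sequential_hit_rate = nodes / nodes      # 1.0 -- 100% of guesses hit

# Timestamp IDs: a blind sweep has to cover a year's worth of seconds.
timestamp_hit_rate = nodes / seconds_per_year

print(round(1 / timestamp_hit_rate))     # average guesses per valid node
```

Knowing the algorithm doesn't help the 'bot much; it still has to pay for tens of thousands of requests per hit instead of one, which is exactly the economic disincentive argued for above.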
Big targets mean big distractions.
I'm sitting here listening to Whadya Know on the radio while I write. While I do this, I've got a couple of applications and part of my desktop visible on screen, and a cluttery clumsy app launching device pinned to the left edge of my screen. (I move my Dashboard to the left edge because I value the vertical screen space too much. More vertical space means more text on screen, which means more chance at understanding what the hell I'm looking at. Which is to the point, believe it or not, but I'm not going to go there right now.)
And I'm getting distracted by it all. Part of it is that I'm over 40 and wasn't "raised to multi-task" (as so many people raised in the age of multi-tasking OSs and multiple-media-streams seem to think they have been). But part of the problem is all this extraneous visual noise -- stuff I don't need to see right now, like the "drawer" to the left of my application window that lets me see the subject lines of previous journal entries and, more to the point, blocks out a bunch of other distracting stuff in the windows behind this one. Obviously, I could close the "drawer" and widen my editing window to cover them, but then I'd have a line-length that would be difficult to eye-track.
Anyway, the point of this (man, I am getting distracted) is that having all this clutter on my screen distracts me. Presumably that's why MacJournal (like a lot of "journaling" applications) has a full-screen mode that lets me shut out everything else if I so choose.
Fitts's Law is increasingly invoked these days to justify a lot of design decisions, like pinning a menu bar to the top of the screen for all applications, or putting "hot zones" in the corners of the screen. It's invoked as a rationalization for putting web page navigation at the edge of the page (and hence, presumably, at the edge of a window).
Interestingly, it seldom gets used as a rationalization for making navigation large.
Fitts's Law reduces to a fairly simple principle: The time it takes to hit a target with a mouse pointer is a function of the size of the target and its distance from the starting point. That is, it's quicker and easier to hit a big, nearby target with a mouse pointer than a small, distant one.
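In its usual (Shannon) formulation, the law predicts movement time from distance and width. A sketch; the constants a and b are empirically fitted and device-specific, so the values here are placeholders for illustration:

```python
import math

def fitts_movement_time(distance, width, a=0.0, b=0.1):
    """Predicted target-acquisition time (Shannon formulation).
    a and b are device-specific constants normally fitted from
    experiment; these defaults are illustrative placeholders."""
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# Same distance, bigger target -> lower index of difficulty -> faster:
small_target = fitts_movement_time(distance=400, width=10)
large_target = fitts_movement_time(distance=400, width=40)
```

Note that width appears right there in the denominator, which is the point made below: the law argues for bigger targets at least as directly as it argues for edge-pinned ones.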
Fitts's Law is also often cited as demonstrating that it's easier to hit targets that are at a constrained edge or corner; that's as valid a principle as Fitts's Law, but it isn't really implied by it. So Fitts's Law gets cited to justify things like parking the active menu bar at a screen edge. It's easy to get to the edge of a constrained screen: Just bang your mouse or track-pad finger or pointing-stick finger over to the screen edge and it will stop -- it won't go any farther. Bang in the general direction of the corner, and the cursor will behave like water and "flow" into the corner, so the corners become the easiest things to hit on the screen. Tognazzini, among others, uncritically and inaccurately cites this as an example of Fitts's Law in action. I don't know who came up with this conflation first, but Tog is the most vocal exponent of it that I'm aware of, so I'll probably start referring to it as "Tognazzini's Corollary."
(Aside: Obviously this only holds for constrained edges, as on single-screen systems. On two-screen systems, or on systems with virtualized desktop scrolling, it's a little more complex. Less obviously, this principle is more or less meaningless on systems that are actuated with directly-positioned devices like touch-screens, and it requires that people engage in some selective modification of their spatial metaphors. But that's another topic for another time.)
It's interesting to me that Fitts's Law isn't applied to the size of buttons, because that's its most obvious implication: if you want to make an element easier to hit, the most obvious thing to do is make it bigger. Yet I don't recall ever seeing it invoked as an argument for making screen elements larger, or discussed when people call for making screen elements smaller. Which makes me suspect even more that Fitts's Law is often more a matter of Fittishization than Fittizing.
Because the reason people (read: designers) don't want to make things bigger is obvious: It doesn't look good. Things look better (or at least, cooler) when they're small. That's why designers who'll lecture you endlessly about the virtues of design usability have websites with tiny text that has poor intensity contrast with its background. So Fittism really tends to serve more as an all-purpose way of rationalizing design decisions than as a way of making pages or applications more usable.
In any case, Fitts's Law isn't really nearly as important as its advocates make it out to be. The current rage for Fittism ("Fittishism"?) over-focuses on the motor experience of navigation and de-emphasizes the cognitive aspects. The reason for this is that Fitts's Law can be very easily validated on a motor-interaction level; the cognitive aspects of the problem tend to get ignored.
And that's not even considering the effect of edge detection in the visual field. This is not strictly a motor issue, and it's not really a cognitive issue -- though it has cognitive aspects.
For example, if the menu bar is always parked at the edge of the screen -- let's say at the top edge -- then it becomes very important that users be able to have confidence that they're using the right menu. If menus are affixed to a window, then you know that the menu applies to the application to which that window belongs. If menus are affixed to the top of the screen, you're required to do cognitive processing to figure out which application you've got in the "foreground".
(Another aside: That's fairly difficult to do on a Macintosh, which, ever since the advent of OS X and Aqua, has had very poor visual cues to indicate which application window is active. Title bars change shade and texture a little; text-color in the title bar changes intensity a little; the name of the application is appended to the beginning of the menu bar, in the space that people visually edit out of their mental map of the page in order to limit distractions. In other windowing environments -- Windows and KDE spring to mind -- it's possible to configure some pretty dramatic visual cues as to which windows are in focus, even if you ignore the fact that the menu you need is normally pinned to the damn window frame. It's trivially easy on an OS X system to start using the menu without realizing that you're looking at the menu for the wrong application. I don't do it much myself anymore, but I see people do it all the time.)
But I'm getting off point, here: I started this to talk about distractions, and I keep getting distracted....
As grim and depressing as I can find the automation of spam and the proliferation of bot networks, I like to think I have some perspective on the matter. For example, I recognize that there's a real danger of incredible, profound disruption from bot networks like the one that's driving the spread of the Sober.x worm[s].
But that disruption won't come from "hacking" -- most particularly, it won't come from using the bot networks to crack encryption. As usual, Bruce Schneier has cut through a lot of the nonsense that passes for wisdom on the subject.
The very idea that the main threat from bot networks is cracking is ridiculous -- it displays a basic misunderstanding not only of how end-to-end security systems are designed, but also some very peculiar and extremely fuzzy thinking about how to defeat those systems. You defeat the systems by gaming them, not by cracking encryption. Sure, you may want to crack encryption at some point to get through some particular locked door -- but the hard part is finding that door in the first place. And more often than not, if you're clever and you know how to game systems, you'll find that you don't need to crack encryption: You can get someone to just give you the key, or even (figuratively) open the door wide and usher you through.
Of course, it is possible, and even likely, that computers will be or even are as I write this being used to game security systems more effectively than humans can. Some clever bloke somewhere might even be writing bots that crack systems. But bot networks -- "herds" of dumb, zombified PCs, even if harnessed into a computational grid -- are more or less irrelevant to that.
Heuristics like that aren't helped by brute force. Anyone who calls himself a security expert ought to know that.
The greatest threat from bot-driven disruption is not hacking or cracking, but denial of service. The person or persons controlling the Sober zombie network alone could, should they so choose, have a significant impact on the operation of the open, civilian internet. It would be easy. It would be pointless, but it would be easy.
But again: it wouldn't be the end of civilization. We'd get by. That's what we do.
And that's the ultimate lesson of security: Unless the system is severely broken (as in Iraq after the fall of Saddam or in Rwanda in '96), people will generally act to preserve structures of civilization (as we see again and again after natural disasters throughout the world).
Speaking of Oklahoma City, my old Okie friend Kelley offered his thoughts on the memorial:
"I still say they should have planted 168 redbuds - a veritable forest that would be blooming now. What a sight that would be, an affirmation of life, a colorful display that cannot be equaled. Instead, they have those silly chairs. Stupid. Modern art gone bad. Yes, they were bureaucrats (mostly) but I think the chair is simplistic and mundane. After all, the noble, tough redbud is the state tree - they're hard to kill and they deal with adversity in a manner I think transcends their environs. Oh yeah, they're the state tree. Duh."
As I sit here, I have a vision of hundreds of ghosts sitting in those cold stone chairs for eternity.... Bureaucrat or no, I find it hard to imagine they wouldn't rather be sitting in a Redbud grove.
I responded that subtlety has become a lost art, accepted only from people (like, say, Roy Blount) who can pretend they're being obvious; and that real local character is passé, like the "southernness" of Atlanta or Houston.
But we've become a monumental culture. We might once have planted trees and let the glory be to God or Nature, and had faith that the tree would one day grow large. But that kind of sentiment died off in the Dutch Elm plague or was suffocated by Cold War techno-optimism. Now, it's no good if it's not human-made. (Ayn Rand smirks from her grave.)
Here in NY, I think the appropriate plant would be blackberry bushes. Let one survive, and you're buried in them forever. My friend Lynne planted blackberries around her fence for some reason a few years back, and now the whole area is a wasp-loud glade all summer long.
Up in Maine, it would be wild roses. Those things grow *as* *weeds* in the cracks between the big wave-smoothed boulders right at the ocean's edge. Even the salt grass has a hard time there.
CORRECTION: I'm chagrined to be reminded that Lynne's bushes are raspberries, not blackberries. But either will take over in the rockiest, most clay-bound soil, given half a chance. And I'll stand by my Yeats allusion, even if it doesn't represent a literal truth, because I like the way it sounds...
For about 12 hours, I've been getting hit heavily by texas-holdem spam. This, coming just two days after "texas-holdem.rohkalby.net" "spam-whacked" (to coin a phrase) its way to a high position in the Daypop Top 40, one of the key indicators of memetic activity in the blogosphere. It didn't stay there more than a day, but it was there long enough for my 12-hour aggregation cycle on Daypop Top 40 to pick it up.
This wave of comment spam here (all caught by my filter, after the initial training) is conventional comment spam. But my hunch is that the "rohkalby.net" Daypop-whack was done with trackback. I just can't imagine it happening rapidly enough and in a widespread enough form to do so without the assistance of trackback auto-discovery.
BTW, I haven't found anybody actually mentioning this incident, which is very interesting to me. It means, I think, that they either didn't notice, didn't understand the importance, or didn't want to admit the importance. Which is huge, because this would demonstrate two things -- one very important, the other merely interesting:
We can say safely that SixApart are responsible, by the way, because they initially invented trackback as a manual means of "giving juice" to someone else, and then failed to understand that it needed to stay manual. It was intended to be initiated by human action, not automated. But then they proceeded to automate it; that made trackback geometrically more attractive as a target for spam: It meant that spammers could potentially place themselves into the various automatically-compiled "top"-lists in a completely automated fashion (i.e., at cost to the spammer approaching zero). And with no legal consequences, to boot: They couldn't be prosecuted under email laws, because it wasn't email; they couldn't be charged with theft of service or hacking because -- and this is key -- the spamming was being carried out as a normal designed feature of the "exploited" systems, using their resources.
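Auto-discovery is mechanically trivial, which is the whole problem: a page advertises its own ping URL in an embedded chunk of RDF, and a harvester just fetches pages and pattern-matches the URL out. A sketch; the HTML fragment is illustrative, not taken from a real site:

```python
import re

# Trackback auto-discovery: pages embed RDF naming their own ping URL.
# A harvester needs nothing more than a fetch-and-regex loop to find it,
# which is what made automated trackback so cheap a spam target.
# (The page fragment below is illustrative.)
sample_page = """
<!-- <rdf:RDF>
  <rdf:Description rdf:about="http://example.com/post/42"
    trackback:ping="http://example.com/trackback/42" />
</rdf:RDF> -->
"""

def discover_ping_url(html):
    """Extract the advertised trackback ping URL, or None."""
    m = re.search(r'trackback:ping="([^"]+)"', html)
    return m.group(1) if m else None

print(discover_ping_url(sample_page))  # http://example.com/trackback/42
```

Once that loop exists, pinging every discovered URL is just one more HTTP POST per page, at a cost to the spammer approaching zero.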
The great mystery isn't that it happened, but that it took so long to happen.
Shelley et al.'s "tagback" concept might provide a "fix" for this, of a very limited sort, but it still leaves us without trackback. Trackback was a very useful concept; it allowed people to create implicit webs of interest, one connection at a time and without central planning, and -- and this is really important -- without the mediation of a third party like Google or Technorati. And we all know that spammers will find a way to parasitize themselves onto tagback, as well.
And anyway, reliance on third parties for integration gives them power they should not be allowed to have. It's a bad design principle. Trackback, pre-autodiscovery, was a good simple piece of enabling technology. But it was mis-applied, quickly, with the encouragement of people who should have known better. And now it will be forgotten. Which is really, deeply stupid, when instead it could simply be re-invented without auto-discovery.
I'm not sure what's radical about the iPod Shuffle. OK, I'm lying, I know what's "radical" about it, and that's nothing: It has exactly two things that haven't appeared in previous flash-based players, and lacks a lot of things that have. Even in those two things, it breaks no new ground, since they're both attributes of the leading high-capacity product: It comes from Apple, and it integrates with iTunes. ("The Future Is Random"?!)
Those two little non-revolutionary things (Being Apple and Being [Of] iTunes) are pretty important. And the impact of the Shuffle doesn't lie within whether it's actually new or not, or even whether it's actually any good. The impact lies in how it serves to expand the iPod halo.
The random shuffle feature is nothing new -- I can do that on my iRiver. Neither is the integrated USB A-plug (I own a Virgin player, currently on permanent loan to a friend, that has a better-designed implementation of that). Recharging off the USB bus? It's been done. And though I don't troll the flash-player market, I'd be surprised to find it hadn't already all been done in the same player.
Even the "radical" step of "eliminat[ing] the user interface altogether" [sic] has been done before: There have been plenty of flash-based players that eschewed a song title display. Though usually, players that do that are actually cheaper than their competitors, instead of more expensive. But I digress.
As for what it lacks: An FM tuner, and a display. FM tuners have become big differentiators in the flash-player market in recent years; it happened because the circuitry to make them suddenly became really cheap, and not as such because of demand -- a matter of capacity converging with sub rosa desire, as it were. But I digress, again: Apple apparently doesn't think that matters, and I think I know why. They're planning to horn in on the ground floor with Satellite Radio integration into the Digital Media Center. (Mark this, that's their real next target. The micro-workstation market will expand under its own steam for a while; the next strategic play is getting XM Radio into the iPod Halo.) How they accomplish this is yet to be determined; as iTunes grows, they're increasingly integrated into the DRM fold, and it's a mistake to think that "Rip, Mix, Burn" was any more than a marketing strategy.
I can virtually guarantee that I will never own an iPod Shuffle. But it's important. And by all the accounts I've read so far, it was done contrary to Jobs's better judgement. But again, I digress....
[sic]: Memo to David Pogue at the NYT: Buttons are a user interface.
Whether or not U. S. Army doctors contributed to abusive interrogations at military detention centers is the subject of recent Washington Post reporting by Joe Stephens. Regarding an article in the New England Journal of Medicine:
The article says that David N. Tornberg, deputy assistant secretary of defense for clinical and program policy, confirmed in an interview that interrogation units at Abu Ghraib and Guantanamo Bay had access to detainee medical records. In fact, interrogators "couldn't conduct their job" without such access, Tornberg is quoted as saying.
He and other military officials argue in the article that when a doctor participates in interrogation, he is acting as a combatant, so the Hippocratic oath does not apply.
Tornberg is on leave and was unavailable to comment yesterday. Winkenwerder said that he believes Tornberg's comments were misrepresented in the article, and that they did not represent the Defense Department's views.
My father took the Hippocratic oath and upheld it during his Army service to our country in World War II; since then, there is also international law to contend with. Luckily, my father was not castigated for saving lives.
I would hope, if the facts ever should be sorted out and not swept under the rug regarding any medical personnel involvement in U. S. military prison tortures, that international law would be upheld. Of course, the likelihood of that happening is anybody's guess.
Adam Kalsey has had the temerity to criticize the Kewl Kidz browser, Firefox, and thinks that maybe, just maybe, aggressively marketing it prior to "1.x" isn't such a good idea: "Aggressively marketing Firefox before it is a completely stable product is dangerous. You're running the risk of having people trying it out and being put off by the bugs, never again to return." [Adam Kalsey, "Why I don't recommend Firefox"]
I agree; in addition, I wonder again why Firefox is being so aggressively marketed in preference to the more stable, more usable, more feature-rich Mozilla. Wait -- I know the answer to that already: It's basically because Firefox is COOL, and Mozilla is NOT COOL. There really are no serious technical reasons -- it's all a matter of how to best herd the cats.
The history on this is worth looking at. Mozilla and Firefox forked sometime in '00, when Firefox was still "Phoenix". The split happened because a small group of developers thought that some of the approaches used in the Mozilla core were wrong-headed, and they thought everything had to be rebuilt from the ground up to improve performance. They were particularly obsessed with load-time and rendering speed.
Fast forward to 2004: Mozilla still loads faster (though it's slightly -- slightly -- bigger), and renders pages faster. Mozilla core has been modified to have more or less all the customization hooks that Firefox has. Mozilla is still significantly more usable out of the box. But those kooky Firefox kids have their own bazaar, now. Oh, and, yeh, they finally did implement extension management.
In a really objective analysis, there's no strong argument for introducing Firefox to novice browsers, and as Adam points out, lots of reasons not to. There are also very few sound technical arguments for basing future product development on the Firefox paradigm of "distribute minimal, expect the user to do all the work." The Firefox kidz want their own kewl browser? Fine -- let them go build it, like the Camino team did. Don't turn their misbegotten hacker-bait into the core product. That's a sure way to fail.
Nevertheless, it's abundantly clear at this point that Firefox is the way of the future with regard to the Mozilla Project's browser product, and it's also abundantly clear why: The kidz wouldn't play unless they got to do things their way, and the project needed them.
Here's how I want to work: I want to be able to just note stuff down, wherever I happen to be at that moment, and have it get stored and automatically categorized, and be available for publication wherever I want from wherever I am, whenever I want to. This has been an achievable dream for nearly ten years -- people are constantly hacking together systems to do just that. But we're stuck in a technologically-determined rut that keeps these solutions from being developed.
I've been thinking about these things a lot, and decided it was time that I wrote it all out, to organize my own ideas as much as anything else. So here's part one, where I try to unpack what it is that I'm really asking for, and start to get a sense for what's not working now, and why. So, as a separate story (because they're long, and would push everything down the page and out of sight), here's how I want to work...
[continued from blog entry]
Here's how I want to work: I want to be able to just note stuff down (in my ideal world, wherever I am at that moment) and have it get stored and automatically categorized, organized -- by timestamp, at least, but ideally also in some kind of navigable taxonomy
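The capture-and-categorize workflow described above can be sketched in a few lines. This is a toy illustration, not anyone's actual system: the taxonomy, its keywords, and the matching rule are all hypothetical stand-ins for whatever real categorizer such a tool would use.

```python
from datetime import datetime, timezone

# Hypothetical taxonomy: category -> trigger keywords. A real system
# would use something smarter than bag-of-words matching.
TAXONOMY = {
    "tech": {"browser", "rss", "trackback", "wiki"},
    "music": {"ipod", "song", "album"},
}

def categorize(text):
    """Assign a note to every taxonomy bucket whose keywords it mentions."""
    words = set(text.lower().split())
    return sorted(cat for cat, keys in TAXONOMY.items() if words & keys)

def capture(text, store):
    """Timestamp a note, categorize it, and file it -- all automatically."""
    note = {"stamp": datetime.now(timezone.utc).isoformat(),
            "text": text,
            "categories": categorize(text) or ["uncategorized"]}
    store.append(note)
    return note

notes = []
note = capture("New iPod song shuffle thoughts", notes)
```

The point of the sketch is the shape of the workflow: the writer supplies only the text; the timestamp and the taxonomy placement come for free, wherever the capture happens.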
Dave Winer finally speaks out on the Weblogs.com fiasco, and amongst all the usual stuff I find one thing that really leaps out at me:
One of the things I learned is that just because a site is dormant doesn't mean it's not getting hits. The referer spam problem on these sites was something to behold. Search engines still index their pages, and return hits. They were mostly dead, in the sense that most hadn't been updated for several years.
Something troubles me about this and the interminable HTTP code vs. XML redirect discussions, and that's this: If someone links to the content, it's live by definition.
I'll restate that, so it's clear: Content that is used should continue to be accessible. I don't actually know where Ruby or Winer or Rogers Cadenhead or anybody but the writers stand on this, but it remains a non-negotiable best practice and first principle of web interaction design for usability that links should not go dead.
If that means you have to redirect the user to a new location when the content moves, so be it. If you have to do that with meta-refresh in HTML or XML, so be it. Sure, there are "purer" ways to handle it; but it's just stupid to let the perfect be the enemy of the good by saying that you can't redirect if you can't modify your .htaccess file. Even a lot of people with their own hosting environments aren't allowed to modify their .htaccess.
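For anyone who can upload files but can't touch server config, the meta-refresh fallback mentioned above is trivial to generate: leave a stub page at the old URL that forwards to the new one. A minimal sketch (the target URL is a placeholder):

```python
# Stub page left at the old URL: a meta-refresh with delay 0 forwards
# visitors to the new location, and the visible link covers browsers
# that don't honor the refresh. Not as clean as an HTTP 301, but it
# keeps the link alive with nothing more than file upload access.
STUB = """<!DOCTYPE html>
<html><head>
<meta http-equiv="refresh" content="0; url={target}">
<title>Moved</title>
</head>
<body><p>This page has moved to <a href="{target}">{target}</a>.</p></body>
</html>"""

def redirect_stub(target):
    return STUB.format(target=target)

page = redirect_stub("http://example.com/new-location/")
```

Imperfect, yes; but "the link still works" beats "the link is pure," which is the whole argument.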
I'm getting the sense that a lot of the people involved in these debates are forgetting that the web was supposed to be about linking information to related information. Protocols and methods are mere instrumentalities to that end. It's the content that matters; there really, really isn't a web without content.
You want to find the number 216 in the world, you will be able to find it everywhere. 216 steps from a mere street corner to your front door. 216 seconds you spend riding on the elevator. When your mind becomes obsessed with anything, you will filter everything else out and find that thing everywhere. -- "Sol Robeson", Pi
Via MeFi, this morning, I find some skeptical meditations on the Golden Mean. We're told it's everywhere; we're told it (or at least, the Fibonacci Series) can be found throughout nature, and this is presented as evidence of intelligent design.
But the truth seems to be that it's as good a demonstration of natural selection as it is of design. For example: Leaves tend to rotate on a stem in a spiral that can be mapped to a ratio of Fibonacci numbers, known as the "Divergence". Wondrous, that; or perhaps not.
Although many of these observations were made a hundred years or more ago, it was only recently that mathematicians and scientists were finally able to figure out what is going on. It's a question of Nature being efficient.
For instance, in the case of leaves, each new leaf is added so that it least obscures the leaves already below and is least obscured by any future leaves above it. Hence the leaves spiral around the stem. For seeds in the seedhead of a flower, Nature wants to pack in as many seeds as possible, and the way to do this is to add new seeds in a spiral fashion.
As early as the 18th century, mathematicians suspected that a single angle of rotation can make all of this happen in the most efficient way: the Golden Ratio (measured in number of turns per leaf, etc.). However, it took a long time to put together all the pieces of the puzzle, with the final step coming in the early 1990s.
The worst kind of angle for efficient growth would be a rational number of turns, eg. 2 turns, or 1/2 a turn, or 3/8 of a turn, since they will soon lead to a complete cycle. Mathematically, a turn through an irrational part of a circle will never cycle, but in practical terms it could eventually come close. What angle will come least close to a cycle? Maximum efficiency will be achieved when the angle is "furthest away" from being a rational.
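The quoted claim is easy to check numerically: the angle "furthest from rational" is derived from the golden ratio, and successive Fibonacci ratios converge on it. A quick sanity-check sketch:

```python
import math

# The golden ratio, and the golden angle: one full turn divided by
# phi squared, roughly 137.5 degrees -- the rotation per leaf that
# stays "furthest away" from ever completing a rational cycle.
phi = (1 + math.sqrt(5)) / 2
golden_angle = 360 / phi**2

def fib_ratios(n):
    """Successive Fibonacci ratios -- they converge on phi, which is
    why the leaf-count spirals come out as Fibonacci numbers."""
    a, b, out = 1, 1, []
    for _ in range(n):
        a, b = b, a + b
        out.append(b / a)
    return out
```

Nothing mystical happens in those few lines; the "magic number" falls straight out of the efficiency constraint, which is rather the point.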
Or perhaps so, but not in the mystical way of a super-mathematician divining the world. Rather, wondrous, in that mathematics can help us to understand the real reason that such realizations evolve.
Yet, of course, like most good stories, people still want to believe it. We are story-making animals, after all; it's how we understand the world. Powerful stuff, stories. So long as we remember now and then that we do live in a real world.
Jeff Veen talks about appropriate levels of detail orientation: "It's a balance between paying for a watchful eye, and maintaining some flexibility in the tools I use. I want to hack my templates, but I find it hard to care what modules are compiled into Apache":
So it was a relief to me that a couple more pieces of Web infrastructure moved into the "somebody else can worry" realm. The first is feeds. I spent a few years with the W3C working on HTML and CSS specifications, so I'll likely never bother to read another rant about which idea is more brilliant than the other when it comes to the minutia of standards making. RSS and Atom in particular fit squarely into that category these days. Goodbye to all of that. Rather than fret over the various feed templates on my site, I can now just point to Feedburner. They bravely content negotiate for all known aggregators and spit out the Right Thing. And lots of other stuff. Go look. They're cool.
Along the same lines, Ping-O-Matic will help promote your site for you. When you publish an entry on your blog, the software you use will go tell a couple of sites that you've updated. Typically, blo.gs or weblogs.com will get pinged, and they'll make a record of that. Then, search sites like Technorati and DayPop will come visit you and update their indices. But with the number of pingable sites constantly growing, how can a Web author keep up? Now, you can just enter Ping-O-Matic into your blog publishing software, and let them keep track of all the new ones....
Sage advice, if you can afford the fees (for now, I can) and site promotion matters to you (... eh... I suppose it should). Veen can, and it does, because he makes his living in part by virtue of being the geek-cred version of "famous."
But then Dave Winer has to go and spoil it, as he so often does:
"Leave the hard stuff to someone else," says Jeff. "It wasn't supposed to be hard stuff," say I. It was supposed to be transparently simple. We're in a bad place, because after the next level of hard stuff it won't be possible for an intermediary to sort it out. Then we'll bemoan the lack of support of "standards" but the problems won't get solved, and eventually we'll give up and move on. Why we can't learn from the mistakes of the past is the mystery of the human hive.
... which, of course, entirely and spectacularly misses Veen's point: It's still "hard stuff" whether it's RSS 2.0 or Atom. It's hard stuff because Winer focuses on the wrong users: Geeks.
Veen's point is that there are levels of detail that it makes sense to pay attention to. For the vast, vast majority of actual users (Grandma, Dad, Uncle Harry, your boyfriend/girlfriend, etc.), Atom vs. RSS is irrelevant at a technical level.
By the way: I find it quite implausible to suppose that it truly "won't be possible for an intermediary to sort it out." That's kind of an absurd thing to say, especially for someone with a lifetime of experience in software development. It just plain doesn't make any sense, frankly. If they're both XML, and unless one or the other of the standards is so wildly extensible that you can't actually discover on the fly what it's supposed to mean -- which is to say, if it utterly ignores the concept of the semantic web -- then it will be possible to abstract between them.
And at the non-technical level that Winer tries to speak to with his talk of single platforms, it still doesn't matter, because it's a relatively simple matter to create abstraction layers. The existence of services like Feedburner prove that; the fact that it is difficult for individual hackers to reinvent the wheel in abstracting from some blogging software's data model to both Atom and RSS is not a technical argument on behalf of either platform.
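How simple is such an abstraction layer, really? Simple enough to gesture at in a dozen lines. This toy normalizer (namespace is Atom 1.0's; the sample feeds are made up, and a real one would handle far more of each spec) pulls entry titles out of either format:

```python
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

def feed_titles(xml_text):
    """Extract entry titles from either RSS 2.0 or Atom -- a toy
    version of the abstraction layer services like Feedburner prove
    is feasible."""
    root = ET.fromstring(xml_text)
    if root.tag == "rss":                      # RSS 2.0: <rss><channel><item>
        return [item.findtext("title") for item in root.iter("item")]
    if root.tag == ATOM_NS + "feed":           # Atom: <feed><entry>
        return [e.findtext(ATOM_NS + "title")
                for e in root.iter(ATOM_NS + "entry")]
    raise ValueError("unrecognized feed format")

rss = ("<rss version=\"2.0\"><channel>"
       "<item><title>Hello</title></item></channel></rss>")
atom = ("<feed xmlns=\"http://www.w3.org/2005/Atom\">"
        "<entry><title>Hello</title></entry></feed>")
```

Both formats are XML with discoverable structure, so dispatching on the root element and mapping fields across is tedious, not impossible -- which is all the argument requires.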
"Government is not the solution to our problem. Government is the problem" -- Ronald Reagan, prior to increasing the size of the national debt threefold and massively inflating the scope and influence of the Federal government
Wikipedia is probably the most significant, important website on the net right here in May/June 2004. It's the signal success we can point to for bazaar-style projects, and the great white hope for the persistence of free, non-corporate-sponsored information on the web. Not to disregard Wikipedia's smaller cousin, WikInfo; they're just not big enough to be a great white hope, yet.
So, now, Wikipedia has done something intriguing: You can now talk about any article, or view previous versions. These appear to be benefits of the upgrade to version 1.3 of MediaWiki, the hyper-extended Wiki implementation that Wikipedia develops and uses to drive the site.
Tired terms like "community portal" don't do this justice. I don't think the great mass of the digerati really have any clue how important Wikipedia (and WikInfo) are. This kind of move, once they notice it, could blow Wikipedia wide open.
My great fear is that it could literally blow it wide open: How will they be able to handle the loads? Will their community software be able to cope with input from every Tom, Dick and Harry with an opinion?
The upside, of course, is that with a project of this broad scope, we'll finally get that "online experiment" that other "communities" have been claiming to be for years.
Addendum: I've posted this on Mefi; let's see if anybody cares.
Second addendum: Mefites assure me that it's always had that functionality, though it wasn't as obvious as it is, now. I wonder if they've made changes that will let them handle the greater load and have decided to front-and-center those features?
Man, the threads just never stop weaving...
An overview piece on "social software" and its high-level requirements, from the perspective of needing to deliver recommendations to a client: Matt Webb, "On Social Software Consultancy", INTERCONNECTED, courtesy Drupal.org.
Worth a detailed read-through; need to come back to this...