In 1960, each car entering a central city had 1.7 people in it. By 1970, this had dropped to less than 1.2. If present trends continue, by 1980 more than one out of every 10 cars entering a city center will have no driver!
Usability and Ergonomics
The Palm Foleo is catching a lot of heat. Some of it is well deserved. (Just what the hell is this device supposed to "assist" a smartphone with? Shouldn't it be the other way around?) But most of it is feeding-frenzy pile-on by people who got burned in the first try at thin clients, ten years ago.
Which is to say that AFAIAC most of the most strident critics of the Foleo don't want to admit that they've gotten the point -- they pretend not to understand what the device really is, which is plainly and simply a thin client for web 2.0 applications. But it's a thin client that could actually work: It's got a real web browser to access the real web applications that have sprung up in the interim via the near-ubiquitous broadband that we weren't even close to having the last time around.
Sour grapes like this prevent people from seeing the two real reasons that it will fail: It's not fast enough, and it's being sold by idiots. Really, again, that whole 'smartphone assistant' thing: The phone should be (and more likely will be) "assisting" the Foleo, rather than vice versa. The Foleo is the thing with the network connectivity, not the phone. It's the thing with the USB connection, not the phone.
Semi-surprisingly, Jakob Nielsen has joined in the fray with a decidedly mainstream take on the specs:
Much too big and fat for a mobile device. At that size, you might as well buy a small laptop, which would run all the software you are already used to. For example Sony's Vaio TZ90 is 10% lighter and thinner...
... and 150% more expensive than the Foleo. Though it does have similar battery life. But that's still kind of a pathetic excuse for a pile-on. Criticize it for something real, why don't you, like, say, what you could do with it, instead of demanding that the device embrace all the weaknesses it's clearly designed to overcome:
- weight: 2.5 pounds (1.1 kg)
- thickness: 1 inch (2.4 cm)
- size: 11x6 inches (28x15 cm) - estimated from photo
Much too big and fat for a mobile device. At that size, you might as well buy a small laptop, which would run all the software you are already used to. For example Sony's Vaio TZ90 is 10% lighter and thinner than the Foleo.
A mobile information appliance should be thinner than 1 cm (0.4 in), weigh less than 1 pound (.45 kg), and be about 6x4 inches (15x10 cm) big. Something with these specs makes sense because it would fit the ecological niche between the laptop and the phone.
So let's get this straight: A mobile device should be too small to easily read on, too small to type on, but still too big to fit easily in slacks pockets? Where's the text entry? Where's the user interface? Seems like a rather strange set of requirements. Let's restate them so they make more sense, in functional terms. A mobile device must:
Conspicuously missing, but important:
In addition, we can make some other generalizations about what a device in the Foleo's class should do:
So the specs that Nielsen (and so many others) have seen as so ripe for criticism are not at all the ones that are important. The ones that are important, and the ones that will end up being technically critical for this device, are:
So at a technical level, I'm actually positive it fails on only one point, and that's run-time. Nielsen does raise a valid point, though:
Palm seems ashamed of its own specs since they are nowhere to be found on the product pages.
This is a blatant violation of all guidelines for e-commerce. I can't believe even the worst designer would suggest making a site where you can't find out how big the product is (especially for a mobile device). It must be a deliberate decision to hide the facts.
I think he's actually right about that. I think the product managers and marketers at Palm were so gun-shy about identifying Foleo as a thin client that they invented this whole "smart-phone companion" nonsense to cover it up. They basically threw the game -- decided the product was a bust before they even started, and concocted a marketing plan that, while it couldn't possibly succeed, at least had good graphics.
But come on -- a "smart phone accessory" that's ten times the size of the phone? Idiots.
Courtesy of the Peoria Chronicle's blog, here are links to a lecture on "New Urbanism" given by Andres Duany in Houston. It's on YouTube in 9 parts of 10 minutes each, and the first several have been posted on the Peoria Chronicle's blog. I'll be working my way through them bite by bite, as I hardly have 90 minutes to spare for anything that requires both visual and auditory attention, simultaneously. I may yet find something objectionable in it, but the basic presentation is quicker than reading Death and Life of Great American Cities.
One comment from the Chronicle blog is interesting:
“New urbanism” is just a facade being used by developers to pack as many people into the smallest footprint as possible, to increase their profits.
In San Diego, older neighborhoods are being transformed into jam packed, noisy, traffic infested cesspools, by billionaires who live on 10 acre estates in Rancho Santa Fe (SD’s Bel Aire).
The 40 year old, 10 unit, low income apt building next to me was converted to $400k “condos” last year. It’s been pure hell, with 15 rude, loudmouthed, morons moving in, several of whom are already about to default on their loans. Several units are now being rented, at 3 times the monthly rent as before. Who wins? A handful of guys sitting around dreaming up their next scheme.
That he misses the point of New Urbanism completely isn't the interesting part -- it's that he's so willing to conflate New Urbanism with a newspeak co-optation of its ideals. He's not necessarily wrong to do so. Like many idealistic movements, it has some foolishness and romanticism baked into it, and is vulnerable to abuse. There are plenty of people who jump into idealistic movements with a partial understanding of the situation and then end up taking them in whole new, highly rationalized directions.
That's one of my objections to "emotional design": When you choose, as Don Norman, Bruce Tognazzini et al seem to have chosen, to make your evaluation of a design's quality hinge upon its gut, emotional appeal, you're basically opening up the door to tossing out real design and replacing it with pandering. Machines become good if they look cool. By that metric, the AMC Javelin would be one of the coolest, hottest cars ever manufactured. The nigh-indisputable fact that it was a piece of crap would be irrelevant: It had great "emotional design."
Similarly, the fact that PowerBooks are screwed together using 36 (or more) tiny screws of five to six different sizes and head-types, but also force-fit using spring clips, becomes irrelevant: The design feels great, looks great. Never mind that it could cost less to manufacture, cost less to repair and upgrade, and be just as solid, just as sound, if it were designed better. It's still great "emotional design."
I've been doing a lot of video blogging on BEYOND THE BEYOND lately, which must be annoying to readers who don't have broadband. But look: outside the crass duopoly of the USA's pitifully inadequate broadband, digital video is gushing right through the cracks. There's just no getting away from it. There is so much broadband, so cheap and so widespread, that the video pirates are going out of business. I used to walk around Belgrade and there wasn't a street-corner where some guy wasn't hawking pirated plastic disks. Those crooks and hucksters are going away, their customers are all on YouTube or LimeWire...
Broadband isn't the problem. Bruce makes his living being a visionary. I make my living doing work for other people. It's truly not the visionaries who actually change things -- it's the people who buy (into) their visions, and those people just don't have the time to look and listen at the same time to continuous "bites" of see-hear content.
Podcasts are bad enough -- I have to listen for as long as someone speaks in order to get their point, I can't really skim ahead or scan around with my eyes. I've got to buy into their narrative construction. And I'm paying for that purchase with my time and attention.
This also goes to Cory Doctorow's point about text bites. He's grazing around, taking in small chunks of text at a go, and the web is fine for that, that's his message. Great. Fine. But text can be time- and space-shifted far more effectively than audio, which in turn can be time-/space-shifted far more effectively than video.
What's really needful, as I've noted before, is a way to mode-shift text into audio without human intervention. Or video, for that matter, if you want to get visionary about it. But I'm not going to worry about video right now, because audio is something that some basement hacker could actually pull off with an evening's work, and refine with the labor of just a few weeks. Or so it seems to me. On my Mac, right now, I can select text and have the machine speak it to me, complete with sentence and paragraph pauses. The Speech service is AppleScript-able, so (if I actually knew AppleScript) I could script it to pick up blog posts and pump them into audio files that in turn could be pumped onto my audio player for listening in the gym or on the road. If I spent that much time in the gym or on the road. Which I don't.
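For what it's worth, the basement hack could skip AppleScript entirely: OS X ships a command-line front end to the same speech synthesizer, `say`, which can read a text file (`-f`) and write the synthesized speech to an AIFF file (`-o`). Here's a minimal sketch in Python; the file names and the `post_to_audio` helper are my own invention, not anything Apple ships:

```python
import shutil
import subprocess
from pathlib import Path

def say_command(text_file: Path, audio_file: Path) -> list:
    """Build the argv for macOS's `say`: read a text file, write AIFF."""
    return ["say", "-f", str(text_file), "-o", str(audio_file)]

def post_to_audio(post_text: str, out_dir: Path, slug: str) -> Path:
    """Dump a blog post's text to disk and, if we're actually on a Mac
    with `say` on the PATH, synthesize it into an audio file."""
    out_dir.mkdir(parents=True, exist_ok=True)
    text_file = out_dir / (slug + ".txt")
    audio_file = out_dir / (slug + ".aiff")
    text_file.write_text(post_text)
    if shutil.which("say"):  # `say` only exists on macOS
        subprocess.run(say_command(text_file, audio_file), check=True)
    return audio_file

# The command we'd end up running for a hypothetical saved post:
print(say_command(Path("foleo-post.txt"), Path("foleo-post.aiff")))
```

Point that at a folder of saved blog posts and sync the resulting files to a player, and that's the whole pipeline; converting AIFF to MP3 or AAC for a smaller player would take one more tool.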
Some design-geek at Frog Design thinks that iPods are "universally" described as "clean" because the iPod "references bathroom materials." It's kind of a silly little think-piece, not least in that it makes a point and then throws out a lot of unrelated argument in an attempt to hide the fact that it doesn't really make much of a case for what might otherwise be an interesting assertion. But that's not what I'm writing about.
A comment in-thread led me to this insight: Being a "Mac Person" is a little like being a Mason.
Which is to say, to be a "Mac Person" is to feel that you belong to something, while at the same time feeling yourself to be different from other (lesser) people. If you belong to a secret society of some kind, you feel both privileged to belong, and empowered by your connection to that society.
Membership in the secret society comes with a cost: Dues, expenses for robes or other paraphernalia (as Stetson Kennedy wrote in his book about infiltrating the Klan), and any opportunity cost associated with providing expected assistance to other members. Any extra costs are obviously assumed to be at least offset by benefits, by "believers" in the secret society. Those costs are their "dues"; they're what they pay for the privilege of being made special by the organization.
Committing to the Apple Way has similar costs: Software is more expensive and less plentiful; hardware is often proprietary (as with iPod peripherals), or hardware options more limited (if you don't believe it, try to buy a webcam off the shelf at a mainstream store); software conventions are different, and require retraining. Apple users (rationally) presume there to be offsetting benefits, typically cast in terms of usability. My own experience using and supporting Macs tells me that those benefits are illusory, but that's beside the point: Mac users assume them to exist, and act on that assumption.
But they also gain a sense of superiority from it, and they get that reinforced every time they pay more for something, every time they have a document interchange problem with a Windows-using compatriot, every time they have a problem figuring out what to do when they sit down at a non-Mac microcomputer.
The extra cost is understood as an investment. They are paying dues. Being a Mac Person is, in that way, a little like being a Mason. Or at least, a little like what we might imagine it's like to be a Mason, since most of us have never actually met one.
Big targets mean big distractions.
I'm sitting here listening to Whad'Ya Know on the radio while I write. While I do this, I've got a couple of applications and part of my desktop visible on screen, and a cluttery, clumsy app-launching device pinned to the left edge of my screen. (I move my Dock to the left edge because I value the vertical screen space too much. More vertical space means more text on screen, which means more chance at understanding what the hell I'm looking at. Which is to the point, believe it or not, but I'm not going to go there right now.)
And I'm getting distracted by it all. Part of it is that I'm over 40 and wasn't "raised to multi-task" (as so many people raised in the age of multi-tasking OSs and multiple-media-streams seem to think they have been). But part of the problem is all this extraneous visual noise -- stuff I don't need to see right now, like the "drawer" to the left of my application window that lets me see the subject lines of previous journal entries and, more to the point, blocks out a bunch of other distracting stuff in the windows behind this one. Obviously, I could close the "drawer" and widen my editing window to cover them, but then I'd have a line-length that would be difficult to eye-track.
Anyway, the point of this (man, I am getting distracted) is that having all this clutter on my screen distracts me. Presumably that's why MacJournal (like a lot of "journaling" applications) has a full-screen mode that lets me shut out everything else if I so choose.
Fitts's Law is increasingly invoked these days to justify a lot of design decisions, like pinning a menu bar to the top of the screen for all applications, or putting "hot zones" in the corners of the screen. It's invoked as a rationalization for putting web page navigation at the edge of the page (and hence, presumably, at the edge of a window).
Interestingly, it seldom gets used as a rationalization for making navigation large.
Fitts's Law reduces to a fairly simple principle: The likelihood of hitting a target with a mouse pointer is a function of the size of the target and the distance from the starting point. That is, it's easier to hit a big target with a mouse pointer than it is to hit a small target.
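For concreteness, here's the quantitative form of that principle, in the Shannon formulation commonly used in HCI research: predicted movement time grows with an "index of difficulty" ID = log2(D/W + 1), where D is the distance to the target and W is its width along the axis of motion. A sketch in Python; the regression constants a and b below are made-up illustrative values, since in practice they're fit per device and per user:

```python
from math import log2

def index_of_difficulty(distance: float, width: float) -> float:
    """Shannon formulation of Fitts's index of difficulty, in bits:
    ID = log2(D/W + 1), where D is the distance to the target and
    W is the target's width along the axis of motion."""
    return log2(distance / width + 1)

def movement_time(distance: float, width: float,
                  a: float = 0.1, b: float = 0.15) -> float:
    """Predicted target-acquisition time, MT = a + b * ID.
    a and b are per-device, per-user regression coefficients;
    these defaults are purely illustrative."""
    return a + b * index_of_difficulty(distance, width)

# Same 800-pixel reach, two target sizes: the bigger target is
# predicted to be faster to hit.
small_button = movement_time(800, 16)   # a 16 px button
big_button = movement_time(800, 64)     # a 64 px button
assert big_button < small_button
```

Note that the width term is exactly where button size enters the model, which is why the law's silence on target size in most design arguments is so conspicuous.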
Fitts's Law is also often cited as demonstrating that it's easier to hit targets that are at a constrained edge or corner; that's as valid a principle as Fitts's Law, but it isn't really implied by it. So Fitts's Law gets cited to justify things like parking the active menu bar at a screen edge. It's easy to get to the edge of a constrained screen: Just bang your mouse or track-pad finger or pointing-stick finger over to the screen edge and it will stop -- won't go any farther. Bang in the general direction of the corner, and the cursor will behave like water and "flow" into the corner, so the corners become the easiest thing to hit on the screen. Tognazzini, among others, uncritically and inaccurately cites this as an example of Fitts's Law in action. I don't know who came up with this conflation first, but Tog is the most vocal exponent of it that I'm aware of, so I'll probably start referring to it as "Tognazzini's Corollary."
(Aside: Obviously this only holds for constrained edges, as on single-screen systems. On two-screen systems, or on systems with virtualized desktop scrolling, it's a little more complex. Less obviously, this principle is more or less meaningless on systems that are actuated with directly-positioned devices like touch-screens, and it requires that people engage in some selective modification of their spatial metaphors. But that's another topic for another time.)
It's interesting to me that Fitts's Law isn't applied to the size of buttons, because that's its most obvious implication: If you want to make an element easier to hit, the most obvious thing to do is make it bigger. Yet I don't recall ever seeing it invoked as an argument for making screen elements larger, or discussed when people call for making screen elements smaller. Which makes me suspect even more that Fitts's Law is often more a matter of Fittishization than Fittizing.
Because the reason people (read: designers) don't want to make things bigger is obvious: It doesn't look good. Things look better (or at least, cooler) when they're small. That's why designers who'll lecture you endlessly about the virtues of design usability have websites with tiny text that has poor intensity contrast with its background. So Fittism really tends to serve more as an all-purpose way of rationalizing design decisions than as a way of making pages or applications more usable.
In any case, Fitts's Law isn't nearly as important as its advocates make it out to be. The current rage for Fittism ("Fittishism"?) over-focuses on the motor experience of navigation and de-emphasizes the cognitive aspects. The reason for this is that Fitts's Law can be very easily validated at the level of motor interaction, while the cognitive aspects of the problem are much harder to test, and so tend to get ignored.
And that's not even considering the effect of edge detection in the visual field. This is not strictly a motor issue, and it's not really a cognitive issue -- though it has cognitive aspects.
For example, if the menu bar is always parked at the edge of the screen -- let's say at the top edge -- then it becomes very important that users be able to have confidence that they're using the right menu. If menus are affixed to a window, then you know that the menu applies to the application to which that window belongs. If menus are affixed to the top of the screen, you're required to do cognitive processing to figure out which application you've got in the "foreground".
(Another aside: That's fairly difficult to do on a Macintosh, which, ever since the advent of OS X and Aqua, has had very poor visual cues to indicate which application window is active. Title bars change shade and texture a little; text color in the title bar changes intensity a little; the name of the application appears at the left end of the menu bar, in the space that people visually edit out of their mental map of the page in order to limit distractions. In other windowing environments -- Windows and KDE spring to mind -- it's possible to configure some pretty dramatic visual cues as to which windows are in focus, even if you ignore the fact that the menu you need is normally pinned to the damn window frame. It's trivially easy on an OS X system to start using the menu without realizing that you're looking at the menu for the wrong application. I don't do it much myself anymore, but I see people do it all the time.)
But I'm getting off point, here: I started this to talk about distractions, and I keep getting distracted....
The late Isaac Asimov famously resisted computers for many years. With good reason: Until relatively late in his life, they couldn't have kept up with him. His workspace was infamous. He kept several long tables in the attic of his town house, arranged in a big "U", with an IBM Selectric (the fastest typewriter available then or since) every few feet. Each smaller workspace was set up to work on a different project, or part of a project. When he got bored working on one thing, he'd simply roll over to another project.
I got into computers to use word processors. That's not true: I got into computers to manage prose. That was really my dream: To manage prose, which meant managing ideas, managing text, searching seamlessly through stuff that I'd written, changing on the fly, getting rid of hard copy, automating tedious tasks.... I imagined a day when I'd be able to store large amounts of text and search through them easily. I imagined a day when I'd be able to effortlessly switch back and forth between projects the way that Asimov would wheel from one Selectric to the next.
That was in the mid-80s; I'm part of the way there. I use (and have used for something around ten years) a multi-tasking computer that lets me keep multiple projects in progress (dare I say "a la Asimov"?); with wireless networking, I can get connected to the Internet in a surprising and growing number of places; I have a small, light, powerful laptop that lets me do real work when away from an "office."
But I still don't have the text tools that I really want. OS X 10.4 has nice meta-data-aware indexing, implemented in a fairly efficient way; it also has good solid multitasking and power management. But it's still lacking one thing:
It doesn't have a decent word processor.
What would a word processor need to have for me to regard it as "decent"? At a high level, it needs to fulfill three basic criteria:
It's a fact -- and this is not seriously disputable by any honest and experienced user of both platforms -- that the word processors available on Windows (and to a lesser extent Linux) beat all but one (arguably two) of the available Mac word processors hands down on all these counts.
I leapt into using a Mac with the assumption I'd be able to find what I needed when I got here, and for the most part, that's been true. Some glaring exceptions: There really aren't any good music players (iTunes is a sad and cynical joke), and -- most glaringly -- there are no (repeat, no) capable, stable, usable, general-purpose word processors.
The field of modern word processors is pretty small to begin with. On Windows you've basically got Word, OpenOffice, and WordPerfect, with a few specialist players. Down the feature ladder a bit you've got AbiWord lurking in the shadows: It's pretty stable on Windows, and does most of what you'd need to do for basic office word processing, but it has some problems translating Word docs with unusual features like change tracking.
On *nix, you've always got OpenOffice and AbiWord. In addition, you've got KWord, which is about on feature-par with AbiWord, but tends to remain more stable version to version.
To be fair, there are a lot of word processors available for the Mac. But few of them really fill the minimal requirements for a business word processor, and those few fail on critical or borderline-critical extended requirements. And what's most frustrating for me is that it's been that way for years, and the situation shows no real signs of changing.
Here are the players on the Mac:
The Good: It supports all the basic, required business features.
The Ugly: Performance sucks, and so does the price.
NeoOffice/J

The Good: Price -- it's free. Features -- it's got all the basic features, just as OpenOffice 1.2 does. By all accounts, it's more stable and performs better than OOo 1.1.2 does on a Mac. This is what I use every day, for better or worse. It's very impressive for what it is; I'd just like it to be more.
The Ugly: Rendering performance is flaky. It's hard to de-clutter the visual field -- there's nothing analogous to Word or OOo 2.x's "draft mode". NO/J is somewhat unstable from build to build, though genuine stability issues seem to get fixed pretty quickly, and the software will (theoretically) prompt you when there's a new patch or version available. Unpredictable behavior with regard to application of styles -- e.g. I apply a style, and it often doesn't fully obtain. Some of these problems get addressed on a build by build basis, but it's hard to know which are bugs and which are core defects of OOo. This is OO 1.x, after all, which was kind of flaky in the best of times.
Nisus Writer Express

The Good: Small, fast, good-looking, and the drawer-palette is less obtrusive than Word 2002's right-sidebar. RTF is its native format, which gives the (false) hope that it will have a high degree of format compatibility with Word.
The Ugly: I had high hopes for this one, but it's been disappointing to learn that it fails in some really critical areas. Format compatibility with Word is hampered by the fact that it's missing some really important basic features, like automatic bullets and outlining. I use those all the time in business and technical writing -- hell, just in writing, period. I don't have time to screw around adding bullets or automating the feature with macros, and because the implementation for bulleted or numbered lists is via a hanging indent, the lists won't map to bullet lists or numbered lists in Word or OO. Ergo, NWE is useless for group work. This is intriguing to me, since they've clearly done some substantial work to make it good for handling long documents, and yet they've neglected a very basic formatting feature that's used in the most commonly created kind of long document, business and technical reports: Automatically numbered lists and outlines.
Interestingly, it also fails to import headers and footers. I would have expected those to be pretty basic. Basically, this isn't exactly a non-starter, but it's close.
The Good: Free.
The Ugly: Unstable and has poor import and rendering performance in the Mac version. I know the developers are working on it, but there's only one guy working on the OS X port right now so I don't have high hopes. Also, it's not as good for long technical documents as Word or OO would be.
The Good: Don't know; haven't tried it. People swear by it for performance, but see below.
The Good: Cheap. Light. Quick.
The Ugly: Features. As in, ain't got many.
Pages

The Good: Inexpensive. Conforms to the Mac UI.
Why am I mincing words, here? Pages, based on what I know about it, is the same kind of sad and cynical joke as iTunes. It's a piece of brainwashing; it's eye-candy; it's got nothing very useful to anyone who does anything serious with documents.
For the time being, it looks as though I'll be sticking with NeoOffice/J, and at some point installing the OO plus X11 package to see how ugly that is.
When I read reports from other people's research, I usually find that their qualitative study results are more credible and trustworthy than their quantitative results. It's a dangerous mistake to believe that statistical research is somehow more scientific or credible than insight-based observational research. In fact, most statistical research is less credible than qualitative studies. Design research is not like medical science: ethnography is its closest analogy in traditional fields of science.
[Jakob Nielsen, "Risks of Quantitative Studies"]
I've always found it more than a little ironic that many designers have such a strong, negative reaction to Jakob Nielsen, especially since most of them do so by banging the drum in protest of what could be termed "usability by axiom": The idea that following a set of magic rules will make a website (or any application) more usable. I find it ironic because Nielsen has always seemed to me to be a fairly ruthless empiricist: His end position is almost invariably that if a design idea doesn't actually provide the usability benefit you imagined it would, then you shouldn't be using it. This month's Alertbox is a case in point, but there are plenty of others I could cite.
And therein lies the problem. Designers, painting broadly, really do know more than the rest of us do about design, at least on the average: They spend years in school, they produce designs which are done according to the aesthetic rules and basic facts about human interaction with machines and which are then critiqued by their teachers and colleagues. They've often even done their research quite meticulously. But they seldom bother to actually look at what real users do -- at least, in any way that might do something other than validate their preconceptions. And it is, after all, the real users doing real work who will get to decide whether a design is effective or not.
Take "Fitts's Law", for example: If you search for tests of Fitts's Law, you'll find plenty of tests, but the last time I looked, I could find none that tested Fitts's Law in a real working context. And there's a good reason for that: It would be really hard to do. To test effectively, you'd have to include such contextual features as menus, real tasks, application windows -- and then change them. It's barely feasible, but doable -- it would be a good project for someone's Master's thesis in interaction design, and it would be simplest to do with Linux and a collection of different window managers. You'd have to cope with the problems of learning new applications, and sort out the effect of those differences on the test. It's a tough problem to even specify, so it's not surprising that people wouldn't choose to fully tackle it.
But I digress. My point is that it's relatively easy to validate the physical fact that it's easier to hit big targets than small ones and easier to hit targets at the edge or corner of the mouse area than out in the middle of the visual field. Unfortunately, that's not very interesting or useful, because we all know that by now. (Or should. Surprisingly few university-trained or industrially-experienced interaction designers take account of such factors.)
One thing it would be interesting or useful to do, would be to figure out what the relationship is between "hard edges" (like the screen edge of a single-monitor system) and what we could call "field edges" (like the borders of a window).
What would be interesting would be to study the relationship of the physical truths exposed by Fitts's "Law" with the physical truths hinted at by things like eye-tracking studies.
What would be interesting would be to figure out what the relationship is between understanding and physical coordination. Quickly twitching your mouse to a colored target doesn't tell you much about that; but navigating a user interface to perform a word-processor formatting operation could. Banging the mouse pointer into the corner of the screen like a hockey puck tells you that mouse pointers stop when they hit screen edges; I already knew that from having used windowing user interfaces since 1987. What I don't know is whether cognitive factors trump physical factors, and simple validations of Fitts's Law do nothing to tell me anything about that.
What would be interesting, would be to design to please customers, instead of to please designers.
The latest leading-edge thinking in traffic-calming is that we should remove traffic controls, not add them. Passive controls, that is, like signage; active controls, hard controls, like traffic circles (rotaries, roundabouts), merge lanes -- those can stay. But Yield signs at the traffic circle entrance, "lane ending" indicators, even curbs, stop signs and traffic lights: Those should go.
The thinking is that without them, we think more. With them, we give over our control over our fates to the signage. At the same time, we can do things that, superficially, make a road more dangerous: We allow parking where we'd previously barred it; we make the road-beds narrower instead of wider; we remove turn lanes and traffic lights; we remove explicit barriers between people and traffic. (Note that this doesn't mean eliminating sidewalks altogether: "Instead of a raised curb, sidewalks are denoted by texture and color.")
Results are counter-intuitive: Traffic moves more slowly, and yet trip times are reduced. It's the kind of result that a simplistic understanding of systems can't predict, but that an ecological understanding can.
I have to admit that I was resistant to the idea when I first read it. It reminded me of a trip to Seattle in February of 2000, when I noted the conspicuous absence of stop signs at intersections in many residential neighborhoods. But as I reflect on it, it strikes me that, at the least, bad signage really is worse than no signage. Signage, after all, plays to our conscious, rational mind, which is easily stymied by contradiction and inconsistency in ways that our sub-conscious, a-rational mind is not. And I recall that, when I approached those intersections, I stopped and looked very carefully. I paid attention to what I was doing (driving) instead of to other things.
As I think through it further, I find myself thinking of at least three other ideas: The human factors design concept of affordance; Jane Jacobs's "eyes on the street"; and the zen/taoist/buddhist tightrope of mindfulness:mindlessness. The common thread is that they all tap into aspects of humanity that are essentially sub-conscious, in the sense of being as tied to our animal nature as to our human nature. They are rational in the sense that sense can be made of them; they are also a-rational, in the sense that we seldom bother to try. (And also in the sense that when we do bother to try, we often screw it up.) Most importantly, they are ecological, not based on a simplistic, modernist understanding of systems theory.
We still need to be able to inculcate awareness of self-interest at a low level of consciousness. We can only rely on our natural accident-avoidance to carry us so far, especially with as many distractions as the world affords.
Adam Kalsey has had the temerity to criticize the Kewl Kidz browser, Firefox, and thinks that maybe, just maybe, aggressively marketing it prior to "1.x" isn't such a good idea: "Aggressively marketing Firefox before it is a completely stable product is dangerous. You're running the risk of having people trying it out and being put off by the bugs, never again to return." [Adam Kalsey, "Why I don't recommend Firefox"]
I agree; in addition, I wonder again why Firefox is being so aggressively marketed in preference to the more stable, more usable, more feature-rich Mozilla. Wait -- I know the answer to that already: It's basically because Firefox is COOL, and Mozilla is NOT COOL. There really are no serious technical reasons -- it's all a matter of how to best herd the cats.
The history on this is worth looking at. Mozilla and Firefox forked sometime in '00, when Firefox was still "Phoenix". The split happened because a small group of developers thought that some of the approaches used in the Mozilla core were wrong-headed, and they thought everything had to be rebuilt from the ground up to improve performance. They were particularly obsessed with load-time and rendering speed.
Fast forward to 2004: Mozilla still loads faster (though it's slightly -- slightly -- bigger), and renders pages faster. Mozilla core has been modified to have more or less all the customization hooks that Firefox has. Mozilla is still significantly more usable out of the box. But those kooky Firefox kids have their own bazaar, now. Oh, and, yeah, they finally did implement extension management.
In a really objective analysis, there's no strong argument for introducing Firefox to novice users, and as Adam points out, lots of reasons not to. There are also very few sound technical arguments for basing future product development on the Firefox paradigm of "distribute minimal, expect the user to do all the work." The Firefox kidz want their own kewl browser? Fine -- let them go build it, like the Camino team did. Don't turn their misbegotten hacker-bait into the core product. That's a sure way to fail.
Nevertheless, it's abundantly clear at this point that Firefox is the way of the future with regard to the Mozilla Project's browser product, and it's also abundantly clear why: The kidz wouldn't play unless they got to do things their way, and the project needed them.
I have seen one of the finest instances of user interface design ever, and I saw it in the men's room at Schiphol airport in Amsterdam.
In each of the urinals, there is a little printed blue fly. It looks a lot like a real fly, but it's definitely iconic - you're not supposed to believe it's a real fly. It's printed near the drain, and slightly to the left.
(... so, no, that's not bad Latin, it's Dutch.)
I have nothing particularly clever to add to this, except that it's one of those things that makes a light click on over your skull...
BTW, I found this on my very first experiment with using del.icio.us for serendipitous browsing. I bookmarked the site diagramming story, then looked at the bookmark lists for two of the three other people who had it marked; on the second one, near the top of the list, I found this.
Cool; I think I'd better stop, now, or this will eat the whole day, and I really do have work to do...
Jakob Nielsen has done Nietzsche one better: Instead of just two basic ideologies ("master" and "slave"), he's identified three: Mastery, Mystery, and Misery. These correspond roughly to empowerment, game-play, and control. In the "Mastery Ideology", "...the designer's job is to provide the features users need in a transparent interface that gets out of the way and lets users focus on the task at hand." Mystery 'Obfuscates Choices' by using novel interaction elements. And Misery is an ideology of "... oppression, as mainly espoused by certain analysts who wish the Web would turn into television and offer users no real choices at all. Splash pages, pop-ups, and breaking the Back button are typical examples of the misery ideology."
Nielsen's purpose is to drive the cause of design for usability. That's what NNG does for a living. So it's not surprising that he focuses on the negative aspects of "mystery" (obfuscation) and control ("misery"), and carefully (re)interprets empowerment to mean "usability". He's mapped out (as usual) one path that, if followed, will more or less lead to a better design. It's the most bottom-line path, the path most suited to NNG's target audience: The guys with the money (they're the ones who tell the designers what to do, after all). But it's not the only path, and his re-interpretations have some pitfalls.
To start with, empowerment isn't always all it's cracked up to be. Sometimes (as Nielsen implicitly points out elsewhere) it's necessary to constrain in order to empower -- or at least, to create the sense of empowerment. Search is a good example. The earliest search interfaces included Boolean parsing as an integral part of their user interaction design. Gradually, Boolean parsing slipped out of the user interfaces as designers became convinced that it was an impediment.
Boolean search would be empowering; but for most users, it would be less usable. Nielsen has accepted that conclusion for years, incidentally. It's experimentally verifiable. (And that seems to me to be Nielsen in a nutshell: 'Where are the numbers?', he'd ask. At a conference, I heard him tell a story of a site whose usability was improved by increasing the number of clicks to perform a purchase. That's what the numbers told them was the right thing to do. And sure enough, the client's revenue increased. Counterintuitive -- but true.)
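The kind of Boolean "whittling" a power user does can be sketched in a few lines. This is a toy illustration, not any real engine's query API; the corpus and the `search` parameter names are made up for the example:

```python
# A minimal sketch of Boolean search over a tiny in-memory corpus.
# Illustrative only -- real engines use inverted indexes and query parsers.

docs = {
    1: "usability testing of search interfaces",
    2: "boolean search gives power users control",
    3: "novice users prefer plain keyword search",
}

def matching(term):
    """Return the set of doc ids whose text contains the term."""
    return {i for i, text in docs.items() if term in text.split()}

def search(must=(), any_of=(), must_not=()):
    """AND narrows, OR widens, NOT excludes -- the whittling-down at work."""
    results = set(docs)
    for t in must:                                        # AND: intersect
        results &= matching(t)
    if any_of:                                            # OR: union of alternatives
        results &= set().union(*(matching(t) for t in any_of))
    for t in must_not:                                    # NOT: subtract
        results -= matching(t)
    return sorted(results)

print(search(must=["search"], must_not=["boolean"]))      # → [1, 3]
```

Each added operator shrinks or reshapes the result set predictably, which is exactly the sense of control the old HotBot-style interfaces offered; the usability finding is simply that most users never wanted to do this work.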
Similarly, control isn't all bad. UIs can often be made cleaner and easier to use -- especially for novice users -- by limiting functionality. Again, this is nothing Nielsen himself hasn't accepted for years. This is not to say that constraint is freedom; but constraint can give you more free time, when it prevents you from wasting effort on things you don't need.
Aside: I'm always wary of Google as an example of any kind of "empowerment." Google right now controls a mind-boggling array of resources, and is in the process of leveraging them to exert an unprecedented level of control over the merchandisability of your browsing experience. That you will remain largely unaware of this process is a testament both to their technical aplomb and to their insipid arrogance.
Where this starts to get interesting is with mystery. I've conflated Nielsen's "Mystery" with "game-play" -- guilty of my own reinterpretation, to be sure, but I think it's valid, and I'm not really alone. Kim Krause has made a similar leap. "Conformity is Nielsen's mantra," she declares. But to proclaim that, she has to ignore Nielsen's praise for J. K. Rowling's "personal" site, which makes extensive use of playful, "mysterious" interface metaphors. "The site feels more like an adventure game," writes Nielsen, "but that's appropriate because its primary purpose is to feed fans rumors about Rowling's next book." He goes on:
User research with children shows that they often have problems using websites if links and buttons don't look clickable. At the same time, using a virtual environment as a main navigation interface does work well with kids, even though it's rarely appreciated by adults (outside of games). Also, children have more patience for hunting down links and rolling over interesting parts of a page to see what they do. On balance, the mystery approach to design succeeds for Rowling -- just don't try it for sites that are not about teenage wizards.
So, maybe these aren't hard and fast rules. Maybe there's a little wiggle-room in Nielsen's declarative statements, after all.
Cool. I like that, too. In fact, that's why I dislike Google, because their "I'm feeling lucky" search is nothing more than a glorified popularity meter. I don't want to know what the most popular return on my search term is; I want to see what my options are.
That's how I find things I don't expect to find: By being able to see the results that might not be "most popular." That's how I get serendipity.
Krause does have a point, though, when she notes that it's memorability that makes the site. Google was memorable, she said, because people learned new ways to use the tool: "They could look up people before that first date. They could type in search terms and hit 'I'm Feeling Lucky' to see what one web site Google would find for them out of all the pages in its index. Google was fun to use." (Actually, I always thought HotBot was terrific fun to use, because its Boolean search interface gave me a sense of power by letting me whittle down my results set to exactly what I wanted. But hey, that's just me, I guess...)
When she talks about sites being "memorable", what she's talking about sounds an awful lot like Don Norman's "emotional design"; and indeed, I think that's what you get when you unify good design for usability with strong content and a design that speaks to that content. One site that strikes me as very successful in this regard is Burningbird. Superficially, the site is constantly changing, seeming to show a new look with almost every viewing. But having used the site once or twice, you will always still know how to use it again. Nothing about the interaction design per se changes when the graphics and colors and typefaces change. The menus stay in the same place, the action-cues stay the same.
I disagree with people who say that this is inherently hard. It does require care, but it's as hard as you make it; if you lean toward control, then you will be frustrated in your attempt to force an experience of memorable mystery upon your users, and it will all be very, very hard. If you let the content and your purpose drive your design, you will, by definition, get what you went looking for. The problem, as always, is to pick the right goal.
Dave Winer finally speaks out on the Weblogs.com fiasco, and amongst all the usual stuff I find one thing that really leaps out at me:
One of the things I learned is that just because a site is dormant doesn't mean it's not getting hits. The referer spam problem on these sites was something to behold. Search engines still index their pages, and return hits. They were mostly dead, in the sense that most hadn't been updated for several years.
Something troubles me about this and the interminable HTTP code vs. XML redirect discussions, and that's this: If someone links to the content, it's live by definition.
I'll restate that, so it's clear: Content that is used should continue to be accessible. I don't actually know where Ruby or Winer or Rogers Cadenhead or anybody but the writers stand on this, but it remains a non-negotiable best practice and first principle of web interaction design for usability that links should not go dead.
If that means you have to redirect the user to a new location when the content moves, so be it. If you have to do that with meta-refresh in HTML or XML, so be it. Sure, there are "purer" ways to handle it; but it's just stupid to let the perfect be the enemy of the good by saying that you can't redirect if you can't modify your .htaccess file. Even a lot of people with their own hosting environments aren't allowed to modify their .htaccess.
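The two options above -- a proper server-side redirect and the meta-refresh fallback -- can be sketched quickly. This is a hand-rolled illustration, not a real server; the paths in `MOVED` are hypothetical:

```python
# A minimal sketch of keeping links alive when content moves.
# Option 1: an HTTP 301, the "pure" way, which tells clients and
# crawlers the move is permanent.
# Option 2: a meta-refresh page, for hosts where you can't touch
# .htaccess or any server config.

MOVED = {"/old-essay.html": "/essays/new-location.html"}  # hypothetical mapping

def redirect_response(path):
    """Return (status, headers) for a request path: 301 if the content moved."""
    if path in MOVED:
        return 301, {"Location": MOVED[path]}
    return 404, {}

def meta_refresh(url):
    """Client-side fallback: instant refresh to the new URL, plus a plain
    link for user agents that ignore meta-refresh."""
    return ('<html><head><meta http-equiv="refresh" content="0; url=%s">'
            '</head><body><a href="%s">Moved here</a></body></html>' % (url, url))

status, headers = redirect_response("/old-essay.html")
print(status, headers["Location"])  # → 301 /essays/new-location.html
```

Either way, the link the reader follows still lands on the content; the 301 is just the version that also keeps search engines and caches honest.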
I'm getting the sense that a lot of the people involved in these debates are forgetting that the web was supposed to be about linking information to related information. Protocols and methods are mere instrumentalities to that end. It's the content that matters; there really, really isn't a web without content.
Bonnie Lida Rogers and Barbara Chaparro have studied the effect of breadcrumb trails on site usability (Breadcrumb Navigation: Further Investigation of Usage, Usability News; courtesy INTERCONNECTED). While the results aren't surprising, they are results, as opposed to aesthetic prejudices.
Among the findings:
Note that there's a lot of work on breadcrumbs at this site...