InforAction Design

Information and interaction design

Your Mother's Name, Redux

Bruce Schneier, Ed Felten and Steve Ragan all had reactions similar to mine regarding Sarah Palin's email account. As might be expected, the folks posting in Schneier's comment thread were even more hard-core: Most suggested just using a secure password (something like "18D*F9afgsk*", maybe) in place of an answer. But Ragan had what I thought was the most interesting and useful extension to my own practice:

 

If you can pick your own question and answer, then that is the best bet. Make the question and answer something that no one knows, and that would never appear on a personal blog, Facebook or MySpace profile, or outside a close circle of family and friends.

For example, the question could be the name of your personal doctor. This will stop many of the guessing attacks on the system, and offer a stronger level of protection. Moreover, the answer needs to be a full sentence, and use all of the available space offered by the form when signing up for the account.

Q: What is the name of your doctor?

A: Her name is actually the name of the city where she was born.

What if you cannot pick a personal question and have to select one of the offered questions and answers? The fix here is also a simple one, namely you should lie. Lie through your teeth, pick a question, make the answer the same as you would if you wrote the question yourself, and stick to this lie.

The explanation is a little unclear, IMO, so I'll re-state it: You make your answer a complete sentence that you can remember and that is as long as it can be given the size of the box. That way the complexity of the "backup password" [Schneier's phrase] is increased exponentially just by virtue of its length, but the password actually becomes more memorable, because now it's mnemonic.

This is how and why WPA passphrases work the way they do. You can have your network authentication be something like "when i was a kid we loved to eat grasshoppers in cleveland." It's absurd and counter-factual (so hard to guess), but memorable (so you don't have to write it down).
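
To put rough numbers on that, here's a back-of-the-envelope sketch (in Python, with made-up keyspace sizes) comparing a real dictionary-word answer, a random password, and a full-sentence lie. The exact figures don't matter; the point is that length does the work, and the sentence is the only one of the three you'll actually remember:

    import math

    def bits(keyspace: float) -> float:
        """Entropy in bits for a secret chosen uniformly from `keyspace` possibilities."""
        return math.log2(keyspace)

    # A "real" answer: roughly one of ~50,000 plausible surnames or city names,
    # many of them discoverable by research anyway.
    real_answer = bits(50_000)

    # A random 12-character password over ~72 printable symbols ("18D*F9afgsk*" style).
    random_password = bits(72 ** 12)

    # A full-sentence lie: say 10 words, each drawn from a ~5,000-word everyday vocabulary.
    sentence_answer = bits(5_000 ** 10)

    print(f"dictionary answer : {real_answer:5.1f} bits")      # ~15.6
    print(f"random password   : {random_password:5.1f} bits")  # ~74.0
    print(f"sentence answer   : {sentence_answer:5.1f} bits")  # ~122.9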

For Pity's Sake, Don't Actually Use Your Mother's Maiden Name!

Those security questions that Yahoo, Google, your bank, and everyone else ask you when you register for an account (what's your mother's maiden name? where did you go to high school?) are a stupid, stupid idea.

They always were. I've been telling people this for years: Your city of birth, high school mascot, and mother's maiden name are all matters of public record for just about anyone. If you supply accurate answers to those questions, you are essentially creating a password that can be looked up by random strangers. Your favorite color, the place where you met your spouse and the model of your first car are all things that casual acquaintances could know. A story on Threat Level (@ Wired blogs) about the Sarah Palin Yahoo Mail cracking episode beautifully illustrates exactly why. Here's a description, posted on 4chan by someone alleging to be the cracker:

... it took seriously 45 mins on wikipedia and google to find the info, Birthday? 15 seconds on wikipedia, zip code? well she had always been from wasilla, and it only has 2 zip codes (thanks online postal service!)

the second was somewhat harder, the question was “where did you meet your spouse?” did some research, and apparently she had eloped with mister palin after college, if youll look on some of the screenshits that I took and other fellow anon have so graciously put on photobucket you will see the google search for “palin eloped” or some such in one of the tabs.

I found out later though more research that they met at high school, so I did variations of that, high, high school, eventually hit on “Wasilla high” I promptly changed the password to popcorn and took a cold shower…

In other words, s/he was able to crack Palin's account with only the most basic reverse-social-engineering techniques.

The biggest irony in all of this is that it all worked, basically, because for once in her recent life Sarah Palin told the truth. If she'd lied in answering that security question, none of this would have ever happened.

Obscuring the questions does not help. All it does is require a little more research and maybe a little more guessing. What you need to do is have a question that only the person being asked can answer, and that would be very difficult for a stranger to guess. You can achieve that very easily with the current generation of "security" questions by simply lying. For example, if you're born in Atlanta, you give your city of birth as 'Schenectady.' Or, better yet, some random word that as far as you know isn't the name of any actual city, like "mumbledypeg." Your first car? For pity's sake, do not answer that one with the name of an actual car model. Instead, give an unrelated answer that you can remember, like "a raisin."

Ideally, this kind of security regime should involve several questions, about which you tell several different lies, and which lies you should never discuss with anyone. For example, Sarah Palin could say she met Todd at "acetaminophen", or that her mother's maiden name was "rotary." But under no circumstance should a regime that's intended to increase security create de facto passwords that someone can just look up via Google.
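
If you go that route, the practical problem becomes keeping track of which lie you told to which site. Here's a minimal sketch of one way to do it: a tiny Python helper that invents a random, unrelated multi-word answer and records it in a local file. The word list, file name, and site names are all made up for illustration; in real life you'd want the vault kept somewhere properly protected, such as a password manager.

    import json
    import secrets
    from pathlib import Path

    # Illustrative word list; anything goes, as long as the words have nothing
    # to do with the question being asked.
    WORDS = ["raisin", "mumbledypeg", "rotary", "acetaminophen", "popcorn",
             "schenectady", "grasshopper", "kraken"]

    VAULT = Path("security-lies.json")  # keep this file somewhere safe

    def invent_answer(site: str, question: str, n_words: int = 3) -> str:
        """Pick a random, unrelated answer for a site's security question and record it."""
        answer = " ".join(secrets.choice(WORDS) for _ in range(n_words))
        vault = json.loads(VAULT.read_text()) if VAULT.exists() else {}
        vault.setdefault(site, {})[question] = answer
        VAULT.write_text(json.dumps(vault, indent=2))
        return answer

    if __name__ == "__main__":
        print(invent_answer("example-webmail", "Where did you meet your spouse?"))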

Damning Virtue for Coup

The Palm Foleo is catching a lot of heat. Some of it is well deserved. (Just what the hell is this device supposed to "assist" a smartphone with? Shouldn't it be the other way around?) But most of it is feeding-frenzy pile-on by people who got burned in the first try at thin clients, ten years ago.

Which is to say that AFAIAC most of the most strident critics of the Foleo don't want to admit that they've gotten the point -- they pretend not to understand what the device really is, which is plainly and simply a thin client for web 2.0 applications. But it's a thin client that could actually work: It's got a real web browser to access the real web applications that have sprung up in the interim via the near-ubiquitous broadband that we weren't even close to having the last time around.

Sour grapes like this prevent people from seeing the two real reasons that it will fail: It's not fast enough, and it's being sold by idiots. Really, again, that whole 'smartphone assistant' thing: The phone should be (and more likely will be) "assisting" the Foleo, rather than vice versa. The Foleo is the thing with the network connectivity, not the phone. It's the thing with the USB connection, not the phone.

Semi-surprisingly, Jakob Nielsen has joined in the fray with a decidedly mainstream take on the specs:

Much too big and fat for a mobile device. At that size, you might as well buy a small laptop, which would run all the software you are already used to. For example Sony's Vaio TZ90 is 10% lighter and thinner...

Palm Foleo – A Failed Mobile Device (Jakob Nielsen's Alertbox)

... and 150% more expensive than the Foleo. Though it does have similar battery life. But that's still kind of a pathetic excuse for a pile-on. Criticize it for something real, why don't you, like, say, what you could do with it, instead of demanding that the device embrace all the weaknesses it's clearly designed to overcome:

The product website is miserable and doesn't provide any concrete info. Luckily the New York Times wrote about the Foleo (paid access) and gave us some facts:

  • weight: 2.5 pounds (1.1 kg)
  • thickness: 1 inch (2.4 cm)
  • size: 11x6 inches (28x15 cm) - estimated from photo

Much too big and fat for a mobile device. At that size, you might as well buy a small laptop, which would run all the software you are already used to. For example Sony's Vaio TZ90 is 10% lighter and thinner than the Foleo.

A mobile information appliance should be thinner than 1 cm (0.4 in), weigh less than 1 pound (.45 kg), and be about 6x4 inches (15x10 cm) big. Something with these specs makes sense because it would fit the ecological niche between the laptop and the phone.

So let's get this straight: A mobile device should be too small to easily read on, too small to type on, but still too big to fit easily in slacks pockets? Where's the text entry? Where's the user interface? Seems like a rather strange set of requirements. Let's restate them so they make more sense, in functional terms. A mobile device must:

  • Be small enough to carry easily. "Easily" is a value-laden term. It's going to be determined largely by the market, but we could distinguish between "packability" (trade paperback) and "pocketability" (mass-market paperback). Foleo is very packable; smart-phones range from pocketable to awkwardly-pocketable, like the Sidekick, and like the larger Palm OS and Windows Mobile devices.
  • Be easy to operate while standing or walking. Foleo clearly fails here, but so does Jakob, since he doesn't specify that. For that matter, though, most mobile devices fail on this, since you have to pay far too much attention to them while using them in order to avoid making errors.
  • Run for long enough without recharging to allow its target user to make their full circuit -- get to the office, complete a work day, come home. Foleo falls down here, too, as do many of the current wave of handheld devices. Battery life on these things sucks. Nielsen misses this one, too.
  • Switch on more or less instantly. Nielsen misses this, but it's really important. I can't think of a single market-successful hand-held that doesn't hit this point.

Conspicuously missing, but important:

  • Afford text entry with a single hand and minimal distraction from other tasks. (Nothing really meets this criterion, though kids seem to do alright with tactile-feedback numeric pads and a lot of experience.)

In addition, we can make some other generalizations about what a device in the Foleo's class should do:

  • Connect to 802.11 networks. I think Foleo doesn't do as well as it should, here, since as far as I can tell it's only a "b" (802.11b) device. An 802.11n interface would really be ideal, since one of the "ecologically" competitive characteristics of a successful mobile device is liable to be its ability to easily make VOIP calls. This was a true failure of vision at Palm, who should have bitten the bullet on weight and given it whatever battery load they needed to in order to allow up to 802.11n speed and hit 7 hours of run time.

So the specs that Nielsen (and so many others) have seen as so ripe for criticism are not at all the ones that are important. The ones that are important, and the ones that will end up being technically critical for this device, are:

  • Battery life: 5 hours
  • Networking: 802.11b [I could be wrong about this]
  • Peripheral connection: USB 1.2 [I could be wrong about this]

So at a technical level, I'm actually positive it fails on only one point, and that's run-time. Nielsen does raise a valid point, though:

Palm seems ashamed of its own specs since they are nowhere to be found on the product pages.

This is a blatant violation of all guidelines for e-commerce. I can't believe even the worst designer would suggest making a site where you can't find out how big the product is (especially for a mobile device). It must be a deliberate decision to hide the facts.

I think he's actually right about that. I think the product managers and marketers at Palm were so gun-shy about identifying Foleo as a thin client that they invented this whole "smart-phone companion" nonsense to cover it up. They basically threw the game -- decided the product was a bust before they even started, and concocted a marketing plan that, while it couldn't possibly succeed, at least had good graphics.

But come on -- a "smart phone accessory" that's ten times the size of the phone? Idiots.

 


iPhone as Thin Client

The iPhone is the partial realization of the web-based thin client dream. In typical Apple fashion, though, they've gone just far enough to make money, and not so far that it might actually enable people to communicate more freely. They could have done that, but it would have meant leaving consumers' money on the table.

Apple's recent commercial makes this abundantly clear. In it, a user watches a clip of the Kraken from Pirates II, has a craving for calamari, and rotates his iPhone 90 degrees to search out seafood restaurants in his area.

Aside from the gee-whiz UI tricks that his iPhone enables, he's basically doing a Google Maps search. In fact it looks a lot like screenshots I've seen of Zimbra Zimlets for geo-locating addresses in the Zimbra web client. Nifty stuff. But there's no particular reason that it couldn't (or won't) be done on other phones. Hell, it's probably done on other phones now, if you want to pay for the service.

Which brings me to the Palm Foleo. I hadn't heard of the Foleo before Charlie Stross wrote an analysis explaining just why he didn't think it was such a terrible idea. Basically, after looking at the fact that it's really completely independent of phones in every important way, and can connect to WiFi networks all on its own, he thinks that it was intended to be a Web 2.0 terminal. A thin client, as we used to say back when everybody who thought things through thought that was a bad idea for a business plan. Things have changed, now, though: Broadband really is ubiquitous, if you're willing to pay for the access, and good quality high-resolution displays and mass storage are cheap, and battery technology is improving radically, so that the phone and its proprietary network have to do less and less that's customized.

So the iPhone (and any other post-Blackberry phone that wants to be successful) is really a Web 2.0 Terminal. Sometimes they'll have cached data, but by and large they'll do everything they can through the airwaves. The differentiator will be in the user interface.

Apple understands that, of course. They have a late-mover advantage in this field, in that Nokia, Samsung, Symbian, MS, et al. have been so focused on solving the UI problem under now-outmoded constraints that they're having a hard time getting used to the freedom of new user interaction hardware.

It still comes down to paying for service, of course -- unless you're on WiFi, and can attach to the myriad of free nodes that are finally becoming common in our urban landscape. Like you can with the Foleo, or any one of a half dozen (non-Verizon) smart-phones I looked at earlier this week.

But not on an iPhone. You need the extra service to do that on an iPhone.

If there's one thing Apple never forgets to design in, it's making you pay.

Andres Duany on New Urbanism

Courtesy of the Peoria Chronicle's blog, here are links to a lecture on "New Urbanism" given by Andres Duany in Houston. It's on YouTube in 9 parts of 10 minutes each, and the first several have been posted on the Peoria Chronicle's blog. I'll be working my way through them bite by bite, as I hardly have 90 minutes to spare for anything that requires both visual and auditory attention, simultaneously. I may yet find something objectionable in it, but the basic presentation is quicker than reading Death and Life of Great American Cities.

  1. Introduction; Background; Suburban sprawl patterns; the four major components; public realm/private realm | New Urbanism in 10 minutes a day, Pt. 1
  2. Part 2: Zoning/Codes; Single Use vs. Mixed Use Planning; Traffic and congestion issues; Quality of Life issues; Scale and relation to physical compatibility vs. functional compatibility | New Urbanism in 10 minutes a day, Pt. 2
  3. Part 3: The four major components of suburban sprawl cont'd; Business/retail component | New Urbanism in 10 minutes a day, Pt. 3
  4. Part 4: Residential component today, vs. the way we used to do it-(combining retail with residential); Importance of mixed use/range of income earners; Privacy and Community; "McMansions"; why people prefer to live in traditional towns vs. suburbs
  5. Part 5: Residential, continued; granny flats/garage apartments, addressing affordable housing; The discipline of front/back; Intro, "sense of place"
  6. Part 6: "Sense of Place", cont'd; What is it? How do you achieve it? What makes historical neighborhoods so desirable? The role of landscaping; Current residential development issues
  7. Part 7: Residential development issues, cont'd; Open Spaces; Roads: highways,avenues: It's all about the cars; Kevin Lynch; Landmarks; Terminating vistas, then and now
  8. Part 8: It's all about the cars, cont'd; Seniors & children suffer the most from today's sprawl, causing poor quality of life issues and reverse migrations
  9. Part 9: Back to the 11-hour workday: Spending our lives in our cars; Gold-plated highways at the expense of our civic and public buildings; Vertical vs. horizontal infrastructure; Affordable housing cont'd, by allowing families 'one car less' they can afford $50k more house! Conclusion; Year 2010 and 2015 projections

One comment from the Chronicle blog is interesting:

“New urbanism” is just a facade being used by developers to pack as many people into the smallest footprint as possible, to increase their profits.

In San Diego, older neighborhoods are being transformed into jam packed, noisy, traffic infested cesspools, by billionaires who live on 10 acre estates in Rancho Santa Fe (SD’s Bel Aire).

The 40 year old, 10 unit, low income apt building next to me was converted to $400k “condos” last year. It’s been pure hell, with 15 rude, loudmouthed, morons moving in, several of whom are already about to default on their loans. Several units are now being rented, at 3 times the monthly rent as before. Who wins? A handful of guys sitting around dreaming up their next scheme.

That he misses the point of New Urbanism completely isn't the interesting part -- it's that he's so willing to conflate New Urbanism with a newspeak co-optation of its ideals. He's not necessarily wrong to do so. Like many idealistic movements, it has some foolishness and romanticism baked into it and is vulnerable to abuse. There are plenty of people who jump into idealistic movements with a partial understanding of the situation and then end up taking them in a whole new, highly rationalized direction.

That's one of my objections to "emotional design": When you choose, as Don Norman, Bruce Tognazzini et al seem to have chosen, to make your evaluation of a design's quality hinge upon its gut, emotional appeal, you're basically opening up the door to tossing out real design and replacing it with pandering. Machines become good if they look cool. By that metric, the AMC Javelin would be one of the coolest, hottest cars ever manufactured. The nigh-indisputable fact that it was a piece of crap would be irrelevant: It had great "emotional design."

Similarly, the fact that PowerBooks are screwed together using 36 (or more) tiny screws of five to six different sizes and head-types, but also force-fit using spring clips, becomes irrelevant: The design feels great, looks great. Never mind that it could cost less to manufacture, cost less to repair and upgrade, and be just as solid, just as sound, if it were designed better. It's still great "emotional design."




Time is the new Bandwidth

I've been doing a lot of video blogging on BEYOND THE BEYOND lately, which must be annoying to readers who don't have broadband. But look: outside the crass duopoly of the USA's pitifully inadequate broadband, digital video is gushing right through the cracks. There's just no getting away from it. There is so much broadband, so cheap and so widespread, that the video pirates are going out of business. I used to walk around Belgrade and there wasn't a street-corner where some guy wasn't hawking pirated plastic disks. Those crooks and hucksters are going away, their customers are all on YouTube or LimeWire...

Bruce Sterling, WIRED Blogs: Beyond the Beyond

Broadband isn't the problem. Bruce makes his living being a visionary. I make my living doing work for other people. It's truly not the visionaries who actually change things -- it's the people who buy (into) their visions, and those people just don't have the time to look and listen at the same time to continuous "bites" of see-hear content.

Podcasts are bad enough -- I have to listen for as long as someone speaks in order to get their point, I can't really skim ahead or scan around with my eyes. I've got to buy into their narrative construction. And I'm paying for that purchase with my time and attention.

This also goes to Cory Doctorow's point about text bites. He's grazing around, taking in small chunks of text at a go, and the web is fine for that, that's his message. Great. Fine. But text can be time- and space-shifted far more effectively than audio, which in turn can be time-/space-shifted far more effectively than video.

What's really needful, as I've noted before, is a way to mode-shift text into audio without human intervention. Or video, for that matter, if you want to get visionary about it. But I'm not going to worry about video right now, because audio is something that some basement hacker could actually pull off with an evening's work, and refine with the labor of just a few weeks. Or so it seems to me. On my Mac, right now, I can select text and have the machine speak it to me, complete with sentence and paragraph pauses. The Speech service is AppleScript-able, so (if I actually knew AppleScript) I could script it to pick up blog posts and pump them into audio files that in turn could be pumped onto my audio player for listening in the gym or on the road. If I spent that much time in the gym or on the road. Which I don't.
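
For what it's worth, here's roughly what that basement-hacker evening might look like, sketched in Python rather than AppleScript: pull the items out of a blog's RSS feed and hand each one to the Mac's built-in say command, which will happily render text to an audio file. The feed URL is hypothetical, the feed is assumed to be plain RSS 2.0 with more-or-less plain-text descriptions, and the whole thing assumes a Mac with say on the path.

    import subprocess
    import urllib.request
    import xml.etree.ElementTree as ET
    from pathlib import Path

    FEED_URL = "https://example.com/blog/rss.xml"   # hypothetical feed
    OUT_DIR = Path("spoken-posts")

    def fetch_items(url: str):
        """Yield (title, text) pairs from a simple RSS 2.0 feed."""
        with urllib.request.urlopen(url) as resp:
            root = ET.fromstring(resp.read())
        for item in root.iter("item"):
            title = item.findtext("title", default="untitled")
            body = item.findtext("description", default="")
            yield title, body

    def speak_to_file(title: str, text: str) -> Path:
        """Render text to an AIFF file with macOS's built-in `say` command."""
        OUT_DIR.mkdir(exist_ok=True)
        out = OUT_DIR / (title.replace("/", "-")[:60] + ".aiff")
        subprocess.run(["say", "-o", str(out), text], check=True)
        return out

    if __name__ == "__main__":
        for title, body in fetch_items(FEED_URL):
            print("wrote", speak_to_file(title, body))

From there it's a short step to dropping the AIFF files into whatever syncs your audio player.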

On FEMA, Brownie, and the Suitability of Email for Critical Communications

Anyone in the corporate world knows that keeping up with email (or voicemail) can be a problem. I'm not talking about spam; I'm talking about ordinary business-related emails. During peak times like product implementations, I've occasionally gotten hundreds of non-trivial emails per day for up to several weeks at a time. In situations like that, sometimes, things get ignored. But if you are any good at your job at all, you find ways to prioritize those emails to ensure that the genuinely important ones don't get ignored.

That job gets much easier if someone summarizes all the really important stuff for you into one email. As someone did for Michael Chertoff and Michael Brown. Every day.

Some people don't get it. In emailed responses to NPR's interview with FEMA official Leo Bosner, several writers complain about the unsuitability of email as a means to communicate vital information. Email boxes get filled up with junk, they reason; Leo Bosner should have picked up the phone. One correspondent even argued that Bosner is the one who should be blamed, for his own dereliction of duty in relegating something so important to a mere email.

They're right about this much: Email can be a poor medium for reliably communicating vital information on an ad hoc basis (though certainly no worse than voice mail). But they also betray a profound lack of understanding of institutional processes and chains of responsibility.

And they make some basic assumptions about the email that they should not be making. This wasn't ad hoc. It was standard procedure. This is the way it was supposed to work.

And here's the really important thing: This wasn't just any email. It was an email that Chertoff and Brown got every day, and that they needed to read, and to understand, every day. That email was their job, writ fine: Know the danger, and be prepared to act. The danger was there; they knew about it; they did not act.

What this might in fact reveal is that there's a poor prioritization in practice. As a friend is fond of saying, "If everything is top priority, then nothing is." So it might reveal that there were too many top priority things in that memo.

But more likely, it reveals a criminal lack of attentiveness to job responsibilities on the part of Chertoff and Brown, as suggested by an earlier report. Critical places like FEMA are not places for political functionaries on the lookout for résumé padding. They're places for serious people who are willing to wear their pagers to bed and never ever turn off their Blackberries. That's what their subordinates -- people like Leo Bosner -- would do.

Imagination Failure of the Moment

Failure of imagination is often indistinguishable from arrogance.

Here's how The Blue Technologies Group conceptualizes the ideal "writer's" editing environment:

The concept of single documents in the classical sense is dismissed. Text elements take their part and are organised in a project, the container.
Every text element has two editing levels: the "standard" text and a "note pad".
The ability to format texts in an optical way (bold faced, italics, etc.) is omitted - you can divide paragraphs into levels and set markers instead.

It's passages like this that drive home to me how sorely and sadly in need most people are of a little applied personality theory. Because it's painfully clear to me just from the language that they use that their word processor, Ulysses, is going to be a painfully inappropriate tool for the vast majority of writers.

I know that because Ulysses has clearly been defined to suit the personality of a particular type of writer. The words and concepts its creators deploy tell me that. They talk about "projects", "markers", "levels" (of paragraphs?). These are organizational terms; they're conceptual terms. Using them to appeal to "writers" exposes the assumption that all writers think in similar ways. It implies that "writers" will want to restructure the way they think about producing texts such that they're vulnerable to being organized in "levels", and that they'll find it a benefit to replace italics and boldface with "markers".

My own experience working with writers who need to maintain HTML demonstrates to me abundantly that people aren't typically very interested in replacing italics with an "emphasis" tag. The idea that "italic" is visual and "emphasis" is conceptual (and hence, independent of presentation) is too far abstracted from the reality of writing for them -- it's too high-concept; for them, the reality of writing is that emphasized passages are in italics, and strongly emphasized passages are in boldface.

And I also see that while they talk about eliminating distractions, they produce an application with a cluttered and confusing user interface that looks to me like nothing so much as the UI of an Integrated Development Environment (IDE). While I've grown accustomed to the metaphor, I can remember when I found it cluttered and confusing, and I know from long experience that most people find those UIs as confusing as hell.

Now, this may be a great environment for some creative people. But based on what I know about personality theory, that subset of people is going to be very small -- something less than 7% of the population, most likely, and then reduce that to the much smaller subset that are writers who work on substantial projects.

I might even try Ulysses myself, for whatever that's worth; but if it looks to me like it would be the slightest nuisance to produce reviewable copy (for example, if I have to spend ANY TIME AT ALL formatting for print when I send it to friends and colleagues for review) then it's more or less worse than useless to me: Any time I save by having my "projects" arranged together (and how many writers do I know who organize things into discrete projects like that?) would be wiped out and then some by time wasted formatting the document for peer-reviewers. And I haven't even started to talk about trying to work cooperatively with other people....

The (partly valid) response might be that if writers would only learn to use it correctly, and adopt it widely enough that you wouldn't need special formatting to send a manuscript out for review, then Ulysses would be a fine tool. Of course, that's the same kind of thing that Dean Kamen and his true believer followers said about the Segway: If we'd all just rearrange our cities to suit it, the Segway would be an ideal mode of transport....

It's not the marketing I object to -- that will either work or it won't -- it's the arrogance of presuming that they've found the True Way. Because the implicit lack of interoperability that goes along with defining a new file storage protocol (and I don't care how you dress them up, they're still files) basically inhibits Ulysses users from working with other writers, and therefore implies that it's a truly separate way, if not a purely better way. Ulysses looks to me like a tool that fosters separateness, not cooperation -- isolation, not interaction. It's farther than ever from the hypertext ideal.

But then, I suppose my irritation is indicative of my own personality type.

Anti-Fittism Of The Moment

Big targets mean big distractions.

I'm sitting here listening to Whadya Know on the radio while I write. While I do this, I've got a couple of applications and part of my desktop visible on screen, and a cluttery clumsy app launching device pinned to the left edge of my screen. (I move my Dock to the left edge because I value the vertical screen space too much. More vertical space means more text on screen, which means more chance at understanding what the hell I'm looking at. Which is to the point, believe it or not, but I'm not going to go there right now.)

And I'm getting distracted by it all. Part of it is that I'm over 40 and wasn't "raised to multi-task" (as so many people raised in the age of multi-tasking OSs and multiple-media-streams seem to think they have been). But part of the problem is all this extraneous visual noise -- stuff I don't need to see right now, like the "drawer" to the left of my application window that lets me see the subject lines of previous journal entries and, more to the point, blocks out a bunch of other distracting stuff in the windows behind this one. Obviously, I could close the "drawer" and widen my editing window to cover them, but then I'd have a line-length that would be difficult to eye-track.

Anyway, the point of this (man, I am getting distracted) is that having all this clutter on my screen distracts me. Presumably that's why MacJournal (like a lot of "journaling" applications) has a full-screen mode that lets me shut out everything else if I so choose.

Fitts's Law is increasingly invoked these days to justify a lot of design decisions, like pinning a menu bar to the top of the screen for all applications, or putting "hot zones" in the corners of the screen. It's invoked as a rationalization for putting web page navigation at the edge of the page (and hence, presumably, at the edge of a window).

Interestingly, it seldom gets used as a rationalization for making navigation large.

Fitts's Law reduces to a fairly simple principle: The time it takes to hit a target with a mouse pointer is a function of the size of the target and the distance from the starting point. That is, it's quicker and easier to hit a big, nearby target with a mouse pointer than it is to hit a small, distant one.
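
For reference, the usual (Shannon) formulation predicts movement time rather than hit probability: MT = a + b * log2(D/W + 1), where D is the distance to the target and W is its width along the line of motion. A quick sketch, with the a and b constants made up purely for illustration (in practice they're fit from measured data for a given device and user):

    import math

    def movement_time(distance: float, width: float, a: float = 0.1, b: float = 0.15) -> float:
        """Shannon formulation of Fitts's Law: MT = a + b * log2(D/W + 1).

        `a` and `b` are device- and user-specific constants normally fit from data;
        the defaults here are illustrative only.
        """
        index_of_difficulty = math.log2(distance / width + 1)  # in bits
        return a + b * index_of_difficulty

    # Same 400-pixel reach, two target sizes: the bigger target is predicted to be quicker.
    print(f"small 10 px target: {movement_time(400, 10):.3f} s")
    print(f"large 80 px target: {movement_time(400, 80):.3f} s")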

Fitts's Law is also often cited as demonstrating that it's easier to hit targets that are at a constrained edge or corner; that's as valid a principle as Fitts's Law, but isn't really implied by it. So Fitts's Law gets cited to justify things like parking the active menu bar at a screen edge. It's easy to get to the edge of a constrained screen: Just bang your mouse or track-pad finger or pointing-stick finger over to the screen edge and it will stop -- won't go any farther. Bang in the general direction of the corner, and the cursor will behave like water and "flow" into the corner, so the corners become the easiest thing to hit on the screen. Tognazzini, among others, uncritically and inaccurately cites this as an example of Fitts's Law in action. I don't know who came up with this conflation first, but Tog is the most vocal exponent of it that I'm aware of, so I'll probably start referring to it as "Tognazzini's Corollary."

(Aside: Obviously this only holds for constrained edges, as on single-screen systems. On two-screen systems, or on systems with virtualized desktop scrolling, it's a little more complex. Less obviously, this principle is more or less meaningless on systems that are actuated with directly-positioned devices like touch-screens, and it requires that people engage in some selective modification of their spatial metaphors. But that's another topic for another time.)

It's interesting to me that Fitts's Law isn't applied to the size of buttons, because that's its most obvious implication: If you want to make an element easier to hit, the most obvious thing to do is make it bigger. Yet I don't recall ever seeing it invoked as an argument for making screen elements larger, or discussed when people call for making screen elements smaller. Which makes me suspect even more that Fitts's Law is often more a matter of Fittishization than Fittizing.

Because the reason people (read: designers) don't want to make things bigger is obvious: It doesn't look good. Things look better (or at least, cooler) when they're small. That's why designers who'll lecture you endlessly about the virtues of design usability have websites with tiny text that has poor intensity contrast with its background. So Fittism really tends to serve more as an all-purpose way of rationalizing design decisions than it really does as a way of making pages or applications more usable.

In any case, Fitts's Law isn't really nearly as important as its advocates make it out to be. The current rage for Fittism ("Fittishism"?) over-focuses on the motor experience of navigation, and de-emphasizes the cognitive aspects. The reason for this is that Fitts's Law can be very easily validated on a motor-interaction level, but the cognitive aspects of the problem tend to get ignored.

And that's not even considering the effect of edge detection in the visual field. This is not strictly a motor issue, and it's not really a cognitive issue -- though it has cognitive aspects.

For example, if the menu bar is always parked at the edge of the screen -- let's say at the top edge -- then it becomes very important that users be able to have confidence that they're using the right menu. If menus are affixed to a window, then you know that the menu applies to the application to which that window belongs. If menus are affixed to the top of the screen, you are required to do cognitive processing to figure out which application you've got in the "foreground".

(Another aside: That's fairly difficult to do on a Macintosh, which, ever since the advent of OS X and Aqua, has had very poor visual cues to indicate which application window is active. Title bars change shade and texture a little; text-color in the title bar changes intensity a little; the name of the application is appended to the beginning of the menu bar, in the space that people visually edit out of their mental map of the page in order to limit distractions. In other windowing environments -- Windows and KDE spring to mind -- it's possible to configure some pretty dramatic visual cues as to which windows are in focus, even if you ignore the fact that the menu you need is normally pinned to the damn window frame. It's trivially easy on an OS X system to start using the menu without realizing that you're looking at the menu for the wrong application. I don't do it much myself anymore, but I see people do it all the time.)

But I'm getting off point, here: I started this to talk about distractions, and I keep getting distracted....

Why is there no decent Mac word processor?

The late Isaac Asimov famously resisted computers for many years. With good reason: Until relatively late in his life, they couldn't have kept up with him. His workspace was infamous. He kept several long tables in the attic of his town house, arranged in a big "U", with an IBM Selectric (the fastest typewriter available then or since) every few feet. Each smaller workspace was set up to work on a different project, or part of a project. When he got bored working on one thing, he'd simply roll over to another project.

I got into computers to use word processors. That's not true: I got into computers to manage prose. That was really my dream: To manage prose, which meant managing ideas, managing text, searching seamlessly through stuff that I'd written, changing on the fly, getting rid of hard copy, automating tedious tasks.... I imagined a day when I'd be able to store large amounts of text and search through them easily. I imagined a day when I'd be able to effortlessly switch back and forth between projects the way that Asimov would wheel from one Selectric to the next.

That was in the mid-80s; I'm part of the way there. I use (and have used for something around ten years) a multi-tasking computer that lets me keep multiple projects in progress (dare I say "a la Asimov"?); with wireless networking, I can get connected to the Internet in a surprising and growing number of places; I have a small, light, powerful laptop that lets me do real work when away from an "office."

But I still don't have the text tools that I really want. OS X 10.4 has nice meta-data-aware indexing, implemented in a fairly efficient way; it also has good solid multitasking and power management. But it's still lacking one thing:

It doesn't have a decent word processor.

What would a word processor need to have for me to regard it as "decent"? At a high level, it needs to fulfill three basic criteria:

  1. It has to have good usability characteristics.
  2. It has to support all of the basic, required business functionality that people nowadays expect from a word processor.
  3. It has to be able to interchange files with no meaningful loss of information or formatting with the people with whom I need to work.

Those are actually pretty loaded criteria. Let's break them down a little:

  1. Usability: By this I mean that it has to stay out of my way and let me work. It has to not require that I do a lot of things with the mouse. It has to not place unusual constraints on me, like saving everything into some proprietary "project" or "drawer".
    1. Good interaction performance: Screen writes need to be fast and free of artifacts, document navigation actions like page up and page down need to be quick.
    2. It must be easy to do basic, standard things like move to different points in a document. There are conventional ways of doing this that might be CUA, but are probably just convention: Ctrl-End to move to the end of the current document, Ctrl-Home to move to the beginning, Ctrl-Up-Arrow to go back one paragraph, etc. You will find these conventions honored on the majority of Windows (and *nix) editors and word processors, with spotty acceptance on the Mac.
    3. It must at least be possible to de-clutter the visual field -- to remove extraneous noise. As an example, many fine word processors have for many years offered a "full screen" mode that brings the page to your focus and blocks out all other programs. That's an extreme example; Word and OpenOffice 2.0 have a "draft mode" that's pretty good in that regard.
  2. Features: Again, pretty loaded, but at a minimum I think a useful business word processor absolutely has to support the following -- these are things that I have found myself using again and again in preparing business documents, and they save incredible amounts of time:
    1. Automatically formatted (and numbered) lists and outlines. This might seem picky, but if you don't understand the need for it, you haven't really created many complex business documents. Consider a project plan document that might have a list of things in order. On review, the order changes. If your list has 50 items, you might need to change 50 ordinal numbers. (This has been available in MS Word, WordPerfect, StarOffice/OpenOffice, and many others for many years.)
    2. Section-sensitive headers and footers. I.e., when you start a new section, you can change the presentation or content of the headers and footers.
    3. Automated tables of contents.
    4. A simple way to format the first page of a simple document differently than the subsequent pages. This has been possible for many years in Word, WordPerfect.
    5. It must implement style-based formatting at at least the character and paragraph levels; more than that (such as page styles) might be overkill, since my experience so far suggests that they don't interoperate well. Furthermore, though, it must be possible to import styles from other documents or from some kind of repository. The feature is dramatically less useful without that capability.
  3. Interoperability: The software must, must, must be able to both import and export files -- files, not text, but files (this is important, guys, please listen) -- in one or more widely used formats. For practical purposes right now, that means that it must be able to interchange files with Word 2000 and later versions on the Windows platform. OASIS OpenDocument format compatibility would be nice from a future-proofing standpoint, but I'm already seeing some indications that the OpenDocument format may go places where it's not very inter-operable with Word's native RTF. So interoperability with RTF, clumsy and locked-in as it is, is what's needful.
    1. No information should be lost in an import/export. E.g., you should never ever lose footnotes/endnotes; you should not lose change tracking; you should not lose bookmarks.
    2. No formatting should be altered in an import/export. Obviously that's easier said than done -- especially with a poorly-documented format like RTF -- but OpenOffice and Word have come surprisingly close.

It's a fact -- and this is not seriously disputable by any honest and experienced user of both platforms -- that the Windows (and to a lesser extent Linux) word processors beat all but one (arguably two) of the available Mac word processors hands down on all these counts.

I leapt into using a Mac with the assumption I'd be able to find what I needed when I got here, and for the most part, that's been true. Some glaring exceptions: There really aren't any good music players (iTunes is a sad and cynical joke), and -- most glaringly -- there are no (repeat, no) capable, stable, usable, general-purpose word processors.

The field of modern word processors is pretty small to begin with. On Windows you've basically got Word, OpenOffice, and WordPerfect, with a few specialist players. Down the feature ladder a bit you've got AbiWord lurking in the shadows: It's pretty stable on Windows, and does most of what you'd need to do for basic office word processing, but it has some problems translating Word docs with unusual features like change tracking.

On *nix, you've always got OpenOffice and AbiWord. In addition, you've got kWrite, which is about on feature-par with AbiWord, but tends to remain more stable version to version.

To be fair, there are a lot of word processors available for the Mac. But few of them really fill the minimal requirements for a business word processor, and those few fail in critical to borderline critical extended requirements. And what's most frustrating for me is that it's been that way for years, and the situation shows no real signs of changing.

Here are the players on the Mac:

Word (Mac)

The Good: It supports all the basic, required business features.

The Ugly: Performance sucks, and so does price.

OpenOffice 1.1.2

The Good: Supports all the basic, required business features.

The Ugly: The two big problems are that it requires X11 and that it's not up to version parity with OO on the other platforms, I don't think. Truthfully, I haven't tried it yet, but my expectation is for poor performance. In any case, OpenOffice is in general clumsier than Word on a PC. That may not be true versus MacWord. Also, it does lack some Word features I've come to be very very fond of: Chapter navigation in the sidebar, and (this is a real biggie) the Outline Mode document view.
NeoOffice/J 1.1.4

The Good: Price -- it's free. Features -- it's got all the basic features, just as OpenOffice 1.2 does. By all accounts, it's more stable and performs better than OOo 1.1.2 does on a Mac. This is what I use every day, for better or worse. It's very impressive for what it is; I'd just like it to be more.

The Ugly: Rendering performance is flaky. It's hard to de-clutter the visual field -- there's nothing analogous to Word or OOo 2.x's "draft mode". NO/J is somewhat unstable from build to build, though genuine stability issues seem to get fixed pretty quickly, and the software will (theoretically) prompt you when there's a new patch or version available. Unpredictable behavior with regard to application of styles -- e.g. I apply a style, and it often doesn't fully obtain. Some of these problems get addressed on a build by build basis, but it's hard to know which are bugs and which are core defects of OOo. This is OO 1.x, after all, which was kind of flaky in the best of times.

Nisus Writer Express

The Good: Small, fast, good-looking, and the drawer-palette is less obtrusive than Word 2002's right-sidebar. RTF is its native format, which gives the (false) hope that it will have a high degree of format compatibility with Word.

The Ugly: I had high hopes for this one, but it's been disappointing to learn that it fails in some really critical areas. Format compatibility with Word is hampered by the fact that it's missing some really important basic features, like automatic bullets and outlining. I use those all the time in business and technical writing -- hell, just in writing, period. I don't have time to screw around adding bullets or automating the feature with macros, and because the implementation for bulleted or numbered lists is via a hanging indent, the lists won't map to bullet lists or numbered lists in Word or OO. Ergo, NWE is useless for group work. This is intriguing to me, since they've clearly done some substantial work to make it good for handling long documents, and yet they've neglected a very basic formatting feature that's used in the most commonly created kind of long document, business and technical reports: Automatically numbered lists and outlines.

Interestingly, it also fails to import headers and footers. I would have expected those to be pretty basic. Basically, this isn't exactly a non-starter, but it's close.

AbiWord 2.x

The Good: Free.

The Ugly: Unstable and has poor import and rendering performance in the Mac version. I know the developers are working on it, but there's only one guy working on the OS X port right now so I don't have high hopes. Also, it's not as good for long technical documents as Word or OO would be.

Mellel

The Good: Don't know; haven't tried it. People swear by it for performance, but see below.

The Ugly: File compatibility. Doesn't read OpenOffice files or OpenDocument (OASIS-standard) files, and has a native format that isn't RTF. That makes me think it's a waste of time to even bother to evaluate it. I don't need to be screwing around with something new if I'm going to run up against the same file compatibility issues I have with Nisus.

Mariner Write

The Good: Cheap. Light. Quick.

The Ugly: Features. As in, ain't got many.

Apple Pages

The Good: Inexpensive. Conforms to the Mac UI.

The Ugly: Conforms to the Mac UI -- which means that it requires finger-contorting key combinations to do basic things without using the mouse, and makes poor use of the screen. And it's severely lacking in features: Apparently it can't export very well to RTF, which is odd, considering how deeply Apple has ingrained RTF into their system.

Why am I mincing words, here? Pages, based on what I know about it, is the same kind of sad and cynical joke as iTunes. It's a piece of brainwashing; it's eye-candy; it's got nothing very useful to anyone who does anything serious with documents.

For the time being, it looks as though I'll be sticking with NeoOffice/J, and at some point installing the OO plus X11 package to see how ugly that is.

The User Experience is the User Experience

Jakob Nielsen, among others, has remarked that "the network is the user experience." They're all wrong, and they're all right.

Browsing through UseIt.com yesterday left Nielsen's June 2000 predictions of sweeping change in the user experience loaded in my browser when I sat down at my desk this morning:

Since the late 1980s, hypertext theory has predicted the emergence of a navigation layer that would be the nexus of the user experience. Traditionally, we assumed that this would happen by integrating the browser with the operating system to create a unified interface for manipulating remote information and local files. It has always been silly to have some stuff treated specially because it happened to come in over a certain network. Browsers must die as independent applications.

It is counter-productive to have users suffer sub-standard user interfaces for applications that happen to run across the Internet as opposed to the local client-server environment. Application functionality requires more UI than document browsing: another reason browsers must die.

Silly, counter-productive: Sure. I've always thought so. But the tendency in the late 1990s was to assume that document browsing was exactly enough. And though the peculiar insanity of things like "Active Desktop" (which strove to make the Win95 desktop work just like the Web circa 1999) does seem to have passed, it remains true that the bias is toward the browser, not toward rich application-scope UIs.

Which is to say that Nielsen, in this old piece, is failing to heed his own advice. Users are inherently conservative: They continue to do what continues to work, which drives a feedback loop.

But more than that, he -- like almost everyone else I can think of, except myself -- is missing the single most important thing about modern computing life: People don't use the same computer all the time. Working from home, now, I frequently use two: My desktop, an OS X Mac, and my laptop, a Sony Picturebook running Windows 2000. In my most recent full time job (where I sometimes spent 12 hour days on a routine basis), I used two more systems: A desktop running Windows NT and a laptop running Windows 2000. And that's not even counting the Windows 2000 desktop I still occasionally use at home. (And would use more if I had an easy way to synchronize it with my Mac and my Picturebook.)

And so it's interesting to look at each of Nielsen's predictions as of June 2000:

  1. Amazon is healthier than ever, in no small part because "zero click payments everywhere" are no closer now than they were in 2000. (See [3].)
  2. Yahoo's network of services is healthier than ever, in no small part because people are less and less tied to specific machines. (See [3].)
  3. Websites know your preferences only insofar as you invest those with a particular services vendor/provider, like Yahoo or Google. That's actually a reflection of increasing network-centricity: These services are finally recognizing that people have lives that cross many machines.
  4. AOL is failing rapidly, but its proprietary messaging system is still going strong -- as are the proprietary messaging systems of Yahoo and Microsoft. Messaging aggregators like Trillian are still bleeding edge.

None of this is to say that I don't think the network is the user experience. He's sort of right about that -- or at least, he's right that it sort of should be, that things would work better if we made apps more network aware. After all, in the age of ubiquitous wireless, the network is spreading to places it's never been before. But what the 2005 situation reveals is that relatively low-impact solutions like using cell phone networks for instant messaging or logging-in to websites have trumped high-impact solutions like re-architecting the user experience to eliminate the web. Instead of using the increasingly ubiquitous broadband services to synch all our stuff from a networked drive, we're carrying around USB keychain drives and using webmail. Instead of doing micropayments, we're still living in a world of aggregated vendors a la Amazon and charity (Wikipedia) or ad-/sales-supported services (IMDB, GraceNote).

At a more fundamental level, we have to be mindful that we don't define "the network" too narrowly. Consider the old school term "sneakernet": Putting files on floppies to carry them from one person to another. It was an ironism -- sneaker-carried "networking" wasn't "networking", right? -- but it revealed a deeper truth: "Networking" describes more than just the stuff that travels across TCP/IP networks. At a trivial level, it also includes (mobile) phone networks and their SMS/IM/picture-sharing components. But at a deeper level, it covers the human connections as well. In fact, the network of people is really at least as important as the network of machines.

Understood that way, "the network is the user experience" takes on a whole new meaning.

Momentary Thoughts on Empiricism in Design

When I read reports from other people's research, I usually find that their qualitative study results are more credible and trustworthy than their quantitative results. It's a dangerous mistake to believe that statistical research is somehow more scientific or credible than insight-based observational research. In fact, most statistical research is less credible than qualitative studies. Design research is not like medical science: ethnography is its closest analogy in traditional fields of science.
[Jakob Nielsen, "Risks of Quantitative Studies"]

I've always found it more than a little ironic that many designers have such a strong, negative reaction to Jakob Nielsen, especially since most of them do so by banging the drum in protest of what could be termed "usability by axiom": The idea that following a set of magic rules will make a website (or any application) more usable. I find it ironic, because Nielsen has always seemed to me to be a fairly ruthless empiricist: His end position is almost invariably that if a design idea doesn't actually provide the usability benefit you imagined it would, then you shouldn't be using it. This month's Alertbox is a case in point, but there are plenty of others I could cite.

And therein lies the problem. Designers, painting broadly, really do know more than the rest of us do about design, at least on the average: They spend years in school, they produce designs which are done according to the aesthetic rules and basic facts about human interaction with machines and which are then critiqued by their teachers and colleagues. They've often even done their research quite meticulously. But they seldom bother to actually look at what real users do -- at least, in any way that might do something other than validate their preconceptions. And it is, after all, the real users doing real work who will get to decide whether a design is effective or not.

Take "Fitt's Law", for example: If you search for tests of Fitt's Law, you'll find plenty of tests, but the last time I looked, I could find none that tested Fitt's Law in a real working context. And there's a good reason for that: It would be really hard to do. To test effectively, you'd have to include such contextual features as menus, real tasks, application windows -- and then, change them. It's barely feasible, but do-able -- it would be a good project for someone's Master's thesis in interaction design, and it would be simplest to do with Linux and collection of different window managers. You'd have to cope with the problems of learning new applications, and sort out the effect of those differences on the test. It's a tough problem to even specify, so it's not surprising that people wouldn't choose to fully tackle it.

But I digress. My point is that it's relatively easy to validate the physical fact that it's easier to hit big targets than small ones and easier to hit targets at the edge or corner of the mouse area than out in the middle of the visual field. Unfortunately, that's not very interesting or useful, because we all know that by now. (Or should. Surprisingly few university-trained or industrially-experienced interaction designers take account of such factors.)

One thing it would be interesting or useful to do, would be to figure out what the relationship is between "hard edges" (like the screen edge of a single-monitor system) and what we could call "field edges" (like the borders of a window).

What would be interesting would be to study the relationship of the physical truths exposed by Fitts's "Law" with the physical truths hinted at by things like eye-tracking studies.

What would be interesting would be to figure out what the relationship is between understanding and physical coordination. Quickly twitching your mouse to a colored target doesn't tell you much about that; but navigating a user interface to perform a word processor formatting operation could. Banging the mouse pointer into the corner of the screen like a hockey-stick tells you that mouse pointers stop when they hit screen edges; I already knew that from having used windowing user interfaces since 1987. What I don't know is whether cognitive factors trump physical factors, and simple validations of Fitts's Law tell me nothing about that.

What would be interesting would be to design to please customers, instead of to please designers.

Ecology of Traffic

The latest leading-edge thinking in traffic-calming is that we should remove traffic controls, not add them. Passive controls, that is, like signage; active controls -- hard controls, like traffic circles (rotaries, roundabouts) and merge lanes -- can stay. But Yield signs at the traffic circle entrance, "lane ending" indicators, even curbs, stop signs and traffic lights: Those should go.

The thinking is that without them, we think more. With them, we hand control over our fates to the signage. At the same time, we can do things that, superficially, make a road more dangerous: We allow parking where we'd previously barred it; we make the road-beds narrower instead of wider; we remove turn lanes and traffic lights; we remove explicit barriers between people and traffic. (Note that this doesn't mean eliminating sidewalks altogether: "Instead of a raised curb, sidewalks are denoted by texture and color.")

Results are counter-intuitive: Traffic moves more slowly, and yet trip times are reduced. It's the kind of result that a simplistic understanding of systems can't predict, but that an ecological understanding can.

I have to admit that I was resistant to the idea when I first read it. It reminded me of a trip to Seattle in February of 2000, when I noted the conspicuous absence of stop signs at intersections in many residential neighborhoods. But as I reflect on it, it strikes me that, at the least, bad signage really is worse than no signage. Signage, after all, plays to our conscious, rational mind, which is easily stymied by contradiction and inconsistency in ways that our sub-conscious, a-rational mind is not. And I recall that, when I approached those intersections, I stopped and looked very carefully. I paid attention to what I was doing (driving) instead of to other things.

As I think through it further, I find myself thinking of at least three other ideas: The human factors design concept of affordance; Jane Jacobs's "eyes on the street"; and the zen/taoist/buddhist tightrope of mindfulness:mindlessness. The common thread is that they all tap into aspects of humanity that are essentially sub-conscious, in the sense of being as tied to our animal nature as to our human nature. They are rational in the sense that sense can be made of them; they are also a-rational, in the sense that we seldom bother to try. (And also in the sense that when we do bother to try, we often screw it up.) Most importantly, they are ecological, not based on a simplistic, modernist understanding of systems theory.

We still need to be able to inculcate awareness of self-interest at a low level of consciousness. We can only rely on our natural accident-avoidance to carry us so far, especially with as many distractions as the world affords.

Office IM As A License To Bully

Workplace IM is one of those ideas that just won't die. It's made the Red Herring, and it's officially made it into my corporate workplace, so I'm afraid we're not going to see this one just fade away like the fad it should have been.

"The real questions will be whether supervisors seek to employ IM as a monitoring tool," said Jonathan Zittrain, professor and co-director of the Berkman Center for Internet and Society at Harvard Law School.

With IM, employers would be "able to ping employees at any moment, with a very low threshold since no one has to pick up a phone or wander over to a desk," he said. "Employees who would object immediately to a camera monitoring their desk feel IM is far less intrusive."

[Red Herring, "I work, therefore, IM"] [via SoulSoup]

I work for a Major Staffing Services Company, and some folks around here use IM all the time. And they do it for one simple reason: It lets them make people jump. That's why people like it: It gives them power over others. So IM is really just another manifestation of the schoolyard-bully meme that's becoming so prevalent in modern American business. Previously, IM use had been limited to the population of people servicing one particular large client that expects us to use their IM software; now, Corporate IT has pushed MS Messenger out to all of the Corporate-imaged computers, so I know there will be an expectation that we start using it. (Fortunately, my Corporate-imaged computer is so piss-poor that I don't use it, so I get a pass, for a while, at least.)

I had to interact with that sub-population a lot during a recent launch phase, and I can tell you with great confidence that getting the message to me via IM did not improve their chances of a quality resolution. In fact, it arguably decreased them, because I would continually have to shift focus to deal with new problems.

On the average, it takes something like 7 minutes to recover context and return to the previous state on a complex task after you take an unexpected phone call; the numbers for IM can't be a hell of a lot better. So IM is terribly disruptive and time-wasting.

Put succinctly: IM is a thoughtfulness-killer. There truly are very, very few business needs urgent enough that you have to IM, yet not important enough to justify picking up the phone. Yes, the phone takes more prep and concentration. That's a good thing; it means you use it less.

Since it's so commonly a power-trip, IM also drives the workplace further in the direction of being a war of all against all. That's not how work gets done. Work gets done through a combination of cooperation, and letting people get work done.

Adam Kalsey Dares To See Through The Emperor's Cloak

Adam Kalsey has had the temerity to criticize the Kewl Kidz browser, Firefox, and thinks that maybe, just maybe, aggressively marketing it prior to "1.x" isn't such a good idea: "Aggressively marketing Firefox before it is a completely stable product is dangerous. You're running the risk of having people trying it out and being put off by the bugs, never again to return." [Adam Kalsey, "Why I don't recommend Firefox"]

I agree; in addition, I wonder again why Firefox is being so aggressively marketed in preference to the more stable, more usable, more feature-rich Mozilla. Wait -- I know the answer to that already: It's basically because Firefox is COOL, and Mozilla is NOT COOL. There really are no serious technical reasons -- it's all a matter of how to best herd the cats.

The history on this is worth looking at. Mozilla and Firefox forked sometime in '02, when Firefox was still "Phoenix". The split happened because a small group of developers thought that some of the approaches used in the Mozilla core were wrong-headed, and they thought everything had to be rebuilt from the ground up to improve performance. They were particularly obsessed with load-time and rendering speed.

Fast forward to 2004: Mozilla still loads faster (though it's slightly -- slightly -- bigger), and renders pages faster. Mozilla core has been modified to have more or less all the customization hooks that Firefox has. Mozilla is still significantly more usable out of the box. But those kooky Firefox kids have their own bazaar, now. Oh, and, yeh, they finally did implement extension management.

In a really objective analysis, there's no strong argument for introducing Firefox to novice users, and as Adam points out, lots of reasons not to. There are also very few sound technical arguments for basing future product development on the Firefox paradigm of "distribute minimal, expect the user to do all the work." The Firefox kidz want their own kewl browser? Fine -- let them go build it, like the Camino team did. Don't turn their misbegotten hacker-bait into the core product. That's a sure way to fail.

Nevertheless, it's abundantly clear at this point that Firefox is the way of the future with regard to the Mozilla Project's browser product, and it's also abundantly clear why: The kidz wouldn't play unless they got to do things their way, and the project needed them.

Vlieg Urinor

One more note for today. The first fruit of adventures with del.icio.us, a story about a novel and innovative bit of user interface design in Dutch public toilets:

I have seen one of the finest instances of user interface design ever, and I saw it in the men's room at Schipol airport in Amsterdam.

In each of the urinals, there is a little printed blue fly. It looks a lot like a real fly, but it's definitely iconic - you're not supposed to believe it's a real fly. It's printed near the drain, and slightly to the left.

(... so, no, that's not bad Latin, it's Dutch.)

I have nothing particularly clever to add to this, except that it's one of those things that makes a light click on over your skull...

BTW, I found this on my very first experiment with using del.icio.us for serendipitous browsing. I bookmarked the site diagramming story, then looked at the bookmark lists for two of the three other people who had it marked; on the second one, near the top of the list, I found this.

Cool; I think I'd better stop, now, or this will eat the whole day, and I really do have work to do...

Any Typeface, Anywhere, Any Time (...as though we needed that)

Those control-freak design-guys are at it again. sIFR [Scalable Inman Flash Replacement] is a Flash movie / JavaScript library combination that builds on a set of techniques which dynamically replace text content with Flash-rendered content.
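To make the mechanics concrete, here is a minimal sketch of the general replacement pattern -- not the actual sIFR API, and with made-up names (replaceWithFlash, headline.swf) purely for illustration: script walks the document for matching elements, measures the box the HTML text occupies, and swaps in a Flash object that renders the same text in the embedded typeface, keeping the original text as fallback content.

```typescript
// A sketch of the general "Flash replacement" pattern; not the sIFR API.
// The function name and the .swf URL are hypothetical.
function replaceWithFlash(selector: string, swfUrl: string): void {
  document.querySelectorAll<HTMLElement>(selector).forEach((el) => {
    const text = el.textContent ?? "";
    const { width, height } = el.getBoundingClientRect();

    // Flash movie that renders the text in the embedded typeface,
    // sized to the box the HTML text already occupies.
    const obj = document.createElement("object");
    obj.setAttribute("type", "application/x-shockwave-flash");
    obj.setAttribute("data", `${swfUrl}?text=${encodeURIComponent(text)}`);
    obj.setAttribute("width", String(Math.ceil(width)));
    obj.setAttribute("height", String(Math.ceil(height)));

    // Keep the original text as <object> fallback content, so a browser
    // without the plugin still shows plain HTML text. (If scripting is off,
    // this function never runs and the markup is untouched.)
    obj.textContent = text;
    el.replaceChildren(obj);
  });
}

// e.g. replaceWithFlash("h1, h2", "/fonts/headline.swf");
```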

I'm of multiple minds on this. My first impression is that this control-freak approach to web design is so often couched in religious tones that it irritates the hell out of me. It always has. But then, I'm a content guy: It's the content that's important, and if you really truly need a specific typeface in an anally-retentively-specified size to make the content work, then your content is probably bad.

Unless it's a cartoon, in which case you should just break down and use Flash already.

That said, and assuming I ever get a Flash editing environment, I could very well see myself using this technique or one of its descendants, though not for my own sites. It addresses problems I've had in dealing with designers. The big problem is that a designer would have to plan for this technique in her/his design; most web designers (and I do mean most) still fall back to static graphics for controlled type presentations, and then build rigid frames around them that either don't need to scale or flow, or would break when scaled. Also, this technique is not quite up to replacing a static GIF or PNG image in terms of layering elements and placing them in relative position.

One thing that strikes me: In a sense, this technique doesn't really do what the designers say it does. The stated goal is to enable the designer to use whatever typeface they want, with smooth rendering. Smooth rendering is a bit of a red herring; most users have font smoothing turned on by default (presumably the designers don't, or they'd already be seeing what their audience sees). It's the typeface that's important, here, and that's really it.

Which is OK, but it's far from the most important feature of this kind of technique, and glosses over the best and most interesting use. One of the biggest drawbacks to Flash-driven interfaces has always been that they're difficult to edit, as they're most often implemented. If someone asks me to make modifications to a site that was developed in Flash, I'm basically very limited in what I can do for them. They've called me, because they can't do it themselves; I have to tell them they have to go to someone else, already, which makes them feel still more powerless and possibly even "had", because their original supplier locked them in by doing the site in Flash.

The more interesting use for a technique like this would be one that's hinted at in Mike Davidson's writeup, and that's to replace random blocks of text with a flash-driven presentation: A clean, low-labor way to put up a Flash site without requiring flash in the browser, and that would allow any non-Flash-compliant schlub (like, say, your clients) to edit the menus and content. sIFR isn't quite there, especially with regard to things like dynamic menus, and the techniques required may not really even be similar. But it raises intriguing possibilities, and I don't doubt that somebody will be pursuing them.

Actually, it occurs to me that what's really needed to translate between a technique like this and CSS-driven "graphics" (the awesome and infamous CSS Pencils being the most extreme example that I know) is a new kind of design tool that uses XHTML and CSS as its native formats. If done properly, it could quickly simulate display in different environments by consulting some kind of a mapping resource. Initially, at least, it would have the drawback of being able to produce layouts and designs that couldn't be rendered across browsers, but that's alright. Because -- again, if done right -- it could be designed to limit the designer to renderable choices. And, in any case, the availability of standard but un-renderable designs would serve as a driver to the development teams to complete their standard CSS implementations. (Trust me, they're not standard yet.)

I seem to recall that Fireworks was supposed to do something like this, but I don't hear about anybody using it this way. I doubt it would be that extreme. And in any case, Macromedia would have no interest in developing such a tool, since it would undermine the market for Flash.

Nielsen's Nietzschean Web Design Ideologies

Jakob Nielsen has done Nietzsche one better: Instead of just two basic ideologies ("master" and "slave"), he's identified three: Mastery, Mystery, and Misery. These correspond roughly to empowerment, game-play, and control. In the "Mastery Ideology", "...the designer's job is to provide the features users need in a transparent interface that gets out of the way and lets users focus on the task at hand." Mystery 'Obfuscates Choices' by using novel interaction elements. And Misery is an ideology of "... oppression, as mainly espoused by certain analysts who wish the Web would turn into television and offer users no real choices at all. Splash pages, pop-ups, and breaking the Back button are typical examples of the misery ideology."

Nielsen's purpose is to drive the cause of design for usability. That's what NNG do for a living. So it's not surprising that he focuses on the negative aspects of "mystery" (obfuscation) and control ("misery"), and carefully (re)interprets empowerment to mean "usability". He's mapped out (as usual) one path that, if followed, will more or less lead to a better design. It's the most bottom-line path, the path most suited to NNG's target audience: The guys with the money (they're the ones who tell the designers what to do, after all). But it's not the only path, and his re-interpretations have some pitfalls.

To start with, empowerment isn't always all it's cracked up to be. Sometimes (as Nielsen implicitly points out elsewhere) it's necessary to constrain in order to empower -- or at least, to create the sense of empowerment. Search is a good example. The earliest search interfaces included Boolean parsing as an integral part of their user interaction design. Gradually, Boolean parsing slipped out of the user interfaces as designers became convinced that it was an impediment.

Boolean search would be empowering; but for most users, it would be less usable. Nielsen has accepted that conclusion for years, incidentally. It's experimentally verifiable. (And that seems to me to be Nielsen in a nutshell: 'Where are the numbers?', he'd ask. At a conference, I heard him tell a story of a site whose usability was improved by increasing the number of clicks to perform a purchase. That's what the numbers told them was the right thing to do. And sure enough, the client's revenue increased. Counterintuitive -- but true.)

Similarly, control isn't all bad. UIs can often be made cleaner and easier to use -- especially for novice users -- by limiting functionality. Again, this is nothing Nielsen himself hasn't accepted for years. This is not to say that constraint is freedom; but constraint can give you more free time, when it prevents you from wasting effort on things you don't need.

Aside: I'm always wary of Google as an example of any kind of "empowerment." Google right now controls a mind-boggling array of resources, and is in the process of leveraging them to exert an unprecedented level of control over the merchandisability of your browsing experience. That you will remain largely unaware of this process is a testament both to their technical aplomb and to their insipid arrogance.

Where this starts to get interesting is with mystery. I've conflated Nielsen's "Mystery" with "game-play" -- guilty of my own reinterpretation, to be sure, but I think it's valid, and I'm not really alone. Kim Krause has made a similar leap. "Conformity is Nielsen's mantra," she declares. But to proclaim that, she has to ignore Nielsen's praise for J. K. Rowling's "personal" site, which makes extensive use of playful, "mysterious" interface metaphors. "The site feels more like an adventure game," writes Nielsen, "but that's appropriate because its primary purpose is to feed fans rumors about Rowling's next book." He goes on:

User research with children shows that they often have problems using websites if links and buttons don't look clickable. At the same time, using a virtual environment as a main navigation interface does work well with kids, even though it's rarely appreciated by adults (outside of games). Also, children have more patience for hunting down links and rolling over interesting parts of a page to see what they do. On balance, the mystery approach to design succeeds for Rowling -- just don't try it for sites that are not about teenage wizards.

So, maybe these aren't hard and fast rules. Maybe there's a little wiggle-room in Nielsen's declarative statements, after all.

But Krause seems to want more than just wiggle room -- she wants mystery. She wants "I'm Feeling Lucky."

Cool. I like that, too. In fact, that's why I dislike Google: Their "I'm Feeling Lucky" search is nothing more than a glorified popularity meter. I don't want to know what the most popular return on my search term is; I want to see what my options are.

That's how I find things I don't expect to find: By being able to see the results that might not be "most popular." That's how I get serendipity.

Krause does have a point, though, when she notes that it's memorability that makes the site. Google was memorable, she said, because people learned new ways to use the tool: "They could look up people before that first date. They could type in search terms and hit 'I'm Feeling Lucky' to see what one web site Google would find for them out of all the pages in its index. Google was fun to use." (Actually, I always thought HotBot was terrific fun to use, because its Boolean search interface gave me a sense of power by letting me whittle down my results set to exactly what I wanted. But hey, that's just me, I guess...)

When she talks about sites being "memorable", what she's talking about sounds an awful lot like Don Norman's "emotional design"; and indeed, I think that's what you get when you unify good design for usability with strong content and a design that speaks to that content. One site that strikes me as very successful in this regard is Burningbird. Superficially, the site is constantly changing, seeming to show a new look with almost every viewing. But having used the site once or twice, you will always still know how to use it again. Nothing about the interaction design per se changes when the graphics and colors and typefaces change. The menus stay in the same place, the action-cues stay the same.

I disagree with people who say that this is inherently hard. It does require care, but it's as hard as you make it; if you lean toward control, then you will be frustrated in your attempt to force an experience of memorable mystery upon your users, and it will all be very, very hard. If you let the content and your purpose drive your design, you will, by definition, get what you went looking for. The problem, as always, is to pick the right goal.

How I Want To Work, Part I

Here's how I want to work: I want to be able to just note stuff down, wherever I happen to be at that moment, and have it get stored and automatically categorized, and be available for publication wherever I want from wherever I am, whenever I want to. This has been an achievable dream for nearly ten years -- people are constantly hacking together systems to do just that. But we're stuck in a technologically-determined rut that keeps these solutions from being developed.

I've been thinking about these things a lot, and decided it was time that I wrote it all out, to organize my own ideas as much as anything else. So here's part one, where I try to unpack what it is that I'm really asking for, and start to get a sense for what's not working now, and why. So, as a separate story (because they're long, and would push everything down the page and out of sight), here's how I want to work...

How I Want To Work, Part One

[continued from blog entry]

Here's how I want to work: I want to be able to just note stuff down (in my ideal world, wherever I am at that moment) and have it get stored and automatically categorized, organized -- by timestamp, at least, but ideally also in some kind of navigable taxonomy

That Pernicious "Search Is King" Meme

There's an ever-waxing meme out there which basically boils down to this: "Forget about organizing information by subject -- let a full-text search do everything for you." The chief rationale is that such searching will help increase serendipity by locating things across subject boundaries.

Here's the problem: It's a load of crap. It throws the baby out with the bathwater, by discarding one time-honored, effective way of organizing for serendipity in exchange for another, inferior (but sexier) one.

This morning, via Wired News:

"We all have a million file folders and you can't find anything," Jobs said during his keynote speech introducing Tiger, the next iteration of Mac OS X, due next year.

"It's easier to find something from among a billion Web pages with Google than it is to find something on your hard disk," he added.

... which is bullshit, incidentally. At least, it is on my hard drive...

The solution, Jobs said, is a system-wide search engine, Spotlight, which can find information across files and applications, whether it be an e-mail message or a copyright notice attached to a movie clip. "We think it's going to revolutionize the way you use your system," Jobs declared.

In Jobs' scheme, the hierarchy of files and folders is a dreary, outdated metaphor inspired by office filing. In today's communications era, categorized by the daily barrage of new e-mails, websites, pictures and movies, who wants to file when you can simply search? What does it matter where a file is stored, as long as you can find it?

Ah, I see -- the idea of hierarchically organizing data is bad because it's "dreary" and "outdated" -- that is, of course, so quintessentially Jobsian a dismissal that we can be pretty sure the reporter took his words from The Steve, Himself.

But this highlights something important: That this is not a new issue for Jobs, or for a lot of people. Jobs was an early champion (though, let's be clear, not an "innovator") in the cause of shifting to a "document-centric paradigm". The idea was that one ought not have to think about the applications one uses to create documents -- one just ought to create documents, and then make them whatever kind of document one needs. Which, to me, seems a little like not having to care what kind of vehicle you want, when you decide to drive to the night club or go haul manure.

But I digress. This is supposed to be how Macs work, but it's actually not: Macs are just exactly as application-centric as anything else, though it doesn't appear that way at first. The few attempts at removing the application from the paradigm, like ClarisWorks and the early versions of StarOffice (now downstream from OpenOffice), merely emphasized the application-centricity even more: While word processors and spreadsheet software could generally translate single-type documents without much data loss, there was no way that they were going to be able to translate a multi-mode (i.e. word processor plus presentation plus spreadsheet) document from one format to another without significant data loss or mangling.

Take for example, Rael Dornfest, who has stopped sorting his e-mail. Instead of cataloging e-mail messages into neat mailboxes, Dornfest allows his correspondence to accumulate into one giant, unsorted inbox. Whenever Dornfest, an editor at tech publisher O'Reilly and Associates, needs to find something, he simply searches for it.

Again, a problem: It doesn't work. I do the same thing (though I do actually organize into folders -- large single-file email repositories are a data meltdown just waiting to happen). This is a good paradigmatic case, so let's think it through: I want to find out about a business trip to Paris that was being considered a year and a half ago. I search for "trip" and "paris". If my spam folder's blocked, and assuming we're still just talking about email, I'm probably not going to get a lot of hits on Simple Life 2 or the meta-tags for some other Paris Hilton <ahem!> documentary footage. In fact, unless the office was in Paris, and the emails explicitly used the term "trip", which they may well not, I probably won't find the right emails at all. Or I'll only find part of the thread, and since no email system currently in wide use threads messages, I won't have a good way of linking on from there to ensure that I've checked all messages on-topic. (And that could lead into another rant about interaction protocols in business email, but I'll stop for now.)

By contrast, if I've organized my email by project, and I remember when the trip was, I can go directly to the folder where I keep that information and scan messages for the date range in question.
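To put the contrast in code: here's a minimal sketch of the two retrieval styles, assuming a made-up mail store -- one plain-text file per message, filed either in a single flat directory or under per-project folders, with file names that happen to start with an ISO date. The folder names, paths, and date convention are all hypothetical.

```typescript
import * as fs from "fs";
import * as path from "path";

// Recursively list every message file under a directory.
function walk(dir: string): string[] {
  return fs.readdirSync(dir, { withFileTypes: true }).flatMap((entry) =>
    entry.isDirectory()
      ? walk(path.join(dir, entry.name))
      : [path.join(dir, entry.name)]
  );
}

// Style 1: flat search. Grep every message for the literal keywords;
// any message that says "travel" or "flights" instead of "trip" is missed.
function flatSearch(root: string, keywords: string[]): string[] {
  return walk(root).filter((file) => {
    const body = fs.readFileSync(file, "utf8").toLowerCase();
    return keywords.every((kw) => body.includes(kw.toLowerCase()));
  });
}

// Style 2: follow the path. Go straight to the project folder you filed
// things in, and scan only the date range you remember.
function folderScan(folder: string, from: string, to: string): string[] {
  return fs
    .readdirSync(folder)
    .filter((name) => name.slice(0, 10) >= from && name.slice(0, 10) <= to)
    .map((name) => path.join(folder, name));
}

// flatSearch("mail", ["trip", "paris"]);
// folderScan("mail/paris-office-project", "2003-01-01", "2003-06-30");
```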

The key problem here is that search makes you work, whereas with organization, you just have to follow a path. I used to train students on internet searching. This was back in the days when search engines actually let you input Boolean searches (i.e., when you could actually get precise results that hadn't been Googlewhacked into irrelevance). Invariably, students could get useful results faster by using the Yahoo-style directory drill-down, or a combination of directory search and drill-down, than they could through search.

If they wanted to get unexpected results, they were better off searching (at least, with the directory systems we had then and have now -- these aren't library catalogs, after all). And real research is all about looking for unexpected results, after all.

And that leads me to meta data.

Library catalogs achieve serendipity through thesauri and cross-referencing. (Though in the 1980s, the Library of Congress apparently deprecated cross-referencing for reasons of administrative load.)

The only way a system like Spotlight works to achieve serendipitous searching -- and it does, by the accounts I've read -- is through cataloged meta-data. That is, when a file is created, there's a meta-information section of the file that contains things like subject, keywords, copyright statement, ownership, authorship, etc. Which almost nobody ever fills out. Trust me, I'm not making this up: from my own experience, and that of others, I know that people think meta-data is a nuisance. Some software is capable of generating its own meta-data from a document, but such schemes have two obvious problems:

  1. They only include the terms in the document -- no synonyms or antonyms or related subjects, and no obvious way of mapping ownership or institutional positioning -- so they're no real help to search.
  2. They only apply to that software, and then only going forward, and then only if people actually use them.
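As a rough illustration of the first problem, here's a minimal sketch of the kind of meta-data software can generate on its own -- essentially the most frequent terms already present in the text. The stop-word list and thresholds are invented for the example.

```typescript
// Naive auto-generated "keywords": top term frequencies, nothing more.
const STOP_WORDS = new Set(["the", "a", "an", "and", "or", "of", "to", "in", "is", "it", "that"]);

function autoKeywords(text: string, limit = 10): string[] {
  const counts = new Map<string, number>();
  for (const word of text.toLowerCase().match(/[a-z']+/g) ?? []) {
    if (word.length < 4 || STOP_WORDS.has(word)) continue;
    counts.set(word, (counts.get(word) ?? 0) + 1);
  }
  // Only words that literally occur in the text can ever come out of this:
  // no synonyms, no broader subjects, no notion of ownership or provenance.
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, limit)
    .map(([word]) => word);
}
```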

Now, a lot of this is wasted cycles if I take the position that filesystems aren't going away and this really all amounts to marketer wanking. But it's not wasted cycles, if I consider that the words of The Steve, dropped from On High, tend to be taken as the words of God by a community of technorati/digerati who think he's actually an innovator instead of a slick-operating second-mover with a gift for self-promotion and good taste in clothes.

This kind of thinking, in other words, can cause damage. Because people will think it's true, and they'll design things based on the idea that it's true. And since "thought leaders" like Jobs say it's important, people will use these deficient new designs, and I'll be stuck with them.

But there's little that anyone can do about it, really, except stay the course. Keep organizing your files (because otherwise, you're going to lose things, trust me on this, I know a little about these things). The "true way" to effective knowledge management (if there is one) will always involve a combination of effective search systems (from which I exclude systems like Google's that rely entirely on predictive weighting) with organization and meta-data (yes, I do believe in it, for certain things like automated resource discovery).

Funny, who would have thunk it: The "true way" is balance, as things almost always seem to come out, anyway. You can achieve motion through imbalance, but you cannot achieve progress unless your motions are in harmony -- in dynamic balance, as it were. What a strange concept...

Stale Is Not Dead

Dave Winer finally speaks out on the Weblogs.com fiasco, and amongst all the usual stuff I find one thing that really leaps out at me:

One of the things I learned is that just because a site is dormant doesn't mean it's not getting hits. The referer spam problem on these sites was something to behold. Search engines still index their pages, and return hits. They were mostly dead, in the sense that most hadn't been updated for several years.

Something troubles me about this and the interminable HTTP code vs. XML redirect discussions, and that's this: If someone links to the content, it's live by definition.

I'll restate that, so it's clear: Content that is used should continue to be accessible. I don't actually know where Ruby or Winer or Rogers Cadenhead or anybody but the writers stand on this, but it remains a non-negotiable best practice and first principle of web interaction design for usability that links should not go dead.

If that means you have to redirect the user to a new location when the content moves, so be it. If you have to do that with meta-refresh in HTML or XML, so be it. Sure, there are "purer" ways to handle it; but it's just stupid to let the perfect be the enemy of the good by saying that you can't redirect if you can't modify your .htaccess file. Even a lot of people with their own hosting environments aren't allowed to modify their .htaccess.

I'm getting the sense that a lot of the people involved in these debates are forgetting that the web was supposed to be about linking information to related information. Protocols and methods are mere instrumentalities to that end. It's the content that matters; there really, really isn't a web without content.

Unsung Development of the Moment: Wikipedia Reinvents

Wikipedia is probably the most significant, important website on the net right here in May/June 2004. It's the signal success we can point to for bazaar-style projects, and the great white hope for the persistence of free, non-corporate-sponsored information on the web. Not to disregard Wikipedia's smaller cousin, WikInfo; it's just not big enough to be a great white hope, yet.

So, now, Wikipedia has done something intriguing: You can now talk about any article, or view previous versions. These appear to be benefits of the upgrade to version 1.3 of MediaWiki, the hyper-extended Wiki implementation that Wikipedia develops and uses to drive the site.

Tired terms like "community portal" don't do this justice. I don't think the great mass of the digerati really have any clue how important Wikipedia (and WikInfo) are. This kind of move, once they notice it, could blow Wikipedia wide open.

My great fear is that it could literally blow it wide open: How will they be able to handle the loads? Will their community software be able to cope with input from every Tom, Dick and Harry with an opinion?

The upside, of course, is that with a project of this broad scope, we'll finally get that "online experiment" that other "communities" have been claiming to be for years.

Addendum: I've posted this on Mefi; let's see if anybody cares.

Second addendum: Mefites assure me that it's always had that functionality, though it wasn't as obvious as it is now. I wonder if they've made changes that will let them handle the greater load and have decided to front-and-center those features?

Fun With Statistics, Fox News Style

In the eternal quest of all upright citizens to promote critical thinking, BoingBoing notes the following from Jason Schultz....

[Cory Doctorow's compatriot Jason Schultz notes:] Among today's top stories, a new "Fox News Poll" that says 33% of those surveyed think the media is too easy on Kerry and 42% think the media is too tough on Bush. [Of course, if it were limited to FoxNews coverage, you'd probably see dramatically different numbers in the opposite direction.]

But let's just look at the numbers they've given us. 33% think the media is too easy on Kerry. That means 66% (or 2/3rds) think the media is fair or too tough on Kerry, right? Isn't that the real story?

Jason and Cory: Why do you hate Freedom so? Or is it just America? (Note to self: Drupal must be stripping my sarcasm tags...)

So, to flog this horse just a little more, and because I'm sure Jason and Cory have better things to do than belabor the obvious, here are two other ways to spin these numbers.

First, the way they'd look if "liberals" thought like rightists:

Media Soo Nasty To Kerry   67%
Media Too Nice To Bush   58%

Now, a "balanced" way:

           Media Too Nice   Media Soo Nasty   Media ... Er, Somethingerother
Kerry      33%              <=67%             <=67%
Bush       <=58%            42%               <=58%

Get the point?

Breadcrumb Trails and Web Navigation Efficiency

Bonnie Lida Rogers and Barbara Chaparro have studied the effect of breadcrumb trails on site usability (Breadcrumb Navigation: Further Investigation of Usage, Usability News; courtesy INTERCONNECTED). While the results aren't surprising, they are results, as opposed to aesthetic prejudices.

Among the findings:

  • Given the option, users do tend to use breadcrumbs as a navigational tool.
  • If they use the breadcrumb trail, they tend to use the "Back" button somewhat less. Not stated, but implicit: If they're not using the back button, they're probably using the breadcrumb trail. Since this study tested "locational" breadcrumb trails (i.e. Drupal-like, category-driven), that might have resulted in some frustration. While I like locational trails aesthetically, that should be studied.
  • There is no significant improvement in navigational efficiency. In raw numbers, there is a slight decrease; I would suspect the back-button confusion I note above.
  • However, there is a significant impact on a user's "mental model" of the site hierarchy.
  • Position of the breadcrumb trail had a significant impact on whether or not it would be used. Breadcrumb trails positioned below the site header -- i.e., at the top of the page body -- were much more likely to be used. (A minimal sketch of such a trail follows this list.)
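Here's a minimal sketch of what a "locational" trail amounts to: the crumbs come from the page's category path, not from the visitor's click history, and the rendered markup is meant to sit at the top of the page body, just under the site header. The labels and URLs are invented for the example.

```typescript
// Render a locational (category-driven) breadcrumb trail as markup.
interface Crumb {
  label: string;
  href: string;
}

function breadcrumbHtml(trail: Crumb[]): string {
  const links = trail.map((c) => `<a href="${c.href}">${c.label}</a>`);
  return `<p class="breadcrumb">${links.join(" &gt; ")}</p>`;
}

// breadcrumbHtml([
//   { label: "Home", href: "/" },
//   { label: "InforAction Design", href: "/inforaction" },
//   { label: "Breadcrumbs", href: "/inforaction/breadcrumbs" },
// ]);
```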

Note that there's a lot of work on breadcrumbs at this site...
