Web Dev

The Message Is The Message; Or, Enforcing Subjective Aesthetics Through Ridicule Since 2007

Design Police Are Operating In This Area -- And They've Got Crappy Design Sense, Too

Why does anybody still think that ridicule is a useful tool for achieving positive ends? And why is anyone still willing to accept the idea that people who claim to use ridicule for positive ends are doing anything other than bullying people to make themselves feel superior?

Design is a religion. Let's just be clear about that. It has so many of the salient characteristics of religion that I find it difficult to understand why people become so offended at the notion that what they're preaching is not objective truth, it's faith. After all, they've expended a great deal of effort, karma and usually money to get their design credentials, and then they have to live in a world that Doesn't Take Design Seriously. (Much like the world doesn't Love Poetry. But that's another subject for another time.)

I feel for them, but I can't quite reach them, as my dad used to say. Here's a hint: Preachers go to grad school, too. There's a difference between VoTech and science, and unless you're formulating and falsifying hypotheses, design students, you're basically in a jumped-up VoTech program. Just like preachers.

Design fascists like the Design Police start very quickly to sound like folks who see oppression of Christians in all aspects of American daily life. What they're really seeing is that their particular religious biases are not honored by everyone who doesn't share them. Designers see stuff they don't like and confuse that with "bad design" in much the same way that extreme religionists see attitudes and behaviors they don't like and confuse them with immorality.

Truly hard-core design fetishists have a wonderful and seemingly limitless capacity for arrogance. They can say stuff like "Comic Sans is Evil", can insist that proper kerning and ligatures are crucial to truly understanding the meaning of a text, and basically imply that the rest of the world is populated with design-illiterate idiots who are destroying civilization through sloth and ignorance, all with a straight face and all without realizing that they're basically the design-equivalent of Ann Coulter: endlessly blathering that people who don't "get" them just have no sense of humor. (And bad taste, of course, to boot. Because Helvetica on pink bubblegum is the height of design, doncha know. Wait, I forgot: Intention is what matters; they meant it to be ugly, it's a statement....)

Take the Design Police ("Bring bad design to justice"). (Please take them.) They're a couple of design students (ah, they're still in DESIGN SCHOOL, which goes a long way toward explaining their sophomoric arrogance). I got a link to this lovely little bit of high-concept hideousness ("it's ugly on purpose! that makes it clever!") from a designer in my company. She's easily offended and basically a nice person, not given to deep thought about the fact that her attitude basically implies that everyone else is an idiot, so I refrained from pointing out to her that this is actually pretty fucking offensive elitist bullshit. She works in advertising. She doesn't realize, or doesn't accept, that design is not as important as designers like to think it is -- and why should she? Why would she? It would have a negative impact on her ability to do her work. Heaven forbid someone should point out that the high-concept design choice may not communicate as effectively as a simpler, more message-oriented choice.

Many designers seem to have been drilled in the facile mantra that "the medium is the message", without any real analysis of what that means. So they take a basically insightful concept like Emotional Design and turn it into a justification for the simple subordination of understanding to gut feeling. Most designers are what the President would call "libruhls", but the attitude is the same as his: The gut is king, the emotions rule over all, what I feel is much more important than anything you or I might know, and that's as it should be. That's not, of course, what Don Norman was arguing when he wrote Turn Signals Are The Facial Expressions Of Automobiles, and it's not what guys like Tognazzini profess to mean when they use the term "emotional design." But I've worked and interacted with a lot of designers, and it seems pretty clear to me that in the current design zeitgeist -- at least on the web -- emotional design means "to look good is much more important than to be good." Appearance becomes its own reality -- a very neo-conservative attitude.

I've got no illusions about changing the viewpoint of designers any more than I have about changing the viewpoints of militant religionists or militant atheists. They'll believe what they believe. I would really just prefer that they stop wasting my attention and lots of people's energy and money with their bullying (pomo) blather about the importance of clearly marginal crap like the "unimaginative" choice of Helvetica.


The problem of trackback and comment spam in Drupal, and one way to address it

(I just posted a version of the following over at Drupal.org, in their "Drupal Core" forum. I doubt it will make much of an impact, but I had to try...)

I propose that there is a problem with the way program-function URLs are formed in Drupal, one that makes Drupal a disproportionate target for trackback and comment spammers.

The problem with comment and trackback spam in Drupal is this: It's too easy to guess the URL for comments and trackbacks.

In Drupal, the link for a node has the form "/node/x", where x is the node ID. In fact, you can formulate a lot of Drupal URLs that way; for example, to trackback to x, the URI would be "/trackback/x"; to post a comment to x, it would be "/node/comment/reply/x". So you can see that it would be a trivially easy task to write a script that just walked the node table from top to bottom, trying to post comments.
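
To make the point concrete, here's a minimal sketch of how little work that walk takes -- the base URL and node ID range are made up, and this only prints the guesses rather than requesting anything:

```python
# Hypothetical illustration: enumerate the guessable Drupal endpoints for a
# range of sequential node IDs. The base URL and ID range are invented; this
# just prints candidate URLs to show how predictable the URL space is.
BASE = "http://example.com"

def candidate_urls(node_id):
    return [
        f"{BASE}/node/{node_id}",                # the node itself
        f"{BASE}/trackback/{node_id}",           # trackback endpoint
        f"{BASE}/node/comment/reply/{node_id}",  # comment-posting path
    ]

for nid in range(1, 501):            # walk node IDs 1..500
    for url in candidate_urls(nid):
        print(url)
```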

Which is pretty much what spammers do to my site: They fire up a 'bot to walk my node tree, looking for nodes that are open to comment or accepting trackbacks. I have some evidence that it's different groups of spammers trying to do each thing -- one group seems to be switching IPs after a small number of attempts, and the other tends to use the same IP until I block it, and then takes a day or so to begin again -- but that hardly matters. What does matter is that computational horsepower and network bandwidth cost these guys so little that they don't even bother to stop trying after six or seven hundred failures -- they just keep on going, like the goddamned Energizer Bunny. For the first sixteen days of August this year, I got well over 100,000 page views, of which over 95% were my 404 error page. The "not found" URL in over 90% of those cases was some variant on a standard Drupal trackback or comment-posting URL.
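
(For what it's worth, numbers like those are easy to pull out of a server log. A rough sketch, assuming a common-log-format access log at a hypothetical path, and with pattern regexes you'd adjust to your own setup:)

```python
# Rough sketch: count 404s in a common-log-format access log and see how many
# of them match the guessable Drupal comment/trackback URL patterns.
# The log path is hypothetical; adjust the regexes to taste.
import re
from collections import Counter

LINE = re.compile(r'"(?:GET|POST) (\S+) HTTP/[\d.]+" (\d{3})')
SPAM_PATTERNS = re.compile(r"^/(trackback/\d+|node/comment/reply/\d+|comment/reply/\d+)")

counts = Counter()
with open("/var/log/apache2/access.log") as log:   # hypothetical path
    for line in log:
        m = LINE.search(line)
        if not m:
            continue
        path, status = m.group(1), m.group(2)
        if status == "404":
            counts["404 total"] += 1
            if SPAM_PATTERNS.match(path):
                counts["404 matching comment/trackback patterns"] += 1

print(counts)
```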

One way to address this would be to use something other than a sequential integer as the node ID. This is effectively what happens with tools like Movable Type and Wordform/WordPress, because they use real words to form the path elements in their URIs -- for example, /archives/2005/07/05/wordform-metadata-for-wordpress/, which links to an article on Shelley Powers's site. Whether those real words correspond to real directories or not is kind of immaterial; the important point is that they're impractically difficult to crank out iteratively with a simple scripted 'bot. Having to discover the links would probably increase the 'bot's time per transaction by a factor of five or six. Better (from the spammer's point of view) to focus on vulnerable tools, like Drupal.

But the solution doesn't need to be that literal. What if, instead of a sequential integer, Drupal assigned a Unix timestamp value as a node ID? That would introduce an order of complexity to the node naming scheme that isn't quite as dramatic as that found in MT or WordPress, but is still much, much greater than what we've got now. Unless you post at a ridiculous frequency, it would guarantee unique node IDs. And all at little cost in human readability (since I don't see any evidence that humans address articles or taxonomy terms by ID number, anyway).
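
A bit of back-of-the-envelope arithmetic -- not Drupal code, and the post count and site age here are invented -- shows how much bigger the guess space gets:

```python
# Back-of-the-envelope comparison (not Drupal code): how many guesses does a
# 'bot need to cover every possible node ID? All numbers are hypothetical.
posts = 2000                      # a site with 2,000 nodes

# Sequential IDs: the 'bot just counts from 1 up to the highest ID.
sequential_guesses = posts

# Timestamp IDs: node IDs are scattered across every second of the site's
# lifetime, so the 'bot has to cover the whole range to be sure of a hit.
site_age_years = 5
timestamp_guesses = site_age_years * 365 * 24 * 60 * 60

print(f"sequential: {sequential_guesses:,} guesses")
print(f"timestamp:  {timestamp_guesses:,} guesses "
      f"(~{timestamp_guesses // posts:,} guesses per valid node)")
```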

Some people will immediately respond that this is "security through obscurity", and that it's therefore bad. I'd argue that they're wrong on two counts: First, it's not security through obscurity so much as security through economic disincentive; second, it's not bad, because even knowing exactly how it works doesn't help you very much to game it. The problem with security through obscurity, see, is that it's gameable. Once you know that the path to the images directory is "/roger/jessica/rabbit/", you can get the images whenever you want; but even if you know that the URL to post a comment has the form "/node/timestamp/reply/comment/", you're not a heck of a lot closer to getting a valid comment or trackback URL for any particular node than you were before you knew that.

Anti-Fittism Of The Moment

Big targets mean big distractions.

I'm sitting here listening to Whad'Ya Know on the radio while I write. While I do this, I've got a couple of applications and part of my desktop visible on screen, and a cluttery, clumsy app-launching device pinned to the left edge of my screen. (I move my Dock to the left edge because I value the vertical screen space too much. More vertical space means more text on screen, which means more chance at understanding what the hell I'm looking at. Which is to the point, believe it or not, but I'm not going to go there right now.)

And I'm getting distracted by it all. Part of it is that I'm over 40 and wasn't "raised to multi-task" (as so many people raised in the age of multi-tasking OSs and multiple-media-streams seem to think they have been). But part of the problem is all this extraneous visual noise -- stuff I don't need to see right now, like the "drawer" to the left of my application window that lets me see the subject lines of previous journal entries and, more to the point, blocks out a bunch of other distracting stuff in the windows behind this one. Obviously, I could close the "drawer" and widen my editing window to cover them, but then I'd have a line-length that would be difficult to eye-track.

Anyway, the point of this (man, I am getting distracted) is that having all this clutter on my screen distracts me. Presumably that's why MacJournal (like a lot of "journaling" applications) has a full-screen mode that lets me shut out everything else if I so choose.

Fitts's Law is increasingly invoked these days to justify a lot of design decisions, like pinning a menu bar to the top of the screen for all applications, or putting "hot zones" in the corners of the screen. It's invoked as a rationalization for putting web page navigation at the edge of the page (and hence, presumably, at the edge of a window).

Interestingly, it seldom gets used as a rationalization for making navigation large.

Fitts's Law reduces to a fairly simple principle: The time it takes to hit a target with a mouse pointer is a function of the size of the target and its distance from the starting point. That is, it's easier (and faster) to hit a big target with a mouse pointer than it is to hit a small one.
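
For reference, the usual (Shannon) formulation is MT = a + b * log2(D/W + 1), where D is the distance to the target and W is its width along the axis of motion. A toy calculation, with made-up device constants a and b, shows how both terms move the predicted time:

```python
# Fitts's Law, Shannon formulation: predicted movement time grows with the
# "index of difficulty" log2(D/W + 1). The device constants a and b are
# made up purely for illustration.
from math import log2

def movement_time(distance, width, a=0.1, b=0.15):
    """Predicted time (seconds) to acquire a target of the given width
    at the given distance, for hypothetical device constants a and b."""
    return a + b * log2(distance / width + 1)

# Same distance, bigger target -> lower predicted time.
print(movement_time(distance=800, width=16))   # small, far target
print(movement_time(distance=800, width=128))  # big, far target
print(movement_time(distance=100, width=16))   # small, near target
```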

Fitts's Law is also often cited as demonstrating that it's easier to hit targets that are at a constrained edge or corner; that's as valid a principle as Fitts's Law itself, but it isn't really implied by it. So Fitts's Law gets cited to justify things like parking the active menu bar at a screen edge. It's easy to get to the edge of a constrained screen: Just bang your mouse or trackpad finger or pointing-stick finger over to the screen edge and it will stop -- it won't go any farther. Bang in the general direction of the corner, and the cursor will behave like water and "flow" into the corner, so the corners become the easiest thing to hit on the screen. Tognazzini, among others, uncritically and inaccurately cites this as an example of Fitts's Law in action. I don't know who came up with this conflation first, but Tog is the most vocal exponent of it that I'm aware of, so I'll probably start referring to it as "Tognazzini's Corollary."

(Aside: Obviously this only holds for constrained edges, as on single-screen systems. On two-screen systems, or on systems with virtualized desktop scrolling, it's a little more complex. Less obviously, this principle is more or less meaningless on systems that are actuated with directly-positioned devices like touch-screens, and it requires that people engage in some selective modification of their spatial metaphors. But that's another topic for another time.)

It's interesting to me that Fitts's Law isn't applied to the size of buttons, because that's its most obvious implication: if you want to make an element easier to hit, the most obvious thing to do is to make it bigger. Yet I don't recall ever seeing it invoked as an argument for making screen elements larger, or discussed when people call for making screen elements smaller. Which makes me suspect even more that Fitts's Law is often more a matter of Fittishization than Fittizing.

Because the reason people (read: designers) don't want to make things bigger is obvious: It doesn't look good. Things look better (or at least, cooler) when they're small. That's why designers who'll lecture you endlessly about the virtues of design usability have websites with tiny text that has poor intensity contrast with its background. So Fittism really serves more as an all-purpose way of rationalizing design decisions than as a way of making pages or applications more usable.

In any case, Fitts's Law isn't nearly as important as its advocates make it out to be. The current rage for Fittism ("Fittishism"?) over-focuses on the motor experience of navigation and de-emphasizes the cognitive aspects. The reason is that Fitts's Law can be validated very easily at the level of motor interaction, while the cognitive aspects of the problem tend to get ignored.

And that's not even considering the effect of edge detection in the visual field. This is not strictly a motor issue, and it's not really a cognitive issue -- though it has cognitive aspects.

For example, if the menu bar is always parked at the edge of the screen -- let's say at the top edge -- then it becomes very important that users be able to have confidence that they're using the right menu. If menus are affixed to a window, then you know that the menu applies to the application to which that window belongs. If menus are affixed to the top of the screen, you're required to do cognitive processing to figure out which application you've got in the "foreground".

(Another aside: That's fairly difficult to do on a Macintosh, which, ever since the advent of OS X and Aqua, has had very poor visual cues to indicate which application window is active. Title bars change shade and texture a little; text color in the title bar changes intensity a little; the name of the application is added at the left end of the menu bar, in the space that people visually edit out of their mental map of the screen in order to limit distractions. In other windowing environments -- Windows and KDE spring to mind -- it's possible to configure some pretty dramatic visual cues as to which windows are in focus, even if you ignore the fact that the menu you need is normally pinned to the damn window frame. It's trivially easy on an OS X system to start using the menu without realizing that you're looking at the menu for the wrong application. I don't do it much myself anymore, but I see people do it all the time.)

But I'm getting off point, here: I started this to talk about distractions, and I keep getting distracted....

From the "Holy Crap!" Department: Adobe Acquires Macromedia

At first, I thought it must have been some kind of a joke, but it seems to be true: Adobe and Macromedia have agreed to a friendly takeover, at a price of about $3.4B. So the question is, does this save Adobe or destroy Macromedia? And is there any conceivable way that merging two 800 pound gorillas could be good for web developers or end users?

Macromedia and Adobe have been presented as competitors for years, but they actually compete head to head in very few areas. Even in places where they seem to butt up against one another, as in the case of ImageReady versus Fireworks, or FlashPaper versus PDF, the truth is more complex: In the first case, most design shops just buy their people one of each, and in the second, the formats, while presented as directly competitive, really aren't. PDF is almost zealously print-centric; FlashPaper is really an effort to make Flash more print-friendly, and in fact ends up incorporating PDF into its own standards stack. Both have more usability warts than most people on either side like to admit.

It's hard to see how this helps consumers. Adobe have become enormously complacent in recent years. They're effectively the only game in town for professional image editing, and they know it. In the graphics professions, the price of a copy of Creative Suite is simply a part of the cost of setting up a new designer or graphic artist. Even heavily Macromedia-focused web shops use Adobe software at some stage in their workflow, thanks to Adobe's strong lock on certain color technologies. But they've never bothered to develop anything like Flash, and have never worked very hard to overcome the profound weaknesses of PDF as a format.

Macromedia are somewhat hungrier, somewhat more innovative -- but they, too, have a market lock. Professional web design shops either work with Macromedia StudioMX (or possibly just Dreamweaver), or they most likely do inferior work. I know of a few good web designers who stick with Creative Suite for everything, but they're old pros with lots of experience dealing with Adobe's deeply idiosyncratic conventions and techniques. Macromedia's workflow for web production is far, far superior to Adobe's in every regard except for color management and high-end image/graphic editing. Their "round-trip" code management is on an entirely different plane from Adobe's understanding of how to deal with HTML.

If I had to predict the shakeout, I'd say that the final product lineup from the merged entity will include Dreamweaver and Flash from Macromedia; Acrobat, Photoshop, Illustrator, and InDesign from Adobe; and probably both ImageReady and Fireworks, until they figure out which one is harder to de-integrate. My guess would be that the good bits of ImageReady will be incorporated into Fireworks, which has much, much stronger HTML generation capabilities. (That said, its file format may prove difficult to integrate with Photoshop and Illustrator.) Acrobat and Flash will have a relationship analogous to that between Flash and Director: Flash will be a mezzanine for rendering and delivering PDF, and Acrobat itself will continue as a separate product.

And, of course, Macromedia's server-side products will remain intact, because they're what Adobe really wants. Adobe is digital graphics, basically; but they aren't positioned to continue to grow in a post-Web world. Specifically, they are vulnerable to being obsolesced as technology moves beyond them. Macromedia, by contrast, has spent the last several years experimenting with web-focused (not merely web-based) workflows.

ADDENDUM: After reviewing the MeFi thread, I'm no longer so sure that Adobe will be humble enough to keep Macromedia's very empirically grown software development stack. And I see that some of my assumptions regarding the smartest choice of components may be too optimistic. One thing's for sure: Our web dev apps will sure be a lot more expensive...

Spam-Whack: What Happens When You Cut Humans Out Of The Loop

For about 12 hours, I've been getting hit heavily by texas-holdem spam. This comes just two days after "texas-holdem.rohkalby.net" "spam-whacked" (to coin a phrase) its way to a high position in the Daypop Top 40, one of the key indicators of memetic activity in the blogosphere. It didn't stay there more than a day, but it was there long enough for my 12-hour aggregation cycle on the Daypop Top 40 to pick it up.

This wave of comment spam here (all caught by my filter, after the initial training) is conventional comment spam. But my hunch is that the "rohkalby.net" Daypop-whack was done with trackback. I just can't imagine it happening rapidly enough, or in widespread enough form, without the assistance of trackback auto-discovery.

BTW, I haven't found anybody actually mentioning this incident, which is very interesting to me. It means, I think, that they either didn't notice, didn't understand the importance, or didn't want to admit the importance. Which is huge, because this would demonstrate two things -- one very important, the other merely interesting:

  1. The effectiveness of trackback spam is more or less entirely due to auto-discovery, which effectively automates the distribution of trackback spam. (The blogorati will underestimate the importance of this by observing snarkily that this could have been avoided by using nofollow. They're probably right, but the observation misses the point in a big way.)
  2. The merely interesting thing is that this helps to clarify who's responsible for the attractiveness of trackback spam: Six Apart.

We can say safely that Six Apart are responsible, by the way, because they initially invented trackback as a manual means of "giving juice" to someone else, and then failed to understand that it needed to stay manual. It was intended to be initiated by human action, not automated. But then they proceeded to automate it, and that made trackback geometrically more attractive as a target for spam: It meant that spammers could potentially place themselves into the various automatically-compiled "top" lists in a completely automated fashion (i.e., at a cost to the spammer approaching zero). And with no legal consequences, to boot: They couldn't be prosecuted under email laws, because it wasn't email; they couldn't be charged with theft of service or hacking because -- and this is key -- the spamming was being carried out as a normal, designed feature of the "exploited" systems, using their resources.
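
For anyone who hasn't looked at the plumbing: auto-discovery works by embedding a small RDF block in each post's HTML that advertises the trackback ping URL, so a client -- or a spammer's 'bot -- only has to fetch the page and scrape it. A minimal sketch of that discovery step, against a hypothetical URL:

```python
# Minimal sketch of trackback auto-discovery: fetch a page and scrape the
# trackback:ping URL out of the embedded RDF block that Movable Type-style
# blogs publish with each post. The page URL here is hypothetical.
import re
from urllib.request import urlopen

PING_RE = re.compile(r'trackback:ping="([^"]+)"')

def discover_trackback_urls(page_url):
    html = urlopen(page_url).read().decode("utf-8", errors="replace")
    return PING_RE.findall(html)

if __name__ == "__main__":
    print(discover_trackback_urls("http://example.com/archives/000123.html"))
```

Once the 'bot has the ping URL, the spam itself is a single HTTP POST with title, excerpt, and url fields. That's the cost-approaching-zero part.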

The great mystery isn't that it happened, but that it took so long to happen.

Shelley et al.'s "tagback" concept might provide a "fix" for this, of a very limited sort, but it still leaves us without trackback. Trackback was a very useful concept; it allowed people to create implicit webs of interest, one connection at a time and without central planning, and -- and this is really important -- without the mediation of a third party like Google or Technorati. And we all know that spammers will find a way to parasitize themselves onto tagback, as well.

And anyway, reliance on third parties for integration gives them power they should not be allowed to have. It's a bad design principle. Trackback, pre-autodiscovery, was a good simple piece of enabling technology. But it was mis-applied, quickly, with the encouragement of people who should have known better. And now it will be forgotten. Which is really, deeply stupid, when instead it could simply be re-invented without auto-discovery.

Adam Kalsey Dares To See Through The Emperor's Cloak

Adam Kalsey has had the temerity to criticize the Kewl Kidz browser, Firefox, and thinks that maybe, just maybe, aggressively marketing it prior to "1.x" isn't such a good idea: "Aggressively marketing Firefox before it is a completely stable product is dangerous. You're running the risk of having people trying it out and being put off by the bugs, never again to return." [Adam Kalsey, "Why I don't recommend Firefox"]

I agree; in addition, I wonder again why Firefox is being so aggressively marketed in preference to the more stable, more usable, more feature-rich Mozilla. Wait -- I know the answer to that already: It's basically because Firefox is COOL, and Mozilla is NOT COOL. There really are no serious technical reasons -- it's all a matter of how to best herd the cats.

The history on this is worth looking at. Mozilla and Firefox forked sometime in '02, when Firefox was still "Phoenix". The split happened because a small group of developers thought that some of the approaches used in the Mozilla core were wrong-headed, and they thought everything had to be rebuilt from the ground up to improve performance. They were particularly obsessed with load time and rendering speed.

Fast forward to 2004: Mozilla still loads faster (though it's slightly -- slightly -- bigger), and renders pages faster. Mozilla core has been modified to have more or less all the customization hooks that Firefox has. Mozilla is still significantly more usable out of the box. But those kooky Firefox kids have their own bazaar, now. Oh, and, yeh, they finally did implement extension management.

In a really objective analysis, there's no strong argument for introducing Firefox to novice users, and as Adam points out, lots of reasons not to. There are also very few sound technical arguments for basing future product development on the Firefox paradigm of "distribute minimal, expect the user to do all the work." The Firefox kidz want their own kewl browser? Fine -- let them go build it, like the Camino team did. Don't turn their misbegotten hacker-bait into the core product. That's a sure way to fail.

Nevertheless, it's abundantly clear at this point that Firefox is the way of the future with regard to the Mozilla Project's browser product, and it's also abundantly clear why: The kidz wouldn't play unless they got to do things their way, and the project needed them.

Whitehouse.Gov's Robot Exclusion File

A friend (who shall remain nameless) just learned about Robot Exclusion Files; these are wide open and you can look at them for a number of very public sites. Being the curious sort, and particularly mindful of the current administration, she decided to see what happened when she tried to look at the robots.txt file for Whitehouse.gov.

Surprise!

[Since these are paranoid times, I think I should point out pre-emptively that by its very nature, the robots.txt file is intended to be read -- that, in fact, it's read many many many times a day (just not usually by humans). So while the massively secretive and paranoid current occupants of the White House might wish otherwise, there is no conceivable legal reason why I shouldn't be able to look at it. OK?] (The preceding paragraph, and my friend's insistence on anonymity, by the way, are examples of the "chilling effect" in action.)

Typically these things are pretty short. CNN's is an exception, and an educational one. In fact, the four commercial examples I give above seem to all be pretty good examples of when a big site would use them (a short sketch of reading one programmatically follows the list):

  • To exclude highly dynamic content that really just shouldn't be indexed, anyway.
  • To exclude stuff that just plain doesn't need to be indexed, like login pages.
  • To prevent indexing of non-text content like images, audio/video files, and Shockwave movies. (CNN's web geeks have some fun with this. Hell, why not?)
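
And since robots.txt exists to be read by programs anyway, here's the promised sketch of reading one with Python's standard-library parser (the paths being checked are hypothetical):

```python
# Fetch and consult a site's robots.txt using the standard-library parser.
# The paths being checked are hypothetical; robots.txt itself is public by design.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://www.whitehouse.gov/robots.txt")
rp.read()

for path in ("/", "/search", "/some/hypothetical/page"):
    print(path, "->", "allowed" if rp.can_fetch("*", path) else "disallowed")
```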

What's immediately interesting to me about the White House's robots.txt is how superficially mundane a lot of the links are -- and how suggestive others are. If you actually plug in some of those links, you'll find that they run the gamut from broken pages to 404s; the broken or ill-formed pages seem to be static, for the most part, and often old. My friend wondered why the list was so long; I set that in the back of my mind, and a much more mundane and plausible answer flashed into my head as I laid my head down last night: Sloppiness. Their web admins are too lazy to set up a sandbox or set passwords, or they don't have the clout to get White House staffers to actually use passwords, so they're opting for security by obscurity.

Maybe someone should tell the Bushites that "security by obscurity" is an oxymoron....nah, let 'em figure it out for themselves.

Any Typeface, Anywhere, Any Time (...as though we needed that)

Those control-freak design-guys are at it again. sIFR [Scalable Inman Flash Replacement] is a Flash movie / JavaScript library combination that builds on a set of techniques which dynamically replace text content with Flash-rendered content.

I'm of multiple minds on this. My first impression is that this control-freak approach to web design is so often couched in religious tones that it irritates the hell out of me. It always has. But then, I'm a content guy: It's the content that's important, and if you really truly need a specific typeface in an anally-retentively-specified size to make the content work, then your content is probably bad.

Unless it's a cartoon, in which case you should just break down and use Flash already.

That said, and assuming I ever get a Flash editing environment, I could very well see myself using this technique or one of its descendants, though not for my own sites. It addresses problems I've had to deal with when working with designers. The big problem is that a designer would have to plan for this technique in her/his design; most web designers (and I do mean most) still fall back to static graphics for controlled type presentations, and then build rigid frames around them that either don't need to scale or flow, or would break when scaled. Also, this technique is not quite up to replacing a static GIF or PNG image in terms of layering elements and placing them in relative position.

One thing that strikes me: In a sense, this technique doesn't really do what the designers say it does. The stated goal is to enable the designer to use whatever typeface they want, with smooth rendering. Smooth rendering is a bit of a red herring; most users have font smoothing turned on by default (presumably the designers don't, so that they can see what their audience would see). It's the typeface that's important, here, and that's really it.

Which is OK, but it's far from the most important feature of this kind of technique, and it glosses over the best and most interesting use. One of the biggest drawbacks to Flash-driven interfaces has always been that they're difficult to edit, as they're most often implemented. If someone asks me to make modifications to a site that was developed in Flash, I'm basically very limited in what I can do for them. They've called me because they can't do it themselves; I have to tell them they have to go to someone else, which makes them feel still more powerless and possibly even "had", because their original supplier locked them in by doing the site in Flash.

The more interesting use for a technique like this would be one that's hinted at in Mike Davidson's writeup, and that's to replace arbitrary blocks of text with a Flash-driven presentation: A clean, low-labor way to put up a Flash site without requiring Flash in the browser, one that would allow any non-Flash-compliant schlub (like, say, your clients) to edit the menus and content. sIFR isn't quite there, especially with regard to things like dynamic menus, and the techniques required may not really even be similar. But it raises intriguing possibilities, and I don't doubt that somebody will be pursuing them.

Actually, it occurs to me that what's really needed to translate between a technique like this and CSS-driven "graphics" (the awesome and infamous CSS Pencils being the most extreme example that I know) is a new kind of design tool that uses XHTML and CSS as its native formats. If done properly, it could quickly simulate display in different environments by consulting some kind of a mapping resource. Initially, at least, it would have the drawback of being able to produce layouts and designs that couldn't be rendered across browsers, but that's alright. Because -- again, if done right -- it could be designed to limit the designer to renderable choices. And, in any case, the availability of standard but un-renderable designs would serve as a driver to the development teams to complete their standard CSS implementations. (Trust me, they're not standard yet.)

I seem to recall that Fireworks was supposed to do something like this, but I don't hear about anybody using it this way. I doubt it would be that extreme. And in any case, Macromedia would have no interest in developing such a tool, since it would undermine the market for Flash.

Nielsen's Nietzschean Web Design Ideologies

Jakob Nielsen has done Nietzsche one better: Instead of just two basic ideologies ("master" and "slave"), he's identified three: Mastery, Mystery, and Misery. These correspond roughly to empowerment, game-play, and control. In the "Mastery Ideology", "...the designer's job is to provide the features users need in a transparent interface that gets out of the way and lets users focus on the task at hand." Mystery 'Obfuscates Choices' by using novel interaction elements. And Misery is an ideology of "... oppression, as mainly espoused by certain analysts who wish the Web would turn into television and offer users no real choices at all. Splash pages, pop-ups, and breaking the Back button are typical examples of the misery ideology."

Nielsen's purpose is to drive the cause of design for usability. That's what NNG do for a living. So it's not surprising that he focuses on the negative aspects of "mystery" (obfuscation) and control ("misery"), and carefully (re)interprets empowerment to mean "usability". He's mapped out (as usual) one path that, if followed, will more or less lead to a better design. It's the most bottom-line path, the path most suited to NNG's target audience: The guys with the money (they're the ones who tell the designers what to do, after all). But it's not the only path, and his re-interpretations have some pitfalls.

To start with, empowerment isn't always all it's cracked up to be. Sometimes (as Nielsen implicitly points out elsewhere) it's necessary to constrain in order to empower -- or at least, to create the sense of empowerment. Search is a good example. The earliest search interfaces included Boolean parsing as an integral part of their user interaction design. Gradually, Boolean parsing slipped out of the user interfaces as designers became convinced that it was an impediment.
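
To be concrete about what that Boolean parsing bought you, here's a toy sketch -- the document set and query are invented -- of the kind of whittling-down an explicit Boolean interface made possible:

```python
# Toy Boolean retrieval: AND / OR / NOT over a tiny invented document set,
# just to illustrate what explicit Boolean search interfaces asked of users.
docs = {
    1: "cheap flights to portland oregon",
    2: "portland maine lobster festival",
    3: "cheap hotels portland oregon downtown",
}

def matching(term):
    # set of document IDs whose text contains the term
    return {doc_id for doc_id, text in docs.items() if term in text}

# Query: portland AND cheap AND NOT maine
results = (matching("portland") & matching("cheap")) - matching("maine")
print(sorted(results))   # -> [1, 3]
```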

Boolean search would be empowering; but for most users, it would be less usable. Nielsen has accepted that conclusion for years, incidentally. It's experimentally verifiable. (And that seems to me to be Nielsen in a nutshell: 'Where are the numbers?', he'd ask. At a conference, I heard him tell a story of a site whose usability was improved by increasing the number of clicks to perform a purchase. That's what the numbers told them was the right thing to do. And sure enough, the client's revenue increased. Counterintuitive -- but true.)

Similarly, control isn't all bad. UIs can often be made cleaner and easier to use -- especially for novice users -- by limiting functionality. Again, this is nothing Nielsen himself hasn't accepted for years. This is not to say that constraint is freedom; but constraint can give you more free time, when it prevents you from wasting effort on things you don't need.

Aside: I'm always wary of Google as an example of any kind of "empowerment." Google right now controls a mind-boggling array of resources, and is in the process of leveraging them to exert an unprecedented level of control over the merchandisability of your browsing experience. That you will remain largely unaware of this process is a testament both to their technical aplomb and to their insipid arrogance.

Where this starts to get interesting is with mystery. I've conflated Nielsen's "Mystery" with "game-play" -- guilty of my own reinterpretation, to be sure, but I think it's valid, and I'm not really alone. Kim Krause has made a similar leap. "Conformity is Nielsen's mantra," she declares. But to proclaim that, she has to ignore Nielsen's praise for J. K. Rowling's "personal" site, which makes extensive use of playful, "mysterious" interface metaphors. "The site feels more like an adventure game," writes Nielsen, "but that's appropriate because its primary purpose is to feed fans rumors about Rowling's next book." He goes on:

User research with children shows that they often have problems using websites if links and buttons don't look clickable. At the same time, using a virtual environment as a main navigation interface does work well with kids, even though it's rarely appreciated by adults (outside of games). Also, children have more patience for hunting down links and rolling over interesting parts of a page to see what they do. On balance, the mystery approach to design succeeds for Rowling -- just don't try it for sites that are not about teenage wizards.

So, maybe these aren't hard and fast rules. Maybe there's a little wiggle-room in Nielsen's declarative statements, after all.

But Krause seems to want more than just wiggle room -- she wants mystery. She wants "I'm Feeling Lucky."

Cool. I like that, too. In fact, that's why I dislike Google, because their "I'm feeling lucky" search is nothing more than a glorified popularity meter. I don't want to know what the most popular return on my search term is; I want to see what my options are.

That's how I find things I don't expect to find: By being able to see the results that might not be "most popular." That's how I get serendipity.

Krause does have a point, though, when she notes that it's memorability that makes the site. Google was memorable, she said, because people learned new ways to use the tool: "They could look up people before that first date. They could type in search terms and hit 'I'm Feeling Lucky' to see what one web site Google would find for them out of all the pages in its index. Google was fun to use." (Actually, I always thought HotBot was terrific fun to use, because its Boolean search interface gave me a sense of power by letting me whittle down my results set to exactly what I wanted. But hey, that's just me, I guess...)

When she talks about sites being "memorable", what she's talking about sounds an awful lot like Don Norman's "emotional design"; and indeed, I think that's what you get when you unify good design for usability with strong content and a design that speaks to that content. One site that strikes me as very successful in this regard is Burningbird. Superficially, the site is constantly changing, seeming to show a new look with almost every viewing. But having used the site once or twice, you will always still know how to use it again. Nothing about the interaction design per se changes when the graphics and colors and typefaces change. The menus stay in the same place, the action-cues stay the same.

I disagree with people who say that this is inherently hard. It does require care, but it's as hard as you make it; if you lean toward control, then you will be frustrated in your attempt to force an experience of memorable mystery upon your users, and it will all be very, very hard. If you let the content and your purpose drive your design, you will, by definition, get what you went looking for. The problem, as always, is to pick the right goal.

Breadcrumb Trails and Web Navigation Efficiency

Bonnie Lida Rogers and Barbara Chaparro have studied the effect of breadcrumb trails on site usability (Breadcrumb Navigation: Further Investigation of Usage, Usability News; courtesy INTERCONNECTED). While the results aren't surprising, they are results, as opposed to aesthetic prejudices.

Among the findings:

  • Given the option, users do tend to use breadcrumbs as a navigational tool.
  • If they use the breadcrumb trail, they tend to use the "Back" button somewhat less. Not stated, but implicit: If they're not using the back button, they're probably using the breadcrumb trail. Since this study tested "locational" breadcrumb trails (i.e., Drupal-like, category-driven; see the sketch after this list), that might have resulted in some frustration. While I like locational trails aesthetically, that should be studied.
  • There is no significant improvement in navigational efficiency. In raw numbers, there is a slight decrease; I would suspect the back-button confusion I note above.
  • However, there is a significant impact on a user's "mental model" of the site hierarchy.
  • Position of the breadcrumb trail had a significant impact on whether or not it would be used. Breadcrumb trails positioned below the site header -- i.e., at the top of the page body -- were much more likely to be used.
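
Since "locational" trails come up above, here's the promised sketch -- with an invented category tree -- of how a Drupal-style, category-driven breadcrumb gets built:

```python
# Minimal sketch of a "locational" (category-driven) breadcrumb trail:
# walk up an invented category tree from a node's term to the root.
parents = {                      # child -> parent, hypothetical taxonomy
    "Breadcrumb Studies": "Usability",
    "Usability": "Web Dev",
    "Web Dev": "Home",
    "Home": None,
}

def breadcrumb(term):
    trail = []
    while term is not None:
        trail.append(term)
        term = parents[term]
    return " > ".join(reversed(trail))

print(breadcrumb("Breadcrumb Studies"))   # Home > Web Dev > Usability > Breadcrumb Studies
```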

Note that there's a lot of work on breadcrumbs at this site...
