The Wall St. Journal had a section on the Technology Innovation Awards yesterday (9/29/08), which included a trends section called “The Latest Buzz On …” on page R2. In it, user interface guru Jakob Nielsen praises ribbon bars and, in particular, the ribbons in Microsoft Office 2007 (like those in PowerPoint 2007 below). I’m going to disagree with Jakob here, and it isn’t the first time. I’ve been diving into Office 2007 more extensively lately and am not a fan of the new UI.
As a UI, it seems to have sacrificed personalization for context. By context I mean the drawing tools appear at the top when you click on a drawing object and otherwise remain hidden so as not to distract you. That’s nice. But the toolbar used to adapt to who I am, not just what I’m doing. If I were a user of the indent feature, it would show up on the toolbar, and if I didn’t use it, it would eventually disappear, since there isn’t room for every icon. If I wanted to have the review toolbar float near the text and keep other options at the top of the screen to fit my personal work style, I could. In fact, I could move any toolbar to float or dock on any side of the screen. Now toolbars can only appear at the top, and the only thing you can customize is the quickbar, which is permanently docked.
Besides that, there are still many items that seem to be randomly placed. There is only so much screen real estate on the ribbon to lay out commands and have them attractively grouped, so certain commands couldn’t fit in their optimal spot. Does “research” belong under Review? Doesn’t turning on “snap to grid” in PowerPoint belong under some menu option – any menu – rather than having to right-click in the workspace?
There is nothing under the “home” tab that one would guess should be categorized under “home”. Is “home” a function, process, or task I do like insert, review, or view? Why would I expect to change fonts, styles, and bullets under “home”? Didn’t “paste” make more sense under “edit” (its old place) rather than “home” or “insert”?
I know that any UI design, particularly that of a complex system such as Office, is a choice between the lesser of evils. Everyone thinks differently, and I’m sure Microsoft did extensive research asking people where they would look for things; my brain just isn’t on the same wavelength. But that’s why I think personalization is so important. You can never get it just right, so allowing the system to have dynamic last-used, first-shown buttons and movable toolbars helps each user adjust. Sure, training and support can be a little tougher when icons can move, but I think that problem is minimal compared to everyday use. And I know you can do anything you want to the ribbons programmatically, but the old UI struck a better balance for the experienced user – one who doesn’t want to dig into code or buy a third-party product, like the ones that put the interface back to what it was before.
Yesterday I posted a set of interruption models. I mentioned in that post that I’d write another entry on how they can be used to test interruption study methodologies. I know that sounds pretty arcane – mostly of interest to people doing interruption studies or interpreting their findings. That may not sound like too many of you, but one survey in particular, from Basex, has gotten into a lot of popular press for its easy-to-digest dollar amount for “unnecessary” interruptions in the U.S. ($650,000,000,000). It’s used by pop press journalists whenever they write about a fuzzy info-stress topic but want to show this is really important and add a drop of academic-sounding data. Any of them wanting to delve deeper can select from hundreds of academic papers on interruption, attention, and human-computer interface (interruptions.net has a great list), but none of those have a big dollar figure to quote.
My attempts to determine the methodology of the Basex study have been unsuccessful so far. The way I would evaluate its legitimacy is the same way I’d evaluate any interruption study’s legitimacy – by lining it up against the models I’ve presented to see how accurately it would count them. Clearly not all interruptions are “bad” or “unnecessary” – many of the interruption models I listed have a positive net closed-loop benefit. A seemingly valid methodology that simply asks people how often they were interrupted (or observes them and records interruptions) and how much time they lost can provide a very inaccurate conclusion. Each model I list (except maybe the jerk model and blast model) could be easily miscounted by a poor survey methodology.
For example, I believe the Help-me model accounts for a large proportion of interruptions. This is where one person needs a little bit of someone’s time to gain a good deal of benefit. A study that just counts interruptions and their cost would only count the costs and not the benefits to the interrupter, which are often many multiples higher than the cost. Only a net closed-loop benefit analysis would hunt down the person who did the interrupting, determine the value to them, and add it back in. That’s difficult to do in a survey, but essential for an accurate estimate. Alternately, a survey could ask how often you interrupted other people and how much benefit you got.
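To make the difference concrete, here is a minimal sketch (with invented minute values, purely for illustration) contrasting a cost-only tally with a net closed-loop estimate for a handful of Help-me interruptions:

```python
# Illustrative only: hypothetical numbers showing how a cost-only survey
# misstates the value of "Help-me" interruptions.

def cost_only_estimate(interruptions):
    """A naive survey: sum the minutes each interruptee reports losing."""
    return sum(i["cost_to_interruptee"] for i in interruptions)

def net_closed_loop_estimate(interruptions):
    """Hunt down each interrupter and add back the benefit they received."""
    return sum(i["cost_to_interruptee"] - i["benefit_to_interrupter"]
               for i in interruptions)

# Three hypothetical Help-me interruptions: minutes lost vs. minutes gained.
sample = [
    {"cost_to_interruptee": 5,  "benefit_to_interrupter": 30},
    {"cost_to_interruptee": 10, "benefit_to_interrupter": 60},
    {"cost_to_interruptee": 3,  "benefit_to_interrupter": 15},
]

print(cost_only_estimate(sample))        # 18 minutes "lost"
print(net_closed_loop_estimate(sample))  # -87 minutes, i.e. a net gain
```

The same interruptions that a cost-only methodology scores as 18 lost minutes turn out to be a substantial net gain once the interrupters’ benefits are added back in.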
As another example, the Help-you model is common as well. This is where someone is interrupted to be told they should stop or modify what they’re doing, perhaps due to new information that’s just come in. But a methodology that only asks about the cost in time of each interruption in negative terms may miss the positive value the interruptee places on the interruption.
One more example: The Interaction model would throw any survey off if it doesn’t properly define “interruption” versus the simple act of collaboration. I defined interactions as interruptions that take place within the task the person is currently working on. Many people wouldn’t even consider this really an interruption. Survey takers may randomly include interactions fitting this model as interruptions, possibly incorrectly counting each positive benefit as a negative.
Well, that headline is what I’d like to write anyway. But, of course, solving email addiction is beyond the capabilities of a mere software behemoth. Still, Google made a humorously kitschy attempt at it in some new Labs features for Gmail just released.
By going into Gmail settings (the “Labs” tab) and enabling the “Email Addict” feature, you get a “Take a break” link added to your email:
Then, whenever you click on it, your screen blanks out and you get the following message:
At least until you reload the page and get back to your email.
Cute. Even though it’s just for fun, it does acknowledge that email addiction is on people’s minds – maybe not on those of Google or the programmers themselves, who may have meant this as a satiric swipe at users who think email addiction is a problem. After all, why would Google want its users to reduce their usage of email and IM when it seems to thrive on storing more and more personal information from users on its servers? Google needs bytes to live. <zombie voice> “More bytes …” </zombie voice>.
Well, in any case, it’s a nice email addiction / information overload / attention management joke. And it plays off the idea that people who are addicted to something have little ability to help themselves anymore and need external help.
If Google really wanted to help these users I think there are some real features they could have added:
- Mail arrival schedules (hourly, morning/noon/evening, morning/night, daily): Remember waiting by the (real) mailbox for the postman to arrive? Unless you are expecting something you must act on today, why not do that again and break the unconscious habit of checking for mail the rest of the day? You could set the frequency (for example, every hour on the hour) and create a whitelist of certain people or messages that get “express” delivery without waiting.
- Measurement capabilities. Like many behavioral changes, measurement is often the key starting point. This feature would count the number of times email is checked, provide useful stats on frequency (per day, per hour), and graph when checking was done over time. Granted, this is a bit difficult when the inbox is just left open, so the feature might have to be explicitly enabled and would turn off automatic refreshes. Once people see how often they check email reflexively, they will be surprised and may do more to curb it if they consider it a problem.
- Slow delivery. I find myself checking mail more often when I’ve just sent a bunch of emails because I am now waiting on the responses. This creates an echo effect where, for example, 20 emails sent out prompt 12 emails back (some quick, some slow, like clapping in a large cathedral). I then respond to 8 of those, 5 people respond back, etc., until the echo dies away. If the emails aren’t very urgent, using slow delivery (they go out in a bundle the next morning, for example) would take a burden off response checking and possibly enable some reflection that would have you change or rescind the messages before they are sent. The “slow design” movement and slowmail have been advocating an approach like this for some time. I think you’d turn this feature on as a default and then only flag messages individually if they need instant delivery.
- Tokens. What if you only had a certain number of tokens per day or week to spend on checking email? Maybe you start with 10 tokens in the morning and it costs you one each time you check email. If administrators are having trouble with load, they could raise the cost to 2 tokens first thing in the morning or right after lunch. You’d start noticing how often you’re really checking (see measurement above) and start planning out your checking better throughout the day. I would recommend that extra tokens carry over to the next day so you’re not encouraged to do a bunch of frantic checking at the end of the day. Similar attempts at putting a price on email activity have been made for sending email (see Serios from Seriosity).
- A free e-book on Zen. OK, this one is a bit out there. Maybe it’s just me, but while email is ostensibly about communication and human connection, so often it seems to be all about one person and control. Someone checks because they want to see if someone found the joke they sent out funny, whether they got someone else to finally admit they were right or agree to do what they said, whether everyone else in the group agreed to their restaurant choice. What does it mean about me if people don’t respond to me, listen to me, include me? If my email/IM/message board posting/blog posting falls in the internet forest and no one responds, am I silent and irrelevant? Like sound, does my message only matter if it causes something to resonate in someone’s head? A reminder now and then to “be the water, not the rock” and “let things be and take what comes” may be all that some people need.
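The token idea from the list above is essentially a rate limiter on inbox checks. A minimal sketch, with an invented class name and hypothetical allowance numbers, might look like this:

```python
from datetime import date

class EmailTokens:
    """Hypothetical sketch of the token idea: a daily allowance of email
    checks, with unused tokens carrying over to the next day."""

    def __init__(self, daily_allowance=10):
        self.daily_allowance = daily_allowance
        self.tokens = daily_allowance
        self.day = date.today()

    def _roll_over(self):
        today = date.today()
        if today != self.day:
            # Unused tokens carry over, so there's no incentive for a
            # frantic end-of-day checking spree to "use them up".
            self.tokens += self.daily_allowance
            self.day = today

    def check_email(self, cost=1):
        """Spend tokens to check mail; admins could raise `cost` at peak times."""
        self._roll_over()
        if self.tokens < cost:
            return False  # out of tokens: wait until tomorrow
        self.tokens -= cost
        return True

bucket = EmailTokens(daily_allowance=3)
print(bucket.check_email())        # True  (2 tokens left)
print(bucket.check_email(cost=2))  # True  (0 tokens left)
print(bucket.check_email())        # False (allowance spent)
```

The carry-over in `_roll_over` is the design choice that matters: without it, leftover tokens would expire and encourage exactly the compulsive checking the feature is meant to curb.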
I’m working on my Enterprise Virtual Worlds presentation and was filling in some detail on communication in game-oriented virtual worlds that I would like to share here as well.
Enterprises are wise to look to gaming from time to time due to trends in:
- Outside-in technology: how consumer technologies such as blogs and wikis increasingly find their way into enterprises
- Emergent gameplay: the use of gaming technology in ways the original designer hadn’t intended
- User experience lessons: UE improvements tend to filter from the competitive gaming market to generalized applications. Gaming is an optional activity, so UE has to be at a high level when you want users to pay you to use your systems rather than the other way around.
Communication is interesting to explore since the number of communication channels that enterprises use (and every information worker must now attend to) has increased a great deal over the past five years to include instant messaging, presence, websites, and blogs. Getting enterprises used to the idea of “channels” and how to manage and select between them has taken some time and some pain.
I was quite impressed when I laid out all the methods of communication in World of Warcraft (which was released in November of 2004). WoW communication is strikingly similar to (and maybe more efficient than) enterprise communication technology in many areas.
- Channels: Players can subscribe to communication channels such as /trade to receive ongoing chat on the channel, or unsubscribe. Another example is in EVE Online, which has a “newbie” channel that can put new players in touch with others taking their first steps, but can be turned off once the player is more confident.
- Chat modes (IM): The variety of built-in IM modes goes beyond most enterprise IM implementations, which rely on groups. They are: /say (vicinity), /party (your group only), /guild (your broader community), /yell (everyone in a larger region), /whisper (one person)
- Presence: Friends can be selected and you are made aware when they come online/offline, and location is displayed (a feature still on the cutting edge in the enterprise)
- Mail: Consists of normal mail, packages, and COD packages. The inbox is visited at WoW Postal Service facilities, which has the pleasant effect of isolating the player trying to accomplish objectives from the stream of email, since they only check it periodically when they visit town. Also, since email costs money to send (a few copper pieces), there is practically no spam.
- Emotes: There are over 100 emotes such as /wave, /thank, /cheer, /dance, etc. It is amazing how fluid the use of emotes gets in the real game, such that they do not feel like a conscious effort to be funny, but rather a natural way of expressing oneself in group situations.
According to ICANN:
The Internet Corporation for Assigned Names and Numbers will launch an evaluation of Internationalized Domain Names next week that will allow Internet users to test top-level domains in 11 languages.
“This evaluation represents ICANN’s most important step so far towards the full implementation of Internationalized Domain Names. This will be one of the biggest changes to the Internet since it was created,” said Dr Paul Twomey, ICANN’s President and CEO. “ICANN needs the assistance of users and application developers to make this evaluation a success. When the evaluation pages come online next week, we need everyone to get in there and see how the addresses display and see how links to IDNs work in their programs. In short, we need them to get in and push it to its limits.”
The evaluation is made possible by today’s insertion into the root of the 11 versions of .test, which means they are alongside other top-level domains like .net, .com, .info, .uk, and .de at the core of the Internet.
Next Monday, 15 October 2007, Internet users around the globe will be able to access wiki pages with the domain name example.test in 11 test languages — Arabic, Persian, Chinese (simplified and traditional), Russian, Hindi, Greek, Korean, Yiddish, Japanese and Tamil.
While it may seem like knowing just enough English to type “.com” is not a problem, the issue is twofold. First, writers of languages with non-Roman alphabets may not have an English keyboard that can type “.com”. They could always copy and paste it from other content when needed, but that brings me to the second point: they shouldn’t have to. The content on the Internet is not owned by the U.S. (even if ICANN is) and being able to use addresses in other alphabets has a great deal of symbolic meaning.
I’m currently researching and writing a paper on globalization due out around January. You’d have to be living under a rock to not understand the impact that globalization is having on the demographics of Internet usage and, accordingly, the web technologies, processes, and cultural sensitivity needed to support them. But the recent statistics were still surprising.
The fall of the Iron Curtain (generally considered to be 1989) began a change in market forces that is still being felt in global businesses. For machine translation, SDL reports that Eastern bloc countries account for seven of the top 10 fastest growing languages for its translation modules in 2007 (Source: http://www.sdl.com/en/events/news-PR/Eastern-Europe-and-China-dominate-2007-translation-trends.asp). Internet World Stats reports that English is by far the most common language on the Internet (with 365 million users versus 184 million for #2 Chinese), but there has been massive growth between 2000 and 2007 for Arabic (+941%), Portuguese (+525%), and Chinese (+470%). The rest of the world’s languages (outside the top 10) still represent 15% of all internet users and had 440% growth from 2000 to 2007.
In terms of usage, Internet usage outside of North America dwarfs usage within it (see table below), although North America has the highest Internet penetration (69%).
Why ICANN picked Yiddish as one of the 11 languages baffles me a bit. Couldn’t they have picked something more common, like Hebrew? Oy vey.
I was happy to see the Wall St. Journal had a three-page section in the 5/14/07 issue on “Business Solutions: Building a More Useful Intranet”. But I was disappointed to see the total absence of any mention of the importance of trimming down the information presented based on the user’s profile. The word “portal” was invoked once, but not explained or associated with “contextual delivery”. Personalization – not mentioned.
There is an interview with Kara Coyne from Nielsen Norman Group. I had the pleasure of sitting down with Kara a few years ago and discussing how I thought that the human factors industry was not keeping up with the times by continuing to treat screen design as a static home page issue – as if one was laying out a newspaper ad. But here she is, on page R6, responding to a statement about clutter by saying
What upper managers often don’t understand is, the more items that there are, the less likely it is that users are going to see your item. If you have too many choices, you’ll end up tuning them all out.
Right! That statement is the perfect lead-in for the need for a technology that lines up information about who a specific user is (a profile compiled from the directory, HR information such as title and department, a skills inventory, heuristic analysis of their contributions and attention stream, and self-profiling or “opt in”) with metadata about the content, to determine what would be important to that individual. It’s personalization, and portals have been doing it for a very long time. You can make it very complicated if you want to, but even in its simplest form it can have tremendous value by narrowing down the information to be sorted through.
The problem is not that there are too many items on the intranet or on the home page. The problem is that no effort has been made to determine which items are of value to a given user. 99%+ of the information on an intranet is useless to any one individual, so any simple filtering of information – even just by department or job function – can have a huge impact. Why force them all to see the same thing when technology has existed for years to help winnow down the information? Search (which is mentioned in the section) is part of the picture (the “pull” part), but not a winning answer for home page design.
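Even the simplest form of this filtering is easy to picture. A minimal sketch, with invented field names and sample items purely for illustration, of matching content tags against a user profile:

```python
# Hypothetical sketch: filter intranet items by matching content metadata
# (tags) against a user profile. All names and data here are invented.

def personalize(items, profile):
    """Return only the items whose tags overlap the user's profile."""
    interests = set(profile["department"]) | set(profile["skills"])
    return [item for item in items if interests & set(item["tags"])]

items = [
    {"title": "Q3 call-center metrics",     "tags": {"call-center"}},
    {"title": "New Java coding standards",  "tags": {"it", "java"}},
    {"title": "Executive travel policy",    "tags": {"executive"}},
]

# A hypothetical IT developer's profile, compiled from the directory and
# a skills inventory as described above.
profile = {"department": ["it"], "skills": ["java", "sql"]}

for item in personalize(items, profile):
    print(item["title"])  # prints only "New Java coding standards"
```

Crude as it is, a one-line set intersection like this already removes two-thirds of the items in the sample, which is the point: the filtering doesn’t have to be clever to have a huge impact.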
The American Electric Power example they give (winner of a Nielsen Norman Group award) is once again a demonstration of an advertising-like devotion to examining “the” homepage as a static work of art. Any design review, in my opinion, should be dynamic. It should start with asking what the main types of users are and then asking to see what the homepage looks like for each type (the executive’s homepage, a call center agent’s homepage, an IT developer’s homepage, etc.). The design has to serve the function and only by knowing what various users need from a website can you determine if the design is appropriate. And no one design will meet the majority of needs of even the top 5 categories of users, so why spend all that time on usability of a non-optimized page? The AEP page looks beautiful in a generic way, but why is it forcing an information worker to click and dig to get their information instead of caring a little bit about who they are and bringing it to them? Maybe it does, but that’s not apparent from the web page shown.
Also, on a separate peeve, I’ve spoken many times about the value of collaboration when used in context. The AEP site has a button at the top called “The Agora” that’s described as “a new area where employees can meet and collaborate”. Does the user need to go to a special area to collaborate? Does that mean the rest of the intranet is not for collaborating? I can’t tell from the picture or text, but I’ve seen that quite often when I’ve done design reviews. If someone’s job is to track sales leads, for example, collaboration should be used in context – right there next to the sales lead system and displaying collaborative discussions and documents relating to the sales lead being examined. Information workers rarely stop what they are doing to say “Gee, I think I’ll go collaborate for a while and then get back to work”. Collaboration can have its own home page and entry point too, but contextual collaboration is where I see collaboration having the most value.