2009 Prediction: There Will Be a Pronouncement That Unnecessary Interruptions and Information Overload Top $1 Trillion ($1,000,000,000,000)
February 17, 2009 at 5:05 pm | Posted in Attention Management, etiquette, Information Work, interruption science, knowledge management
Commentators and average folk alike were aghast as the amount of the financial bailout crept toward the $1 trillion mark. But as Congress backed away into sub-$800bn territory (for now), another figure is likely to be announced that beats the bailout to this lofty mark: the cost of information overload.
These Basex figures get quoted a lot in the press and, while I do believe that many people and organizations suffer from information overload, I’m not buying into attempts to quantify it, and certainly not at a price tag of $1,000,000,000,000. In fact, I think there’s long-term harm in trying to get people to act by shocking them with inflated numbers. Just look at knowledge management. KM was a real issue and a worthy cause too, before it was done in by money-losing attempts to recover the huge dollar estimates of its inefficiencies.
How do I know this pronouncement is coming? I’ve been following their stats on “unnecessary interruptions” for some time. They went from $588 billion in 2005 (for interruptions without the “unnecessary” tag) to $650 billion in 2007 (you’d think the number would decrease when just the unnecessary ones are counted, but it jumped up instead). In December, they posted a blog entry saying:
According to our latest research Information Overload costs the U.S. economy a minimum of $900 billion per year in lowered employee productivity and reduced innovation. Despite its heft, this is a fairly conservative number and reflects the loss of 25% of the knowledge worker’s day to the problem. The total could be as high as $1 trillion.
I’ll have to examine that new research more. How do the interruption and information overload numbers intersect? Are they separate (totaling $1.6 trillion?), or are interruptions part of information overload (which makes sense, but then why is the umbrella number smaller than the 28% of the knowledge worker’s day previously quoted for interruptions)?
If the $1 trillion figure is anything like the $650bn number I’m not going to buy it. I haven’t seen a full disclosure on their methodology for workplace interruptions, but from what I could glean there were potentially several techniques used to generate a large number:
1. Lumping in social interactions and distractions with interruptions
Just lump all time-wasting annoyances, distractions, and socializing in with the more scientific-sounding “interruptions” and you’ll get a pretty big number. Or better yet, don’t define interruption in any strict sense and survey takers will do the lumping for you. You’ll be able to lump 28% of the average information worker’s day into this category.
2. By counting all costs and no benefits (quote “total cost” instead of “net cost”)
How do you lose $10,000 at blackjack while walking out with the same bankroll you went in with? Simple: just tally up all the losing hands and ignore the winning hands. Play 200 hands at $100 per hand, win half and lose half, and you’ll come out even. But by that accounting you “lost” $10,000 (the total of the 100 losing hands)!
If you don’t like the blackjack analogy, plug in your own one-side-of-the-coin analogy. How about totaling up just the expenses on a large company’s income statement without subtracting them from revenue, and being shocked at how big the number is and at how much even a small improvement to it could appear to be worth?
One non-Basex study I saw asked how many interruptions people had, then assumed 50% of them were unnecessary based on other research. Fine, but then interruptions as a whole average out, don’t they? You can still optimize – a company at break-even can always reduce costs – but the size of the total cost pool is not the issue then. The study counted all the losing hands (calling them “unnecessary”) and ignored the winners. The implication is that you can keep all the necessary interruptions and chip away at the unnecessary ones, but who gets to judge an interruption “unnecessary”?
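The total-versus-net accounting from the blackjack analogy can be sketched in a few lines (the hand counts and stake come from the analogy itself, not from any real study):

```python
# Sketch of the blackjack analogy: 200 hands at $100, half won, half lost.
STAKE = 100
results = [+STAKE] * 100 + [-STAKE] * 100  # 100 winning hands, 100 losing hands

total_cost = -sum(r for r in results if r < 0)  # count only the losing hands
net_cost = -sum(results)                        # wins and losses together

print(total_cost)  # 10000 -- the headline-grabbing "total cost"
print(net_cost)    # 0     -- what the player actually lost
```

The only methodological choice is which hands get summed, and it swings the answer from $0 to $10,000.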
3. By ignoring closed-loop analysis
Here’s a surefire way to double the $10,000 in losses I quoted in #2. Just interview everyone at the table (you and the dealer) and add up all their losses. Since the half I lost was $10,000 and the half the dealer lost was $10,000, that’s $20,000 in total losses at that table. But we both came away even!
Basex went to some effort to quantify “unnecessary” as not urgent, not important, could have been done another way, and so on. But if you ask individuals this instead of examining both sides of each transaction, you’re just interviewing the dealer and the player about their blackjack losses and forgetting that quite often one wins when the other loses. In almost every model of interruptions I can think of (see interruption patterns), one of the parties involved loses on the deal, so pretty much every interruption will be counted as unnecessary by someone, and without closed-loop analysis almost every interruption will be incorrectly totaled.
You need to do closed-loop analysis: treat each interruption as an interaction between the interrupter and those interrupted, and determine, as a whole, whether it was useful to the organization. Most interruptions are useful to someone – otherwise, why would they make them? (I propose only a small proportion are careless etiquette transgressions.) If it’s a matter of self-important timing on the part of the interrupter, consider whether there is ever really a “good” time you could push these interruptions to.
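A toy model of the double counting, with made-up numbers: suppose each interruption costs the interrupted party some minutes of value and gains the interrupter some. Surveying only the losing side of each exchange inflates the total; the closed-loop view nets both sides.

```python
# Hypothetical interruptions as (cost to interrupted, benefit to interrupter),
# in minutes of value. These numbers are illustrative, not survey data.
interruptions = [(-5, +8), (-10, +3), (-2, +6)]

# Ask only the interrupted party: every interruption looks like a pure loss.
survey_total = sum(-cost for cost, _ in interruptions)

# Closed-loop view: net each interruption across both parties.
closed_loop_total = sum(cost + benefit for cost, benefit in interruptions)

print(survey_total)       # 17 minutes "lost" by the one-sided accounting
print(closed_loop_total)  # 0  -- the organization as a whole broke even
```

One-sided surveying reports 17 minutes of loss even though, organization-wide, the three interruptions net out to zero.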
4. By playing loose with the definition of “unnecessary”
Reversing a question can help validate it. In this case, ask the question from the other side to see if you get the same answer. Ask each survey taker how many times they interrupted someone else that day and how many of those were unnecessary. If the interrupter thinks it was necessary, shouldn’t a conservative estimate give them the benefit of the doubt? I predict the difference in results between the question that yields $900bn and this one would be enormous. Only a small portion would be because the interrupter forgets they interrupted someone – the rest is the inaccuracy of the methodology.
In common parlance, any unnecessary activity interrupts a necessary one you’re working on. Have to stop working on your coding to go to a stupid meeting? That meeting interrupted your coding. That’s 1 hour of interruption plus 15 minutes to get back to what you were doing. Decide to take a break and look at email, then get sidetracked by a dumb one? That email “interrupted” you unnecessarily. If you want to let survey takers count all unnecessary activities as “unnecessary interruptions,” that’s fine, but throwing interruption technology and etiquette solutions at the general problem of business inefficiency is like throwing a pebble at a wall to knock it down. The survey and the solutions have to use the same definition of “unnecessary.”
5. By comparing against perfect short-term productivity instead of long-term sustainable productivity
Yes, people take breaks and, being social creatures, they often interrupt others to do it with them. People need breaks. Even the best runners have to pace themselves for a marathon. I calculated that optimal performance for the best marathon runner is obtained by running at only about half speed. What if you spend a bunch of time and effort getting people to eliminate certain time-wasting habits, and they just re-fill that time with other habits because they need or want that time? It may be worth figuring that out before throwing a lot of time and money away.
So you’ve figured out by now that I don’t buy the $900 billion number, and I certainly won’t when it hits a trillion. The surprising part, if you don’t regularly read my blog, may be that I’m very much a believer that attention management is a useful approach and that organizations and individuals can take real steps to manage their attention better (for enterprises, see my Enterprise Attention Management conceptual architecture; for individuals, my Personal Attention Management tips). But I also believe in having an accurate picture of costs and benefits.
Another techno-cultural topic I believe in is knowledge management. KM’s basic tenets were sound: that knowledge (or at least “information,” if you don’t want to sound too pompous about it) is an asset just like a factory or an employee and needs to be managed as such. But KM became a dirty word after a few years of consultants exaggerating the size of the problem and what could be done about it. It has taken about a decade for KM to get back on its feet, and only now under new names so as not to alarm those burned by KM before. I don’t want this to happen to attention management and information overload too. It’s a real problem, but a complex one that is impossible to pin a real number on. And it has real solutions that can help when recognized problems exist – if you don’t promise too much.
Note: This version has been updated due to a helpful comment from Mark Worth pointing out the shift from quoting “unnecessary interruptions” to “information overload”.
Mind mapping, concept mapping, brainstorming tools have been around a long time and the question has always been whether they will ever take off. I’ve been using them for a while now in my research, first using FreeMind and then Mindjet MindManager Pro (after getting an eval copy). My research process involves reading a whole lot from many different sources and angles on a topic until a mental map of the topic and a conceptual model that I’d like to convey starts forming. That’s when I start the mind map. Then, as I continue to read, I decide whether each new piece of information is already accounted for in my model, is an enhancement I can add on, or points out a flaw in which case I need to fix it. I know I’m ready to start writing when I get to the point that I read several articles in a row with no changes needed to my map since everything in them is already accounted for in my conceptual model.
These tools occupy a comfortable middle ground between two extremes of structure. At one extreme, people are used to dumping unstructured information into emails, documents, or presentation slides that later get copy/pasted into some semblance of order. The larger and more complex the topic, the more difficult this becomes and, accordingly, the more likely the author is to avoid testing out another way of organizing the information. At the other extreme are structured content creation tools (e.g., XMetaL, ArborText) and database forms that require rigid planning and adherence to structure. But it’s difficult to get people to think in a structured enough fashion for DITA or XML tools.
Mind and concept mapping tools allow the user to enter information in a very unstructured manner but organize it later. And that’s really the point: adding a step to the process that allows one to digest the information that has been accumulated and look for patterns. I think there’s so much emphasis on getting things done that the content planning process gets overlooked. People spend a lot of time accumulating information – from RSS feeds, emails, web pages, Google searches – but not enough time stepping back to reflect on what they’ve accumulated. What does this information tell you? How do you try out different hypotheses to see which connect up the most information?
Your standard productivity suite can provide some relief. In Microsoft Office, OneNote allows the separate little content pieces to be maintained as separate pieces, organized with tabs, and clicked and dragged around, which is a little better. And Visio 2003 had a brainstorming template that drew mind maps, although its inflexibility with regard to page size when entering large maps made it of limited use.
As useful as brainstorming tools can be, I don’t think they are for everyone or every situation. They are mostly useful for discovering patterns and innovating. Putting your grocery list in them is a waste of time since there’s no creativity involved. Unless you own a chain of different ethnic restaurants and are trying to do menu planning.
The real fun will begin when these tools get collaborative. I don’t mean tracking changes on them, but treating them like wikis, where someone’s body of knowledge (not just the information, but their way of organizing, categorizing, and drawing conclusions from it) can be posted to someone’s MySpace page, blog, or a workspace, and others can build on it: adding new examples that confirm a view, pointing out others that contradict the model, or showing how new ways of organizing the model make it clearer.
Accordingly, these tools will have to get better at enabling multiple conceptual models to be applied to the same data points, saved, and quickly switched between. It is common in preparing for speeches, for example, to want to flip between a chronological or topical narrative until you’ve decided on the best choice, but changes to the nodes themselves should appear in either organizational scheme.
Still, I think there’s a bright future for mind and conceptual mapping tools and I encourage anyone who hasn’t tried one of these tools to take one for a spin.
If you haven’t seen the video of a woman opening her 300 page iPhone bill, check out the article and link here. I’ll admit – I’m not currently a fancy phone kinda person, so you won’t see me commenting a lot on the mobile industry unless I get assigned that as a research topic. However, brand management and information management are passions of mine and in those terms I consider this a minor disaster.
From my view it’s a cautionary tale in 3 ways:
- The potential for collateral damage to brand image from partnerships. Manufacturers of products endorsed by athletes have often had to deal with this type of problem. In fact, it has become so prevalent that some companies and event sponsors have decided the risk of collateral damage outweighs the benefit and now avoid such spokespeople. Now it’s Apple’s turn. Apple has earned a strong brand image that associates it with being sleek, streamlined, innovative (not tied to legacy), in touch with young people, and hip. But its relationship with AT&T has resulted in a brand management issue, now getting heavy exposure (including on CNN Headline News), that will associate “Apple” and “iPhone” with something non-sleek, tied to an old way of doing things, unhip, and abhorrent to the values of many young people.
- The Web 2.0 generation has massively greater power to embarrass large organizations than previous generations. Accordingly, large organizations need to allocate budgets massively greater than those of a generation ago to mitigate this risk through continuous monitoring of legacy and Web 2.0 communication channels as well as a general PR contingency plan for unpredictable disasters.
- Old information dissemination practices must be reviewed in light of new information demands. When the only thing a cellphone did was make calls (and expensive ones at that), a paper itemized bill made sense. Text messages are far more numerous (an astounding 30,000 for Justine), so the same format is practically useless. Even if one were interested in the information on those pages, they would have great difficulty finding and using it.
And to those people who say it’s her fault for not selecting e-bill, you should have to opt-in to a bill that may require being shipped in a box, not opt-out. And I don’t think one would reasonably expect that their paper billing would result in a few redwoods worth of itemization.
They are both back in style. A Bain survey of 1,221 international executives found:
Of course, KM definitions are always fuzzy and this one includes collaboration and innovation as well. Still, it wasn’t that long ago that executives needed a cootie shot if someone mentioned knowledge management in a meeting.
When talking to clients about business cases for portal, content management, collaboration, or intranets I have found that they jump too quickly into their spreadsheets or lists of benefits. Before jumping into the number crunching, it’s worth taking a step back to look at what is really being asked for. The intention of this list is to help business case authors to be aware of the catches and pitfalls, to know the questions to ask of their peers and financial folks, and to lay out the territory so they can avoid unnecessary work and find the path of least resistance to gaining approval.
Some questions to ask to assess the situation before diving into the business case:
Why me? Does every initiative have to have a business case? Did e-mail?
Business Case vs. Business Plan: Are you producing a business case (presented to a decision maker who will judge whether to proceed with the direction recommended in the plan) or a business plan (done after a business case is accepted and contains all sorts of things the judge probably does not need to know such as the phasing and schedule, staffing plan, and project plan)?
Business case vs. ROI: A business case is not necessarily numerical. Providing a spreadsheet when the boss expected a textual business case can sink your project, or at least waste a lot of time.
ROI, NPV, IRR, payback, TTV: If you’re doing quantitative analysis, do you have a choice of method? Are the time period and discount rate specified? If not, this just becomes an exercise in picking the most favorable parameters you can get away with.
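To see why unspecified parameters matter, here is a small sketch with invented cash flows: the same project flips from attractive to unattractive depending on which discount rate and time horizon the author is allowed to pick.

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] occurs now, the rest at yearly intervals."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical project: $100k up front, $30k of benefit per year for 5 years.
flows = [-100_000] + [30_000] * 5

print(round(npv(0.05, flows)))      # 29884  -- approved at a 5% discount rate
print(round(npv(0.20, flows)))      # -10282 -- rejected at a 20% discount rate
print(round(npv(0.05, flows[:4])))  # -18303 -- rejected over a 3-year horizon
```

With the rate and horizon left open, the business case author can shop for whichever combination makes the project look best.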
Conceptual vs. Concrete: How fuzzy (“collaboration in the workplace”) or concrete (a specific product) is your initiative? Do you have the option of how to scope it? Business cases can be at many different levels: technology, program/project, organizational, and conceptual.
Project vs. infrastructure: A project has a discrete beginning, middle, and end. Infrastructure is ongoing. Quantitative analysis such as ROI was made for discrete projects and is difficult to apply to infrastructure where one does not know exactly all the ways in which the enabling infrastructure will be used. Also, the infrastructure simply makes other assets more efficient (people, systems), rather than directly providing benefit itself, so determining what percentage of improvement is due to the boost provided by the infrastructure is like trying to determine how many milliseconds faster a runner is because of the vitamins she is taking. One common way around this problem is to “projectize” the infrastructure: to take one major and immediate way in which the infrastructure will be used and justify its entire cost based on that project, treating the rest as free icing on the cake (“And we’ll get to use it for other things too …”).
Hard vs. soft: You can think of both costs and benefits as being on a hardness scale like the 1-10 one used in the Mohs Scale of Mineral Hardness. Provable, measurable impact to a company’s profit through increase of revenue or decrease of expenses, accurately attributable to a direct cause, is a diamond (10). And it’s equally rare and valuable. On the other hand, “It will save every salesperson 5 minutes per day” is talc (1). Simple, easy, and soft. Helping to decrease printing expenses by providing information online is maybe Orthoclase Feldspar (6. OK, I looked it up on Wikipedia). No non-investment business could function if it required a rating of 10 on benefits and costs for everything it did. Some faith (faith = (10 – hardness rating) * 10%) is generally involved.
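The tongue-in-cheek faith formula above can be expressed literally (the hardness ratings are the post’s own examples):

```python
def faith_percent(hardness):
    """Percent of a benefit claim taken on faith: (10 - hardness) * 10%."""
    if not 1 <= hardness <= 10:
        raise ValueError("hardness runs 1 (talc) to 10 (diamond)")
    return (10 - hardness) * 10

print(faith_percent(10))  # 0  -- provable profit impact: no faith required
print(faith_percent(6))   # 40 -- online docs reducing printing expenses
print(faith_percent(1))   # 90 -- "saves every salesperson 5 minutes a day"
```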
Net benefit vs. status quo: Generally the status quo – the current state cost of doing nothing and proceeding as always without the infrastructure project – is incorrectly assumed to have a cost and benefit of $0. This math ignores the possibility that the status quo itself may have costs and benefits, particularly if the environment is changing in some way.
Prioritization vs. financial testing: The effect of the project on the time and money of the company, and on the attention of your team and those touched by the project (including the end users), needs to be prioritized against other potential projects. On occasion, KM project owners jump through many financial hurdles only to find their project rejected anyway, since other projects were deemed more important or worthy of the effort.
Lottery vs. followup: Is the business case treated as a lottery where winning results in a pile of cash showing up at the door? Or does the eye of the finance department continue to watch you after you win approval? You should have to show you actually used the resources allocated for the project in question and got the benefits you promised.
I’ve been an analyst since 1998, focusing most of that time on various knowledge management technologies like portal, web content management, and intranets. One client inquiry that has been consistent over time and between technologies is the need to prove value for knowledge infrastructure. Sometimes this takes the form of financial analysis (ROI, NPV, time to payoff), and sometimes it is around metrics (how can I show improvement or prove in a year or two that it was worthwhile).
Overall, there are two reasons owners of collaboration infrastructure start work on a business case: because they have to, and because they know it’s good for them. There’s an in-between option: it behooves them to do it now because there’s a good chance they’ll be asked for one in the future. Even if not explicitly asked for, a business case should be an integral part of any collaboration plan or strategy, to validate that the technology is aligned with business goals and objectives.
The journey is the destination when it comes to business cases. When the business case creation process is seen as valuable on its own rather than just a hurdle to get past to start work it can be used to steer the direction of the project and form the basis for an ongoing dialogue with the business.
At a high level, the business cases for technologies that fit under the KM umbrella are very similar (portal, web content management, attention management, intranets, collaboration, search, etc.). Note that I will not distinguish between a business case and a business justification; the differences are minor and lead to the same steps in my experience.
I’ve talked to 100+ organizations about their business cases over the past 7 years, worked in detail on a few, have read through many case studies, and have seen many different approaches that worked at different companies. It’s all about 1) starting with a worthy project, 2) understanding the situation – the why, what, and how, 3) building the business case by selecting the most useful methods out of the many available for the specific situation and the line items to apply those methods on, 4) presenting the business case (in multiple formats), 5) keeping the business case in mind while executing the initiative, and 6) following up once the initiative is in place.
That represents my high level view of this issue. I’ll post more thoughts in an ongoing fashion. I’ll focus on #2 since I think that’s where most of the misunderstandings occur and business case production often goes astray.