Using Interruption Models to Test Interruption Studies

June 20, 2008 at 1:37 pm | Posted in Attention Management, interruption science, User experience | 1 Comment

Yesterday I posted a set of interruption models.  I mentioned in that post that I’d write another entry on how they can be used to test interruption study methodologies.  I know that sounds pretty arcane – mostly of interest to people doing interruption studies or interpreting their findings.  That may not sound like many of you, but one survey in particular, from Basex, has gotten a lot of popular press for its easy-to-digest dollar figure for “unnecessary” interruptions in the U.S. ($650 billion).  Pop-press journalists use it whenever they write about a fuzzy info-stress topic but want to show it’s really important and add a drop of academic-sounding data.  Any of them wanting to delve deeper can choose from hundreds of academic papers on interruption, attention, and human-computer interaction (interruptions.net has a great list), but none of those have a big dollar figure to quote.

My attempts to determine the methodology of the Basex study have been unsuccessful so far.  The way I would evaluate its legitimacy is the same way I’d evaluate any interruption study’s legitimacy – by lining it up against the models I’ve presented to see how accurately it would count them.  Clearly not all interruptions are “bad” or “unnecessary” – many of the interruption models I listed have a positive net closed-loop benefit.  A seemingly valid methodology that simply asks people how often they were interrupted (or observes them and records interruptions) and how much time they lost can provide a very inaccurate conclusion.  Each model I list (except maybe the jerk model and blast model) could be easily miscounted by a poor survey methodology.

For example, I believe the Help-me model accounts for a large proportion of interruptions.  This is where one person needs a little bit of someone’s time in order to gain a great deal of benefit.  A study that just counts interruptions and their cost would count only the costs, not the benefit to the interrupter, which is often many multiples higher than the cost.  Only a net closed-loop benefit analysis would hunt down the person who did the interrupting, determine the value to them, and add it back in.  That’s difficult to do in a survey, but essential for an accurate estimate.  Alternatively, a survey could ask how often you interrupted other people and how much benefit you got.
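To make the accounting concrete, here is a minimal sketch of the difference between a cost-only tally and a net closed-loop tally for a Help-me interruption. The function name and all the minute values are hypothetical assumptions for illustration, not figures from the Basex study or any other source:

```python
# Illustrative sketch: why counting only the interruptee's lost time
# misestimates the net value of a "Help-me" interruption.
# All numbers here are hypothetical, chosen only to show the arithmetic.

def net_closed_loop_value(interruptee_cost_min, interrupter_benefit_min):
    """Net value of one interruption, in minutes of work time.

    interruptee_cost_min: time the interrupted person loses
        (the interruption itself plus the lag to resume the task).
    interrupter_benefit_min: time the interrupter saves by asking
        instead of working it out alone.
    """
    return interrupter_benefit_min - interruptee_cost_min

# A cost-only survey would record this interruption as 5 minutes "wasted"...
cost_only_estimate = -5

# ...but if the quick question saved the asker 30 minutes of searching,
# the closed loop actually nets a gain.
net = net_closed_loop_value(interruptee_cost_min=5, interrupter_benefit_min=30)
print(net)  # 25
```

The point of the sketch is only that the sign can flip: the same event a cost-only methodology books as a 5-minute loss shows up as a 25-minute gain once the interrupter’s side of the loop is added back in.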

As another example, the Help-you model is common as well.  This is where someone is interrupted to be told they should stop or modify what they’re doing, perhaps due to new information that’s just come in.  But a methodology that only asks about the cost in time of each interruption in negative terms may miss the positive value the interruptee places on the interruption.

One more example: the Interaction model would throw any survey off if it doesn’t properly define “interruption” versus the simple act of collaboration.  I defined interactions as interruptions that take place within the task the person is currently working on.  Many people wouldn’t consider these interruptions at all.  Survey respondents may inconsistently count interactions fitting this model as interruptions, turning what is actually a positive benefit into a tallied cost.


1 Comment »


  1. Hi Craig.

    I have used an interaction model as the basis for trying to provide tangible ROI to collaboration installations at large companies, and yes, the numbers can be staggering. I think an interaction model is the missing link in defining a cost based ROI.

    If you extrapolate on the basic premise that good collaboration increases self-servicing while reducing manually supported interactions, and apply that to even a single “saved” interaction per person per day, the resulting numbers (man-hours saved) are significant when applied across a large corporate division. Large enough to raise some eyebrows, yet not so large as to seem ridiculous.

    Also, the Basex finding of 28% of time “generally wasted” (my words) is consistent with other studies conducted over the last 15 years. I did a study in the early ’90s and the numbers were the same – 30–35%. I have seen others over the years in the same ballpark. So, a bigger question is: if the numbers haven’t changed in 15 years, then what the heck is going on?

    I think that in a social setting, one needs to examine what is a reasonable level of interruptions that creates the “normal” environment. I think an environment with zero interruptions would be considered harsh and would not be long lasting.

    Kevin Shea

    PS — I’ve worked a lot with Larry Cannell in his previous position.

