I’d like to start out today with a thesis:
- When designing a product, a vendor decides where they want to be on a continuum from capability-driven (starting with the list of capabilities and features a product should have) to purpose-built (starting with use cases)
- Those two extremes distinguish an application mindset from an infrastructure mindset
- The choice made on this continuum is subtly noticeable in the finished product
This thinking came out of my investigations for a report I’m writing on building portal sites in the new version of SharePoint (Windows SharePoint Services 3.0 and Microsoft Office SharePoint Server 2007, henceforth known here as SP2007). I have covered SharePoint since its 2001 incarnation and have spent quite a bit of time over the years with clients trying to use it and behind the scenes in Redmond. The more I look at SP2007, the more I get the underlying feel that it is a purpose-built application. An analogue for developers would be that a purpose-built app is the equivalent of a 4GL language, where a capability-driven app is like a 3GL. It’s solution building as opposed to just solution selling. By contrast, WebSphere Portal (or ASP.NET, for that matter) feels more weighted toward capability-driven.
Note: this is a continuum, not a binary choice. Even starting with a pack of use cases, the vendor will take the time to think about what the infrastructure needs to provide to support them. And vice versa – a vendor would develop use cases to test a set of capabilities and features they assembled. For a product like an intranet builder, the difference is whether you start with templates and then build the required services underneath, or start with the services and then build the templates later.
My theory is that infrastructure products and application products are developed in fundamentally different ways. To develop as infrastructure (or 3GL), you start with the set of capabilities and features you want to provide and design upward from there. To develop an application (or purpose-built app, or 4GL app), you start with use cases and design your product around them. Microsoft develops using Scenarios, which I am told include use cases as well as a higher-level idea of who the developer is and what they are trying to do. Like 4GL development tools, purpose-built products seem to flow and demo easily around the tasks they were meant for, but you notice inconsistencies. These inconsistencies can take the form of inconsistent nomenclature, actions you can take on one screen or object but not another, the lack of a “view all” or holistic view of the environment, or inconsistent fit and finish from one area to the next.
4GLs were famously difficult to use if you veered too far off the path of what their designers expected you to do. At a recent local SharePoint User’s Group meeting I attended, a consultant presenting on SP2007 strongly made the point that SP2007 is a tool for doing specific kinds of things, and you shouldn’t try using a hammer to do things other than pounding nails. Of course that’s true for any app, but the problem arises when it’s not clear at the outset exactly what it’s meant to do. It’s pretty clear that the animation functionality in PowerPoint is not appropriate for building Space Invaders. But should I guess that building a web site with 4 columns instead of 3 would require outside assistance? The consultant strongly recommended that a technical resource be present during requirements gathering and mockups to make sure users don’t develop a lot of expectations for things SP2007 can’t easily do. That crosses the line for me: once I need to interfere with the business telling me what they need, I’ve gone too far toward shoehorning business requirements into what the tool can do.
To some degree I’m just musing here. I’m not saying one side of the continuum is better than the other. Each has its benefits and pitfalls, and both can overcome their pitfalls with care. And the more I look at SP2007, the more I’m impressed with the forethought that went into it and convinced it will have a strong impact on the market for building portal-like sites. I do believe that collaboration has to be thought of as infrastructure rather than an application at this point, and an infrastructure (capability-driven) mindset would do the best job of making sure a consistent set of reusable services is developed that comprehensively supports unforeseen scenarios.
I think we’ve found the pet rock of the 21st century. Actually, it’s more of a virtual pet rock. Now, I don’t have anything against companies that legitimately set up virtual places with a purpose, like IBM or Cisco. But owning an avatar just because everyone else has one is a bit silly, as Nicholas Carr reports in “Slumming it in Second Life”:
The mucketymucks have invaded Second Life. Or at least a little roped-off corner of it.
The big thing at this year’s elite World Economic Forum in Davos, Switzerland, is to don a cartoon persona and slum around the virtual world as if you “get it.” An avatar, reports the Financial Times, “has become the must-have accessory for [WEF] delegates.”
I thought we were in the final days of poor integration of web-based direct customer interaction with business processes. You know – the days when you’d fill out an online form that would just get turned into an email to someone who would then print it out and put it in a pile with all the other normal forms they process. Well, I’ve found a company that’s managed to even take that a step backward.
I have been looking for a car battery that seems to be very difficult to find and went to the “parts” link on a local dealer’s website. Below the direct phone number for the parts department is a web form where you can enter all the info on the part you’re looking for, along with your email address. I filled it all out including make, model, part, phone number, and email address (I left the VIN blank). Today I got a voicemail message from someone at the dealership saying they got my online form, read the basic info back to me, and gave me the number of the parts department that I should call so they can help me.
Boy, it’s tough to even figure out how to optimize that process! Well, one way would be to have eliminated development of the web form and the callback procedure. That would have saved 100% of the development costs for that feature and yielded 0% difference in actual functionality. Better would be actually having someone in the parts department read the information and call me back with an answer. Guess I’ll have to wait for Web 3.0 for that. In the meantime, I’m going to an old-fashioned garage. No web site, but they seem to have better customer service anyway.
The moral of the story has to do with setting expectations. Web technology often sets high expectations, but poor process integration leaves a company worse off than if they hadn’t done anything at all. I’ll be following up on this thread later this week, but today’s experience provides a good primer on the issue.
I just got back from vacation and was pleasantly surprised that the email backlog waiting for me was less than I expected. Still, I’m only halfway through it, but I thought it would be worse. When I look at the email pattern, it seems there was a flurry of emails the day after I left, and then it died down from there. And today the email spigot has been turned back on and I’m getting quite a few.
Could be chance. But I think this is a pretty typical pattern, although I’ll leave it to this blog’s readers to tell me if I’m wrong. It demonstrates how broadly one’s presence indicators can be defined and the difficulty of creating a unified presence indicator. When I think of presence indicators, the first thing that jumps to mind is the green or red circle on my IM tool.
My IM presence indicator certainly let people know I was out, but that’s not the only way they knew. There is also my Outlook out of office message, my out of office voicemail message, my response rate to emails or phone calls (in case the out of office slipped the sender’s notice), my calendar blocking for vacation, and word of mouth (like telling client services I would be out for a week and to forward messages to my research director). If I were in an office, my physical presence (or lack thereof) would come into play as well.
I see four kinds of presence at play:
- Explicit presence: Presence indicators in a system called “presence” (e.g., IM)
- Implicit presence: Indicators of your presence in non-presence systems (e.g., out of office e-mail and voicemail)
- Behavioral presence: Actions (or lack thereof) that indicate your presence (e.g., quickly responding to or not responding at all to voicemails)
- Physical presence: Seeing or hearing one’s presence in the real world (I’ll lump hanging a “gone fishin’” tag in the window here too since it’s physical)
So what does this mean for unified presence and its role in attention management?
First, it shows why making one’s availability known is difficult and requires several efforts across explicit and implicit presence indicators.
Second, it shows why a true unified presence system is unlikely. Unifying explicit presence is easiest, implicit presence a bit harder, behavioral presence starts becoming more art than science, and physical presence gets into audio/video sensors that won’t be used in business settings. Presence can be more unified than it is today, but won’t reach the extreme of a single unified presence system.
Third, it’s interesting to note the degree that non-technical factors – behavioral and physical presence – begin to feed into overall attention indicators. People have natural, organic attention management systems that supplement or fill in when the electronic ones are not enough.
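To make the taxonomy concrete, here is a minimal sketch of what a partial presence unifier might look like. All the names and thresholds are hypothetical illustrations of the four categories above, not any real product’s API; note how the behavioral signal is only an inference and the physical signal is rarely instrumented at all, which is why full unification stays out of reach.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PresenceSignals:
    """Hypothetical bundle of the four presence categories from this post."""
    im_status: Optional[str] = None                 # explicit: "available", "away", "offline"
    out_of_office: bool = False                     # implicit: e-mail/voicemail auto-reply set
    hours_since_last_reply: Optional[float] = None  # behavioral: responsiveness pattern
    seen_at_desk: Optional[bool] = None             # physical: almost never captured by systems

def unified_presence(s: PresenceSignals) -> str:
    """Naive aggregation: explicit and implicit signals win outright;
    behavioral signals only support a hedged guess."""
    if s.im_status in ("away", "offline") or s.out_of_office:
        return "unavailable"
    if s.hours_since_last_reply is not None and s.hours_since_last_reply > 48:
        return "probably unavailable"   # behavioral inference: more art than science
    if s.im_status == "available" or s.seen_at_desk:
        return "available"
    return "unknown"

print(unified_presence(PresenceSignals(im_status="available")))        # available
print(unified_presence(PresenceSignals(out_of_office=True)))           # unavailable
print(unified_presence(PresenceSignals(hours_since_last_reply=72)))    # probably unavailable
```

Even this toy version shows the asymmetry: the explicit and implicit fields map cleanly to booleans and enums, while the behavioral and physical ones degrade into thresholds and guesses.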
I’m leaving for vacation tomorrow, but wanted to post up one more entry before I leave. I got the following question from a reporter doing an e-mail based interview. I thought I’d post the answer here too:
Q: Do you have a sense of the cause of EAM? You mention too much input (such as emails), but is any of EAM also traced to staff cuts, portfolio overloads, outsourcing or even that IT may be in the midst of a build-out or refresh cycle with technologies that are so new that no one on staff is an expert (Ajax, RIA, SOA, Identity/Security)
A: A continual increase in competitiveness (which covers all the causes you mention) is partially to blame. Making content creation easier has enabled everyone, not just content experts, to be prolific. But another part of the problem is that expectations for responsiveness continually creep upward over time. Each new communication technology, such as pagers, cell phones, or email, increases how quickly we can respond, which puts us ahead of expectations, at least for a short time. If you’re the first one on the block with a Blackberry, this works out well since you can be as responsive as everyone else with less effort, or more responsive with the same effort. But then everyone gets one and the arms race continues. What you want is a technology that makes you – and only you – more responsive, but that isn’t going to happen.
I noticed a couple of comments connecting my Enterprise Attention Management system with attention deficit disorder. Here are a few:
FCW.com commented that EAM
“sounds like something you’d hear discussed at a middle-school PTO meeting …I wonder what the strategy was using language that many people will surely associate with attention deficit disorder. Does that make it sound more serious than chronic messy desk syndrome? “
The ecm blog mentioned a similar thing:
First thing I thought of was Enterprise Attention Deficit Disorder (EADD).
Well, I had actually noticed the connection to ADD / ADHD while doing my research and dedicated a paragraph to it in the full version of my report. I’ve attached the relevant paragraph below, but a quick summary would be that I was trying to explicitly distance EAM from ADD. ADD connects to a powerful meme among parents (and all sorts of easy plays on words about Ritalin and comparing executives to children). But at the risk of sounding humorless, I believe it leads down a path that is not constructive for addressing the difficulty information workers have finding important messages and information and pushing unimportant ones back. There’s nothing necessarily dysfunctional about these people – they just need help given the enormous amount of information present in the enterprise environment.
From my paper “Techniques to Address Attention Fatigue and Info-Stress in the Too-Much-Information Age”:
In researching attention management in the workplace, one runs across references to “ADD” or “ADHD” (Attention-deficit/hyperactivity disorder; for example, Accenture’s “Overcoming Management Attention Deficit Disorder [MADD]”) and the search for a pill to solve it (“corporate Ritalin”, see examples at Hopelessly Devoted: A Customer Communications Renaissance Customer Inter@ction Solutions by David R Butcher and Advice Line by Bob Lewis). The principal characteristics of ADHD are inattention, hyperactivity, and impulsivity. Although this connects frazzled multi-tasking with the well-known popular narrative of ADHD as a dysfunctional disorder for children with short attention spans, it does not provide understanding or solutions for EAM. While it is useful at a base level to apply these characteristics to the business world, it does not have significant prescriptive value for the information worker’s response to information overload. The problems addressed by EAM are mostly intentional, controllable, and driven by the worker’s environment. ADHD is mostly defined in the negative, through inattention. EAM takes a positive view, assuming the information workers are functional human beings, and is focused on bringing important messages closer to the user’s awareness and pushing less important messages further away. Indeed, a worker’s ability to be distractible and multi-task may be highly beneficial, as described in the next section.
In a recent mention in the ecm blog George Dearing introduces Enterprise Attention Management:
Helping companies manage information is big business. Which means every consultancy, especially the big ones, is jockeying to be seen as the thought leader. And I guess with that thought leadership comes the enviable task of inventing new acronyms. Enter Enterprise Attention Management (EAM).
It’s a good posting on EAM and I’m happy to see it there. It did bring up the issue of buzzworthiness for me, though. I’m not against coining a new term when needed, but I think clarity is the #1 priority. Besides, if you invent a new word when existing words will do, you look kinda obvious! With EAM I was aiming for clarity, as I described in my comment to the ecm blog:
I’m glad you like the direction of Enterprise Attention Management. The term “attention management” has been floating around for a while now. I just clarify my take on it as “enterprise” because I studied what enterprises can do about it as a whole rather than individuals or advertisers trying to grab people’s attention. I’m not trying to coin it as a term really – just clarify what type of AM I’m talking about.
My report on attention management was just released yesterday. I’ve attached the information and link below, but it requires a client subscription. If you can’t get to the paper, though, you can see a (very) brief summary of it in an eWeek article that came out the same day, “IT Pros Advised to Address Information Overload”.
The flow of messages and content faced by information workers is increasing but their attention spans remain fixed and limited. Each beneficial new communication and collaboration technology brings along the burden of one more channel that information workers, already suffering from information overload, must pay attention to. Accordingly, attention fatigue will be a gating factor for the success of collaboration and communication projects and activities. In this overview, Craig Roth examines how attention overload afflicts businesses and how an Enterprise Attention Management strategy improves information worker effectiveness and responsiveness.
Amazing. In a blog posting, “Shock the avatar”, Nicholas Carr reports that the Milgram experiments (where people administer what they believe are real shocks based on the guidance of authority figures) work on virtual humans too!
The participants in the [experiment] often behaved in a way that only made sense if they were responding to the virtual character as if she were real. For example, when she asked participants to speak louder, they invariably did so. The voices of some participants showed increasing frustration at her wrong answers. At times when the [avatar] vigorously objected, many turned to the experimenter sitting nearby and asked what they should do. The experimenter would say: ‘Although you can stop whenever you want, it is best for the experiment that you continue, but you can stop whenever you want.’ As we have seen some did stop before the end. Some giggled at the [avatar's] protests, as was observed by Milgram in the original experiments. When the [avatar] failed to answer at the 28th and 29th questions, one participant repeatedly called out to her ‘Hello? Hello? …’ in a concerned manner, then turned to the experimenter, and seemingly worried said: ‘She’s not answering …’