Google announced Wave at its conference on Thursday, resulting in some bubbly coverage by the IT press. Check out the video from Google’s conference where Wave was announced (though allow 1 hour 20 minutes).
I watched the announcement, and during effusive vendor presentations I sometimes feel like the guy at a magic show trying to look past what the magician wants you to focus on so I can see how the tricks are done. That’s how I felt watching the video of the presentation where Google Wave was introduced.
For example, the story accompanying Google Wave includes some magician’s hand-waving about eliminating e-mail and reinventing communication (“e-mail was invented 40 years ago before the internet … instead of point-to-point like e-mail, there’s a server-hosted conversation that participants connect to …”) while the presenter slips a collaborative workspace into your pocket. Boil this down and it’s a workspace instead of a channel. Workspaces have been around for 40 years too, and also pre-date the internet in the form of bulletin boards, Usenet, etc.
The spell checker (an applause line brought up at least 3 times in the presentation) is contextual, which is neat, but I don’t think the technology was created for Wave. While they didn’t mention its origins, I suspect it comes from the work done in Google Translate that implements statistical translation (one of two machine translation methods, the other being rule-based). By analyzing a truly enormous amount of text that is deemed to be accurately translated (one blog reported that Google used 200 billion words from United Nations documents as input), a learning system can develop inferences about how words are to be used and, given a new piece of text, select the highest-probability rendering based upon past experience.
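To make the statistical idea concrete, here is a minimal sketch of my own (not Google’s actual implementation, and with a toy corpus standing in for their billions of words): count how often each word follows another in the corpus, then prefer whichever candidate word is most probable given the word before it. Context, not edit distance alone, decides the correction.

```python
from collections import Counter

# Toy corpus standing in for the enormous body of trusted text.
corpus = ("the bee flew to the hive . i have been to the hive . "
          "the bee was stung").split()

# Bigram counts: how often each word follows another.
bigrams = Counter(zip(corpus, corpus[1:]))

def best_candidate(prev_word, candidates):
    """Pick the candidate most likely to follow prev_word,
    based on observed bigram frequencies."""
    return max(candidates, key=lambda w: bigrams[(prev_word, w)])

print(best_candidate("the", ["bee", "been"]))   # prints bee
print(best_candidate("have", ["bee", "been"]))  # prints been
```

The same homophone pair resolves differently depending on the preceding word, which is the behavior the Wave demo showed off.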
The presenter demos real-time editing with color highlighting and cursors for different editors. When the presenter asked if we could picture students taking notes in a class together, I thought “Yes, I can picture it very easily because I’ve seen SubEthaEdit.” Real-time collaboration editors have been around for a while. What’s cool is not that you can do that at all (“Imagine …”), but that it’s working in a browser and has an open API.
Beyond the re-purposing and re-skinning there are some advances:
- You can respond to parts of messages, which should be handy for those people who include several points in a message that you want to break apart. This also works in larger pieces of content, so it can serve as a review process for longer documents (comments are inline instead of in bullets to the side as in Word).
- Google made the decision to have text entry be synchronous (you can check an async box to turn that off) so people can see what’s being typed as it’s typed.
- There’s a playback mechanism. Wikis inherently have logging, so it seems an obvious but fun next step to play through the changes.
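Since a wiki-style log already records every revision with its author, playback really is just iterating that log. A toy sketch, with entirely hypothetical data and a deliberately simple snapshot-per-revision structure:

```python
# Hypothetical edit log: (author, full document snapshot) per revision.
edit_log = [
    ("alice", "Meeting notes"),
    ("bob", "Meeting notes: agenda TBD"),
    ("alice", "Meeting notes: agenda attached"),
]

def playback(log):
    """Replay the document's history one revision at a time."""
    for step, (author, snapshot) in enumerate(log, start=1):
        yield f"step {step} ({author}): {snapshot}"

for frame in playback(edit_log):
    print(frame)
```

A real implementation would store operational deltas rather than full snapshots, but the replay loop is the same idea.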
The audience seemed happy. There were applause lines for dragging and dropping photos into a discussion, wiki-like changing of other people’s text and markup, and dragging and dropping a link to a collaboration space.
To me the upside is not the new invention (or re-invention) of capabilities. Think about Google Maps. The cool thing about Google Maps wasn’t that a programmer could overlay data on maps and scroll/zoom around it. That had existed for quite some time. What was cool is that the API made it so easy and embeddable that great applications (“Mapsups” as one client of mine called them) started showing up everywhere.
Well, Wave was created by the Google Maps team. If they can do the same thing with collaboration spaces and synchronous collab that they did with the Maps API, we could see much better use of web-based collaboration. Too often collaboration tools have been modal rather than blending contextually into other apps. Hopefully Wave can make some inroads here. And that was, indeed, the point of the presentation: to get developers excited about using the APIs to get the snowball rolling. It would have been useful to get past the prestidigitation and, instead of pretending this is all new or the “e-mail killer,” point more to the APIs as the real value.
The WSJ published another article on information overload, which they generally do when Basex releases a new number on information overload, unnecessary interruptions, or interruptions (it’s evolved over the years). You can see the comment I entered in the comments tab on the article (click here and look for Craig Roth). Now that I re-read it, my comment sounds more harsh than intended. It’s not a bad thing that this issue gets more attention. There’s something to be said for the Basex approach of shaking people awake and getting them to see the danger in their current path. The $900 billion number is like the “you won’t live to see your kid’s graduation” pronouncement that physicians sometimes trot out when an unhealthy patient is ignoring their more measured advice to lose weight and exercise.
Still, I’d like to see some of these articles getting past the “information overload 101” template: observation on how we’re overloaded, quote from overloaded person, “woe is me” pronouncement, attitudinal survey stat, latest Basex figure, quote from an organized executive, personal time and attention management tips.
Get people to think about:
- “Closed loop” rather than selfish view of interruptions (treating each interruption as an interaction between the interrupter and those interrupted and determining, as a whole, if it was useful to the organization)
- Pacing (even if 28% of workers’ days are wasted, 0% isn’t the proper target; step back and think about what the real target should be to get a realistic picture of potential cost savings)
- What they really mean by interrupted versus distracted, and what people call “unnecessary” interruptions (does the person doing the interrupting ever think their interruption is unnecessary, and if not, who gets to judge?)
- How social contracts and organizational structure influence interruptions and information flows in ways that aren’t captured in overload calculations
- By all means, use the Basex number as an example of one extreme way of estimating it, but follow up by talking about the importance of determining a realistic goal for improvement. Once you get executives to buy into a strategy based upon dollar savings rather than quality and speed of decision making and employee retention, you’ll be expected to prove how much you’ve saved in hard dollars later. The Basex number – from what I can tell – doesn’t serve that purpose since it’s a sum of personal observations rather than closed-loop, depends on colloquial and self-determined definitions, and is more an indication of overall angst than a number to actually target as waste.
- How technology can help. Technology is not the answer, but it’s certainly a lot of the problem and, accordingly, can be a participant in an improvement approach
- Teachable moments. Much of the information overload is due to etiquette and culture. It’s been said that you can’t force changes in culture, but there are certainly cases where culture has drastically changed. Part of the answer lies in exploiting teachable moments to make positive changes in counterproductive communication and information management behaviors.
- If you’re in a business publication, talk about systematic changes that can improve the efficiency of a large number of workers rather than just personal tips on how any one person interested can help themselves. What can executives and owners of communication systems do that is more than what any one individual worker can do?
One of the arguments that many alternate productivity suite vendors have made is that most users of Office are not power users and don’t need all the complex functionality it provides. These basic users just want the ability to create simple documents, spreadsheets, and presentations and the unneeded complexity of Office makes Office bloated, overly complex, and too expensive. Guy Creese summed it up well:
Both sides of this argument are wrong: Microsoft saying that you need to overbuy because you never know when a worker might need a certain feature (true, but not as often as Microsoft claims); Google, IBM, and Sun saying that you don’t need all that functionality (actually, sometimes you do).
In thinking through some common productivity use cases with Guy for some upcoming research he’s doing on productivity suites, it occurred to me that an argument could be made that certain complex features should be left out not because they’re infrequently needed, but because they don’t belong in that tool in the first place.
Microsoft has given the world three hammers in Word, Excel, and PowerPoint and now every content situation looks like a nail to information workers weaned on these tools. They are very generalized tools and have been expanding in functionality to incorporate many situations that other tools would be better for.
To give just a few examples I often see:
- Excel as a database and reporting tool. It’s not uncommon to see spreadsheets with thousands of rows being maintained and various tricks to get summary data out of them and enable multiple users to input data into it. Isn’t that what simple end-user databases are supposed to do for you?
- PowerPoint as a photo slide show. I keep getting .pps files with slide shows of funny pictures or inspirational images, one .jpg per slide. Why? Just to save the trouble of someone figuring out how to use a zip file of .jpgs?
- Excel as business intelligence tool. Excel is often cited as the #1 BI tool. Depending on how high-falutin’ your definition of BI is (and mine stretches to OLAP), shouldn’t you just use a BI tool if that’s what you want to do?
- Word or PowerPoint as a page layout tool. Want to create a greeting card? Or do fancy layout of a newsletter? That’s why there’s a category of software for doing page layout and publishing, ranging from consumer-level to professional.
While there’s no doubt people sometimes stretch tools too far simply because they are familiar with them, it also shows forethought and flexibility when new uses for a tool keep cropping up, and specialized tools can be expensive and require learning yet one more interface.
Ultimately, this is just one facet of the “which tool to use?” problem I outlined previously, and it extends to most tools in the information worker toolbelt, from using e-mail for collaboration instead of a collaborative workspace to collating changes in Word docs instead of using a wiki.
This is a cross-posting from the KnowledgeForward blog, but here in my personal blog I’ll add one more example of stretching the boundaries of Office: using PowerPoint to design a New Year’s hat for my kid (see below). Not quite what its creators intended I’m sure!
I’m as shocked as anyone to read the transcripts of the cockpit conversation on doomed flight 3407 to Buffalo, which apparently crashed due to incorrect maneuvers by the pilot. When the stick pusher tried to avert a stall by pitching the nose down, the captain apparently forced the plane to do the opposite, and it crashed.
As someone who travels frequently (and sometimes on “puddle jumpers” like this one) it’s pretty scary to see this can happen. I don’t normally use this blog to vent my own fears and anxieties, so I’ll tie it to one of my coverage areas: enterprise virtual worlds. Last year I spoke with Arnaldo “AJ” Peralta of Icarus Studios, a designer of virtual world strategies, who demonstrated the value of simulation and rehearsal by talking about a JetBlue flight that made a successful landing with the front wheels stuck sideways. He told me the captain of that flight claimed he was able to land the plane safely because he had learned from three previous crashes – in simulation. (I haven’t found an online reference to verify that, although a commenter here describes hearing the pilot mention the value of the regular training simulations they do.)
This presents us with an unfortunate example of the value of virtual environment simulations in preparing for catastrophic events in a manner that allows for safe mistakes. Kurt Squire in 2003 (cited here) described this type of safe learning from mistakes as “provid[ing] choices and consequences in simulated worlds.”
These types of simulations are not strictly learning (acquiring new information and skills), but are rather rehearsal. Rehearsal follows training/learning just as rehearsing for a play follows the actors memorizing their lines. The rehearsal pokes, prods, and tests whether the user can retrieve and apply that information in the correct situations. It also solidifies the information by connecting it to real experience and allows for iterative learning from mistakes in an environment where failures have no costs.
Scaling down the nature of the disaster, one can see the value of providing enterprise virtual world simulations of situations workers may encounter, such as a reactor overload at a nuclear power plant, a chemical spill on a major freeway, or a fire on an oil rig.
Unfortunately, according to the WSJ, the simulators used at Colgan (the operator of the doomed flight) didn’t cover this scenario:
Colgan’s standard training program stops short of demonstrating the operation of the stick-pusher in flight simulators. Without such hands-on experience, safety investigators argue, pilots could be surprised and not react properly when the stick-pusher activates during an emergency. The FAA is required to sign off on all airline training manuals.
On Sunday, Colgan said its FAA-approved program includes “comprehensive” classroom training on the stick-pusher but emphasized a demonstration in a simulator “is not required by the FAA and was not part of the training syllabus” Colgan received when it obtained its Q400s.
There’s nothing that can be done now to help those aboard flight 3407. But enterprises can learn from the comparison between a disaster in an emergency situation the pilot had not run in simulation and a successful avoidance of disaster by a pilot who had simulated the incident. That’s an over-simplification to be sure (every situation is different, lack of sleep was involved as well, compliance to policies was not monitored or enforced, etc.), but I believe the value of rehearsal through virtual simulations can be distilled from this incident and applied to other common business and government scenarios.
I’d like to follow on to yesterday’s posting about how Second Life articles seem to have veered from overly glowing to overly cynical and are now mildly positive. I’m sure mildly negative is next before they settle in on a reasonable, non-hype-driven, balanced view.
What this reveals to me is that the Gartner Hype Cycle isn’t as good a model as a pendulum. While the hype cycle consists of a single “up, then down, then plateau,” I think what really happens is more like the plucking of a string or the swing of a pendulum. An event starts it in motion and it goes from very positive, to very negative, to slightly positive, and so on, all the while seeking the center. The number of swings is determined by the size of the story. The virtual world topic plucked the string fairly loudly, so it has swung to the positive side twice already. The hype cycle may be accurate for a small enough story where there is one iteration of high, low, then center. But Second Life demonstrates that for larger stories there are multiple iterations before it centers.
I’d also argue that there is a natural downward sloping trend for visibility for all technologies rather than a horizontal plateau as shown in the hype cycle. To be more exact, it is exponential decay. Put ’em together and you have something more like the figure below.
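To illustrate the combined model (with arbitrary parameters of my own choosing, purely for the shape of the curve): visibility is a damped oscillation around a center line that is itself decaying exponentially rather than plateauing.

```python
import math

def visibility(t, amp=1.0, swing_decay=0.35, freq=1.5,
               base=0.2, base_decay=0.05):
    """Hype as a plucked string: swings around a center that
    itself drifts downward via exponential decay."""
    center = base * math.exp(-base_decay * t)
    swing = amp * math.exp(-swing_decay * t) * math.cos(freq * t)
    return center + swing

# Each successive swing is smaller, and the center keeps sinking.
for t in range(8):
    print(round(visibility(t), 2))
```

A bigger story corresponds to a larger initial amplitude and a slower swing decay, i.e., more visible oscillations before the curve settles.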
Back in 2007 I wrote A Guide to Writing About Second Life, a tongue-in-cheek how-to guide for lazy journalists who want to write a story about Second Life, based on my experience of reading way too many shallow articles of this type. It offered a few choices, such as how to write “the positive, glowing update,” followed by “the negative, cynical slam,” and maybe “the deep thought piece” to get philosophical about it. My point was that the mass media seemed to follow each other like a herd, veering toward glowing tributes first, then all getting cynical next. And they all use pretty much the same list of talking points, which could all be true simultaneously if one were to write a more balanced piece (so why don’t they?).
Now I see the third iteration: the “c’mon, it’s not that bad, let’s be reasonable” story in Information Week. The story is called “Rumors Of Second Life’s Failure Are Just Lousy Journalism.” The author is right that journalism here has been lazy and many of the slams were overstated. There’s good and bad to say about Second Life, like anything else. I am working now on a short document to give a quick view into where we see Enterprise Virtual Worlds and their potential value. I’ll try to jump past the one-sided views and get right to a more moderated view of pros and cons and where I’ve seen real business value today.
Virtual Worlds for Inspiration, Innovation, and Participation
Charles White, Senior System Designer and Lead for Virtual Worlds for Engineering and Science, NASA Jet Propulsion Laboratory
NASA has used simulator worlds as training environments for many years, but now virtual worlds offer new opportunities to interact with the public and a new canvas for visualizing real scientific data. Explorer Island in Second Life is the Jet Propulsion Laboratory’s entry into virtual space. Charles White (aka Jet Burns) shares observations and lessons learned from over the virtual horizon.
I hope to see you there!
There’s been some lively internal discussion here about the desirability of automated browser updates for security patches.
An article in Techzoom.net called “Why Silent Updates Boost Security” practically salivates at the thought of patches automatically and instantly being deployed. It praises Google for its five-hour automated update cycle and states “After 21 days of releasing Google Chrome 1.x, an exciting 97% share of active Google Chrome 1.x users were using the latest Google Chrome 1.x version.”
That excitement wasn’t shared over at cnet (“Google issues, then reissues Chrome security fix“) where they wrote “Google fixed security holes with a new release of its stable version of Chrome–then released a replacement shortly afterward to prevent a batch of crashes that turned up as well.”
I agree with my fellow analysts that the idea of pushing out silent updates does not and should not sit well with enterprise IT. Still, I understand the other point of view too. Just creating a patch and putting it on your website isn’t likely to have much impact. The majority of browser security breaches target personal PCs whose owners don’t have IT staff to push out updates and don’t even know what a patch is.
One part of the answer then could be creating separate versions of the product (consumer and enterprise) that have different patching strategies. Another part of the answer is that vendors need to take extreme caution when pushing updates directly to anyone’s browser. It seems the balance has shifted toward quickly trying to close holes rather than the primacy of a personal user’s control over their desktop environment. It needs to shift back. Lastly, a middle ground between silent updates and passive posting of patches needs to be found. This includes effective nags that let the user know their security patches are outdated (a red alert in the title bar, perhaps?) but are not overly disruptive to users.
Our company has been playing with Yammer lately. We’re using it like an internal Twitter for employees. Officially it would be known as persistent chat since conversations do not get initiated and then shut down like an instant messaging chat would, but rather they stay open and people continually talk throughout the day. It’s a virtual water cooler in the virtual office.
Here’s a scrubbed sample of how the conversations go:
Bill Malmsteen: nice job Alice et al on the call
12:11 PM – reply
Joe Lynch in reply to Bill Malmsteen: +1 12:15 PM – reply
James Emmett: On the con call w/Amy, Sue, Mary. Going well.
11:47 AM – reply
Bill Malmsteen in reply to Joe Lynch: question in Q for you: 11:48 AM – reply
Bill Malmsteen in reply to Bill Malmsteen: “What kind of assessments should customers be expecting from the providers? ” 11:48 AM – reply
Daniel Emmett: Most of them will offer a type 2 but you have to be careful …
As I review the daily digests of conversation I get emailed, I’m fascinated. And not as a technology analyst, but as a student of sociology (my minor in college a long time ago, now just a hobby). If one makes the assumption that the selection effect of those participating is minimal and that people are not acting differently in the chat than they would in real, public conversations, then what you have is a compiled record of a type of hallway chit-chat that occurs regularly in a business. Actually, I wouldn’t make these assumptions – I would validate them in private interviews with employees to make sure the conversation accurately represents the public face they use with their co-workers. The tone of the conversation is somewhat moderated and averaged out by its public nature. This is good for study – participants inadvertently conform to a common view of the corporate culture, which yields a gold mine for any sociologist (or corporate anthropologist) who wants to study the impacts of corporate culture and has lacked a good way to quantify it.
The conversations are already in handy textual digest form, so all the researcher has to do is paste it into a spreadsheet or simple database and then get a bunch of grad students to tag each posting (a conversational text fragment) up. Sample tags could include:
- Constructive tear-down, unconstructive tear-down, agreement
- Non-work related (sub-category: sports, travel, restaurants, family)
- Level of poster (CxO, executive, manager/director, worker)
- Informative, asking for help
- Detailed, uses jargon, high level
Once you’ve accumulated a large set of these statistics for a few dozen companies, correlate for key success factors such as growth, profitability, or average length of employee tenure. You may now find statistically significant correlations between culture (as revealed by conversational tone and topics) and corporate success. Persistent chat provides an easily searchable and taggable artifact for something that would be difficult or impossible to observe otherwise – casual conversation between a broad swath of employees with zero observer effect (where the fact that people know they are being observed by the researcher distorts their behavior).
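As a sketch of that final step (with made-up numbers, purely to show the mechanics), a plain Pearson correlation between one tag’s prevalence per company and a success metric like average tenure:

```python
from statistics import mean

# Hypothetical per-company stats: share of posts tagged "agreement"
# and average employee tenure in years.
companies = [
    ("A", 0.42, 6.1),
    ("B", 0.35, 5.2),
    ("C", 0.18, 2.9),
    ("D", 0.27, 4.0),
]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

agree = [a for _, a, _ in companies]
tenure = [t for _, _, t in companies]
print(round(pearson(agree, tenure), 2))
```

With a real multi-company sample you’d want far more data points and proper significance testing, but the pipeline – tag, tally, correlate – is this simple.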
I’ve noted already how the culture of Burton Group is what I would call “nice” – people enjoy the opportunity to compliment others and build on ideas. Another company where I worked had a very different culture, where people wanted to be seen doing the best take-down of others possible and were granted status for being successful at it. I always felt that way, but now I could actually quantify this opinion if only I had a few grad students hanging around to tag up the transcripts for me …
Oracle announced its release of Beehive 1.5 today. They are hoping that a technology refresh of the Beehive collaboration assets (along with additional assets acquired along with BEA) can give Oracle another shot at the collaboration market after the moribund Oracle Collaboration Server has fizzled.
The announcement comes at a good quiet spot between IBM’s collaboration announcements at Lotusphere in January and Microsoft’s announcements on SharePoint 2010 that will probably come to a peak at their conference in October. Likewise, its most attractive feature is that its platform and standards offer an alternative to a Microsoft stack (Windows Server, SQL Server, SharePoint) and an IBM stack (Notes/Domino and/or Quickr+Connections with WebSphere). Beehive offers more standards than you can shake a stick at (although I don’t recommend shaking sticks at beehives generally): WebDAV, IMAP/SMTP, JSR 170 for content repository access, XMPP for IM and presence, LDAP or AD for directory, and JMX for management. You can use Solaris, Windows Server, or Linux for the server, and any development tool desired. From a technology point of view its appeal is likely to be based on architectural decisions about what standards and stack an organization wants to embrace (or stay away from).
But technology aside, the key for Oracle (as always) is whether they can utilize their channel to sell this stuff and whether organizations can be persuaded to pay real money for it after previous false starts. In the past, Oracle hasn’t had much voice left to talk about collaboration and portal after yelling about database and ERP. But since the Stellent acquisition, content management has been a bright spot for them and I think it has changed some minds.
Personally, I want to see the collaboration market stay competitive. End users win when vendors compete hard on features, quality, and pricing. Lately it seems like Microsoft SharePoint has gotten the lion’s share of attention from organizations. Microsoft has been the main attraction at this tournament and I’m glad to see Oracle showing up to play. IBM Lotus still feels to me like they haven’t shown up to the tournament and are instead setting up a parallel exhibition match for the same sport in another part of town. They didn’t mention SharePoint by name in the Lotusphere main tent (although it was clear who they were talking about, and Jive got a mention). But as an analyst I’m like an unaligned spectator at a sporting event – you just want to see a diverse set of skillful challengers compete really well and raise the level of play.
Note: This is a cross-posting from the Collaboration and Content Strategies blog