I thought I’d write a few thoughts on a question a client just asked me. I’ll call it the “SharePoint and anything” question and it’s quite common. The company has standardized on some product(s) for collaboration, content management, and portal but now SharePoint is sprouting up all over the place and they want to know what to do. This issue has many dimensions and possible paths, so I’d recommend a client actually talk to me so we can work through it. But I’ll give the high level outline here.
The first question they tend to ask is “Do other companies have this issue?” Definitely! For lots of reasons, both good and bad, the majority of large organizations wind up with more than one system for collaboration, content management, and/or portal. I’ve talked to some that list several full portal products in use. There’s a difference between asking what other companies are doing and what best practice is, though. Getting down to one platform may not be possible, but it’s the direction to head in most cases (there’s some complexity here that I won’t expound on right now).
Next, it’s important to determine if you are in an overlap scenario or a contextual integration scenario. Contextual integration just means you have two non-overlapping, complementary technologies you want to use together, like SharePoint and an ERP system. Great – that’s a whole discussion of its own that, you guessed it, I won’t expound on right now. But more often SharePoint is overlapping with other technology that does a good portion of what it does. That’s when I poke around and find out why it’s emerging. Because if SharePoint is growing up from the grass roots among other installed alternatives, it must be meeting some latent need that isn’t being met by your current collaboration, content management, or portal products.
There are many different types of integration and sometimes organizations want several of these. The project owner has to determine which integration they will need to deal with. Types of integration include:
- Programmatic/API level: Either custom coding or dedicated bridging applications like SAP’s Duet
- Database/Repository/Metadata level: Sharing or direct access of directories, documents in a document library, etc.
- Execution: Being able to port across platforms, like Mainsoft for Java EE, Portal Edition
- Web Services/SOA: Wrappering functionality in one portal for use in another
- Portlets/Web Parts: Java portlets that can surface SharePoint discussions, libraries, or lists. Or, vice versa, Web Parts that can surface content from other portal, collaboration, or content management products
- Single sign-on: Passing credentials from one system to another
- Unified search: Ensuring that a search done from SharePoint can pull up documents and discussions in another product, or vice-versa
- Screen scraping: Old-fashioned scraping of HTML from one site to display in another
- Linking: As simple as it can get – just helping users of one portal know when information is available in the other by providing a link (but not actually surfacing any content or helping with single sign-on)
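To give a concrete taste of the Web Services style of integration, here’s a minimal Python sketch of calling SharePoint’s Lists web service (Lists.asmx) from outside the Microsoft stack by hand-building the SOAP envelope. The site URL and list name are hypothetical placeholders, and this is only an illustration of the approach, not a production client.

```python
import urllib.request

# Hypothetical SharePoint site and list -- substitute your own.
SHAREPOINT_SITE = "http://intranet.example.com/sites/team"
LIST_NAME = "Shared Documents"


def build_get_list_items_request(list_name):
    """Return the SOAP 1.1 envelope for the GetListItems operation."""
    return (
        '<?xml version="1.0" encoding="utf-8"?>\n'
        '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">\n'
        '  <soap:Body>\n'
        '    <GetListItems xmlns="http://schemas.microsoft.com/sharepoint/soap/">\n'
        '      <listName>{0}</listName>\n'
        '    </GetListItems>\n'
        '  </soap:Body>\n'
        '</soap:Envelope>'
    ).format(list_name)


def post_soap(site_url, envelope):
    """POST the envelope to the site's Lists.asmx endpoint and return the raw response."""
    req = urllib.request.Request(
        site_url + "/_vti_bin/Lists.asmx",
        data=envelope.encode("utf-8"),
        headers={
            "Content-Type": "text/xml; charset=utf-8",
            "SOAPAction": "http://schemas.microsoft.com/sharepoint/soap/GetListItems",
        },
    )
    return urllib.request.urlopen(req).read()


envelope = build_get_list_items_request(LIST_NAME)
print(envelope)
# post_soap(SHAREPOINT_SITE, envelope) would perform the actual call
# (requires a reachable SharePoint server and credentials).
```

The same envelope-building pattern works in reverse for surfacing another product’s web services inside a SharePoint Web Part.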
Before getting too techie about it, I need to point out that there are technical and non-technical solutions to overlapping portal solutions. Governance and usage procedures are a critical part of the integration and sometimes the entire answer if there are technical constraints. Letting users know when they should upload a document to SharePoint versus their company intranet or their contract management system increases findability of key information.
This is a really quick summary. There are a lot more possible paths to this discussion, such as “systems integration” (connecting it to your infrastructure services, installing and customizing it, etc.), federation scenarios, vendor-specific information about the “anything” vendor and how easy they are to integrate with SharePoint, and more. But those will have to wait until another day.
The case for utilizing portals on mobile and pervasive devices is a good one. First, consider the driver of portals on the desktop. In a standard web-based portal that is accessed from a desktop PC, the portal helps pick out and display the several applications, pieces of content, and navigation links that are useful for the user out of the huge number of sources the portal has access to. It’s a great answer to the problem of information overload. And, even if you know where all your content and applications are, it is a big timesaver to have them all in one place with single sign-on in front of them. It keeps the user from having to hunt through a “web” of pages (ah, that’s where that word comes from!), scanning them all and clicking in and out of them to get to the needed pieces of content.
That same set of drivers goes double for mobile devices. With smaller screens on PDAs and smart phones it is even tougher to scan through web pages. Combine that with slower access times and it is even more painful to load large web pages to view bits of content and click through several levels of pages to get to it. And that’s why mobile use of portals is going to take off …
… At least, that’s what I was told in 2002. I saw some impressive technology for it as well, mostly from Sybase but also from IBM and Oracle. The technology went beyond simple transformation of pages into WAP and actually provided design-time selection of deprecated page elements, matrices of style sheets for different devices (many pre-provided), and emulators for testing the pages.
The problem is in practice it just hasn’t caught on. Vendors I spoke to a few years later were a bit peeved that significant development effort was spent on features that wound up not being used very much. Mobile portal features appeared as needs on lots of RFPs, so maybe the vendors won some deals they wouldn’t have otherwise, but this code was supposed to be actually used, not just artwork.
There have certainly been inroads into mobile access to information in certain industries, like utility field service and medical. But those are mobile applications and don’t need to leverage portal infrastructure for personalization, application integration, single sign-on, and page assembly. There is a lot of room for inroads in mobile computing separate from portals.
I think this is a case where it just takes time to catch on and the vendors were expecting a big bang. The drivers are still valid and while I haven’t seen stats, I’m sure mobile use has increased slowly but steadily, although it’s still at a low percentage of overall usage. Maybe a better browser like in Apple’s iPhone will be the breakthrough. Salesforce.com is designing for it successfully. And once the AT&T network that Apple uses upgrades to a higher speed, the improved web page viewing with better resolution and gesturing may be the catalyst for mobile portals finally taking off.
I’ve been involved with audio production as a hobby for quite some time now and enjoy playing with my own musical efforts. So when our company started to do podcasts and I thought the audio quality was poor, I donated some twiddling time to see how to polish them up a bit. I thought I’d post what I came up with here to help others who are doing spoken podcasts.
This process worked well for me doing a home-recorded, single-person, spoken podcast. It would require further twiddling to handle music, rumble, or very poor quality files. I used a freemium (free for the basic version, pay for premium) program called Audacity. These instructions apply to the free version.
The problems I was attempting to fix were:
- Audio was too soft. It was hard to crank it enough on my laptop. The audio required normalization, which cranks the volume to the maximum level (or whatever level just below maximum you set)
- Drastic variances in volume made the speaker difficult to understand because the dynamic range was too high. Since most podcasts will be listened to in less-than-ideal environments (through earbuds while outside, through PC speakers), it’s important that the volume be consistent throughout rather than blasting the one time you leaned near the mic and fading in and out the rest of the time. A little compression was called for to take care of that
- Lots of hissing. Cutting some of the high frequencies out was the answer here. It wouldn’t make the cymbals in your favorite rock music sound any better, but for spoken words it’s perfectly fine
- Files were too big. Spoken words require a lot less audio fidelity than music. No one will care if your brilliant thoughts are in mono instead of stereo, for example. Switching them to mono, dropping the bit rate (a lot), and converting to mp3s made almost no difference in quality, but made a huge difference in file size
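To show what the compression and normalization fixes actually do to the signal, here’s a rough Python sketch, assuming audio as a list of floats in the -1.0 to 1.0 range. Audacity’s own effects are more sophisticated; this just illustrates the math (the -20 dB threshold from the steps below corresponds to roughly 0.1 in amplitude, since 10^(-20/20) = 0.1).

```python
def normalize(samples, target_peak=0.98):
    """Scale the whole clip so the loudest sample hits target_peak."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return samples[:]
    gain = target_peak / peak
    return [s * gain for s in samples]


def compress(samples, threshold=0.1, ratio=8.0):
    """Crude per-sample compressor: above the threshold, reduce the
    excess by the given ratio (8:1, matching the Audacity setting)."""
    out = []
    for s in samples:
        mag = abs(s)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if s >= 0 else -mag)
    return out


# Compress first, then normalize -- the same order as the Audacity steps.
raw = [0.9, 0.05, -0.7, 0.02]   # one loud lean-in, then quiet mumbling
evened_out = normalize(compress(raw))
```

A real compressor works on the signal’s envelope rather than individual samples, but the effect on dynamic range is the same idea.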
Here are the step-by-step instructions I wrote down for readying the podcast files for publishing:
- Load file into Audacity
- Ctrl-a (to select entire file)
- Effect, Equalization: click RIAA, then Load Predefined Curve, then OK
- Effect, Compressor: ratio 8:1, threshold -20 dB, uncheck “0 dB normalization”
- Normalize with default settings
- File export as mp3
There is also a one-time setup in Audacity for you to do (note: this will affect everything else you do with Audacity too):
- Edit, Preferences
- Audio I/O tab: Channels: 1 (Mono)
- File Formats tab: select bit rate 32
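To see why mono at a low bit rate shrinks files so much, a quick back-of-envelope calculation helps. An MP3’s size is roughly bit rate times duration regardless of channel count; the win comes from the fact that a mono spoken voice tolerates a much lower bit rate than stereo music does.

```python
def mp3_size_mb(bit_rate_kbps, minutes):
    """Approximate MP3 file size in megabytes: bit rate times duration."""
    bits = bit_rate_kbps * 1000 * minutes * 60
    return bits / 8 / 1_000_000


# A 30-minute podcast at typical music-quality settings vs. the
# 32 kbps mono setting above.
music_quality = mp3_size_mb(128, 30)  # 28.8 MB
spoken_mono = mp3_size_mb(32, 30)     # 7.2 MB
```

A quarter of the download for speech that sounds essentially the same.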
This should make your spoken podcasts much easier to hear and understand. I hope this helps!
I’ve been writing this blog for about 10 months (and 115 postings) now and have enjoyed the opportunity to participate in my small way in various debates in the internet community. I’ve been able to get feedback to ideas I’m working on, publish smaller pieces of content that don’t normally fit the heft or formal voice I use in my professional writing, and plug an event or report now and then. In all, it’s been a good experience.
So when I read usability guru Jakob Nielsen’s recent screed against casual blogging (“Write Articles, Not Blog Postings (Jakob Nielsen’s Alertbox)”), I couldn’t help but feel he’s missed the point of this particular style of blogging (blogging technology can also be used for other styles of blogs, such as formal content publication and internal enterprise blogging).
He begins by relating a conversation he had with a “world leader in his field” about whether to blog.
… I recommended that he should instead invest his time in writing thorough articles that he published on a regular schedule. Given limited time, this means not spending the effort to post numerous short comments on ongoing blogosphere discussions.
I’d summarize the rest by saying Jakob describes how the wildly varying nature of most blogs (entries of varying level of quality, expertise, and depth) leads to a scattershot approach that sinks the writer below the thin upper crust of top experts in the field. Longer, in-depth, carefully written entries would be better since they would maintain the appearance of having the highest level of expertise.
That may be true if the goal is to be a good writer. But I think most bloggers want to be a good conversationalist. If you were trying to engage people at a dinner party, I would not recommend you stand up, talk for thirty solid minutes in a properly formatted argument with numbered points and rebuttals for anticipated arguments, then sit down. If you were at a conference that would be appropriate. They are different forums. His comments seem to frame blogging as being about content when it’s really about community.
As for the variance in quality, expertise, and depth I think readers of blogs have different expectations than they do of a white paper, conference presentation, or academic thesis. In many cases, the reader simply wants to live in the head of the blogger to see what they find interesting and what they’ve been reading. That’s an attention management characteristic of new technologies such as social tagging/bookmarking as well – people pay more attention to content that people they respect are paying attention to.
Many bloggers just link to articles or provide minimal commentary on the topics of the day along with the links. Jakob dislikes this – “Blog postings will always be commodity content: there’s a limit to the value you can provide with a short comment on somebody else’s work.” But I think back to one of my first posts in 2006 where I talked about David Foster Wallace’s writing style: “The point of Wallace’s writing style, to me, is that the value of his content is the unique structure he superimposes on it. More than most other writers, Wallace really gives you a feeling of not just what he knows and thinks, but how he is thinking about it.” That is what is going on in many blogs as well. Even if a blogger is just linking to information, he provides value by the structure imposed on it – what is selected.
There was a seminar this morning on E-Discovery that focused on the intersection between the law and IT with regards to information retention and discovery. There were presentations by Joseph L. Fogel and Hillard M. Sterling (attorneys at Freeborn & Peters LLP), Karen Hobert (Burton Group analyst for Collaboration and Content Management), and Trent Henry (Burton Group analyst for Security and Risk Management Strategies). Then I moderated a panel with the four speakers. The audience was about one third lawyers and two thirds IT people. E-Discovery is not my normal area of research, but I found the issues and discussion fascinating and thought it could be useful to others if I posted my notes here.
- I had an interesting discussion with Joe before the seminar about how the courts reconcile the leading-edge nature of e-discovery with the conservative nature of many large corporations when it comes to IT. It’s all too easy for someone who hasn’t been part of IT migration and upgrade planning to see a brochure for some great search, categorization, or discovery tool, take that as proof that a high level of information is reasonably accessible, and then assume any corporation that isn’t using such technology is just being obstinate. I’m not a lawyer, but Joe pointed to Rule 26(b)(2)(B), which says “a party need not provide discovery of electronically stored information from sources that the party identifies as not reasonably accessible because of undue burden or cost. …”. This market is obviously at the leading edge of its growth curve, judging by the number of disconnected technology markets, the number and size of vendors, and the confusion among IT shops about what to do. So how quickly are organizations expected to adopt the newest technology and best practices? Joe’s answer is that, like much else in law, it depends on the judge you get. There is a wide variance in judges’ understanding of technical matters, and that can strongly influence whether a judge deems an organization lax in its duties of disclosure.
- It was made clear that policies for retention or auto-deletion of information have to allow for modification due to potential pending litigation. Rule 37(f) (“safe harbor”) was cited. To me this means the policies have to be flexible, and there needs to be a path of communication between legal and IT so that legal knows whom to tell in IT that auto-deletion must stop, and so that it can be stopped quickly once that order is sent.
- Hillard mentioned that for most of the large judgments that are handed down for inability to disclose requested information in a reasonable amount of time, there is usually a historical pattern of non-compliance present. An angry judge then sees a pattern of obstruction and decides to make a statement.
- There still seems to be a legal limbo between countries that affects global US-based companies. US discovery laws may order the disclosure of information from, say, the London office of a company. But stricter British privacy laws may make compliance with that order illegal in England. During the panel the attorneys were asked how that is resolved, and it seems the issue is still up for debate.
There is a lot more depth that was discussed, but needless to say one morning wasn’t enough for me to feel my opinions would be worth much. This was just my first taste of a topic I hadn’t been exposed to before. Just as we had three groups in the room – legal, IT security, and IT communication/collaboration/content – many organizations need to build bridges between these groups before e-discovery issues come to a head.
In my previous posting on How Many Portals Should a Vendor Offer I talked about how both Oracle and BEA have dual portal strategies. Since then I had a conversation with Shane Pearson of BEA on this topic. I still stand by my posting, but Shane did point out that BEA is committed to maintaining two SKUs, meaning two purchasable products that meet different needs. They are moving over time to unify the infrastructure underneath them, and new add-ons will likely support both.
To clarify my posting, I don’t think BEA has done anything wrong. There’s no magic wand that can be waved when an acquisition occurs to instantly rationalize all the personnel, products, and technologies and integrate everything. My point is twofold:
1. A roadmap should be forthcoming from a vendor within a short time after an acquisition (about 3 to 4 months) to reassure existing and prospective customers that their current path will continue, or to let them know frankly that a product or technology will no longer be supported so they can plan for migration or another purchase. No one wants to be the last one to buy a product before it is discontinued.
2. It is my opinion that, as a buyer’s advocate, software purchasers should be aware that as much as a vendor tries to make redundant services (caused by dual-path strategies) transparent under the covers, cracks start to form over time. These cracks are caused by fragmentation and take the form of higher costs (more specifically, costs that don’t decrease over time as quickly as market pressures encourage), higher risk that a piece of redundant infrastructure a user depends on will be eliminated, slightly degraded support (especially as one set of infrastructure becomes rarer in the marketplace), and slightly more difficulty finding consulting and integration services. I don’t know for sure that BEA will encounter this issue, but it is the norm, and it gives customers a reason to be pessimistic in the long run.
One of the characteristics of a good, scripted TV show is that it creates its own universe with laws for how people behave and interesting characters. This characteristic applies whether it’s Dallas or Twin Peaks. Another characteristic of a good TV show is the water cooler talk it inspires – the community of like-minded people that can now get in touch with each other and have something to talk about.
It would therefore seem they have synergy with virtual worlds, which thrive on having a unique vibe and a community of like-minded people who want to socialize. In fact, this is exactly what happened with Firefly, a short-lived Fox TV show, according to the article “Firefly Reborn as Online Universe” in Wired.
I would be surprised if the upward trend in users of virtual worlds does not increase dramatically over the next 3-7 years. Their capabilities are just being tested and the saturation point is nowhere in sight. As this happens, many facets of our society will get pulled in as they did for the internet. The media giants, like with Firefly, will see the value in maintaining branded virtual experiences as extensions or replacements for their content. It’s been widely reported that enterprises are beginning to explore how to use virtual worlds for collaboration, simulation, e-learning, and recruiting as well. It is a fascinating time to be watching the evolution of this industry.