Wednesday, December 29, 2010

Waterloo, The High Tech Metropolis


Yes, it is holiday time and I am spending my days on things other than writing blog posts. But I have played a little with this mash-up, which allowed me to show the concentration of high-tech power in the relatively small town of Waterloo, Ontario. It shows the 25 RIM buildings and some of the other high-tech companies in town, including the OpenText building - actually, soon to be two buildings:

This mash-up is open, which means that you can add to it or correct it. Feel free to do so.
Cheers!

Tuesday, December 21, 2010

Christmas Shopping in the Digital World

Benny Landa, founder of Indigo, wrote back in 1993 that Everything That Can Be Digital, Will Be Digital. Mr. Landa was right. Today music, movies, software, books, and increasingly even magazines are available in digital form. In fact, the physical versions of these goods are pretty much disappearing - some faster than others, but the trend is clear.

This trend is very exciting. The manufacturing and distribution of digital goods is significantly less expensive and more environmentally friendly. As a result, we have access to a far greater selection of content at usually lower prices and with a far greater degree of convenience.

However, digital goods can become an issue during the Christmas shopping season. Most of the goods mentioned above make for popular Christmas presents – books, DVDs, and CDs are among the most commonly gifted items. If you want to go digital, however, gift giving becomes much more complicated.

First, there is the problem of devices. You want to give a digital movie? Well, does the person have an Apple TV or a similar device that allows playing movies on a TV set? Nobody wants to watch their movies on a computer screen. If you want to gift an e-book, you'd better make sure the person has an e-book reader. Oh, which one? The Kindle, the Nook, or an iPad? If you want to gift music, you must be confident that the person has an iPod or another MP3 player. And software? Does the person have a computer? Is it a PC or a Mac? Does it run the latest OS?

Then, there is the problem of formats. For movies, there is QuickTime, AVI (Audio Video Interleave), Windows Media, MPEG (Moving Pictures Expert Group), or RealVideo. As for e-books, there are about 20 different formats available today. If you think that at least music is all standardized on MP3, dream on! Some people, including yours truly, are strongly opposed to the heavy MP3 compression, which leads to fidelity loss. Instead, Apple Lossless and FLAC (Free Lossless Audio Codec) provide a hi-fi alternative to the two dozen other common formats such as MP3, AAC, WAV, and RAM.

And if this wasn't complicated enough, there is the question of memberships and consumption preferences. I'd like to give an audio book via Audible.com, which I am personally addicted to, as a Christmas present. But if the person does not use Audible, or if he or she doesn't like listening to audio books, the gift will be a flop. Similarly, I thought of gifting a song book (sheet music) via MusicNotes for iPad. Besides the question of whether the person uses an iPad, the gift would be pointless if she's not into MusicNotes.

In the old, physical world, the only concern was whether or not I could manage to find a gift that the recipient would like. Even if they didn't like it, re-gifting or returns were pretty easy. In the new, digital world, gift giving has become much more difficult. Well, I ended up still giving some paper books as presents this season, even though it is against my Paper-Free World convictions. Digital gifts simply still have too many inherent complications. What we need is more format and device standardization as well as the ability to re-gift, try before buying, and return digital content easily.

Merry Christmas!

Friday, December 17, 2010

Testing Samsung Galaxy

Samsung Galaxy Tab running Android OS
I had the opportunity to test the new Samsung Galaxy Tab over the weekend and so I owe the world a little review. The Galaxy uses the Google Android operating system and this was my first interaction with Android. The Android OS is all right, with all the key capabilities pretty much on par with Apple's iOS. I was able to do everything related to the OS (setup etc.) fairly quickly, although I always had the feeling that I was thinking like a programmer when using it. What was missing was iTunes with all my music and movies and the ease with which Apple allows me to download that content to my iPad and iPhone.

Note the dark space around the app
I installed a few applications from the Android Market, which was pretty straightforward, and I was easily able to find most of the applications I use daily on my iPad. Some of the apps made poor use of the screen size, leaving about 40% of it dark, which is a shame. I suspect those apps were written for an Android phone with a much smaller screen, and I wish there were a 2x button like on the iPad. If there is one, I didn't find it.

The form factor and size of the Galaxy is what it's all about. With its 7-inch screen, the Galaxy sits size-wise between the iPad and the iPhone. I immediately fell in love with the form factor and the ability to hold the device in one hand while typing with the other. That's something I can't do on the iPad, which is too big and heavy for that. Weight is, however, still a factor for the Galaxy. When trying to read in bed, my arm was getting tired quickly and I ended up holding it with both hands, just like the iPad. And so while I love the form factor, the weight and thickness have to come down to the Kindle level before it really works. We'll see who gets there first.

iPad, Galaxy, and iPhone next to each other
The big difference with Android is support for Flash, which makes the Web experience better for the many sites that use Flash. The screen rendering was different than on the Apple devices – Apple adjusts the size for the small screen, effectively rendering a miniature view of the entire page. The Galaxy didn't change the size, showing instead a snapshot of the content at its original size. That might eventually be better: on an iPhone you usually end up tapping several times to increase the font size, so you end up looking at a similar snapshot anyway. When first landing on a home page, however, it looks kind of messy.

The browser on the Galaxy is bad, just like the browser on my iPad. Seriously, Apple, Google, I want my Firefox!

I didn't test many other features, as my time was limited. I didn't test the phone and video capabilities since I don't care much for phones anymore – the last thing I want to do on my smartphone is make phone calls. I'm sure I missed many other cool features, but I was primarily trying to compare the Galaxy with the iPad that I do use every day. I even did a little “scientific” performance test which you can see in this video:



Obviously, the test results show that there is no measurable difference between the two. The Galaxy took longer to render the Flash content, which the iPad didn't even bother with. When I tried the same test on a site without Flash, the rendering performance was identical.

All in all, Apple does have a worthy competitor in the Galaxy. The lack of iTunes and its content is a problem in the consumer space, but the form factor might be a winner in the enterprise or for professionals on the go. We'll see how RIM does with the PlayBook early next year – it uses the same form factor but a different operating system. By the way, my Waterloo neighbors, I'd be happy to test a PlayBook for you.

Tuesday, December 14, 2010

Will Content Management Stop WikiLeaks?

WikiLeaks created quite the buzz in the last few weeks, showing us all that in the world we live in, information is not only a valuable commodity but also a huge source of liability. We can debate whether or not we condone what WikiLeaks did, but the bottom line remains that there will always be people out there who will try to misuse our information.

Image: Borrowed from WikiLeaks
As expected, the IT industry, and particularly the enterprise content management (ECM) vendors, have been quick to point out that WikiLeaks is yet another Y2K or Sarbanes-Oxley event that will generate a good media scare and – if we are lucky – even a legislative mandate that will stimulate some software purchases. ECM vendors, in particular, have a strong stake here, as managing your content properly is the first prerequisite to preventing it from appearing on WikiLeaks. ECM has a long history with compliance and information governance, which deal with some of the issues at the heart of the WikiLeaks problem. However, I wouldn't go as far as to claim that if you deploy ECM, you will prevent your own WikiLeaks.

The WikiLeaks problem is not a traditional content management problem. For years, the ECM vendors have been focused mostly on problems related to content inside the enterprise – driven by the need for control and the fear of liability. Records management is all about keeping the authentic version of a content asset for a prescribed period of time, with an attitude of 'shred as much and as soon as you can'. Information governance adds a framework that addresses additional concerns such as security (well, mostly access control, really), accountability, and efficiency, but it too is focused on the content inside the company. And eDiscovery, the hot topic of late, is primarily concerned with the liability of content inside the company that might be discovered and subpoenaed as evidence.

The WikiLeaks problem is an issue of content that was meant to stay “inside the company” but got out, which was never the intent. This is a problem of security as much as content governance, and it is actually really hard to solve. We don't know if the State Department used any content management system for the content leaked on WikiLeaks, but even if it did, that system alone would hardly have solved some of the key issues:

- People security
While it is possible that the government repositories were hacked, it is more probable that it was a people problem. It is very likely that authorized and trusted people in the government leaked the content, either deliberately or inadvertently. I am not going to speculate about Bradley Manning here, but even with all the screening and background checks that the government (hopefully) does, there is still plenty of opportunity for people with legitimate access to mishandle information. Solving this problem is hard if not impossible, since someone has to keep the master key and that person could be compromised.

- Leak prevention
Most content management repositories are pretty secure. Solid authentication, granular access control, and auditing capabilities are the basics, and many vendors go far beyond that with capabilities such as repository encryption and mandatory access control. The problem is that most of the content in an organization is on desktops, in email, on mobile devices – everywhere but in the secure repository. Rights management (or DRM, IRM, ERM) has offered a decent solution to this problem for years, but since it gets in the way of usability, adoption has been poor.
Content tethering is a promising new approach provided by some ECM systems, but even here the adoption is in its infancy. The only reliable approach to solving this problem is the elimination of all leak points - no laptops, no flash drives, no DVD burners, no printers, and no outbound email or Internet traffic. Obviously, this is practical only in organizations that are more concerned with security than productivity (e.g. military, intelligence, etc.). The rest of the world is looking at rights management, content tethering, and data loss prevention (DLP) technologies for at least some help.

- Leak detection
Eventually, a security-conscious organization must assume that content will leak out. Therefore, it is important to discover leaks quickly to plan a response, and to track the leaks back to their source to plug them. Systems that automatically monitor and analyze target sites can help, although such systems still struggle with performance (data volume) and reliability of detection today. There are many solutions for monitoring Twitter, Facebook, and other social media sites; however, most of them are focused on marketing drivers such as sentiment detection.
Some solutions do approach the problem from a security and liability angle, though. Tracing leaks back to their source is also possible today. Watermarks have been used for years on printed documents, and similar technology exists in content management solutions today; together with strong auditing, it helps to trace security leaks.
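To make the tracing idea concrete, here is a toy sketch of per-recipient watermarking – my own illustration, not how any particular product implements it. The mark here is an overt footer token; real systems hide the mark (in whitespace, kerning, image noise, or metadata) and tie it to an audit trail:

```python
import uuid

issued = {}  # watermark token -> recipient

def issue_copy(document: str, recipient: str) -> str:
    """Hand out a copy of a document carrying a unique, per-recipient mark."""
    token = uuid.uuid4().hex[:12]
    issued[token] = recipient
    # Toy mark: an overt footer line. Real products hide the mark
    # and record it in an audit trail instead.
    return document + "\n[doc-id:" + token + "]"

def trace_leak(leaked_copy: str):
    """Given a copy found in the wild, identify whom it was issued to."""
    for token, recipient in issued.items():
        if "[doc-id:" + token + "]" in leaked_copy:
            return recipient
    return None

copy_a = issue_copy("Quarterly results draft...", "analyst_a")
print(trace_leak(copy_a))  # -> analyst_a
```

The design point is simply that every copy handed out is unique, so a leaked copy identifies its recipient.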

Yes, the WikiLeaks problem has been a wake-up call for many organizations concerned with the security of their data. And as you can see, the problem is difficult if not impossible to solve completely. But the combination of content management and advanced security is a great step towards addressing the WikiLeaks challenge. In the end, there will arguably never be a perfect security solution, but security is a game of attrition. Every security measure put in place makes it more difficult and more expensive for the bad guys to breach your defenses. If you don't get complacent, if you keep raising the security bar, you are lowering the chances of a data leak – or a WikiLeak.

Friday, December 10, 2010

Why Chase the Consumers, RIM?

BlackBerry has been declared the loser in the raging smartphone battle. Apparently, Research In Motion (RIM) has been losing market share to Apple and Google even though it continues to grow. I am impressed that our neighbors in Waterloo keep on fighting, as witnessed by the very positive preliminary reviews of the PlayBook tablet. But one thing continues to puzzle me: Why is RIM so determined to fight for the consumer market? Virtually every marketing message the company has sent this year is about the consumer. They even had the rapper will.i.am as a keynote speaker at the Wireless Enterprise Symposium (WES) this year. No joke - I was there.


BlackBerry has had a virtual monopoly on the enterprise smartphone market. Having cracked the code on the first real killer application – e-mail – RIM has dominated the enterprise. And it still does, to a large extent. The BlackBerry provides a significantly more secure and enterprise-ready platform. Why else would the US military and even the President use the BlackBerry exclusively? It offers far better bandwidth management (that means lower communication costs, dear CFOs), and the BlackBerry Enterprise Server, already deployed in virtually every enterprise, is RIM's greatest asset, allowing better management of the devices. Apple and Google don't have anything like that.

But instead of focusing on the enterprise, RIM has been fighting Apple on its home field – the consumer space. And the results are not convincing. Sure, I keep hearing about teenagers loving the free BlackBerry Messenger (BBM), but the tough reality is that I keep seeing more and more iPhones in the enterprise. And that's a big mistake on RIM's part. They should not have allowed Apple and Google to enter the enterprise. They should have kept hammering home the message of security, bandwidth cost, and manageability, which would have kept the intruders at bay. Why, I ask?

RIM might possibly believe that people will want to use only one device and that their consumer preferences will be the decisive factor in selecting which one. I don’t agree with that assumption. I believe that professionals will use the best tool to do their jobs and accept its consumer capabilities – albeit inferior - or that they will simply carry multiple devices.

If I were Mike Lazaridis for a day, I would refocus my development and my marketing on the enterprise. Make the BlackBerry the best device for PowerPoint; add decent Office applications; provide Bluetooth sync for desktop folders; turn it into a single sign-on token - whatever it takes in the enterprise. And work with enterprise software vendors such as Salesforce, SAP, Oracle, and [yes, of course] OpenText to make sure their software for BlackBerry is the best and first on the market.

Yes, BlackBerry needs to do a better job at playing music and movies. It needs to provide a far better browser and it needs to aggressively recruit application developers. But fighting Apple on its own turf is like David fighting Goliath without the sling. Differentiation is the name of the game and BlackBerry still has differentiators today. But time is running out and I don’t see BlackBerry displacing any iPhones anytime soon.

Image: Will.I.am as a keynote speaker at WES 2010 (source: Flickr, Official Blackberry Images)

Monday, December 6, 2010

To SaaS or Not To SaaS

A couple of years ago, Nicholas Carr published the book The Big Switch, in which he proclaimed that Software as a Service (SaaS) is taking over IT. A few years on, we are still waiting for the next SaaS killer application after Salesforce.com. Sure, there is Gmail, which is supposedly a SaaS replacement for Exchange. But Gmail is basically the same as Yahoo Mail, which we have been using since the late 90s – it would be presumptuous to declare that SaaS-based email has made the ‘big switch'.

Don't get me wrong, I am a SaaS optimist. I do believe that you can start a company today in which the entire IT infrastructure consists of a wireless router. But for most existing organizations, the big question is which applications are good candidates for SaaS and which are not. The following factors should help:

1. Security
One of the most common questions about SaaS applications remains: “Can we trust a SaaS provider with our confidential data?” But what if your data could be more secure in a SaaS application than inside your enterprise? Many organizations use ADP as a SaaS application for their payroll. I'd trust ADP's security more than the security of most employers. Similarly, when subjected to a denial-of-service attack, Amazon may be better equipped to protect you than your own firewall. I believe that security is only a factor in truly high-security environments such as the military or national intelligence.
2. Mission Critical
Can we assume that SaaS candidates are only the non-mission-critical applications? After all, Salesforce.com can hardly be labeled as mission critical. Sure, a Salesforce failure in the last week of a quarter sounds like a bad thing, but most sales pipeline work is not done in the last week, and even then the sales reps find a way to get the contract signed using e-mail or fax. Not many organizations that use Salesforce actually process orders in a SaaS application.
3. Legacy
Are SaaS applications more likely to succeed in an area without a lot of history? Replacing incumbent applications, particularly those with a significant data legacy, is difficult. Sales force automation existed before Salesforce.com, but it had very low penetration and most Salesforce.com deployments replace spreadsheets. If you have terabytes of transaction data in your ERP, you may not want to move it all out into the cloud. And if you don't, you will end up keeping your on-premise system and not achieving your SaaS objectives.
4. Customization
Are SaaS applications limited to solutions that need no customization? After all, the most widely used SaaS-based application is e-mail, which is a perfect example of no customization. Salesforce.com is combating this challenge with APIs and an array of add-on modules, but it can hardly be expected that every SaaS solution could do the same. Besides, highly customized Salesforce deployments start resembling the complexity of on-premise applications. Also, customization needs are correlated with application maturity and innovation cycles. Less mature applications should expect a high pace of innovation with frequent upgrades.
5. Silos
If you entrust your data to a cloud, you have to accept that it will not be managed consistently with the data in another cloud. When applications require a common data store, that can only work if they actually come from the same vendor. I don't think that all applications need to share a common data model and policies, but some surely do. If they do, SaaS might not be the right approach.
6. Data Volume
The math is pretty simple – the total cost of ownership of a SaaS application is determined by the number of months of service and the megabytes of data stored (see the sketch after this list). If your pricing model includes charges for the volume of data stored, you need to consider that a factor for SaaS suitability. Annual performance reviews in SuccessFactors hardly consume a ton of data. An email archive, however, does, and if it is charged by data capacity, you might be looking at a steep bill down the road.
7. Utilization
One of the greatest benefits of running a SaaS application is the ability to leverage a vast infrastructure that provides sufficient provisioning for any peak in utilization. That is assuming that your SaaS provider operates an infrastructure shared across multiple customers and that the customers don't all experience peaks at the same time (e.g. quarter end or tax time). Scaling effortlessly to utilization peaks is important to certain applications, and such applications are good candidates for SaaS.
8. Performance
Performance is only a factor in some cases. The performance of SaaS applications is likely to be better than the performance of on-premise web-based applications – the SaaS apps enjoy greater resources and better optimization. Only applications in which users are paid by the volume of processed transactions, or where they manipulate vast data volumes, need to consider performance a factor.
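As a back-of-the-envelope illustration of the data volume factor (number 6 above), here is a small sketch with entirely made-up prices and growth rates – not any vendor's real pricing:

```python
def storage_cost(months: int, initial_gb: float, growth_gb_per_month: float,
                 price_per_gb_month: float) -> float:
    """Cumulative bill when the provider charges for the data stored each month."""
    total, stored = 0.0, initial_gb
    for _ in range(months):
        total += stored * price_per_gb_month
        stored += growth_gb_per_month
    return total

# Illustrative numbers only:
print(storage_cost(36, 500, 50, 0.30))  # growing email archive: $14,850 over 3 years
print(storage_cost(36, 5, 0.1, 0.30))   # small HR-review data set: ~$73 over 3 years
```

The same subscription model that is negligible for a small, slow-growing data set compounds into a steep bill for a fast-growing archive.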

The table below features examples of different content applications, covering a broad spectrum of ECM. I have done a simple and very high-level assessment of the factors above. This assessment has to be taken with some care, as there is a lot of room for interpretation. But the table suggests interesting results:


As you can see, there are very few slam dunks here. Arguably, new product introductions and idea management are well suited for SaaS deployments. A marketing web site and litigation discovery (used for the review, analysis, and production stages of the litigation process) are also likely candidates. Running terrorist threat assessments or insurance claims processing, on the other hand, seems better done on premises. The bottom line, however, is that the answer is almost always ‘it depends'. But it depends on the factors laid out above. Those factors and their weighting should be examined for every application before making the call whether to SaaS or not to SaaS.

Saturday, November 27, 2010

Did Foreigners Help Obama Win?

I wrote about Content Without Borders a few months ago, wondering why media companies continue to create artificial borders on the Internet. Right now, I am sitting at the Zurich airport in Switzerland and Netflix is telling me that I cannot watch any movies, even though I pay a monthly subscription. And so I have to write another blog post. This time, I will touch upon politics, although the topic is really social media.

Much has been written about the role of social media in the 2008 US presidential election. The Obama camp skillfully used Facebook and other social media tools to amass a huge number of supporters, donors, and volunteers. Their eventual victory has been at least partly attributed to this social media campaign, and since then, marketers have started to take social media very seriously.

What has not received as much attention is the role of foreigners in the 2008 election campaign. Indeed, people from around the world widely supported Mr. Obama as a presidential candidate, which became clearly obvious when he received the Nobel Peace Prize within a year of being elected. In 2008, however, foreign nationals didn't have to just cheer from the sidelines. Thanks to social media, they were able to participate. Sure, foreigners couldn't vote, but they could and did endorse and support their preferred candidate – without any doubt influencing the eventual choice of at least some of the registered voters in the US.

Just think about the implications of this. Technically, these were groups of foreign nationals interfering with the democratic election process of a sovereign nation. And that nation wasn't just some 3rd-world dictatorship; it was the very US of A. Their support in the form of public endorsements and sometimes donations was something that was never possible before – at least not overtly.

Sure, foreign governments have always supported change in other countries when it suited their interests. Just think of the French support of the American Revolution or the German support of the October Revolution in Russia. But all such support was provided either covertly or as part of an open conflict. This time, however, there was neither a conflict nor a clandestine government intervention. In fact, the governments had nothing to do with it – the people did it on their own.

This shows that the Internet really is without borders – and it should be. The people of the world are clearly impacted by the domestic and foreign policies of the United States, and they should have a right to voice their support – even if they don't have a right to vote. The Internet natives are changing the rules of life as we know it, and politics will not be an exception. Who knows - one day we may need to rethink the definition of voters and of citizenship.

However, I will leave that up to the politicians to figure out. I’m just excited to have the entire world at my fingertips. That is except for my Netflix movies…

Sunday, November 21, 2010

OpenText Is Everywhere - Even On a PlayBook

OpenText announced a new release of Everywhere last week at Content World. And since there was a lot of buzz about a lot of things at Content World, I want to add a few thoughts and comments about this mobility announcement. No, I won't talk about the usual features and benefits of mobile applications. Instead, I took a look under the hood of the product.

This is the second product release of OpenText Everywhere. Its primary focus is on expanding the capabilities that expose the functionality of the underlying ECM Suite, with emphasis on process automation, social interactions, and content access. It also adds support for Apple's iPhone and iPad devices, alongside the previously provided support for RIM's BlackBerry. And in his keynote, OpenText's CTO Eugene Roman demonstrated an early version of OpenText Everywhere running on RIM's PlayBook. That was apparently the first time the PlayBook had been shown outside of keynotes delivered by a RIM executive. Yes, OpenText and RIM are both headquartered in Waterloo, so there is an obvious connection there. (Note: support for the PlayBook has not been formally announced yet but rather alluded to in a fairly public forum.)


The thing that makes Everywhere distinctive is its design. Instead of trying to be a mobile rendition of an existing desktop application, it has been designed natively as a mobile enterprise application. Here are some examples of what makes it enterprise-ready:

- Bandwidth Management: Managing bandwidth utilization is critical, and not just because of performance. Bandwidth on a mobile device costs money. And when you are on international roaming, it costs a lot of money. To avoid spiraling communication costs, a mobile application has to be designed to be much less “chatty” – something that is often not a big concern for desktop applications. When accessing a 20 MB PowerPoint presentation, you don't really want the file downloaded to your iPhone – the roaming charges could be excessive. OpenText Everywhere solves this by converting the presentation on the server side into a set of images rendered on the fly. OpenText has owned this rendition technology since the 2008 acquisition of Spicer, and it is now embedded into the OpenText Everywhere application.
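I don't know the internals of the Spicer-based rendition engine, but the server-side principle is easy to sketch with open-source stand-ins. A minimal example, assuming LibreOffice and poppler-utils are installed on the server:

```python
import subprocess
from pathlib import Path

def render_slides(pptx: str, out_dir: str = "pages") -> list:
    """Turn a big presentation into small per-slide PNGs on the server,
    so the device downloads kilobytes of images instead of the 20 MB file."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    # Step 1: presentation -> PDF, using LibreOffice in headless mode
    subprocess.run(["libreoffice", "--headless", "--convert-to", "pdf",
                    "--outdir", str(out), pptx], check=True)
    pdf = out / (Path(pptx).stem + ".pdf")
    # Step 2: PDF -> one PNG per slide, using pdftoppm from poppler-utils
    subprocess.run(["pdftoppm", "-png", "-r", "96", str(pdf), str(out / "slide")],
                   check=True)
    return sorted(out.glob("slide-*.png"))
```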

- Connectivity: Many of my iPad applications are useless if I don't have a connection. All I get is an error message, which might be OK for a free consumer application, but an enterprise application needs to be productive even when I am on a plane with no connectivity at all. OpenText Everywhere makes it possible to work offline, queuing up my activities for when I reconnect. In addition, Everywhere can be configured to work over Wi-Fi or 3G to better manage my connectivity.
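The queue-and-replay pattern behind this kind of offline support can be sketched in a few lines – my own simplification, not OpenText's actual implementation:

```python
import json
from pathlib import Path

OUTBOX = Path("outbox.json")

def enqueue(action: dict) -> None:
    """Persist an action locally so nothing is lost while offline."""
    pending = json.loads(OUTBOX.read_text()) if OUTBOX.exists() else []
    pending.append(action)
    OUTBOX.write_text(json.dumps(pending))

def flush(send) -> None:
    """On reconnect, replay queued actions in order; keep whatever still fails."""
    pending = json.loads(OUTBOX.read_text()) if OUTBOX.exists() else []
    still_pending = []
    for action in pending:
        try:
            send(action)  # e.g. an HTTP POST to the server
        except ConnectionError:
            still_pending.append(action)
    OUTBOX.write_text(json.dumps(still_pending))

enqueue({"type": "approve", "task_id": 42})  # works with no connectivity at all
# later, once the device is back online: flush(post_to_server)
```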

- Security: Security is paramount for enterprise applications and it is a major concern for mobile devices accessing your confidential data. Today, IT has to support a multitude of devices with different security capabilities, some corporate-owned and others employee-owned. OpenText Everywhere leverages the existing security infrastructure of the device for encryption and for the ability to wipe a lost device clean. This is much easier with the BlackBerry than with other devices, but that will be the topic of a future blog post. OpenText Everywhere has also been designed to support existing security policies, such as directory-based authentication, and to leverage the highly secure and compliant infrastructure provided by the ECM Suite.

- Usability: A mobile enterprise application cannot be just a scaled-down version of a desktop application. To gain user acceptance, the application needs to be designed specifically for the mobile device and ideally leverage the functionality of the applications the employees use regularly. That means it must not attempt to squeeze the same amount of information and the same number of buttons onto each screen. To do this, OpenText Everywhere has been designed with the device in mind, taking advantage of the unique facilities of each device (e.g. touch screen vs. keyboard) and of push notifications – concepts that don't really exist on the desktop. As a result, the Everywhere screens for search results and the ‘my assignments' list, for example, look and behave differently than on the desktop – they are optimized for the task at hand and for the device.

I see more and more of my co-workers using mobile devices as their default tool. While mobile enterprise applications are still in their infancy, I expect that the mobile interface will be the default user interface in the not-too-distant future.

Tuesday, November 16, 2010

Presentation on The Problems Waiting To Be Solved

I wrote an article back in April about various problems that Enterprise Content Management needs to solve. I always thought that the topic would make for a good presentation and I had the opportunity last week to deliver it at OpenText Content World which is our annual user conference. I have received some very encouraging feedback and so I have recorded the presentation to share it with you.



This is not a product pitch and I am not talking about any OpenText offerings here. We at OpenText are aware of these problems and we may be well on the way to solving some, but others are still waiting to gain the awareness needed to become a priority – for anyone. Let me know if you agree.

Friday, November 12, 2010

Yes, They Could Be Models

I never thought I would be writing this, as I usually stick to technology-related topics. But as the OpenText annual customer conference Content World ends today, I took away an interesting experience. Our marketing organization pulled off an awesome show, which included a complete re-branding of the visual imagery at the show. With the start of the show, all our corporate images changed to a new, fresh, and contemporary look. And we changed everything consistently – from the show signage and presentation templates to the web site. Pretty impressive!

As part of the conference, we ran a series of customer focus groups on our branding. I listened in on several of the sessions and something amusing occurred. When asked about the new images we use, several customers commented that the people in the pictures don't look real – they look too good, clearly models. "They are not the kind of OpenText people you could actually run into at the conference."

This is amusing, because many of the images feature actual OpenText employees. Yes, they are all good-looking and smiling - just like models - but they are my co-workers from Waterloo. Don't believe it? Well, just check out this picture of Robyn in front of “her picture” – straight from the show floor. Robyn works in our Corporate Marketing group and she has graciously allowed me to use this picture.


And here is another piece of evidence - this time a picture of Husam who also works in our Marketing group in Waterloo (taken at the office):


So, what's the take-away? Well, customers are always right, but focus groups might be wrong. They did provide us with a lot of valuable feedback that we will act upon, but yes, while the images look like they feature models, most of them are OpenText employees.

By the way, why is there no picture of me, OpenText Marketing?

Tuesday, November 9, 2010

The Perils of Social Media

There was a lot of buzz last week about the appearance of Firesheep, a simple tool allowing anyone to hijack access to various web sites on public networks. The sites that turned out to be particularly vulnerable are social media sites such as Facebook, Twitter, or LinkedIn. While many people are sounding the alarm about this tool that makes identity theft child's play, Eric Butler, the Firesheep creator, has defended his creation as a way to alert the world to the perils of social media.

Eric is right. Firesheep didn't introduce a new security breach. It merely exposes an issue that has been around for years. The social media sites have to take responsibility for their users' security and make sure the traffic is encrypted so that hijacking is not possible. After all, my bank's site uses SSL for the entire session – why can't Facebook do it?
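For what it's worth, session-wide SSL is not hard. Here is a minimal sketch using Flask as a stand-in (any web framework can do the same): redirect every plain-HTTP request, and mark the session cookie Secure so the browser never sends it in the clear – which is exactly what Firesheep sniffs.

```python
from flask import Flask, redirect, request

app = Flask(__name__)
app.config["SESSION_COOKIE_SECURE"] = True    # cookie is only ever sent over HTTPS
app.config["SESSION_COOKIE_HTTPONLY"] = True  # and is not readable by page scripts

@app.before_request
def force_https():
    # Bounce every plain-HTTP request to HTTPS before it does anything else.
    if not request.is_secure:
        return redirect(request.url.replace("http://", "https://", 1), code=301)
```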


The ultimate issue, however, is the users themselves. Any information posted on a social media site such as Facebook or Twitter has to be considered public. Facebook has 500,000,000 users – that sounds pretty public to me. Once you have a couple hundred friends, you cannot consider anything you share with them private or confidential. And you need to be really careful about what you post on Facebook.

Social engineering is a simple hacking technique that uses information posted on social media to gain unlawful access to your private data. The idea is very simple. Your bank and other highly secure sites use your personal information to facilitate automatic password retrieval: mother’s maiden name, name of your pet, or name of the high school you went to. Knowing such tidbits of your personal life is often sufficient to retrieve your password and gain access to your private data. And if you share such information on Facebook, you are making it too easy for the social engineers.

The solution is simple. Don't ever share any information that could be used to retrieve your password or to compromise your security or your privacy. And don't consider your Facebook friends a trustworthy group of responsible individuals. There are many articles, such as this one, that help you decide what to share and what not to share. Be particularly careful about any personal information that identifies you unambiguously, such as your e-mail address, home address, or phone number. Such information is a bonanza for hackers. And finally, beware of what others post about you – they may unwittingly disclose compromising information about you.

All this precaution may still not be enough to prevent your Facebook account from being hacked. The result of such misfortune could be embarrassment or impropriety, possibly very serious. But being careful with your personal data will protect you from possible financial ruin or identity theft. And that’s a pretty good reason to be careful.

And try out Firesheep on your local Starbucks Wi-Fi network. Once you have, you will think differently about your online privacy.

Image: Scene from the movie The Lives Of Others with the late Ulrich Mühe as an East German Stasi captain spying on his target. And you might also enjoy this video:

Thursday, November 4, 2010

Open Text and Oracle - The Secret of Ecosystem Strategy

No, I am not going to repeat what’s in the press release. Instead, I would like to comment on what’s behind this story in terms of Open Text’s strategy. Open Text just announced a new level of partnership with Oracle. The deal allows Open Text to license Oracle technology in order to build content solutions for the Oracle ecosystem. The goal for Open Text is to expand its existing set of offerings for Oracle customers. Open Text has similar partnerships in place with SAP and Microsoft.

You may wonder: what is the secret behind Open Text's success in partnering with the largest enterprise software vendors? In short, it is the fact that Open Text does not have a stack agenda. Sure, Open Text's flagship product line is the ECM Suite 2010, but a suite is not a stack. For years, ECM was based on the idea of a comprehensive platform combining everything from document management, records management, and BPM to WCM, DAM, collaboration, and social media. Thus, the vendors built such capabilities either organically or through acquisitions. And all this time, their mantra was an integrated architecture in which all the functionality was available on a common technology stack. Whether or not anyone deployed the software this way was rarely questioned.

But this is where Open Text plotted a different course. While integration is a fundamental characteristic of the Open Text ECM Suite, the offerings don't necessarily run on a common stack just for the sake of architecture. Instead, the Suite has been designed with customer needs in mind, allowing for fast deployment of typical technology combinations and for quick integration of acquired technologies. And this flexibility, free of a traditional stack agenda, makes Open Text particularly suitable for partnering with other vendors who do have a stack agenda of their own. Being a Switzerland is a fundamental part of Open Text's strategy.


To be successful in the environment of an enterprise vendor such as Microsoft, Oracle, or SAP, it is imperative to embrace their stack. These vendors have established quite a significant footprint among their customers, and the customers want to leverage their investment as much as possible. And the vendors' sales forces would vehemently fight any attempt to disrupt this stack. The Open Text ECM Suite has the flexibility to replace its own technology components with those of another vendor, allowing it to embrace a stack technology in a way that preserves customer investments and does not alienate that vendor's sales force.

Most customers deploy ECM solutions to solve their problems rather than to build vendor-designed stacks. So chances are high that they already have some components of a suite from another vendor. That's particularly true when adding value to SAP, Microsoft, or Oracle deployments - ECM solutions in such environments have to embrace the stack of these vendors and often deal with the fact that these stacks include certain content technologies. Specifically, the ECM solutions for SAP need to embrace the NetWeaver architecture, the Oracle solutions must be based on Fusion Middleware and the Oracle DB, and Microsoft solutions need to leverage components of the Microsoft stack such as Workflow (WFW) or SharePoint. This means that rather than pushing its own stack, Open Text has to be able to provide value in a flexible manner, sometimes willing to replace its own technology components with those from the target stack.

Following this strategy, the just-announced deal with Oracle allows Open Text to embrace a greater part of the Oracle stack. As a result, Open Text will be able to provide a broader set of content solutions for the Oracle ecosystem by leveraging Oracle technologies while adding its own applications to address specific business problems. For more, check out the recent press release.

Picture: Minutes after the Open Text session at Oracle OpenWorld a few weeks ago (from left to right):
- Andy MacMillan, Oracle Vice President of Product Management (for ECM)
- Rich Buchheim, Vice President of Open Text's Oracle Solutions Group
- John Shackleton, CEO of Open Text

Monday, November 1, 2010

About Italy and the Correlation between SMS and Twitter

I have had some interesting discussions about the use of social media in Italy. Of course, Italians are very social people – they love to comunicare and exchange personal information. This is not just a stereotype. Italians like to hang out with each other, and if they can't do that in person, social media comes in handy.

According to a March 2010 report by Nielsen, Italians are the world champions in time spent on social media - on average, every Italian spends six and a half hours per month on it. Facebook has effectively become the virtual piazza where Italians go to meet each other and sip a virtual amaretto.

However, when it comes to Twitter, Italy punches in a much lower weight class. According to research by Sysomos, less than 0.5% of tweets are contributed by Italians. Twitter is clearly not particularly popular in Italy, which might come as a surprise considering the strength of Facebook.

Originally I thought this was because of the forced brevity of Twitter messages. After all, you can't speak with your hands on Twitter and, really, how much emozione can you fit into 140 characters? But that argument fails given the massive popularity of SMS messaging, with its 160-character limit, which Italians use heavily. Italians text almost as much as they talk on their mobile phones, which seems to be all the time. (I found a Nielsen report from 2008 that says that only the Russians and Swiss text more than the Italiani.)

So maybe it is the apparent lack of personal communication on Twitter. After all, SMS messages are one-to-one while Twitter posts are one-to-many. Facebook relationships are bilateral while Twitter followings are unilateral. I follow Eric Schmidt but he doesn't follow me (by the way, why not, @ericschmidt?). The public nature of Twitter posts might be more akin to public speaking than to the dinner-table discussions of Facebook. After all, the heaviest users of Twitter are Americans, who love public speaking. The Italians are more into looking good – making a bella figura – than into being popular.


I am speculating, of course, and the real reasons may be very different. But I find it fascinating that there are such differences in communication between countries and I also find it notable that there appears to be no correlation between the popularity of SMS and Twitter.

Wednesday, October 27, 2010

Open Text Acquires StreamServe

Open Text announced the acquisition of the Sweden-based company StreamServe today. And since I happen to be in Europe and it is after hours here, I thought I should comment. StreamServe is a strong player in the space traditionally referred to as Document Output Management (DOM) and, with 5,000 customers, StreamServe is certainly one of the leaders in this space.

Document Output Management allows customers to create personalized documents for their business communications; it is most frequently used in the B2C space at organizations that have to communicate with thousands of customers. Often, this communication occurs on a regular basis, and automating the process makes sense to achieve efficiencies. Sometimes, the communication is also subject to legal or compliance requirements demanding, for example, that all customers be informed at the same time to prevent selective disclosure.

DOM solutions are usually based on a set of business rules that specify what content is to be included for a particular customer or group of customers. That ability is now increasingly attracting the eyes of marketers, who can define very specific and targeted promotions for particular products or services. And as the CMO is emerging as one of the strong buying centers in the enterprise this year, marketing solutions are heating up – just look at the recent acquisitions of Omniture and Day Software by Adobe or the acquisition of Unica by IBM.

The marketing solution based on traditional DOM technology is often referred to as Customer Communication Management. For Open Text, this is a great addition to the already powerful marketing offerings based on web experience management (Vignette), portal, digital asset management, and social media. And while I expect a lot of the StreamServe opportunities to be focused on efficiency, I am excited about the new types of opportunities this software is going to create for CMOs when combined with other solutions.

Open Text of course didn't buy StreamServe just for the technology. The SAP partnership, professional services expertise, and geographic presence in Scandinavia – all these reasons make the acquisition very compelling. You can find more about that in the press release.

Friday, October 22, 2010

Geoffrey Moore, AIIM, and the Future of ECM

The industry organization AIIM issued a press release this week about the work it conducted through a task force of the leading vendors in enterprise content management (ECM). For the task force, AIIM recruited Geoffrey Moore, the renowned business analyst and author of business classics such as Crossing the Chasm and Inside the Tornado. I have been fortunate to be part of this task force, working with Geoffrey, the folks from AIIM, and my peers from all the key players in the content management industry.

The challenge at hand was to formulate a strategy for ECM in light of some of the disruptive changes the industry is going through. Basically, there is a lot of content being created, shared, and even stored today that is considered outside the scope of traditional content management applications. This content is the result of online social interactions between people – just think about all the interactions happening on Facebook or Twitter. We are posting comments, exchanging messages, and sharing pictures, video clips, and other content assets. While none of us think of content management when we do so, we are effectively creating and sharing content – and that is the essence of content management.

Together with the task force, Geoffrey has formulated a concept built around the definition of Systems of Record and Systems of Engagement. The Systems of Record cover some of the traditional use cases for ECM, including compliance, archiving, and records management, while the Systems of Engagement include the more end-user-facing applications based on web experience, media management, and social media. Check out the findings on the AIIM web site.

This is very much in line with Open Text's positioning. We talk about content lifecycle, transactions, and engagement as the key areas of the ECM Suite 2010. In fact, we are well along the path of convergence between the Systems of Record and Systems of Engagement. The Systems of Engagement require the services of the Systems of Record to be enterprise-ready – they need security, compliance, and archiving capabilities.

The real issue at hand is the definition of ECM going forward. There is clearly a need to expand that definition to include additional types of solutions and capabilities that address the “consumerization” of the enterprise and its impact on enterprise content management. I am excited that AIIM, as a non-profit industry organization, is taking on the leadership role in this endeavor. After all, the goal is to grow the pie, because if the pie grows, every slice gets larger.


Images: Geoffrey Moore and the AIIM Task Force.
Courtesy of Mr. John Newton, the Founder of Documentum and Alfresco

Monday, October 18, 2010

The Future of Book Publishing

Two pieces caught my eye recently. First, I read in an article by Julie Bosman in The New York Times that the latest Ken Follett book is more expensive as an e-book than as a hardcover. Then I saw a blog post by Ron Miller that independently comments on the fact that the pricing of e-books is too high and not in line with customer expectations. And since I have been thinking a lot about the price of content recently, I had to comment.


First, I agree that the current pricing of e-books is wrong. They are simply too expensive. E-books are significantly cheaper to “manufacture” than paperbacks, and thus their price shouldn't be the same as hardcovers' – let alone higher. If it is cheaper to wait for the paperback than to download an e-book, people will wait for the paperback like they always did in the past.

Compared to paper books, e-books have no cost of materials or manufacturing, and presumably a much lower cost of distribution. Could we do without the publishers altogether? Can authors publish directly through the new breed of distributors with their reader gadgets, such as Amazon, Barnes & Noble, and Apple? After all, that's what's being done with mobile applications.

Probably not. There are at least two other important functions that publishers fulfill: quality assurance and promotion. Quality assurance is done through editors, who today work for the publishers and whose job it is to ensure the quality of the authors' work. The authors could have their own editors, but those could too easily turn into ghostwriters, so having editors independent of the authors is probably a good idea. That said, the editors could just as well work for the distributors to keep the authors honest.

The second, far more important, job of a publisher is promotion. Starting with talent discovery all the way to blockbuster book tours, the publisher makes sure the right books get noticed. There are about 200,000 books published each year in the US alone, and getting noticed is critical. This function is even more important in the world of e-books, where some of the traditional means of differentiation, such as book size or binding, are not available. We can see the problem with the lack of coordinated promotion in the world of mobile applications – it is extremely difficult for applications to get noticed among the 250,000 apps available on the App Store. But even this role of publishers could be taken over by either the distributors or the authors themselves, using PR agencies and other services.

While they still serve an important function, the publishers need to understand that they compete with other forms of entertainment and that they need to evolve their business models. In the world of digital content, they cannot hold on to the same pricing model as in the paper world. It happened to music and films, and it will happen to books too. The publishers have a role to play as promoters, but they need to adjust their pricing accordingly. If they don't, they will become obsolete, just like software distributors and resellers have become for mobile applications. Or maybe Apple will start selling e-books at $0.99 per chapter…

Sunday, October 17, 2010

Virtual Reality and Real Virtuality

I have written before about modern software and hardware systems that help us practice our skills in new ways in Virtual Reality with Real Pain. As a follow up, I have assembled a small collection of examples that I have stumbled upon since.

The first video is of the painter David Jon Kassan, demonstrating how the popular pastime of sketching on the iPad can be taken to the next level. Or maybe a few levels above that:



The next video is of the Japanese magician Shinya, showing some really interesting tricks that combine on-screen iPad magic with off-screen magic – moving seamlessly from virtual reality to 'real reality'.



The final video shows the band Atomic Tom improvising a real song with a real arrangement on a set of iPhones. This definitely takes iPad jamming to the next level:



As you can see, all these examples have one thing in common: they combine virtual reality with a real experience. They don't replace a skill through automation; rather, they allow us to exercise the skill without having to own what often amounts to a very expensive set of tools or materials.

Tuesday, October 12, 2010

The Fallacy of Twitter

I am a big fan of Twitter and other social media. I even wrote an article on the 8 Business Use Cases for Twitter a while ago. But I feel that there is now a social media bubble that might burst soon. There are some overinflated expectations about the amount of influence Twitter might have.

Thought leadership in 140 characters
The Twitter limit of 140 characters per message can be a challenge. Yes, it keeps us succinct and it makes Twitter interactions very agile, but you won't be able to convey a whole lot of deep thought in a Twitter message. So it's no surprise that so many tweets include a link. Most of our ideas do require a little more space. You need to complement your Twitter presence with additional communication to share your thoughts.

More followers does not mean greater influence
Forget Ashton Kutcher; his 5 million followers are not looking for any thought leadership. They are virtual groupies. But when I see a social media guru with 100,000 followers, I get suspicious about the social aspects of those relationships. With 100,000 followers, there is not much social interaction happening – this person is broadcasting. There is nothing wrong with being able to communicate to 100,000 people, but that kind of number usually suggests more breadth than depth of thought. If you are after a LOT of breadth, Ashton is your man!

Ignore the manipulators
Twitter is based on unilateral connections. That means you can be followed by people you don't know and don't have to follow back, and vice versa. I follow Bill Gates and Eric Schmidt and they are not following me back. And that's OK. But some people expect reciprocity. They follow you and expect you to follow them. And if you don't – darn you! – they will “unfollow” you within 48 hours. In fact, there are automated services that facilitate this kind of follower boosting. Don't get fooled by it – these thousands of followers are worthless.

If you follow too many people, you are not following anyone
You shouldn't expect to be able to follow everything the people you follow tweet about. But to stand any realistic chance of keeping it social, you have to match the number of people you follow to the volume of updates they post. Just do the math. If you follow 3,000 people and each tweets 3 times daily, that's 9,000 tweets per day that you would need to attempt to keep up with. That's hardly realistic even if you spend your entire day on Twitter. Try to stay engaged and focused on the people you really care about.
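The same math in a few lines of Python, with an assumed reading speed thrown in:

```python
following = 3000        # accounts you follow
tweets_per_day = 3      # average posts per account per day
seconds_per_tweet = 5   # assumed (generous) reading speed

daily_tweets = following * tweets_per_day
hours_needed = daily_tweets * seconds_per_tweet / 3600
print(daily_tweets, hours_needed)  # 9000 tweets, 12.5 hours - every single day
```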

Twitter is an exciting tool that can really help you get your message out, position yourself as a thought leader, and be an influencer. It allows you to do things that were not possible before. But avoid the wrong expectations, or you will end up either disappointed or discredited.

Thursday, October 7, 2010

The Price of Content

I've been pondering the value of content today. Actually, not the value but rather the price. The eternal question, of course, remains whether or not content should be free. With all content becoming digital and the cost of goods and distribution converging toward zero, it is a tempting proposition. The consumers want it and the authors and publishers fight it to the death. This tug of war has been going on for years.

Today, a lot of content is already free, while some content LOOKS free when in reality it is not. Based on its price, there are three main types of content:

1. Free content – the quality of this free content can vary from highly professional to poor. This content is created for different reasons:
  • a) Content created by amateurs for the pleasure of creating it. If you share your family pictures on Flickr or if you write a blog about your bird-watching hobby, you are creating content for pleasure.
  • b) Content created by professionals for motives other than money – prestige, recognition, the need to share, etc. This is the category into which most blogs fall, written by professionals about topics related to their work – like the blog you are reading right now. Content created under a Creative Commons license also falls into this category.
  • c) Content created by professionals to directly promote other products or services. This includes any marketing web site, catalog, or advertisement, which may be some of the most costly content assets of all, considering the high production and placement costs.
  • d) Content originally created by professionals as premium content, but with copyrights either expired or donated into the public domain. This includes the works of old masters, such as copyright-free e-books or classical music.
2. Content with an indirect price – this content does not have a direct price tag, but there is a clear indirect price associated with it:
  • a) Content that seems free but that you pay for indirectly, with your time and attention. It includes any free content that is subsidized by advertising. Most news and magazine media sites are financed this way, which is not surprising, as that's exactly how they operate in the physical world.
  • b) Content created by professionals in the pursuit of money. This content doesn't have a direct price, but the indirect price in the form of labor cost can be quite high and is paid with the expectation that it enables a revenue stream. This is about the office documents, emails, spreadsheets, and PowerPoint decks that knowledge workers create, share, and consume every day.
3. Premium content – here, the content is the product. This is content that has an explicit price, either à la carte or through some package or subscription fee. This can be about consumer content or about business content. iTunes songs and movies are priced à la carte, while Audible also offers audio books through a subscription. An analyst report can be purchased as part of an annual package fee, and the design plans for a new building might be part of the overall construction cost.
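
To make the classification concrete, here is a hypothetical sketch in Python; the category names, titles, and prices are mine, chosen only to illustrate the three types:

    # Hypothetical model of the three pricing types described above.
    FREE, INDIRECT, PREMIUM = "free", "indirect", "premium"

    catalog = [
        {"title": "Bird-watching blog post",   "type": FREE,     "price": 0.00},
        {"title": "Ad-financed news article",  "type": INDIRECT, "price": 0.00},  # paid with attention
        {"title": "Song from an online store", "type": PREMIUM,  "price": 0.99},
    ]

    def costs_money(item):
        # Only premium content carries a direct price tag.
        return item["type"] == PREMIUM and item["price"] > 0

    for item in catalog:
        print("%-28s %s" % (item["title"], "priced" if costs_money(item) else "no direct price"))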

As we can see, not all content that looks free is really free. And not all content created by professionals is premium content. All content (well, at least most of it) has value and all of it has a cost of creation, but not all of it costs money.

Wednesday, September 29, 2010

Corus Entertainment and the High Priesthood of Content Management

Content management spans many different types of solutions, applications, and functions. But when content is the actual product, the deployed applications usually represent some of the most sophisticated content management solutions out there. Media companies care about their content – it’s what they do. And so it’s no surprise that content management reaches unseen levels of importance and sophistication in such companies. Yesterday, I had the opportunity to visit Corus Entertainment, a leading media and entertainment company in Canada.

Corus is known as the operator of some 24 television and 50 radio channels, as well as a publisher of children’s books and other content. Among their brands are HBO (Canada) and VIVA, as well as Nelvana, which is behind the production of children’s books and programs featuring Babar, Bakugan, and Franklin. Just like many other media companies today, Corus has to go through many changes to adjust to the digital content age, and one of the steps towards that goal was building a new, high-tech facility in Toronto.

During the press conference at the opening, Corus’ CTO Scott Dyer spoke about the technology behind the new facility. Where most enterprises go through generations of deployments and updates of individual software and hardware components, Corus found itself with a unique opportunity to deploy the entire IT environment from scratch. And so Corus picked the best solutions available, including a high-capacity network from Cisco, servers from HP, broadcast management from Pilat Media, production workflow from Pharos, a broadcast system from Miranda, and digital asset management (DAM) from Open Text. As Mr. Dyer told the reporters, the Open Text software started out managing digital assets and has since grown into a comprehensive ECM solution that also manages documents and other content types. The Open Text Media Management offering is a major part of this environment, as it provides the repository that ingests all the programming, stock, and promotional content and manages the metadata and all the related content such as language tracks, subtitle files, still images, scripts, documents, etc.
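
As an illustration only, here is my sketch of what an ingested asset record with the kinds of metadata and related content described above might conceptually look like. This is not the actual Open Text Media Management data model; every name and value below is made up:

    # Hypothetical sketch of an ingested media asset and its related content.
    asset = {
        "id": "ASSET-000123",                  # made-up identifier
        "title": "Sample children's episode",
        "source_format": "HD",                 # stored exactly as ingested
        "metadata": {
            "duration_minutes": 22,
            "language": "en",
            "category": "programming",         # vs. stock or promotional
        },
        "related_content": {
            "language_tracks": ["fr", "es"],
            "subtitle_files": ["episode.en.srt"],
            "still_images": ["key_art.jpg"],
            "scripts": ["script.pdf"],
        },
        "proxy": "low-res copy for desktop browsing and playback",
    }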

What Corus does with all this technology is very impressive. The content for 24 TV channels is being ingested on an ongoing basis, resulting in 15,000-30,000 hours of programming per year. All of the 1,100 Corus employees have the ingested content immediately available on their desktops via low-resolution proxy browsing and playback provided by Media Management. There are currently 2 petabytes of content – a volume that rivals the Library of Congress – stored on a 3-tiered storage system with robotic tape libraries used as tier 3. All content is ingested into the repository and stored in its original format no matter where it comes from, and the conversion to the target format is done on the fly during delivery (e.g. HD to SD conversion).
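
The store-one-original, convert-on-delivery idea can be sketched in a few lines. The function names below are mine, and the real pipeline is, of course, vendor-specific:

    # Hypothetical sketch: keep one original, convert on the fly at delivery.
    def convert_hd_to_sd(asset_id):
        # Stand-in for a real transcode step performed during delivery.
        return asset_id + "-sd"

    def deliver(asset, target_format):
        if asset["source_format"] == target_format:
            return asset["id"]                    # ship the stored original as-is
        if asset["source_format"] == "HD" and target_format == "SD":
            return convert_hd_to_sd(asset["id"])  # HD to SD conversion on the fly
        raise ValueError("no conversion path to %s" % target_format)

    asset = {"id": "ASSET-000123", "source_format": "HD"}  # minimal record
    print(deliver(asset, "SD"))                   # -> ASSET-000123-sd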

All of this is done today with off-the-shelf hardware and software – in contrast to the expensive and proprietary media environments of the past. The entire infrastructure is 100% digital – in fact, you cannot find an analog player anywhere in the building. I should also not forget to mention the services provided by Siemens, who put it all together – something they did not so long ago at the BBC in Glasgow.

Seeing our products in a sophisticated production environment is always exciting. And what I saw yesterday is a form of high priesthood of content management. With all this technology, Corus claims to have the most advanced broadcast facility in North America, and they are probably right. In fact, it was one of the most interesting customer visits I’ve ever done. OK, the top spot belongs to a visit to a Formula 1 racing team facility in Silverstone (UK), but that was many years ago.

Thursday, September 23, 2010

What Was Not In the Press Release

Earlier this week, Open Text launched a major new release of its flagship product offering – ECM Suite 2010. I am not going to repeat the information about all the innovation that Suite 2010 introduces; that’s in the press release. I’d like to share a different view.

When I joined Open Text over two years ago, the ECM Suite existed as several integrated products, but most of the products were marketed, sold, and deployed as discrete offerings. At that time, the technology foundation for integrating the suite was already in place, but only a few of the products were using it. And while the product teams had their work cut out for them, we had to make it a suite on all fronts.

First, we had to figure out which products actually belong to the Suite. Over the years, Open Text acquired many companies, and so we had some of the ECM technologies in multiple offerings. Sorting out the customer segmentation and the target audience for each offering was the basis for the Suite definition. Today, when you go to our web site, you can see what’s in the Suite and what is not. Most of the products that are not in the Suite do integrate with it, but those products target different market segments, buyers, or channels.

The next issue was branding. The previous Open Text strategy was focused on preserving the brand equity of individual offerings. That changed with the Suite, when we decided to put all the branding wood behind one brand – “Open Text.” And so almost two years ago, we rebranded all the products on the marketing side, knowing well that it would take until the next major release of each of the offerings to get a rebranded UI. This week’s release concludes that quest – at this point, the products have indeed been rebranded. The old brands are retired and the ECM Suite is here.

Then, there was packaging. The Suite started from many product lines, each with many products, modules, and add-ons. Based on customer feedback, we have done a ton of work to simplify the packaging. We have reduced the overall number of SKUs to a third by eliminating all unnecessary complexity in packaging (yes, that’s a 67% reduction). We have also devised a handful of baseline packages that represent the core Suite, each combining some of the most commonly purchased product combinations to simplify the buying and deployment experience.

Finally, I have to mention technology. The press release talks about integration and innovation, but there was no space to explain the SOA-based services such as the common installer, directory services, common authentication, web services, jobs management, file transfer, common administration, and many others. This relatively unglamorous stuff is what really makes a huge difference for customers in terms of their cost of ownership. And then there is all the integration based on the repository layer, process automation, and user experience via portal, web, mobile, and desktop clients with CMIS support.
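
Of these, CMIS at least is an open standard, so it can be illustrated without guessing at proprietary interfaces. As a hedged sketch, using the Apache Chemistry cmislib library against a made-up repository URL and credentials (this shows generic CMIS, not an Open Text-specific API), a query might look like this:

    # Generic CMIS query using Apache Chemistry's cmislib; the URL and
    # credentials are placeholders, not a real endpoint.
    from cmislib import CmisClient

    client = CmisClient("http://repo.example.com/cmis/atom", "admin", "secret")
    repo = client.defaultRepository

    # CMIS defines a SQL-like query language over content and its metadata.
    for doc in repo.query("SELECT * FROM cmis:document WHERE cmis:name LIKE 'Suite%'"):
        print(doc.getName())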

We didn’t say any of this in the press release, since the press supposedly only wants to write about what’s new in terms of technology innovation. But the Suite introduces a lot of additional innovation in ways that are truly beneficial to customers.

Sunday, September 19, 2010

Information Promiscuity and Information Paranoia

There are two types of people with respect to their attitude toward information.

The first group is on Facebook and Twitter every day; they use LinkedIn and blog regularly. They are sharing, engaging in communities, socially networking, and they don’t mind giving out information. To them, sharing and socially engaging with others is much more important than arcane privacy concerns. They are open and free-spirited and they believe in the value of information. Information is not a liability, it is an asset. Thus, they don’t ever delete anything – that would be depriving the world of information, which is just wrong. Their inbox is full of thousands of emails and they don’t care about filing them. They save every attachment and keep every version of every document. They believe in freedom of information. In fact, they believe that access to information is a fundamental right in a democracy.

But there are other people out there too. They don’t participate in social media; it is just a waste of time – you should talk to someone if you want to be social. They don’t share their personal information, as privacy is paramount. They understand that information is a potential liability, and so they don’t hoard it. They are concerned with every possible legal ramification, and so they clean up their inbox every night before going home and carefully file and categorize the information they need to keep. They restrict access to any piece of data they share. In short, they are very conservative about information.

What’s the problem, you wonder? The world is full of left-wingers and right-wingers, right? Well, the problem is that our attitude toward information impacts the way we work with others. And so the birds of a feather keep flocking together. They attract each other, creating groups, companies, and entire industries that are either predominantly conservative or liberal in terms of their information attitude. Through hiring decisions and candidates’ self-selection, companies are evolving towards being exclusively on the conservative or on the liberal side of the information divide.

So, what’s the future? Two worlds, painted red and blue and divided by an iron curtain? Is one of them eventually going down as a result of information chaos and promiscuity, or as a result of information paranoia? Well, maybe there is hope. I am encouraged by what the US government is trying to do. The government has historically been on the conservative side of information – it is still digesting the impact of the Freedom of Information Act, which rattled every gene of its DNA. But recently, the government has been trying to open up. Driven partially by a presidential directive and partially by austere cost-cutting, the government is trying to combine its traditional security stance with a proactive use of social media. And so perhaps it will be the government that finds the right balance and evolves the legislative environment towards a unity of the two worlds.

Wednesday, September 15, 2010

iTunes and Content Management in Consumer Applications

Many people these days gush about how awesome their experience is in the consumer space while they complain about the dreaded experience in the enterprise. And content management is one of those enterprise applications with a bad reputation. What many do not realize, though, is that most of what we do in the consumer space is content management. Yes, indeed. We edit and share pictures and videos, we rip music onto our iPods, we publish blogs and microblogs, and we collaborate and network with each other. All these activities deal with content – text, audio, video, and social media. We just never call it content management, as that would make it sound like work. But the reality is that this is exactly what content management is trying to do in the enterprise. The main difference is that in the enterprise, content is jointly created and consumed by multiple users.

The consumer space also gives us an appreciation for some of the difficulties related to ECM. One of the key challenges of ECM is to get good metadata attached to content. What, you don’t care about metadata? Oh yes, you do! Take your iTunes library, for example. All those names of songs, albums, and artists are metadata, which allows you to organize your library and make it usable. When you first import your songs into iTunes, you are likely to see a lot of mess. The names of songs or artists are missing, and you comprehend the value of metadata very quickly. And so you spend hours cleaning it up to make sure that it is organized the way you want it. Without keeping your metadata clean, your iTunes library can quickly become a mess again, which would severely limit its usefulness. And the same is true for your pictures, videos, or documents.
It's all about metadata.

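That cleanup can even be scripted. Here is a minimal sketch using the open-source mutagen library; the file name and the tag values are assumptions for illustration:

    # Fixing missing song metadata (ID3 tags) with the mutagen library.
    from mutagen.easyid3 import EasyID3

    audio = EasyID3("01 - Unknown Track.mp3")  # assumed file name
    audio["artist"] = "Miles Davis"            # illustrative tag values
    audio["album"] = "Kind of Blue"
    audio["title"] = "So What"
    audio.save()
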
The issues are the same in the enterprise, except that now we have to coordinate many people to agree on the metadata consistently. And that’s a big challenge. You know how you sometimes have to decide whether to organize your pictures by date, by location, or by event? Well, not everyone will make the same call, and with hundreds or thousands of users, the inconsistencies can become significant. To avoid the mess in the enterprise, you need a taxonomy. But that sounds like the boring kind of enterprise software that is so much less cool than the software at home. And yet, the management of content in consumer applications is very similar to enterprise content management. Well, perhaps ECM does handle a tad more complex problems…
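
A taxonomy is, in essence, a controlled vocabulary: instead of free-form typing, metadata values are validated against an agreed list of terms. A toy sketch, with made-up fields and terms:

    # Toy controlled vocabulary: shared terms everyone must pick from.
    TAXONOMY = {
        "event": ["conference", "offsite", "product launch"],
        "region": ["Americas", "EMEA", "APAC"],
    }

    def tag(field, value):
        allowed = TAXONOMY.get(field, [])
        if value not in allowed:
            raise ValueError("'%s' is not a valid %s; choose from %s" % (value, field, allowed))
        return {field: value}

    print(tag("region", "EMEA"))    # OK
    # tag("region", "Europe")       # would raise: not in the shared taxonomy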

Wednesday, September 8, 2010

Are Closed Systems Winning After All?

Ever since networked PCs started replacing mainframes, openness has been the mantra of information technology. Indeed, for years we have been taught, and kept repeating, that open systems give customers the ultimate benefit of deriving value from solutions while keeping their options open and prices low. But now, after three decades of pushing open systems, we may be proven wrong by Apple, the company with the overall highest market capitalization and tremendous success.

Back in the 80s, IBM was able to quickly gain huge market share with the open-system-based PC, in which components from many vendors could be added and swapped. The PC quickly obliterated all players in the market, including Atari, Commodore, Sinclair, and, for the most part, Apple. It was apparent that the open system was the winning formula. Or was it?

While killing off existing competitors, IBM quickly faced a new set of rivals such as Compaq, Dell, HP, and hundreds of other clone manufacturers, who took away IBM’s market share and eroded the pricing down to unattractive margins. IBM also learned that giving up the operating system to Microsoft was a huge mistake, even though this move had promoted the success of the PC. In the end, IBM struggled to keep the business profitable and eventually exited it by selling out to Lenovo. Even though IBM made a ton of money initially, the open concept of the PC failed to make it commercially viable for IBM in the long run.

Apple, on the other hand, has held on to its completely closed system. Sure, it took Apple two decades to figure out all the elements of the system needed to make it a success – computer, mobile devices, and content – but they are in an incredibly strong position today. Apple is piling up cash while running circles around any potential competitor.

So is a closed system the way to go? Well, there are skeptics who are already predicting trouble for Apple due to an attack from Google. But while Google is also bursting with cash, having created its own money tree based on advertising revenue, it is easy to see how the various Android phone vendors will quickly kill each other off as they drive down prices and margins. That price war may put pressure on Apple, but Apple has demonstrated amazing pricing resilience over more than three decades, and they have a lock on the customers through something Google does not have – content. In the end, the choices may be cheap devices with little content but plenty of advertising, or expensive devices with a lot of great content.

While I don’t know the outcome of the iPhone versus Android battle, I keep wondering about the original question. Is an open system commercially viable, or is it better to keep the system closed, or at least parts of it? That question is particularly interesting given the current open source movement, which is the ultimate embodiment of openness. Is open source as a model commercially viable in the long run? Or is a closed or mixed model the right approach? Well, time will tell. What I do know is that to be commercially viable, both sides of a transaction need to benefit. If one side doesn’t benefit, the long-term viability is in question.