Sunday, August 29, 2010

Vacation Shots and Content Overflow

I like taking pictures. Occasionally, I even take a good one, worthy of adorning the wall of an unlucky relative. I started taking pictures when I was a kid, and over the decades I progressed from completely manual cameras to completely automatic gadgets with all the bells and whistles - autofocus, automatic exposure, built-in flash, and even image stabilization. Taking pictures became easy, although no camera on the market offers automatic composition yet, which is arguably the most important element of any photograph. But through all those years, photography had one thing in common – it was relatively expensive, and so we were very deliberate about when to press the shutter.

That changed with the advent of digital photography. All of a sudden, we can take pictures without having to pay for film, film development, and prints. With flash cards being dirt cheap, we can keep taking pictures without worrying about the cost. We can take a picture of any scene under any conditions – just try it, and if it does not turn out, who cares? Bracketing is no longer something only pros can do, and action shots can be captured as a burst sequence every time. With enough pictures taken, even the greenest of amateurs will occasionally get lucky and score an awesome shot worth mounting on the wall. There is no downside, it seems. Or is there?

Well, as someone who’s been hanging around the enterprise content management industry for years, I see at least three problems - storage, liability, and usability:

1. Storage. Storage is cheap, right? Yes, that’s true, at least when it gets down to the cost of your flash card and the hard drive in your home PC. Enterprise IT departments might have a different view but even if you are not an enterprise, you ought to think about backup and recovery and that’s where storage costs add up quickly. What, you don’t have any backup for your pictures yet? Ouch!

2. Liability. Yes, liability is a big issue in the enterprise but increasingly, we come to realize that liability matters in our private lives too. Yes, those college party shots on Facebook might become a drag when applying for a job at a stodgy company. And how many pictures with previous girlfriends or boyfriends are out there? Embarrassed by your baby pictures? Just wait to hear what your kids will say one day! With the proliferation of recording devices, privacy has to be redefined.

3. Usability. Let’s face it. Having to look through 100 pictures of your relatives on vacation was pretty dreadful back then. Today, when you come back with thousands of pictures, who do you expect to look at them? How will you ever find that special moment when all of them are named IMG00043569.JPG? And how many of those pictures do you adjust using iPhoto or Photoshop? Wasn’t easy post-processing such as color adjustment or horizon straightening supposed to be a key benefit of digital photography?
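As an aside for fellow amateurs, even a trivial script can take the edge off the IMG00043569.JPG problem. This is just a sketch of the idea – the function names are my own, and I use the file's modification time as a rough stand-in for the EXIF capture date, which would normally require an extra imaging library:

```python
import os
import time

def dated_name(path, index):
    """Build a sortable, date-based filename such as 2010-08-29_001.jpg,
    using the file's modification time as a rough stand-in for the EXIF
    capture date (real EXIF parsing needs an extra library)."""
    stamp = time.strftime("%Y-%m-%d", time.localtime(os.path.getmtime(path)))
    ext = os.path.splitext(path)[1].lower()
    return "%s_%03d%s" % (stamp, index, ext)

def rename_photos(folder):
    """Rename every JPEG in `folder` to a date-based name so the shots
    sort chronologically instead of by the camera's counter."""
    photos = sorted(p for p in os.listdir(folder)
                    if p.lower().endswith((".jpg", ".jpeg")))
    for i, name in enumerate(photos, 1):
        src = os.path.join(folder, name)
        os.rename(src, os.path.join(folder, dated_name(src, i)))
```

It won't pick the keepers for you, but at least "that special moment" becomes findable by date.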

Don’t get me wrong. Digital photography is absolutely awesome. It has changed my life with unprecedented benefits. But the seemingly free nature of digital photographs should not mislead us into thinking that snapping thousands of 12 MB shots has no consequences. The vast quantities of images taken contribute to content overflow, which is one of the top challenges of the information age. And the low cost should be no excuse for snapping thousands of lame shots.

OK, I need to get going now. I have to cull my vacation pictures to get rid of any photographic garbage.

Thursday, August 12, 2010

Environment and the Power Charger

My post today is a step away from my usual topics. I am packing for vacation, and being a connected wannabe-hip geek, I am packing a bagful of gadgets. There is the SLR camera, the pocket camera, the video recorder, the Flip recorder, the GPS, the DVD player for the kids, my shaver, my BlackBerry, my wife’s iPhone, my iPod, the iPad, and I am still debating the laptop. That’s not a particularly impressive list, just what a man needs to barely scrape by.

The problem is that every one of these gadgets requires a power supply, a battery charger, a car outlet charger – or sometimes all of them. That creates some problems. First, I can’t find many of the adapters since I only use them on vacation trips, which are not frequent enough. I have multiple gadgets that have been retired only because I can’t find the charger and getting a new one is either impossible or unreasonably expensive. Second, many of the power supplies require a separate cable which may also be difficult to locate – not to mention an additional cable for data connectivity. And finally, there is all the added weight and bulk that easily doubles what was originally a reasonably sized bag. That’s just not right.

Why, I have to ask, don’t we have a standardized universal power supply? All gadgets could probably be powered by the same DC voltage. Apple uses the same power supply for my iPod, iPad, and iPhone. And if the gadgets do need different voltages, the supply should have a switch between 3, 6, 9, and 12 volts, preferably an automatic one. The gadgets should have the same type of receptacle – or different sizes according to their different voltage requirements. I hate RIM for introducing a new size of receptacle in my BlackBerry Bold 2 that requires a different cable than my cameras. Sorry, my RIM neighbors, that alone is an incentive to buy an iPhone!

The cable needs to be detachable from the power supply, and it needs to connect to it over USB, the way Apple does it. That way, the same cable would work for data connectivity with computers and other devices. Basically, I want one power supply and one cable that can act as both power cable and data cable for all my gadgets. Is that too much to ask? No! Oh wait, and of course the whole thing needs to be small and work in any country. Yes, that’s doable – just ask Apple.

I believe it is absolutely unacceptable that companies such as Sony, JVC, Panasonic, RIM, LG, Olympus, Canon, Nikon, and Kodak are not only NOT working together but are even using different power supplies and cables across their own devices. That’s crazy! As consumers, we should demand better. We want the vendors to work together and agree on a standard. The USB standard is a good example and proof that it is possible for vendors to agree. The result would be not only a huge convenience but also major economic benefits for the vendors. They could ship devices without power supplies. There could be an aftermarket for fancy power supplies in different colors, with battery back-up, solar chargers, hand cranks, etc.

The elimination of power supplies would also represent a major environmental benefit. Imagine all those power supplies that wouldn’t have to end up in the landfill. In fact, the government should step up and drive such a standard in the name of the environment. If the vendors are not capable of working together, the government could come up with a useful regulation for once.

Thursday, August 5, 2010

Wave That Really Was a Ripple

Yesterday, Google announced that it will sunset its Wave offering, a cloud-based collaboration tool that brought Google closer to the enterprise software space, particularly into an area relevant to Enterprise Content Management (ECM), which certainly got it on my radar screen. Wave was introduced in May last year with quite a bit of attention – just like everything else Google does. It introduced a number of interesting innovations such as live typing and concurrent editing. Warming up to no check-in/check-out is an interesting concept for a long-time content management aficionado like me. But after just 15 months, Google decided to pull the plug on Wave, citing a lack of adoption. What does that actually mean going forward?

First, I am impressed by Google’s resolve to try out new things and kill them early if they don’t work out. This is hard for traditional software companies – read: any company that actually makes a living by selling software. The traditional wisdom is to hang in there for several years, keep adding features and piling up releases and, if the product fails, kill it in a gracefully unnoticeable way. Not at Google – if something doesn’t fly, they just kill it publicly.

The second possible consequence is related to Google’s strategy in the enterprise market. Wave was possibly their biggest bet on the enterprise. Sure, they have other enterprise offerings such as the search appliance or Google Docs, but these are mostly pieces of their consumer technology painted yellow for enterprise use (yellow is the color of the search appliance). Wave was a pure enterprise offering. There is not much collaboration happening in the consumer space, and I don’t see anyone on Facebook craving concurrent editing with old high-school friends. Does the demise of Wave signal the end of Google’s ambitions in the enterprise? Probably not, but it is a major mark against Google’s enterprise agenda.

The next consequence is a concern for cloud computing and its adoption by enterprise customers. Basically, if Google can decide to simply kill an offering containing potentially a ton of your data, anybody can. Yes, sure, Wave wasn’t officially released, but no Google offering ever is – Gmail was in beta for years. And yes, Wave was free, but I am pretty sure that Google sales reps were already counting their chickens for a paid-for option – just like Gmail has. Killing off Wave is another argument for cloud skeptics. By the way, I am a cloud fan. But I deal with lots of skeptics every day.

Finally, the end of Google Wave has an impact on Microsoft. Wave was probably the competitor putting the most pressure on Microsoft SharePoint, forcing Microsoft to work feverishly towards cloud-based Office and SharePoint offerings. While the Office threat from Google Docs remains acute, the Office infrastructure provided by SharePoint has lost a major competitor. That likely means a massive sigh of relief in Redmond, as SharePoint can continue adding the stickiness to Office desktops for which it was originally designed, without a threat from Google. In other words, without a SharePoint alternative, enterprise customers will be less willing to jump ship from MS Office to Google Docs.

All in all, Google’s move to sunset Wave is major news and while Wave was just a ripple on the water’s surface, there are strong currents underneath to watch out for.

Friday, July 30, 2010

Virtual Reality With Real Pain

I hurt my shoulder…playing tennis on a Wii.

This experience made me ponder how the concept of immersive virtual reality has evolved over time. Ever since novels such as Neal Stephenson’s Snow Crash or Michael Crichton’s Disclosure, we have had an idea of what virtual reality should be like. It is this cool place where people can do virtually everything and that is way better than actual reality, because it was designed by programmers. Ever since, we have been trying to build such an immersive virtual reality in the non-fiction world.

Virtual reality has come a long way from the first oh-so-slender-looking avatars in Yahoo’s Instant Messenger. Second Life became all the rage a couple of years ago, allowing our still oh-so-slender avatars to attend product announcements, fly through cyberspace, and flirt with strangers. Second Life has since lost a bit of its luster – at least for now. Frankly, the flying and flirting via mouse clicks wasn’t all that immersive and got pretty boring after a while. So far, Second Life is falling short of the promise of Stephenson’s Metaverse.

There are actually some useful applications for virtual reality today – from home decoration and landscaping all the way to production line design. I once saw a very cool virtual reality application for chemical facilities that enabled the repair crew to analyze the best access path and determine the equipment needed in case of a breakdown. But these applications are more focused on modeling reality than on the human experience in it – they are not particularly immersive. By the way, Wikipedia offers several good ways of categorizing immersive virtual reality.

I am really excited by the new types of immersive virtual reality applications that combine cognitive immersion with sensory-motoric immersion, and sometimes even emotional immersion (to use the Björk/Holopainen categorization). Here, the experience on screen is combined with actual physical activity. Driving and flight simulators were the first such applications. If you want to see how immersive they can be today, check out the Cyber Speedway at the Sahara Hotel in Las Vegas. Playing tennis on a Wii is a similar kind of immersive virtual reality. This kind of immersion provides a pretty decent physical experience of the activity. OK, the experience might still be lacking the G forces or the sliding stops, but that’s perhaps only a temporary limitation. The highly acclaimed Microsoft Kinect (aka Project Natal) is the next step in this direction.

What’s exciting about the immersive virtual reality is the fact that it offers a more realistic experience than just clicking buttons while staring at a screen. It allows us to actually do things – develop and improve our skills through practice that would otherwise be restricted by physical limitations. And that’s way more engaging than flying through Second Life.

But now it’s time for me to rest my shoulder while I practice piano on my iPad. Or perhaps a round of golf on Wii?

Images:
1. Grand Slam Tennis for Wii by Electronic Arts
2. Professional grade offering by RedBird Flight Simulations, Inc.
3. Playing piano on iPad with JamPad by ONG

Thursday, July 15, 2010

Meeting Clay Christensen and My Innovator's Dilemma

Communitech is an organization devoted to promoting the Waterloo Region (Region = County in Canada) as a technology hub, which it is, with over 700 tech companies including RIM, Open Text, and the Perimeter Institute for Theoretical Physics. Once a year, Communitech puts on a fabulous Technology Leadership Conference, which I got to attend yesterday. The keynote speakers were Fast Company's founder Bill Taylor, Avid Life Media CEO Noel Biderman, and the Harvard Business School professor Clayton Christensen, author of the renowned book The Innovator's Dilemma and one of my gurus.

Having seen that Clay Christensen was on the agenda, it took me only about five seconds to decide that I had to clear my day and attend. And so I got to see him speak about the innovator's dilemma, which is so acute for so many high-tech companies today. The long and short of the story is that established vendors are often challenged by bottom feeders who take over the unattractive, low-margin business that the established vendors are more than happy to give up. But as they get good at it, the disruptive innovators start moving up the stack, getting better and better at addressing more complex problems while leveraging the disruptive innovation that challenges the incumbent's pricing and business model.

In the subsequent roundtable discussion, Prof. Christensen explained that there are only two solutions to the dilemma. One of them is to make yourself obsolete by either acquiring or organically building a new business and keeping it separate from the old business. Over time, the new business makes the old one obsolete, which is bad for the particular business unit but good for the company. Because, as Clay Christensen said, “…while the business units cannot evolve, the company can”. Failing to do this can only mean demise, as the company is driven into obsolescence by new entrants.

The second way to counter the innovator's dilemma is moving up the stack. The incumbent company has to keep innovating on the top tier, creating more sophisticated capabilities, as it surrenders the bottom to the new arrivals. This is of course only delaying the inevitable but this strategy can work for many years.

We see this happening all the time in the software world. For example, Microsoft Office is being attacked from the bottom by new, lower-cost entrants such as Google Apps or Open Office. And Microsoft keeps countering with innovations on top of the stack, such as the Office infrastructure provided by SharePoint. But as Google or Open Office keep adding features, Microsoft needs to consider disrupting itself with an innovation that would make Office obsolete. Otherwise, Office will become history.

Even my industry, enterprise content management, is not immune to this problem. Established vendors such as Open Text or IBM are being challenged by new entrants coming up with new business models from the bottom. Google Wave or Box.net are examples of disruptive innovations – free and based on a SaaS model – that could become competitors in the long term. So far, they don’t address the kinds of problems that our customers buy our software for and we have been successfully moving up the stack by adding new capabilities along with horizontal and vertical solutions. We, at Open Text, have plenty of ideas to keep doing that for several years. But we are also working on technologies that may completely change the way content is managed. Stay tuned for that.

Meeting Clay Christensen was inspiring and it made me think about his Innovator's Dilemma model again. And that's what is so great about the Communitech Leadership Conference in Waterloo. I saw Seth Godin there last year and I can't wait to see who comes next year. In the meantime, I will be working on the Innovator's Solution...

Friday, July 9, 2010

Sorry IT, the Time of Homogeneous Environments is Over

In the early days of computing, the IT (information technology) world used to be wonderful. There was typically just one computer in the organization to worry about - a mainframe or a mini. The computer was locked in a secure data center, and all users worked with the same hardware and software. This centralized IT environment was relatively easy to manage, control, and secure. IT was in charge and the users couldn’t do anything but work with their applications. Life was beautiful.

But then Intel and Microsoft came along and put a computer on every desk, and once Novell connected them all together, the IT world became a nightmare. (We called it IS back then, but I digress.) Every user had a different PC and everyone used different software. And what was even worse, the users could tamper with it. They were able to install new applications, add hardware such as cameras or external drives, and compromise security by taking data home, contracting viruses, and committing other sins. The IT environments became extremely difficult to manage and impossible to secure.

Then, decades later, IT was finally regaining control. After years of desktop dominance, Microsoft was establishing a position where it could control the entire environment: the desktop OS, the desktop apps, the network OS, the database, the Office infrastructure, the server-based apps, and even the systems management tools to automatically configure the desktops and prevent users from tampering with them. IT was just inches away from the goal line – Microsoft’s promise of the beautiful life was very tempting.

But Microsoft got distracted - first by AOL and Yahoo!, then by Nintendo and Sony, then by Google, and now by Apple and Facebook. The results are quite evident – Microsoft is losing its monopoly on the desktop. In my professional life, I have never seen so many iMacs, iPads, iPhones, Androids, and BlackBerrys used by the people around me to do their jobs. These are desktops, tablets, and mobile devices that use no Microsoft software at all.

Many people buy these devices on their own, regardless of what the company’s IT standard is. They simply help each other get them integrated with the corporate systems and circumvent IT altogether. And IT is on the defensive again, as they have to do what the users want. The Microsoft monopoly and its promise of a homogeneous environment is fading, and the good old days of the mainframe are not coming back.

The old IT mantra that “nobody gets fired for buying Microsoft” is being shaken. The users are taking over. They are rebelling against Windows, Exchange, and PowerPoint. What they want is the iMac, iPad, iPhone, and Android, and the two hundred thousand applications these platforms offer. The time of homogeneous environments is over. Long live the freedom of choice!

Friday, July 2, 2010

Content Without Borders

Since the early days of the World Wide Web, we have understood that the Web transcends national borders. It is not governed by any country and it is open to anyone, anywhere and in any language. We have believed that even a small company can compete with the big ones on the Internet, using it to reach customers anywhere in the world. The Internet has no borders which is what makes it so appealing and which is perhaps the key reason why the Internet has changed everything.

But not everyone sees it that way. I will not dwell on oppressive governments that filter the content or outright prevent their citizens from accessing certain sites. While I decisively condemn this kind of censorship, it is at least not surprising given the agenda of such dictatorships. What really puzzles me is to see such actions from for-profit companies in the free world – the media companies.

Throughout the history of entertainment, the media companies have followed a flawed logic according to which they can reduce piracy and increase profits by preventing content from crossing borders. That is relatively easy with physical goods and with content that is language-specific. Books are hard to move between countries because they are relatively heavy physical goods and their transport is a major cost factor. Movies on VHS were available only in one particular language, which effectively kept them from crossing borders.

But since the dawn of digital content, the situation has become a bit more tricky. First, the media companies failed to prevent CDs from traveling. CDs were relatively cheap to transport and they are not language-bound. It got even worse with the invention of the MP3 format, which made audio content easy to transport at no cost. And we all know what happened next. Big Media won the Napster battle but they have lost the music war. Today, Apple makes money on music and the media companies are relegated to being the low-cost suppliers.

But Big Media swore revenge. The next battlefield became movies. The emergence of the digital video disc (DVD) created a disruption that prompted action. Just like CDs, DVDs are relatively cheap to transport and, unlike VHS tapes, they are not bound to a single language. In fact, many DVDs in Europe are published in as many as 10 audio and 20 closed-captioning languages to lower production cost. DVDs could easily cross borders and, to prevent that, the media companies invented something new – DVD regions, which make it impossible for DVDs from one region to be played on DVD players in another region. As a result, Europeans can't buy the relatively cheap DVDs in the US, and the pirates in China have much more overhead dealing with multiple region formats. This technique has been somewhat effective, even though you can buy a region-free DVD player on the Internet for about the same price as a regionalized one at the store.

DVD regions are also enforced on computer DVD drives, which makes DVD ripping more difficult. But there are tools available on the Web that allow you to circumvent this restriction easily. And of course, BitTorrent has rendered this problem obsolete – by avoiding DVDs altogether, as you can download movies already ripped. Whether or not this is legal has been the subject of an ongoing debate. As consumers, we have the right to digitize our own DVDs to play them on a computer or a mobile device. It is also unacceptable to prevent us from buying content from foreign countries. Today, we can order German DVDs from Amazon.de and have them shipped to the US. But we cannot order those same movies online on iTunes? That's not right, and this artificial border only promotes piracy. All in all, the media companies lost again.

But they have not given up on their revenge. Their next frontier is streamed content. This is the future of personal entertainment, and a new set of players is becoming powerhouses in streaming content such as movies and TV shows: Netflix, Hulu, ABC, etc. However, this time, the content stops at the border. If you try to access your Netflix subscription from abroad, even from Canada, you get an error message saying that “Due to international rights agreements, we only offer this video to viewers located within the United States and its territories.” Yikes!

This is completely unacceptable. It's basically like disabling your iPod when you cross the border. No matter that you pay for your Netflix subscription – you only get the streamed content while in the US. It is only a question of time before users find a way around this. Already today, you can VPN into a US network from abroad or use a Slingbox, which circumvents this restriction. The media companies are doomed to lose again, because they simply don't get it. They are in denial of technology and the changes it causes. They have proven it with every innovation, from the early phonograph to VHS video to MP3 music on the Internet. Each time, they fought the innovation, and each time, they had to accept it and lose a ton of money while someone else made out in the end. They will lose again, again, and again, until they finally accept that the Internet knows no borders and that content needs to be available everywhere.

Wednesday, June 30, 2010

Pictures from the G20 Summit

My post on the G20 Summit and its Technology Infrastructure was very popular and so I have uploaded an additional batch of pictures. Trying out the Flickr code was part of the fun!



I took some of the pictures myself, some were taken by others with my camera and a few are the courtesy of Open Text. The quality varies...

Tuesday, June 29, 2010

G20 Summit and its Technology Infrastructure

This weekend, I had the opportunity to take part in the Group of 20 Summit in Toronto. I should say right up front that I wasn’t rubbing shoulders with the heads of state. In fact, I came nowhere near the actual Summit delegates. But neither did most of the thousands of journalists covering the Summit – they were all relegated to the international media center, and that’s where I spent an interesting couple of days. Yes, this was the site of the “Fake Lake” – a feature that brought the atmosphere of the G8 Summit at Muskoka Lake to the G20 journalists and that was heavily criticized by the press prior to the Summit for its alleged lavishness. It wasn’t lavish, and it was actually really well done, if you ask me.

Since my blog is about technology, I should perhaps explain what I was doing at the Summit. As it turns out, Open Text was selected to provide the technology infrastructure for the Summit. As members of the Canadian Digital Media Network, we built a site that is basically a virtual rendering of the media center with the Fake Lake. The site is built using digital experience management (DEM) technology, which manifests our vision for an engaging user interface. Just check it out at vg20net.org. I am sure I will write more about digital experience management and its syndication of tethered content in the near future.

The next part of the infrastructure was the secure social media site for all the journalists and other attendees, who were able to communicate with each other along with dozens of librarians at multiple universities happy to answer any question. And then there is the high-security social media environment used by the actual delegates. That was a very interesting experience – having to go through a series of security reviews while keeping a high level of confidentiality about the project. Since the actual work is done prior to the Summit, which is basically a photo-op, the application had been in use for months leading up to it. And the entire time, I wasn’t allowed to talk about it for security reasons, which is rather torturous for a marketer. In fact, it wasn’t by my choice that I didn’t provide any live Twitter updates from the Summit.

Well, it was an interesting weekend – we showed our software to hundreds of journalists, including many on-camera demonstrations. This is the first time a highly secure social media application has been used for such a multilateral event involving senior diplomats from many countries, who used the environment actively prior to the event. And it was cool to show it to the journalists on an iPad and on touch-screen monitors. And I even experienced up close and personal some of the protests and riots in downtown Toronto, which was not cool at all.

Images:
1. Top: Me at the Fake Lake in a moment of pure vanity
2. Middle: Honourable Peter Van Loan, Minister of International Trade of Canada getting a demo from
Tom Jenkins, Open Text's Chairman and Chief Strategy Officer
3. Bottom: Me with an iPad showing Open Text Everywhere accessing the G20 social community

Tuesday, June 22, 2010

Man versus Machine

A recent article in Wired Magazine titled “Clive Thompson on the Cyborg Advantage” described the results of a “freestyle” chess tournament in which teams of players competed with the help of any computerized aid they chose. What was surprising was that the winner was neither the team with a chess grandmaster nor the team with the most powerful supercomputer. Instead, the winning team was the one best able to combine the power of the machine with the human way of thinking.

Years ago, I dipped into the field of Artificial Intelligence (AI), which was the hype of its time. AI failed for a variety of reasons. Perhaps it was way ahead of its time, but perhaps it also attempted to relegate too much decision power to the machines, while human expertise and intuition have always proven superior in the end. And so AI vanished and I moved on to other things, like content management.

The problem AI attempted to solve is more than relevant today. Faced with a staggering over-abundance of information, we are trying to find ways to use computers to help us make sense of all the data. The first step was making the information retrievable via search. But as soon as we had halfway accomplished that task, we came to realize that this is not the solution. Virtually every search query produces too many results, and the poor humans have to employ their expertise and intuition yet again to weed out the millions of hits.

The next step is to employ machines to automatically analyze and classify the content to reduce the volume of information humans have to deal with. But while such analytics and classification technologies have been around for years, they are still in their infancy. Outside specific applications that deal with limited content volume and scope, we don’t trust the machines yet. Usually, the final decision is up to the humans – just think of the e-Discovery reference model where we find all relevant content and then filter it to reduce the manual review cost. The goal today is to cull the volume of data that humans have to deal with. And that might remain the right approach for some time to come.
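To make the culling idea concrete, here is a deliberately minimal sketch – a toy keyword filter of my own invention, not any particular e-Discovery product. The machine applies a crude relevance cut, and only the surviving documents reach the human reviewers, who still make the final call:

```python
def cull(documents, keywords, threshold=1):
    """Toy relevance filter for the culling step: keep only documents
    that mention at least `threshold` of the given keywords, shrinking
    the pile human reviewers must read. The final decision stays human."""
    wanted = {k.lower() for k in keywords}
    kept = []
    for doc in documents:
        words = set(doc.lower().split())  # crude tokenization
        if len(wanted & words) >= threshold:
            kept.append(doc)
    return kept

# A two-keyword query cuts the review pile to the documents that matter:
docs = ["merger contract draft", "lunch menu for friday",
        "notes on the contract dispute"]
print(cull(docs, ["contract", "merger"]))
```

Real classification technology is of course far more sophisticated, but the division of labor is the same: the machine narrows, the human decides.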

The right line of attack might be just like in the freestyle chess match. The solution is to facilitate the best possible interaction between machines and humans. That needs to be reflected in the software architecture and its user interfaces, but perhaps also in the skills required of us humans. In the near future, it might not be the smartest people who are most effective, but rather those best able to take advantage of the machines to augment their decision-making ability.