Sunday, October 30, 2011

Compliance Starts with Explaining Why

I’ve just finished reading a couple of books by Kevin Mitnick, the famous computer hacker and phone phreak who, after serving some time in prison, eventually became a security consultant. In his books, Kevin not only describes how amazingly easy it was to dupe employees at various organizations into willingly granting him access to their systems, but he also provides many suggestions for corporate security policies and measures.

The one thing that becomes obvious from reading Mr. Mitnick’s books is that people will comply with policies much more willingly when those policies are explained. Why is this policy in place? Don’t just mandate a password-protected screen saver to increase your data security level. Explain to employees why they need it. People aren’t dumb. With a proper explanation, they will remember and will be more likely to comply.

Whenever I’m flying, I notice how the air travel experience is filled with seemingly contradictory rules and regulations that come with no explanation. For example, I have to take my laptop out of the bag for a security check while all my other electronics, including the iPad, can stay in the bag. Why? During take-off and landing, I have to turn off all electronic devices even though I can’t really turn off my digital watch nor can I turn off my iPod Nano. Again, there is no explanation provided and I see more and more people simply ignoring the rule altogether.
I can see very similar challenges with enterprise compliance. The HR department makes employees take mandatory training on business ethics, but rarely is there any explanation of why we are taking these courses. The reason is probably not that the HR department suspects us of taking bribes or contracting out work to our relatives. The reason is more likely that by making us take the training, the company reduces its own liability. That’s a good reason, and the employees should be told.

The same thing happens with adding metadata, classifying content, and completing compliance related work steps. We create rules but rarely do we take the time to explain why. What benefit will the organization gain?

The results are often disappointing: poor quality, lack of consistency, or simply complete refusal. Such results become very costly for the organization and practically impossible to remedy after the fact. People don’t follow the rules because they were never really told why they should bother.

Yet the solution is often amazingly simple. Give your employees the rationale behind the rules and most of them will try to do the right thing. You may not get 100% compliance or perfect quality, but you are going to see measurable improvements.

Because good compliance starts with explaining why.

Tuesday, October 25, 2011

Security Makes Things Hard

Consumerization of the enterprise is sweeping the technology world today. Just look at all the unsupported iPads, iPhones, and Macs around your office. Employees are more and more frequently discovering that the cool technology they use at home can be used quite effectively in the office - with or without IT support. Enterprise software is no exception.

Content management often gets compared with the consumer experience. No, I am not talking about the design elements of the user interface. Those are usually based on highly personal and often hard-to-define user preferences. I guess some folks might even like the ribbons in Office... I’m talking about the interaction: the process of creating, collaborating on, sharing, and using content.

Take search, for example. How many times have we heard that enterprise search should be just like Google web search? But trust me, you don’t want it to be the same. On the Web, Google has it easy. It’s finding content that wants to be found. In fact, the content often really, really wants to be found. Some content owners want their content to be found so much that they spend millions of dollars on search engine optimization (SEO) and on Google ads to make sure their content can be found on the World Wide Web.

In the enterprise, nobody is search-engine optimizing their documents to make sure they can be found. Nor can you pay Google to make sure your document called “Corporate Strategy” will be found every time somebody searches for those words. Enterprise search engines have to work much harder to find the relevant content. Only metadata and proper classification can help - and most organizations struggle to handle metadata consistently. But wait, there is another major challenge in the enterprise: security.

All my Facebook friends could see this post...
For security reasons, we often don’t want every user to find every relevant piece of content. In the enterprise, some users are not privy to certain information and thus should not be able to find the documents. In fact, they should not even be able to see the document titles in the search results, as the document names alone could give away too much information. Just imagine your employees finding a document titled “Corporate Restructuring”. To prevent that, the result set has to be filtered by permissions before it is sent back to the requesting application.
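This kind of permission filtering (often called “security trimming”) can be sketched in a few lines. This is a toy illustration with made-up data structures - real enterprise search engines evaluate access control lists against the index itself for performance - but the principle is the same:

```python
# Security trimming: drop search hits the requesting user may not read,
# so that not even a sensitive document title leaks to an unauthorized user.

def search(query, user, index, acl):
    """Return only the hits this user is permitted to see.

    index: toy inverted index mapping a query word to document titles
    acl:   maps a document title to the set of users allowed to read it
    """
    hits = index.get(query, [])
    return [doc for doc in hits if user in acl.get(doc, set())]

index = {"restructuring": ["Corporate Restructuring", "Org Chart 2011"]}
acl = {
    "Corporate Restructuring": {"ceo", "cfo"},
    "Org Chart 2011": {"ceo", "cfo", "employee"},
}

print(search("restructuring", "employee", index, acl))  # the sensitive title is trimmed out
print(search("restructuring", "cfo", index, acl))       # the CFO sees both documents
```

The key point is that the trimming happens on the server side, before the result set ever reaches the requesting application.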

This post on our internal deployment of OpenText Pulse was only visible to a few.
Another example that shows how content in the enterprise is different is social software. When you upload a file or link on Facebook, Facebook announces it to all your friends or followers. In the enterprise, that is, yet again, not acceptable. When I upload a file called “Acquisition Proposal”, only those of my coworker-followers who have permission to see such a document should be alerted about it by the social software. You shouldn’t have to select a predefined group or circle of friends; it has to happen automatically - the software has to validate user permissions before showing the alert to anyone.
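The same trimming idea applies to the activity stream, except here the permission check runs per follower before the alert is fanned out. A minimal sketch, assuming the same kind of made-up ACL structure as before:

```python
# Permission-aware fan-out: alert only those followers who are
# actually allowed to see the uploaded document.

def notify_followers(doc, uploader, followers, acl):
    """Return alert messages only for followers permitted to read the document."""
    allowed = acl.get(doc, set())
    return [f"{follower}: {uploader} uploaded '{doc}'"
            for follower in followers
            if follower in allowed]

followers = ["alice", "bob", "carol"]
acl = {"Acquisition Proposal": {"alice", "carol"}}

print(notify_followers("Acquisition Proposal", "me", followers, acl))
# bob, who lacks permission, receives no alert at all
```

No manual group selection is involved; the filter is derived entirely from the document’s permissions, which is exactly what makes this automatic.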

Enterprise software is different. Security makes it much more difficult to expose the right information to the right people, which is critical in the enterprise. Actually, I’d argue that security is quite important in the consumer space too, and I hope that the technologies related to security and privacy make the jump from the enterprise to the consumer world. Consumerization of the enterprise needs a bit of ‘enterprization of the consumer world’.

Monday, October 17, 2011

Voice Recognition Is About to Re-Wire Our Brains

Voice-based data input to a computer is not a new idea. While the keyboard, mouse, and more recently, gestures, have been the primary ways of interacting with computers, the idea of voice-based interaction is as old as HAL 9000, the talking computer from 2001: A Space Odyssey. Software such as Dragon NaturallySpeaking (owned since 2005 by Nuance) or IBM’s ViaVoice has been around for almost two decades, and let’s not forget the often infuriating Interactive Voice Response (IVR) systems used by most telephone support departments today.

Voice recognition software stole the spotlight last week when Apple released its new iPhone 4S with built-in Siri software. Embedding voice recognition directly into the operating system is a major milestone, and having it included in a mobile device makes perfect sense, as we can see from Apple’s video commercial. If anything, the TV set is the one device that needs voice control even more - I am still waiting for the kind of interaction Marty McFly (Michael J. Fox) used in Back to the Future Part II. In fact, Siri for Apple TV is rumored to be on the way, and Microsoft recently demonstrated voice-based movie search on the Xbox 360.



So, how come we have not been talking to our computers for the last decade, since the technology was there? Well, part of it was the accuracy of the recognition. When I used NaturallySpeaking back in the ’90s, I had to train the software to understand me, which was a lot of work for meager results. We all know the frustration with any IVR-based system: “Sorry, I didn’t quite catch that. Could you please try again?”. And while Siri represents the next generation of voice recognition, plenty of stories about the funny results it can produce circulated on the Web immediately after the new iPhone was released.
With increased computing power and better software algorithms, quality is becoming less of an issue. One day, the software might even understand dialects or foreign accents like mine. But I suspect that’s only part of the adoption challenge. The other part lies in our ability to express our thoughts verbally to a computer. Most of our verbal communication is not very straightforward, and we even enjoy taking our time before coming to the point. Communication that has to be clear and precise - military orders, radio protocol, business negotiations - is only possible after many hours of training. Naturally, people don’t speak that way.

However, just some 30 years ago, typical managers didn’t have computers on their desks. They would spend several hours each day responding to correspondence by dictating letters onto a tape, which their secretaries would later transcribe on a typewriter, later on a word processor, and eventually on a PC running WordPerfect. Another 10 years before that, the dictation was done in real time, and the secretary had to know shorthand to keep up. It took years before the PC made it to the manager’s desk. What amazes me today is that those managers were able to dictate complete letters in full, well-articulated sentences.

For most of us, that’s not so easy anymore. Today, we have a generation of PC users spoiled by the editing power at our fingertips. Most of us knowledge workers formulate our sentences as we write them, and since it is so easy to rephrase any sentence or start over from the beginning, we do it all the time. I’ve observed many people doing this, and I know that I am not alone. Most humans, even professional writers, would have a difficult time dictating in complete sentences. Giving commands to the computer, such as search requests, is one thing, but authoring text via voice recognition requires a skill set that is underdeveloped in most of us today. We know from the past that we humans are capable of such skills, but the last 30 years of the PC revolution have re-wired our brains differently.

Now Siri and other voice recognition software may be starting a new era. An era where we can – and perhaps must - express ourselves verbally in a new way. Let’s see how it goes. [computer, strike last sentence] Ehm…

By the way, when is Siri going to be available on iPhone 4?

Sunday, October 9, 2011

The Courage to Lead

What else could be the topic of my blog post this week other than paying tribute to Steve Jobs? Writers everywhere have produced countless obituaries this week about this great man, his life, and his work. Today, I want to write about a particular aspect of Apple’s strategy - Steve Jobs’ strategy - that has really impressed me over the years.

It is the ability to pursue the future by letting go of the past.

When a new technology arrives that is capable of replacing an old one, the typical approach for a technology company is to hedge its bets. Start embracing the new while continuing to support the old. You don’t want to disrupt anyone, you don’t want to leave anyone behind, you want to smoothly transition from one technology to another. That means that for several years and revisions, your product comes with duplicate, redundant technologies to make this smooth transition possible.

Image: Jonathan Mak
For example, many PCs today still ship with a built-in 56k modem even though hardly anyone uses dial-up to access the Internet anymore. But you have to support the modem in case some grandma in Minnesota still doesn’t have DSL. After all, she might select someone else’s make of PC and that would be bad, right?

That’s not how Apple operates. That’s not how Steve Jobs pursued the future. In his world, when a new technology comes around that is better than the old one, you just go for it. You want to speed up the transition. You want to drag everybody with you, even that grandma in Minnesota. A leader has to lead and Steve Jobs never hesitated to do so.

When the first Macintosh came on the market in 1984, it had a graphical user interface (GUI) instead of the then-usual command-line interface. There was no command line on the Mac - everything was done through the GUI. Windows 7, in contrast, still has a command-line interface available, just in case you feel like typing “C:\> ipconfig /renew "Local Area Connection 2"”. OK, the "cmd" program is a bit more hidden now than it used to be, but Windows has opted for a long, smooth transition from DOS. Apple just went for it.

The Mac also came equipped with the then relatively new 3½-inch diskette drive instead of the much more common 5¼-inch drive. The 3½-inch diskettes were far superior to the 5¼-inch “floppy disks”, but PCs would keep shipping for another decade with dual drives for both formats. Again, most PC makers opted for a smooth transition while Apple just went for it.

Shortly after that, Steve Jobs was famously ousted from Apple, and not many bold moves happened until he came back. In the meantime, he became a billionaire by taking Pixar public and eventually selling it to Disney. He also sold NeXT to Apple, and in 1997, he was back at the helm.

When the iMac shipped in 1998, it came without a diskette drive. No diskettes - only a CD-ROM drive (later a DVD drive) and a USB port. That was bold and controversial back then. How were people supposed to exchange files without diskettes? Using the network or flash memory wasn’t the way people usually did it back in 1998. For many years after that, PCs continued to come with a diskette drive, and most people had a box of diskettes next to their PC (if you are over 30, you still have that box somewhere in the attic - just admit it).

In 2001, the iPod was launched with some amazingly bold limitations. It would only play files in the MP3 format (and in the Apple Lossless format, which I am a big fan of). Remember, back in 2001, there was a plethora of audio formats used to play music, including Microsoft’s WAV, the RealAudio format (.RA), Sun’s AU format, and others. But Apple said, forget it, we go with MP3 - the format popularized by Napster - and we all followed. Most of the other formats are disappearing today.

The other famous format bet that Apple made is the bet against Flash on mobile devices. We are still not quite sure how this one will end up, but history shows that Apple usually gets its way.

Talking about mobile devices, I have to mention the iPad. When it first launched in 2010, it drew plenty of skepticism for coming with only a wireless Internet connection. No diskettes, no CDs, no DVDs, no USB port...heck, not even an SD card slot. Many of us are still moaning that we want at least an SD card slot, but we are happily buying our iPads anyway.

My final example is the Apple TV 2. When it was released in September 2010, Apple decided that the old way of hoarding content in your home library was no longer sustainable, as HD movies are too big. And so they shipped the new Apple TV without a drive for storing movies. Instead, renting is the way to go - the only way to go. Last week, Apple announced its iCloud service, and so I suspect that a cloud-only music player might be coming soon and we will stop hoarding music too.

From a vendor’s point of view, all these moves were incredibly courageous; for most vendors, they would be considered huge gambles, well beyond the comfort level. Yet they all follow the principle that Apple and Steve Jobs embodied for decades - decide what’s best for your customers and have the courage to deliver it. Lead, don’t ask for directions! Even if it takes courage to lead.

To pursue the future, it is good to let go of the past.

Stay foolish.

Monday, October 3, 2011

High Fidelity Pictures

Today, I will discuss some exciting innovations in the world of photography. My interest was awakened by a recent article in The Economist titled Cameras Get Clever, which described some of the leading-edge developments in the field. Among them was a technology called high dynamic range (HDR), which can enhance your everyday pictures by overlaying three separate shots and using the processing power of the camera to take the best exposure from each of them.

Basically, a normal shot will have areas that are either too dark or too bright because the camera has to set the exposure based on a compromise across the entire image or a selected area. With HDR, the camera takes three separate shots within fractions of a second - one exposed for the highlights, one for the shadows, and one for the midtones. Then, the camera overlays these three pictures using algorithms that combine the best-exposed areas from all three shots. The result is images with a much better balance of tones and colors than a single shot based on a compromise. Or is it?
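The merging step can be sketched crudely, assuming the three bracketed shots are already aligned. This toy version blends per pixel, weighting each exposure by how close its pixel is to mid-gray - a drastic simplification of the exposure-fusion algorithms real cameras use, but it shows the idea of taking the best-exposed data from each shot:

```python
# Toy exposure fusion: blend three bracketed shots per pixel,
# favoring whichever exposure is closest to mid-gray (i.e. well exposed).

def fuse(shots):
    """shots: list of images; each image is a list of pixel values 0..255."""
    fused = []
    for pixels in zip(*shots):  # the same pixel position across all shots
        # weight = closeness to mid-gray; crushed shadows and blown
        # highlights get small weights, well-exposed pixels get large ones
        weights = [256 - abs(p - 128) for p in pixels]
        total = sum(weights)
        fused.append(round(sum(p * w for p, w in zip(pixels, weights)) / total))
    return fused

under  = [10, 40, 90]     # exposed for the highlights: shadows crushed
middle = [60, 128, 200]   # compromise exposure
over   = [150, 220, 250]  # exposed for the shadows: highlights blown

print(fuse([under, middle, over]))  # a balanced result, pixel by pixel
```

Real implementations also weight for contrast and color saturation and work on aligned full-resolution frames, which is why the results can look so different from any single exposure.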

Well, it was easy to test, since the standard camera app built into the iPhone has an HDR feature today. Not many people know it, but there is a button at the top center of the screen that lets you turn on HDR, which results in two pictures for each shot. One is a “normal” picture based on the compromise exposure, while the other one is improved by HDR. Here is my test, using our dog as a model:

This is a 'normal' picture with a compromise-based exposure
This is the same picture with a high dynamic range (HDR)
Now, which shot is better? The HDR shot clearly has a better balance. The dog’s face is not just black and white; it has some tonal depth (a range of shades). That comes at the cost of the contrast and color richness we can see in the normal picture. The problem is that on a bright sunny day, the colors really were very bright and the contrast was very strong. And the dog is black and white, not gray. I checked.

That leads me to my point. With all the power of modern cameras and post-processing software such as Photoshop, Aperture, iPhoto, or Picasa, to name just a few, who is to decide that the picture I take should have more depth in the shadows or the highlights? Why are the colors all wrong in artificial light? Why are shots on a beach often overexposed? I understand all the technical issues behind it, but what I want is a picture that looks exactly the way reality did.

Back in the late ’60s, the electronics industry came up with the notion of high fidelity (hi-fi), which meant that the music you heard from your record, tape, radio, or CD was supposed to sound exactly as it did in the studio. What I want is hi-fi for pictures. I want the assurance that my picture will look the way I saw the real world in that very moment.

Sure, there will be artists - and consumers - who will want to distort reality for special effect, just like there are artists who feel they have to add a dramatic sky to every picture. That is the right of any author, and it should always remain that way. But 99% of all pictures taken are not meant to be art. They are meant to visually capture reality. The real reality. My tiny little (yet incredibly powerful) camera is full of features that can alter the picture - from color accent to a fish-eye effect. But there is no button for “authentic picture”.

Go ahead and test the HDR functionality on your iPhone. Features like HDR are important because they push the boundaries. Some of the innovations in the world of photography are just incredible, and I can’t wait to use them. When I read the article mentioned above, I got very excited about Lytro and other technologies. Their impact could be amazing - just like the change that digital cameras brought upon us when they replaced film.

As for my test above, I prefer the “normal” picture because it looks more like the actual scene I remember.

Wednesday, September 28, 2011

Kindle Fire - The Price Is Right

Today, Amazon announced the long-awaited Kindle Fire, a new tablet based on the Android mobile operating system. While the announcement was expected, the aggressive price caught many by surprise. At $199, it is the most aggressively priced tablet on the market. But is that the right price? Let’s take a closer look!

The $199 Kindle Fire could be a game changer
The entire tablet market should be grateful to HP for having recently conducted the largest price-elasticity-of-demand (PED) test ever. If you remember, HP launched its TouchPad at a base price of $499.99. HP reportedly manufactured 270,000 TouchPads but, after several months of trying, had sold less than 10% of the inventory (about 25,000 units). After pulling the plug on the device, HP sold the remaining inventory within hours at $99 a piece.

Being a marketer (and a techno geek), I actually tried to compute the price elasticity of demand for the TouchPad, coming up - as expected - with a large negative number. I got -549, but it’s been many years since business school (note: yes, Professor, I have simplified my case by ignoring substitutes, the necessity factor, purchasing power, brand loyalty, blah, blah, blah...it doesn’t matter in this case). A number that far from zero means that demand is highly elastic, which in turn means that buyers respond strongly to price changes. Duh. It also means that to optimize revenue (or better, to grab as much market share as possible), you have to price the product near its marginal cost. Again, not a surprise - if you make something and want to sell as much of it as possible, you keep the price as close as possible to the cost of making it. Duh.
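For the curious, here is how a back-of-the-envelope version of that calculation might look with the TouchPad numbers above. This sketch uses the simple percentage-change formula (not the midpoint formula, and ignoring everything waved away above); with these inputs it gives roughly -11 rather than -549, so my original figure must have rested on different assumptions - but either way, the conclusion of highly elastic demand stands:

```python
# Back-of-the-envelope price elasticity of demand (PED) for the TouchPad,
# using the simple percentage-change formula: (dQ/Q1) / (dP/P1).

def price_elasticity(q1, q2, p1, p2):
    pct_change_quantity = (q2 - q1) / q1
    pct_change_price = (p2 - p1) / p1
    return pct_change_quantity / pct_change_price

# Numbers from the post: ~25,000 units sold at $499.99, and the
# remaining ~245,000 units cleared within hours at $99.
ped = price_elasticity(q1=25_000, q2=245_000, p1=499.99, p2=99)
print(round(ped, 1))  # |PED| >> 1, i.e. highly elastic demand
```

Any |PED| greater than 1 counts as elastic; a value around -11 means a 1% price cut drives roughly an 11% jump in quantity demanded, which is why pricing near marginal cost maximizes units sold.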

The cost of making the iPad is approximately $229, and we have to assume that Amazon is in the same range. Amazon can probably sell similar volumes as Apple and thus has similar negotiating power with its suppliers. The Kindle Fire might be a tad cheaper to manufacture given its smaller size and less memory. That suggests that at $199, Amazon is pricing the device at or just below cost - exactly as my little price-elasticity exercise suggests it should.

BTW, if you dismiss my simple price-elasticity calculation and instead want to believe the efficient-markets theory, you should remember that the discounted TouchPad was selling on eBay for $250. That seems to be the optimal price point at which demand and supply clear. Even if Amazon is losing $50 on each Kindle, as Piper Jaffray analyst Gene Munster suggests, it doesn’t matter. Amazon is not looking for any margin contribution from the Kindle Fire (or any Kindle). Amazon is looking for market share and units sold - eyeballs. The margin comes from the content that Amazon can sell to the people who own its devices.

This is also the reason why the other Android vendors such as Samsung, Acer, HTC, LG, Dell, etc. cannot play this game. Even if they can match the manufacturing cost, which I doubt since their volumes are nowhere near the millions of iPads Apple has sold or the millions of Kindle Fires that Amazon is likely to sell, they still need the devices to generate a positive margin contribution. They have no content to sell to offset that. Apple, on the other hand, is sitting in the perfect spot today, getting margin contribution from both the content and the devices. Beat that, … [everybody else]!

The content is critical. The Nook by Barnes & Noble is a comparable reader to the Kindle Fire, and it has been on the market for several months now, priced at $250. But since B&N doesn’t have anywhere near the reach of Amazon’s online store, the device hasn’t made a dent in market share. Amazon’s new Kindle Fire is no slam dunk, but given the success of the original Kindle, we have to assume that Amazon is now a serious player - particularly given the aggressive price of the device and an entire content ecosystem under Amazon’s control, anything from e-books and magazines to music and movies.

And so what will happen next? The Amazon Kindle Fire is likely going to grab a significant share of the tablet market. Apple may experience some pricing pressure but will still have the benefit of the Apple brand and user experience - the same brand that permits Apple to charge a premium for its iMac and MacBook computers. While prices might come down a little, I wouldn’t expect any $199 iPads anytime soon.

All other manufacturers are in trouble. Microsoft must quickly deliver a tablet at all costs, because with the Kindle Fire there is another runaway tablet on the market using zero code from Microsoft. All the Android vendors, except for Motorola, which is now engaged to Google, have even more reason to reconsider their Android bet. They are going to be forced into a price war with Amazon, which they can’t win given Amazon’s scale and pricing power. Switching to the Microsoft camp might be the only sustainable move left for them. But the longer Microsoft waits, the less market share will be left to grab. As for RIM, the PlayBook looks like an official failure now, and their options are shrinking fast...

Sunday, September 25, 2011

Customizations - Heaven or Hell?

There are many traits that make enterprise software different from consumer software or even software packages used by small organizations. Scalability, security, and the ability to integrate with other software usually come to mind. But none of them are as polarizing as the ability to customize enterprise software deployments.


The idea is pretty simple. As organizations compete with each other, they want to tailor the deployed solutions to match their business processes and other organization-specific needs. Enterprise software vendors usually design their software in a way that allows for a significant amount of customization with technologies such as modular architecture, web services, application programming interface (API) and software development kits (SDK).

You might think that all of this is going away in the new world where software is delivered as a service (SaaS). That is certainly true for SaaS software that targets small and medium-sized businesses or simple generic applications. But if we consider the leader in SaaS - Salesforce.com - a sign of things to come, we must realize that most Salesforce deployments today are heavily customized.

Customizations are important. In fact, a big part of the appeal of open source software is the ability to customize it significantly - even to rewrite entire functionality modules, given that the developers have the actual source code. Of course, customizations matter just as much in the world of commercial ‘closed source’ software.

Customizations, however, come at a price. Not only does a typical enterprise deployment require an investment in professional services that often comes at a multiple of the cost of the software licenses, but customizations also carry a significant hidden cost.

Every time the software goes through an upgrade cycle, the customizations have to be upgraded as well. There is no easy way around it, even if the vendor provides tools to make the work easier. Those are your customizations; they are one-off software, and nobody but you can upgrade them. Often the work to migrate the customizations can be significant. If the customizations contain significant amounts of original code, migrating them may be akin to a complete rewrite.

Consequently, customers tend to struggle to keep up with vendors who are trying to maintain the pace of innovation. It is important that customers be given that option - the option not to keep up. The ability to skip a version is becoming a critical requirement for enterprise software. Many vendors handle it by providing the notion of safe-harbor releases: releases from which you can move to the next level at your own pace.

In the end, there is no magical solution. Customers shouldn’t avoid customization, because they do need the competitive advantage that highly customized software can provide. In many industries, such as insurance or financial services, there is very little differentiation possible on the product side. Car insurance is just car insurance, and a mortgage is just a mortgage. Only the customer experience and process efficiency can differentiate competitors. Those differentiators require customized software. But customers have to think beyond the customizations of the current release. The ease of migrating customizations is one of the key issues overlooked in many vendors’ slick demos.

Sunday, September 18, 2011

Consumerization of BPM

Business process management (BPM) is a high-growth market that delivers significant return on investment to customers who deploy these solutions to improve their efficiency. Still, many people think that BPM is just a boring back-end technology. I know that hipness lies in the eye of the beholder, but I’d argue that social media or gamification get more attention than back-end technologies such as data warehousing, archiving, or BPM.

There are several new trends in BPM that make it just as exciting as Jive Software, except much more profitable. One of the trends is social BPM, which employs social networking capabilities to allow for better decision-making in business processes. Long gone is the era of business processes that attempted to cover every single permutation of possible conditions to route a task along a predetermined path. The typical result was too many exceptions, and, in the end, the majority of decisions are best made by humans. Social software can quickly help identify and bring together the right experts to make a decision.

Another important trend in BPM is mobility. Even as mobile devices become the primary user experience, not everything we do in the office has the same appeal for a mobile user. Reviewing or editing documents works well on a tablet but becomes pretty tedious on a smartphone. Interacting with a business process on a smartphone, however, makes a lot of sense, and it is exactly here that many customers realize the greatest benefits from mobility. Too many process steps used to sit idle, waiting for the user to get back to the office, since the email-based alert didn’t provide enough context to make a decision. By taking BPM mobile, process apps can be tailored to let users handle any process task effectively.
Even submitting travel expenses can be pretty cool
Finally, BPM is also moving to the cloud. Besides the obvious appeal of shortening deployment cycles by hosting the BPM software as a service, BPM can also benefit from making its functionality easily available to users. In any business process, it can happen that a particular expert needed for a specific task cannot participate because he or she doesn’t have access to the system. The cloud-based approach makes BPM easily accessible. A good example of a cloud-based activity is collaborative process design, which frequently requires many stakeholders to participate, even if some of them will not be involved in the actual process execution.

Clearly, the consumerization of the enterprise has reached BPM, just like so many other disciplines of enterprise software. And who says that BPM is just a bunch of back-office technology? With the latest trends - social BPM, mobility, and the cloud - BPM is becoming rather hip. And it continues to provide very compelling benefits such as higher efficiency, lower cost, and better quality.

Saturday, September 10, 2011

HTML5 vs Native Apps

The mobile world is split today between mobile apps and the mobile web. With some 400,000 apps available on the iTunes App Store alone, it appears that native mobile apps are the preferred user experience on mobile devices. It might not, however, stay that way once HTML5 becomes widely adopted. Or will it?

The HTML5 standard - developed jointly by the World Wide Web Consortium (W3C) and the Web Hypertext Application Technology Working Group (WHATWG) - is currently being heralded as the panacea for all problems, from the current limitations of browser-based applications to the support of mobile devices. Many developers - and by that I mean software companies of the likes of Microsoft - use HTML5 as the magical answer to every question related to mobility. “What’s your platform strategy for your mobile apps?” “HTML5!”

Is it really going to be that easy?

HTML5-based LinkedIn
HTML5 promises many improvements, including better video handling, semantic elements, and offline data support - all capable of greatly advancing web applications. From the user experience standpoint, it allows building sophisticated and appealing web applications - just check out the recently released mobile apps from LinkedIn or Vudu.

One of the greatest challenges in mobile application development is the great variety of operating systems and device types a developer has to support. Even though the list got shorter with the demise of Symbian and webOS, developers need to consider more than just the top two leaders, iOS and Android, with BlackBerry and Windows Phone 7 duking it out for third place.

Besides the fact that even iOS and Android come in different flavors that may force developers to build multiple versions of their apps (e.g. iPhone vs. iPad, and the already numerous Android derivatives), the better apps also want to take full advantage of the hardware capabilities of the device. The obvious differences are screen size and resolution. Going forward, however, device manufacturers will increasingly compete with each other by adding many more device capabilities.

Already today, some devices have a front-facing camera, rear-facing camera, keyboard, accelerometer, GPS chip, USB port, memory card slot, and Bluetooth. In the near future, we can expect many more capabilities, including near-field communication (NFC), proximity sensors, biometric sensors, temperature sensors, and distance meters - surely important for all golfers, right? Successful apps will have to take advantage of these capabilities, and since every device is different, they will need to exist in multiple versions.
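The usual way web apps cope with this kind of hardware fragmentation is runtime feature detection: probe the browser’s `navigator` object for each capability before relying on it. A minimal sketch, assuming a `navigator`-like object; the `detectCapabilities` helper and the mock object are illustrative, not a standard API (a plain object stands in for the real browser `navigator` so the example runs anywhere):

```javascript
// Hypothetical helper: probe a navigator-like object for device
// capabilities before deciding which code path to take.
function detectCapabilities(nav) {
  return {
    // Geolocation is exposed as an object when the device has a GPS/location service
    geolocation: typeof nav.geolocation === "object" && nav.geolocation !== null,
    // Camera access via the (pre-standard, circa 2011) getUserMedia function
    camera: typeof nav.getUserMedia === "function",
    // Touch support, reported as the number of simultaneous touch points
    touch: "maxTouchPoints" in nav && nav.maxTouchPoints > 0,
  };
}

// Mock navigator as it might look on a GPS-equipped touch phone
// without a camera API - stands in for window.navigator here.
const mockNavigator = { geolocation: {}, maxTouchPoints: 5 };
const caps = detectCapabilities(mockNavigator);
console.log(caps); // { geolocation: true, camera: false, touch: true }
```

Detection like this lets a single web app degrade gracefully, but it only covers capabilities the browser chooses to expose - which is exactly the limitation discussed below.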

Could HTML5 add support for such hardware features? Probably not. It is unlikely that the HTML5 standard could evolve to cover all such device-specific innovations, and even if it did, it wouldn’t be able to keep up with the pace of innovation. Yes, there will be successful apps and services that make it big without using device-specific hardware capabilities. But many apps will want to take advantage of everything the device has to offer, and such apps will need to exist in multiple versions. For the foreseeable future, native apps will be the only way to build them.

Monday, September 5, 2011

The Future of Interactive TV Has Arrived

Back in January, I decided to say ‘adios’ to my cable TV provider and “cut the cord”. As I have written before, being a cord cutter is not that difficult, as most movie and TV show programming is readily available from alternate sources such as iTunes, Boxee, Hulu, or the TV networks’ own sites. Most, except for live sports.

I can live without live sports to a certain degree, but that doesn’t stop me from looking for non-cable solutions. In particular, I like watching the Grand Slam tennis tournaments, and so I have been pleased to see online coverage on the rise this year. The current live streaming coverage of the US Open, however, beats cable hands down.

Internet streaming on the TV screen (photo of my TV set)

The US Open video streaming is offered free of charge, supported by relatively unobtrusive advertising, and delivered by IBM which is not a big surprise since IBM has been sponsoring major sports events this way for years. However, the capabilities of the video stream, which IBM calls the US Open PointStream, go beyond anything we’ve seen in the past.

You choose which match to watch (screenshot)
For example, PointStream streams all matches live and you can interactively select which match you want to watch. Your regular cable TV makes that selection for you, and you are not always going to like it, since you will inevitably miss certain matches. On PointStream, you are in charge of deciding which match to watch. In fact, if I want to watch two matches at a time, a picture-in-picture feature is available.

Picture-in-picture feature (screenshot)
PointStream goes far beyond that, though. With a mouse click, you can see the score results from all courts where play is under way. You can also see real time statistics and analysis for the match you are watching. IBM is effectively combining the video stream with a business intelligence type of analytics.

Real-time match analysis whenever you want (screenshot)
You can pause the play or rewind it - features we all love from TiVo. You can also engage in an online chat with other viewers. This particular feature would have been much more useful, though, if it were actually streaming the Twitter conversation around the #uso11 hashtag.


Real-time match statistics a mouse click away (screenshot)
The bad part is that, living in Canada, I had to VPN into the US to trick the system into believing I am a US-based viewer. That’s doable, and I gladly pay for the VPN service, but these artificial borders are infuriating. To watch the action on my big-screen TV, I had to connect my iMac to the TV set via a 20-foot HDMI cable. That works fine, although it is not a very elegant solution. I’d rather have the live TV stream delivered directly via my Apple TV. I hope Apple’s new CEO Tim Cook is already working on it.

Who says you can't watch live sports without cable? (photo)
All in all, I am very impressed by what is possible today. I have an average-speed Internet connection at home (8.1 Mbps according to speedtest.net), which gets further degraded by the VPN service (6.72 Mbps), and yet I am able to watch the action just like on TV. And with the programming choices and features available, the experience is way better than traditional TV. Yes, the future of interactive television has arrived. And I have a message for you, the cable and satellite TV companies: interactive streaming is what I want. Evolve or die!

PS: Rafael Nadal won the match.