Sunday, October 30, 2011

Compliance Starts with Explaining Why

I’ve just finished reading a couple of books by Kevin Mitnick, the famous computer hacker and phone phreak who, after serving time in prison, eventually became a security consultant. In his books, Mitnick not only describes how amazingly easy it was to dupe employees at various organizations into willingly granting him access to their systems, but he also offers many suggestions for corporate security policies and measures.

The one thing that becomes obvious from reading Mr. Mitnick’s books is that people comply with policies much more willingly when the policies are explained. Why is this policy in place? Don’t just mandate a password-protected screen saver to increase your data security level; explain to employees why they need it. People aren’t dumb. With a proper explanation, they will remember the policy and be more likely to comply.

Whenever I’m flying, I notice how the air travel experience is filled with seemingly contradictory rules and regulations that come with no explanation. For example, I have to take my laptop out of the bag for the security check while all my other electronics, including the iPad, can stay in the bag. Why? During take-off and landing, I have to turn off all electronic devices, even though I can’t really turn off my digital watch or my iPod Nano. Again, no explanation is provided, and I see more and more people simply ignoring the rule altogether.

I can see very similar challenges with enterprise compliance. The HR department makes employees take mandatory training on business ethics, but rarely is there any explanation of why we are taking these courses. The reason is probably not that the HR department suspects us of taking bribes or contracting out work to our relatives. The reason is more likely that by making us take the training, the company reduces its own liability. That’s a good reason, and employees should be told.

The same thing happens with adding metadata, classifying content, and completing compliance-related work steps. We create rules, but rarely do we take the time to explain why. What benefit will the organization gain?

The results are often disappointing: poor quality, lack of consistency, or outright refusal. Such results become very costly for the organization and practically impossible to remedy after the fact. People don’t follow the rules because they were never really told why they should bother.

Yet the solution is often amazingly simple. Give your employees the rationale behind the rules and most of them will try to do the right thing. You may not get 100% compliance or perfect quality, but you will see measurable improvements.

Because good compliance starts with explaining why.

Tuesday, October 25, 2011

Security Makes Things Hard

Consumerization of the enterprise is sweeping the technology world today. Just look at all the unsupported iPads, iPhones, and Macs around your office. Employees are discovering more and more frequently that the cool technology they use at home can be used quite effectively in the office - with or without IT support. Enterprise software is no exception.

Content management often gets compared with the consumer experience. No, I am not talking about the design elements of the user interface; those are usually based on highly personal and often hard-to-define user preferences. I guess some folks might even like the ribbons in Office... I’m talking about the interaction: the process of creating, collaborating on, sharing, and using content.

Take search, for example. How many times have we heard that enterprise search should be just like Google web search? But trust me, you don’t want it to be the same. On the Web, Google has it easy: it’s finding content that wants to be found. In fact, the content often really, really wants to be found. Some content owners want their content found so badly that they spend millions of dollars on search engine optimization (SEO) and on Google ads to make sure it can be found on the World Wide Web.

In the enterprise, nobody is search-engine optimizing their documents to make sure they can be found. Nor can you pay Google to make sure your document called “Corporate Strategy” will be found every time somebody searches for those words. Enterprise search engines have to work much harder to find the relevant content. Only metadata and proper classification can help - and most organizations struggle to handle metadata consistently. But wait, there is another major challenge in the enterprise: security.

All my Facebook friends could see this post...
For security reasons, we often don’t want every user to find every piece of relevant content. In the enterprise, some users are not privy to certain information, and thus they should not be able to find those documents. In fact, they should not even see the document titles in the search results, as the names alone could give away too much information. Just imagine your employees finding a document titled “Corporate Restructuring”. To prevent that, the result set has to be filtered by permissions before it is sent back to the requesting application.
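
In code, that permission-trimming step might look something like the minimal Python sketch below; the index and ACL-store interfaces are hypothetical stand-ins for illustration, not any particular product’s API.

    def secure_search(query, user, index, acl_store):
        """Return only the search hits this user is cleared to see."""
        # Run the query first, then trim the raw result set against each
        # document's access control list (ACL). Nothing - not even a title -
        # leaves the search layer unless the user may read the document.
        raw_hits = index.query(query)  # unfiltered, relevance-ranked hits
        return [hit for hit in raw_hits
                if user in acl_store.readers(hit.doc_id)]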

This post on our internal deployment of OpenText Pulse was only visible to a few.
Another example that shows how content in the enterprise is different is social software. When you upload a file or link to Facebook, Facebook announces it to all your friends or followers. In the enterprise, that is, yet again, not acceptable. When I upload a file called “Acquisition Proposal”, only those of my coworker-followers who have permission to see such a document should be alerted about it by the social software. And I must not have to select a predefined group or circle of friends; it has to happen automatically - the software has to validate user permissions before showing the alert to anyone.
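
The alerting logic can be sketched the same way; again, the ACL store and notifier here are hypothetical interfaces used only for illustration.

    def announce_upload(document, followers, acl_store, notifier):
        """Fan out an upload alert, but only to cleared followers."""
        # Unlike a consumer network, do not broadcast to every follower.
        # The permission check happens automatically - no manual selection
        # of a group or circle of friends is involved.
        for follower in followers:
            if follower in acl_store.readers(document.doc_id):
                notifier.alert(follower, "New document: " + document.title)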

Enterprise software is different. Security makes it much more difficult to expose the right information to the right people, which is critical in the enterprise. Actually, I’d argue that security is quite important in the consumer space too, and I hope the technologies related to security and privacy make the jump from the enterprise to the consumer world. Consumerization of the enterprise needs a bit of ‘enterprization’ of the consumer world.

Monday, October 17, 2011

Voice Recognition Is About to Re-Wire Our Brains

Voice-based data input to a computer is not a new idea. While the keyboard, the mouse, and, more recently, gestures have been the primary ways of interacting with computers, the idea of voice-based interaction is as old as HAL 9000, the talking computer from 2001: A Space Odyssey. Software such as Dragon NaturallySpeaking (owned by Nuance since 2005) or IBM’s ViaVoice has been around for almost two decades, and let’s not forget the often infuriating Interactive Voice Response (IVR) systems used by most telephone support departments today.

Voice recognition software stole the spotlight last week when Apple released its new iPhone 4S with built-in Siri software. Embedding voice recognition directly into the operating system is a major milestone, and having it included in a mobile device makes perfect sense, as we can see from Apple’s video commercial. If anything, the TV set needs voice control even more – I am still waiting for the kind of interaction Marty McFly (Michael J. Fox) used in Back to the Future Part II. In fact, Siri for Apple TV is rumored to be on the way, and Microsoft recently demonstrated voice-based movie search on the Xbox 360.

So, how come we have not been talking to our computers for the last decade, since the technology was there? Well, part of it was the accuracy of the recognition. When I used NaturallySpeaking back in the ’90s, I had to train the software to understand me, which was a lot of work for meager results. We all know the frustration with any IVR-based system: “Sorry, I didn’t quite catch that. Could you please try again?” And while Siri represents the next generation of voice recognition, plenty of stories about its amusing misfires circulated on the Web immediately after the new iPhone was released.

With increased computing power and better software algorithms, quality is becoming less of an issue. One day, the software might even understand dialects and foreign accents like mine. But I suspect that’s only part of the adoption challenge. The other part lies in our ability to express our thoughts verbally to a computer. Most of our verbal communication is not very straightforward, and we even enjoy taking our time before coming to the point. In settings where communication has to be clear and precise, such as military orders, radio protocol, or business negotiations, that precision comes only after many hours of training. People don’t naturally speak that way.

However, just some 30 years ago, typical managers didn’t have computers on their desks. They would spend several hours each day responding to correspondence by dictating letters onto tape, which their secretaries would later transcribe on a typewriter, then on a word processor, and eventually on a PC running WordPerfect. Ten years before that, dictation was done in real time, and the secretary had to know shorthand to keep up. It took years before the PC made it to the manager’s desk. What amazes me today is that those managers were able to dictate complete letters in full, well-articulated sentences.

For most of us, that’s not so easy anymore. Today, we have a generation of PC users spoiled by the editing power at our fingertips. Most of us knowledge workers formulate our sentences as we write them, and since it is so easy to rephrase any sentence or start over from the beginning, we do it all the time. I’ve observed many people doing this, and I know I am not alone. Most humans, even professional writers, would have a difficult time dictating in complete sentences. Giving commands to the computer, such as search requests, is one thing; authoring text via voice recognition requires a new skill set that is underdeveloped in most of us today. We know from the past that we humans are capable of such skills, but the last 30 years of the PC revolution have re-wired our brains differently.

Now Siri and other voice recognition software may be starting a new era. An era where we can – and perhaps must - express ourselves verbally in a new way. Let’s see how it goes. [computer, strike last sentence] Ehm…

By the way, when is Siri going to be available on iPhone 4?

Sunday, October 9, 2011

The Courage to Lead

What else could be the topic of my blog post this week other than paying tribute to Steve Jobs? Countless obituaries have been written this week about this great man, his life, and his work. Today, I want to write about a particular aspect of Apple’s strategy - Steve Jobs’ strategy - that has really impressed me over the years.

It is the ability to pursue the future by letting go of the past.

When a new technology arrives that is capable of replacing an old one, the typical approach for a technology company is to hedge its bets: start embracing the new while continuing to support the old. You don’t want to disrupt anyone, you don’t want to leave anyone behind, you want a smooth transition from one technology to another. That means that for several years and revisions, your product carries redundant technologies to make this smooth transition possible.

Image: Jonathan Mak
For example, many PCs today still ship with a built-in 56k modem even though hardly anyone uses dial-up to access the Internet anymore. But you have to support the modem in case some grandma in Minnesota still doesn’t have DSL. After all, she might buy someone else’s make of PC, and that would be bad, right?

That’s not how Apple operates. That’s not how Steve Jobs pursued the future. In his world, when a new technology comes around that is better than the old one, you just go for it. You want to speed up the transition. You want to drag everybody with you, even that grandma in Minnesota. A leader has to lead, and Steve Jobs never hesitated to do so.

When the first Macintosh came on the market in 1984, it had a graphical user interface (GUI) instead of the then-usual command line interface. There was no command line on the Mac - everything was done through the GUI. Windows 7, in contrast, still has a command line interface available, just in case you feel like typing ipconfig /renew "Local Area Connection 2" at the C:\> prompt. OK, the "cmd" program is a bit more hidden now than it used to be, but Windows has opted for a long, smooth transition from DOS. Apple just went for it.

The Mac also came equipped with the relatively new 3½-inch diskette drive instead of the then much more common 5¼-inch drive. The 3½-inch diskettes were far superior to the older “floppy disks”, yet PCs would keep shipping for another decade with dual drives for both formats. Again, most PC makers opted for a smooth transition while Apple just went for it.

Shortly after that, Steve Jobs was famously ousted from Apple, and not many bold moves happened until he came back. In the meantime, he became a billionaire by taking Pixar public and eventually selling it to Disney. He also sold NeXT to Apple, and in 1997, he was back at the helm.

When the iMac shipped in 1998, it came without a diskette drive. No diskettes - only a CD-ROM drive (later a DVD drive) and USB ports. That was bold and controversial back then. How were people supposed to exchange files without diskettes? Using the network or flash memory wasn’t how people usually did it in 1998. For many years after that, PCs continued to come with a diskette drive, and most people had a box of diskettes next to their PC (if you are over 30, you still have that box somewhere in the attic, just admit it).

In 2001, the iPod was launched with some amazingly bold limitations. It would only play files in the MP3 format (and in the Apple Lossless format, of which I am a big fan). Remember, back in 2001 there was a plethora of audio formats used to play music, including Microsoft’s WAV, the RealAudio format (.RA), Sun’s AU format, and others. But Apple said, forget it, we’ll go with MP3 - the format popularized by Napster - and we all followed. Most of the other formats are disappearing today.

The other famous format bet that Apple made is the bet against Flash on mobile devices. We are still not quite sure how this one will end up, but history shows that Apple usually gets its way.

Speaking of mobile devices, I have to mention the iPad. When it first launched in 2010, it drew plenty of skepticism for coming with only a wireless Internet connection. No diskettes, no CDs, no DVDs, no USB slot... heck, not even an SD card slot. Many of us are still moaning that we want at least an SD card slot, but we are happily buying our iPads anyway.

My final example is the Apple TV 2. When it was released in September 2010, Apple decided that the old way of hoarding content in your home library was no longer sustainable, as HD movies are too big. And so it shipped the new Apple TV without a drive for storing movies. Instead, renting is the way to go - the only way to go. Last week, Apple announced its iCloud service, and so I suspect that a cloud-only music player might be coming soon and we will stop hoarding music too.

From a vendor’s point of view, all these moves were incredibly courageous; for most vendors, they would be huge gambles, well beyond the comfort level. Yet they all follow the principle that Apple and Steve Jobs embodied for decades: decide what’s best for your customers and have the courage to deliver it. Lead, don’t ask for directions! Even if it takes courage to lead.

To pursue the future, it is good to let go of the past.

Stay foolish.

Monday, October 3, 2011

High Fidelity Pictures

Today, I will discuss some exciting innovations in the world of photography. My interest was piqued by a recent article in The Economist titled Cameras Get Clever, which described some of the leading-edge developments in the field. Among them was a technique called high dynamic range (HDR) imaging, which can enhance your everyday pictures by overlaying three separate shots and using the camera’s processing power to take the best exposure from each one.

Basically, a normal shot will have areas that are either too dark or too bright, because the camera has to set the exposure based on a compromise across the entire image or a selected area. With HDR, the camera takes three separate shots within fractions of a second: one exposed for the highlights, one for the shadows, and one for the midtones. Then the camera merges the three pictures using algorithms that combine the best-exposed areas from each shot. The result is an image with a much better balance of tones and colors than the single compromise shot. Or is it?
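
For the curious, the merging step can be reproduced on a PC. Below is a minimal sketch using Mertens exposure fusion, one common algorithm for blending bracketed shots, as implemented in OpenCV; whether Apple uses this particular algorithm is not public, and the file names are placeholders.

    import cv2
    import numpy as np

    # Three bracketed shots of the same scene: under-, mid-, and over-exposed.
    # The file names are placeholders for illustration.
    shots = [cv2.imread(name) for name in ("under.jpg", "mid.jpg", "over.jpg")]

    # Mertens exposure fusion weights each pixel by contrast, saturation,
    # and well-exposedness, then blends the frames into one balanced image.
    fusion = cv2.createMergeMertens()
    fused = fusion.process(shots)  # float32 image, roughly in the [0, 1] range

    cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype("uint8"))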

Well, it was easy to test, since the standard camera built into the iPhone has the HDR feature today. Not many people know it, but there is a button in the top center of the screen that turns on HDR, which results in two pictures for each shot: one “normal” picture based on the compromise exposure, and one improved by HDR. Here is my test, using our dog as a model:

This is a 'normal' picture with a compromise-based exposure.
This is the same picture with high dynamic range (HDR) applied.
Now, which shot is better? The HDR shot clearly has a better balance. The dog’s face is not just black and white; it has some tonal depth (a range of shades). But that comes at the cost of the contrast and color richness we can see in the normal picture. The problem is that on a bright, sunny day, the colors really were very bright and the contrast was very strong. And the dog is black and white, not gray. I checked.

That leads me to my point. With all the power of modern cameras and post-processing software such as Photoshop, Aperture, iPhoto, or Picasa, to name just a few, who is to decide that the picture I take should have more depth in the low or high tones? Why are the colors all wrong in artificial light? Why are shots on a beach often overexposed? I understand all the technical issues behind this, but what I want is a picture that looks exactly the way reality did.

Back in the late ’60s, the electronics industry came up with the notion of high fidelity (hi-fi), which meant that the music you heard from your record, tape, radio, or CD was supposed to sound exactly as it did in the studio. What I want is hi-fi for pictures: the assurance that my picture will look the way I saw the real world in that very moment.

Sure, there will be artists - and consumers - who want to distort reality for special effect, just like there are artists who feel they have to add a dramatic sky to every picture. That is the right of any author, and it should always remain that way. But 99% of all pictures taken are not meant to be art. They are meant to visually capture reality. The real reality. My tiny little (yet incredibly powerful) camera is full of features that can alter the picture, from color accents to a fish-eye effect. But there is no button for “authentic picture”.

Go ahead and test the HDR functionality on your iPhone. Features like HDR are important because they allow us to push the boundaries. Some of the innovations in the world of photography are just incredible, and I can’t wait to use them. When I read the article mentioned above, I got very excited about Lytro and other technologies. Their impact could be amazing - just like the change that digital cameras brought when they replaced film.

As for my test above, I prefer the “normal” picture because it looks more like the actual scene I remember.