Monday, November 28, 2011

Are You Ready for the Cloud?

Cloud computing has been the marketing topic of 2011. You could hardly attend a conference without being bombarded by predictions of how cloud computing is going to revolutionize our technology landscape. Indeed, having your data in the cloud is quickly becoming a necessity at a time when we divide our computer time among multiple devices.

Yet companies have been a bit more cautious about rushing to the cloud. Sure, there have been stories about many users and departments signing up for various cloud-based services such as collaboration, file sharing, or project management. But not many enterprises have ripped out their existing on-premises solutions in favor of cloud-based offerings yet.

There are reasons why enterprises are careful. Security is usually mentioned as the top concern. The data in the cloud is not under your control and so it is less secure, right? Actually, I’m not sure I buy that argument. In fact, the cloud vendor most likely has better security in place than most enterprises could ever afford to deploy.

A much bigger issue is data control and ownership. First, there is the issue of employee-owned devices that end up containing corporate data. In the case of device theft or employee departure, the company isn’t allowed to wipe the device and has no control over the data. That is a problem for corporate security and legal liability.

The second issue related to data ownership is the protection provided by the cloud service providers. Take Google’s Gmail, for instance, which many employees use. Section 11 of the Terms of Service contains the following paragraph:

By submitting, posting or displaying the content you give Google a perpetual, irrevocable, worldwide, royalty-free, and non-exclusive license to reproduce, adapt, modify, translate, publish, publicly perform, publicly display and distribute any Content which you submit, post or display on or through, the Services.

That clause alone made me think really hard about how much I am willing to use Gmail for communication with my tax accountant or investment advisor.

And then there is the Patriot Act issue which forces US based companies to comply with law enforcement requests to hand over your data. Dropbox’s Privacy Policy, for example, includes the following passage:

We may disclose to parties outside Dropbox files stored in your Dropbox and information about you that we collect when we have a good faith belief that disclosure is reasonably necessary to (a) comply with a law, regulation or compulsory legal request; (b) protect the safety of any person from death or serious bodily injury; (c) prevent fraud or abuse of Dropbox or its users; or (d) to protect Dropbox’s property rights.

“Good faith belief that disclosure is reasonably necessary” - that isn’t exactly the Swiss Banking Act, is it? While it may be the law in the US, it may also be beyond the tolerance threshold of many companies - particularly those from European countries that have a much less casual attitude towards data security and privacy.

As a result, companies are being very careful when taking advantage of cloud-based services - particularly those that primarily cater to consumers. Such services will likely be supplemented by private-cloud offerings that provide similar capabilities under the organization’s full control.

Also, a hybrid cloud approach might be used more often to address corporate concerns. One customer recently told me that they are moving their users to cloud-based email, except for critical functions such as the financial and legal departments and their entire executive team.

This kind of approach may result in lower capital expenditures, but probably higher overall costs and complexity. Well, welcome to the Cloud Age!

Monday, November 21, 2011

When Algorithms Go Wrong

Earlier this year, PC World published an interesting article about the key algorithms that rule the World Wide Web. These algorithms include everything from Google’s search ranking and Facebook’s news feed stories to Amazon’s recommendations and even eHarmony’s matchmaking algorithm. Very interesting stuff, particularly when you consider the economic impact of such Internet services today.

One of them is the algorithm that drives ad presentment - the idea is to present you with the most relevant ad based on your profile data. Or, actually, with the ad that you are most likely to click on. But a few weeks ago, I had an interesting experience with Facebook. First, Facebook decided, for no particular reason, to present me with ads all in German. My first reaction was actually positive. Among the few bits of information that I have volunteered to Facebook is the fact that I studied at a university in Germany, and occasionally I even respond to a friend’s post in German. And so I thought that Facebook was so smart that it was trying to appeal to the ‘international-man-of-mystery’ side of me.

But last week, all the ads turned French. Well, I do happen to get by in French, but I am pretty sure that I have not volunteered any info about my French connection to Facebook. Sure, I have friends all over the world, including in France, but that’s not enough for even the smartest algorithm to label me as a target for French ads. Besides, I could hardly be expected to act on an ad offering me a skydiving experience in France next weekend. Clearly, something has gone wrong with the Facebook algorithm.

When an algorithm serving ads goes wrong, it is perhaps a laughable matter. After all, nobody gets hurt, right? Well, nobody except the companies that paid a ton of money for their ads to hit the right audiences. There have been plenty of stories in the past about innocent-looking algorithm changes at Google that end up having a devastating effect on many businesses. If your online business depends on organic Google search to drive your traffic, you can find yourself out of business very quickly when that stops working.

Given their tremendous reach, it is perhaps time for the Web’s major-league players to start realizing the economic power they have. With thousands and often millions of companies depending on them, Google, Facebook, Amazon, and the like have to take their responsibility seriously.

This responsibility starts with profile data integrity, customer privacy, information security, and also algorithm dependability. Algorithm changes can be very controversial, as we’ve seen when Klout changed its algorithm a couple of weeks ago. It’s one thing to gamble with your own fortune, quite another to gamble with the fortunes of those who depend on you. Too many livelihoods are at stake. Mishandling this responsibility may be perhaps the greatest risk Internet businesses face today.

Sunday, November 20, 2011

The Future Upon Us

All the innovation and converging technology trends will likely have a major impact not only on what we can do but also on our culture, our behavior, and our ways of interacting with each other and with the technology itself. This presentation discusses some of the changes that we need to get ready for.

This is a narrated recording of the presentation I delivered as an OpenTalk on November 17, 2011 at OpenText Content World in Orlando, FL.

Saturday, November 12, 2011

I Want My iDishwasher

For years now, the world has been raving about the success of Apple products. Not just the computer platforms iMac, iPad, and iPhone, but also gadgets such as the iPod and Apple TV have enjoyed phenomenal success. Most pundits and consumers agree that design, user experience, and ease of use are key factors in their success.

For months now, the world has been expecting the next line of devices from Apple that will surely yet again turn an established industry upside down. The device, the iTV - or at least that’s what we think it will be called - is supposed to be an Internet-enabled TV set, no doubt seamlessly integrated with the iTunes store and all the other Apple gadgets in my house. The iTV will most likely be a runaway success in an industry with many players, no differentiation, and cut-throat margins. Apple will apply its magic and a boring TV set will become a must-have gadget at double the price of a regular TV set from Sony or Panasonic. My wife - who is not really the type of geek I am - is already planning where to set it up in our house.

But I am seriously hoping for more. Looking around my house, I see many devices and appliances that need the Apple magic really badly. In our kitchen, we have a modern stove. It has 22 buttons plus a 10-key numeric keypad, not counting the knobs for the gas burners. I don’t know what all of those buttons are for. Nobody knows. Basic tasks such as preheating the oven require multiple button sequences, usually worked out by trial and error. This is the MS-DOS v2.0 equivalent of a stove. I want an iStove.

The iStove would have very simple controls, designed for what people do with a stove - cooking, baking, heating up food, and so on. The controls would be logically arranged and the operation would be easy to learn with no need for a manual. Think about the difference between the controls on the old MP3 players and the iPod. I want the iStove to be like the iPod.

What’s more, the iStove would be cool looking. It would become the central point of the kitchen. I would love it just like I love my iMac, iPad, iPhone, and the iPod Nano that I’m wearing like a watch.

I want more than just the iStove. I want the iDishwasher. We have a brand new dishwasher that we absolutely hate. I want a dishwasher from Apple that I could love. I also want the iRefrigerator, iWaterSoftener, iWasher, iDryer, and iFurnace. I don’t want any more mysterious buttons, knobs and dials. I don’t know what they do and I don’t want to be spending hours figuring it out. I just want those devices and appliances to work. That’s all.

I am very encouraged by Nest and their new learning thermostat designed by former Apple designers. I’m pretty sure I will buy one as soon as it’s available. My current thermostat is very sophisticated with many programmable options but it is a pain to control. It is so hard that we rarely bother and instead either suffer in the cold or waste money and energy on heating. I love the idea of an iThermostat.

The way Apple has shaken up one industry after another is great for us consumers. Many of these industries have been piling up cash for years without ever caring about the customer. Home appliances and consumer electronics are such industries. For years, they have been competing with each other on useless features like the glass-top range (which sucks, by the way). As a result, it takes my ultra-modern TV set almost two minutes to boot up - longer than it used to take those vacuum tubes to warm up. It’s time for the Apple magic to shake things up. I can’t wait and I will buy those gadgets!

Monday, November 7, 2011

Parallels between Document Capture and Voice Recognition

I was doing some research about the history of document capture last week. As I was reading about the early imaging machines capable of scanning 30 checks or lottery tickets per second, I came to realize an interesting parallel between the world of document capture and voice recognition.

At first, the purpose of document capture was simply to create a readable image of a paper document that could be electronically stored and shared. That alone was a big improvement in efficiency. The analogy in the audio world would be the creation of the MP3 standard, which allowed us to make inexpensive recordings of music and share them easily via services such as Napster. Too easily, complained the entertainment industry over and over, until Apple came and took over their business.

The next milestone in image capture was optical character recognition (OCR), which allowed us to extract the text from the image and make it searchable. Intelligent character recognition (ICR) augmented these capabilities by extracting handwritten text. That was particularly important for those high-volume imaging systems processing millions of checks or lottery tickets. In the audio world, the OCR and ICR capabilities are akin to speech recognition software such as Nuance’s Naturally Speaking or IBM’s ViaVoice. The purpose of this software is to convert speech into searchable text - just like OCR.
OCR and voice recognition are both about searchable text
Finally, document capture evolved to the point where it became possible to automatically detect the document type through document recognition (e.g. invoice, job application, or travel expense report) and subsequently extract the actual data values from the document. Not just text, but rather metadata fields such as billing address, date, total, or payment terms. As a result, document capture can be connected directly with process automation software such as workflow or business process management (BPM) systems to gain even greater efficiencies from automated document processing.
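The two steps described above - document recognition followed by data extraction - can be sketched in a few lines of Python. This is only a toy illustration: real capture products use far more sophisticated classifiers, and the keyword lists, field names, and regular expressions below are my own made-up assumptions, not anyone’s actual product logic.

```python
import re

# Hypothetical keyword lists per document type (illustrative only).
DOC_TYPE_KEYWORDS = {
    "invoice": ["invoice", "bill to", "payment terms"],
    "job_application": ["resume", "position applied", "references"],
    "travel_expense": ["per diem", "mileage", "expense report"],
}

def classify_document(text):
    """Document recognition: pick the type whose keywords match most often."""
    scores = {
        doc_type: sum(text.lower().count(kw) for kw in keywords)
        for doc_type, keywords in DOC_TYPE_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

def extract_invoice_fields(text):
    """Data extraction: pull metadata fields out of OCR'd invoice text."""
    fields = {}
    total = re.search(r"total[:\s]+\$?([\d,]+\.\d{2})", text, re.I)
    if total:
        fields["total"] = total.group(1)
    date = re.search(r"date[:\s]+(\d{4}-\d{2}-\d{2})", text, re.I)
    if date:
        fields["date"] = date.group(1)
    return fields

sample = "INVOICE  Date: 2011-11-07  Bill To: Acme Corp  Total: $1,250.00  Payment terms: Net 30"
print(classify_document(sample))        # invoice
print(extract_invoice_fields(sample))   # {'total': '1,250.00', 'date': '2011-11-07'}
```

The extracted fields are exactly what a downstream workflow or BPM system would consume to route the document automatically.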

In the audio world, the analogous technology is voice control, or the recently introduced personal voice assistant Siri by Apple. The idea of this software is to issue voice commands alongside dictation (voice-to-text capture). The commands can make the computer perform a task or a process step. Many phones understood basic voice operations such as “Call home,” but those are just shortcut commands comparable to bar codes and QR codes in the document capture world.


Understanding meaning from natural language, without the user learning predefined commands, takes voice recognition to a different level. Such voice control has been featured in many sci-fi movies, from Space Odyssey to Avatar, but has so far remained mostly in the experimental stage. Microsoft has promised to ship a new version of the Xbox with voice control for tasks such as movie or music search, which could be extremely useful. Siri appears to be the first intelligent voice-control-based software entering the mass market, with capabilities such as scheduling appointments, searching for music, sending messages, or checking the weather.

Voice recognition technology has been following an innovation trajectory similar to document capture’s. Today, software such as Siri raises voice technology to a level that is on par with the state of the art in document capture. It will be interesting to see what innovations will emerge in both of these worlds. In the meantime, we should practice interacting with a computer in natural language, because Voice Recognition is about to Re-Wire our Brains.

Sunday, October 30, 2011

Compliance Starts with Explaining Why

I’ve just finished reading a couple of books by Kevin Mitnick, the famous computer hacker and phone phreak who, after serving some time in prison, eventually became a security consultant. In his books, Kevin not only describes how amazingly easy it was to dupe employees at various organizations into willingly granting him access to their systems, but he also provides many suggestions for corporate security policies and measures.

The one thing that becomes obvious from reading Mr. Mitnick’s books is that people will comply with policies much more willingly when the policies are explained. Why is this policy in place? Don’t just mandate a password-protected screen saver to increase your data security level. Explain to employees why they need it. People aren’t dumb. With the proper explanation, they will remember and will be more likely to comply.

Whenever I’m flying, I notice how the air travel experience is filled with seemingly contradictory rules and regulations that come with no explanation. For example, I have to take my laptop out of the bag for a security check while all my other electronics, including the iPad, can stay in the bag. Why? During take-off and landing, I have to turn off all electronic devices even though I can’t really turn off my digital watch nor can I turn off my iPod Nano. Again, there is no explanation provided and I see more and more people simply ignoring the rule altogether.
I can see very similar challenges with enterprise compliance. The HR department makes employees take mandatory training on business ethics, but rarely is there any explanation provided as to why we are taking these courses. The reason is probably not that the HR department suspects us of taking bribes or contracting out work to our relatives. The reason is more likely that by making us take the training, the company reduces its own liability. That’s a good reason, and the employees should be told.

The same thing happens with adding metadata, classifying content, and completing compliance related work steps. We create rules but rarely do we take the time to explain why. What benefit will the organization gain?

The results are often disappointing: poor quality, lack of consistency, or simply complete refusal. Such results become very costly for the organization and practically impossible to remedy after the fact. People don’t follow the rules because they were never really told why they should bother.

Yet the solution is often amazingly simple. Give your employees the rationale behind the rules and most of them will try to do the right thing. You may not get 100% compliance or perfect quality, but you will see measurable improvements.

Because good compliance starts with explaining why.

Tuesday, October 25, 2011

Security Makes Things Hard

Consumerization of the enterprise is sweeping the technology world today. Just look at all the unsupported iPads, iPhones, and Macs around your office. Employees are more and more frequently discovering that the cool technology they use at home can be used quite effectively in the office - with or without IT support. Enterprise software is not an exception.

Content management often gets compared with the consumer experience. No, I am not talking about the design elements of the user interface. Those are usually based on highly personal and often hard-to-define user preferences. I guess some folks might even like the ribbons in Office... I’m talking about the interaction - the process of creating, collaborating on, sharing, and using content.

Take search, for example. How many times have we heard that we would like enterprise search to be just like Google web search? But trust me, you don’t want it the same. On the Web, Google has it easy. It’s finding content that wants to be found. In fact, the content often really, really wants to be found. Some content owners want their content to be found so much that they spend millions of dollars on search engine optimization (SEO) and on Google ads to make sure their content can be found on the World Wide Web.

In the enterprise, nobody is search-engine optimizing their documents to make sure they can be found. Nor can you pay Google to make sure your document called “Corporate Strategy” will be found every time somebody searches for those words. Enterprise search engines have to work much harder to find the relevant content. Only metadata and proper classification can help - and most organizations struggle to handle metadata consistently. But wait, there is another major challenge in the enterprise: security.

All my Facebook friends could see this post...
For security reasons, we often don’t want every user to find every piece of relevant content. In the enterprise, some users are not privy to certain information, and thus they should not be able to find the documents. In fact, they should not even be able to see the document titles in the search results, as the document names alone could give away too much information. Just imagine your employees finding a document titled “Corporate Restructuring”. To prevent that, the result set has to be filtered by permissions before it is sent back to the requesting application.
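This permission filtering, often called security trimming, can be sketched as follows. A minimal illustration only, assuming a toy in-memory index: the document titles and access control lists (ACLs) below are invented for the example, not drawn from any real system.

```python
# Toy document index. Each document carries an ACL: the set of users
# allowed to see it (hypothetical titles and users for illustration).
DOCUMENTS = [
    {"title": "Corporate Restructuring", "acl": {"alice", "bob"}},
    {"title": "Corporate Holiday Party", "acl": {"alice", "bob", "carol"}},
    {"title": "Corporate Strategy",      "acl": {"alice"}},
]

def search(query, user):
    """Return only the matching document titles the user may see."""
    matches = [d for d in DOCUMENTS if query.lower() in d["title"].lower()]
    # Security trimming: drop hits the user is not cleared for. Without
    # this step, even the title "Corporate Restructuring" would leak.
    return [d["title"] for d in matches if user in d["acl"]]

print(search("corporate", "carol"))   # ['Corporate Holiday Party']
print(search("corporate", "alice"))   # all three titles
```

The same query returns different result sets for different users, which is exactly why enterprise search cannot simply reuse the web search model.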

This post on our internal deployment of OpenText Pulse was only visible to a few.
Another example that shows how content in the enterprise is different is social software. When you upload a file or link on Facebook, Facebook announces it to all your friends or followers. In the enterprise, that is yet again not acceptable. When I upload a file called “Acquisition Proposal”, only those of my coworker-followers who have permission to see such a document should be alerted about it by the social software. And the user shouldn’t have to select a predefined group or circle of friends; it has to happen automatically - the software has to validate each user’s permissions before showing the alert to anyone.

Enterprise software is different. Security makes it much more difficult to expose the right information to the right people, which is critical in the enterprise. Actually, I’d argue that security is quite important in the consumer space too, and I hope that the technologies related to security and privacy make the jump from the enterprise to the consumer world. Consumerization of the enterprise needs a bit of ‘enterprization’ of the consumer world.

Monday, October 17, 2011

Voice Recognition Is About to Re-Wire Our Brains

Voice-based data input to a computer is not a new idea. While the keyboard, mouse, and more recently, gestures have been the primary ways of interacting with computers, the idea of voice-based interaction is as old as HAL 9000, the talking computer from Space Odyssey. Software such as Dragon Naturally Speaking (owned by Nuance since 2005) or IBM’s ViaVoice has been around for almost two decades, and let’s not forget the often infuriating Interactive Voice Response (IVR) systems used by most telephone support departments today.

Voice recognition software stole the spotlight last week when Apple released its new iPhone 4S with built-in Siri software. Embedding voice recognition directly into the operating system is a major milestone, and having it included in a mobile device makes perfect sense, as we can see from Apple’s video commercial. If anything, the TV set is a device that needs voice control even more - I am still waiting for the kind of interaction Marty McFly (Michael J. Fox) was using in the Back to the Future II movie. In fact, Siri for Apple TV is rumored to be on the way, and Microsoft recently demonstrated voice-based movie search on the Xbox 360.



So, how come we have not been talking to our computers for the last decade, since the technology was there? Well, part of it was the accuracy of the recognition. When I used Naturally Speaking back in the ’90s, I had to train the software to understand me, which was a lot of work for meager results. We all know the frustration with any IVR-based system: “Sorry, I didn’t quite catch that. Could you please try it again?” And while Siri represents the next generation of voice recognition, plenty of stories about the funny results it can produce circulated on the Web immediately after the new iPhone was released.
Source: STST
With increased computing power and better software algorithms, quality is becoming less of an issue. One day, the software might even understand dialects or foreign accents like mine. But I suspect that’s only part of the adoption challenge. The other part lies in our ability to express our thoughts verbally to a computer. Most of our verbal communication is not very straightforward, and we even enjoy taking our time before coming to the point. In settings where communication has to be clear and precise - military orders, radio protocol, or business negotiations - such precision comes only after many hours of training. Naturally, people don’t speak that way.

However, just some 30 years ago, typical managers didn’t have computers on their desks. They would spend several hours each day responding to correspondence by dictating letters onto a tape, which their secretaries would later transcribe on a typewriter, later on a word processor, and eventually on a PC running WordPerfect. Another 10 years before that, the dictation was done in real time and the secretary had to know shorthand to keep up. It took years before the PC made it to the manager’s desk. What amazes me today is that those managers were able to dictate complete letters in full, well-articulated sentences.

For most of us, that’s not so easy anymore. Today, we have a generation of PC users spoiled by the editing power at our fingertips. Most of us knowledge workers formulate our sentences as we write them, and since it is so easy to rephrase any sentence or start over from the beginning, we do it all the time. I’ve been observing many people doing this, and I know that I am not alone. Most humans, even professional writers, would have a difficult time dictating in complete sentences. Giving commands to the computer, such as search requests, is one thing, but authoring text via voice recognition requires a skill set that is underdeveloped in most of us today. We know from the past that we humans are capable of such skills, but the last 30 years of the PC revolution have re-wired our brains differently.

Now Siri and other voice recognition software may be starting a new era. An era where we can – and perhaps must - express ourselves verbally in a new way. Let’s see how it goes. [computer, strike last sentence] Ehm…

By the way, when is Siri going to be available on iPhone 4?

Sunday, October 9, 2011

The Courage to Lead

What else could be the topic of my blog post this week other than paying tribute to Steve Jobs? Writers have published countless obituaries this week about this great man, his life, and his work. Today, I want to write about a particular aspect of Apple’s strategy - Steve Jobs’ strategy - that has really impressed me over the years.

It is the ability to pursue the future by letting go of the past.

When a new technology arrives that is capable of replacing an old one, the typical approach for a technology company is to hedge its bets: start embracing the new while continuing to support the old. You don’t want to disrupt anyone, you don’t want to leave anyone behind, you want to smoothly transition from one technology to another. That means that for several years and revisions, your product ships with duplicate, redundant technologies to make this smooth transition possible.

Image: Jonathan Mak
For example, many PCs today still ship with a built-in 56k modem even though hardly anyone knows how to use dial-up to access the Internet anymore. But you have to support the modem in case some grandma in Minnesota still doesn’t have DSL. After all, she might select someone else’s make of PC, and that would be bad, right?

That’s not how Apple operates. That’s not how Steve Jobs pursued the future. In his world, when a new technology comes around that is better than the old one, you just go for it. You want to speed up the transition. You want to drag everybody with you, even that grandma in Minnesota. A leader has to lead, and Steve Jobs never hesitated to do so.

When the first Macintosh came on the market in 1984, it had a graphical user interface (GUI) instead of the then-usual command line interface. There was no command line anymore on the Mac - everything was done through the GUI. Windows 7, in contrast, still has a command line interface available, just in case you feel like typing “ipconfig /renew” at the C:\> prompt. OK, the “cmd” program is a bit more hidden now than it used to be, but Windows has opted for a long, smooth transition from DOS. Apple just went for it.

The Mac also came equipped with the relatively new 3½-inch diskette drive and no longer with the then much more common 5¼-inch drive. The 3½-inch diskettes were far superior to the 5¼-inch “floppy disks”, but PCs would keep shipping for another decade with dual drives for both formats. Again, most PC makers opted for a smooth transition while Apple just went for it.

Shortly after that, Steve Jobs was famously ousted from Apple, and not many bold moves happened until he came back. In the meantime, he became a billionaire by taking Pixar public and eventually selling it to Disney. He also sold NeXT to Apple, and in 1997, he was back at the helm.

When the iMac shipped in 1998, it came without a diskette drive. No diskettes - only a CD-ROM drive (later a DVD drive) and a USB port. That was bold and controversial back then. How were people supposed to exchange files without diskettes? Using the network or flash memory wasn’t the way people usually did it back in 1998. For many years after that, PCs continued to come with a diskette drive, and most people had a box of diskettes next to their PC (if you are over 30, you still have that box somewhere in the attic, just admit it).

In 2001, the iPod was launched with some amazingly bold limitations. It would only play files in the MP3 format (and in the Apple Lossless format, which I am a big fan of). Remember, back in 2001, there was a plethora of audio formats used to play music, including Microsoft’s WAV, the RealAudio format (.RA), Sun’s AU format, and others. But Apple said, forget it, we go with MP3, which was popularized by Napster - and we all followed. Most of the other formats are disappearing today.

The other famous format bet that Apple made is the bet against Flash on mobile devices. We are still not quite sure how this one will end up, but history shows that Apple usually gets its way.

Talking about mobile devices, I have to mention the iPad. When it first launched in 2010, it drew plenty of skepticism for coming with only a wireless Internet connection. No diskettes, no CDs, no DVDs, no USB port... heck, not even an SD card slot. Many of us are still moaning that we want at least an SD card slot, but we are happily buying our iPads anyway.

My final example is the Apple TV 2. When it was released in September 2010, Apple decided that the old way of hoarding content in your home library is no longer sustainable because HD movies are too big. And so they shipped the new Apple TV without a drive for storing movies. Instead, renting is the way to go - the only way to go. Last week, Apple announced their iCloud service, and so I suspect that a cloud-only music player might be coming soon and we will stop hoarding music too.

From a vendor’s point of view, all these moves were incredibly courageous, and for most vendors they would be considered huge gambles, well beyond the comfort level. Yet they all follow the principle that Apple and Steve Jobs embodied for decades - decide what’s best for your customers and have the courage to deliver it. Lead, don’t ask for directions! Even if it takes courage to lead.

To pursue the future, it is good to let go of the past.

Stay foolish.

Monday, October 3, 2011

High Fidelity Pictures

Today, I will discuss some exciting innovations in the world of photography. My interest was piqued by a recent article in The Economist titled Cameras Get Clever, which described some of the leading-edge developments in the field. Among them was a technology called high dynamic range (HDR), which can enhance your everyday pictures by overlaying three separate shots and using the processing power of the camera to take the best exposure from each one of them.

Basically, a normal shot will have areas that are either too dark or too bright, because the camera has to set the exposure based on a compromise across the entire image or a selected area. With HDR, the camera takes three separate shots within fractions of a second - one exposed for the highlights, one for the shadows, and one for the midtones. Then the camera overlays these three pictures using algorithms that combine the best exposure from all three shots. The result is images that have a much better balance of tones and colors than a single shot based on a compromise. Or do they?
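The blending idea can be sketched in a few lines of Python. This is a deliberately simplified, single-channel toy: each pixel from each exposure is weighted by how close it is to mid-gray (a crude “well-exposedness” measure), so the best-exposed shot dominates at each spot. Real cameras also align the frames and apply tone mapping, and the weighting function and sample values here are my own assumptions.

```python
import math

def well_exposedness(value, sigma=0.2):
    """Weight a pixel (0.0-1.0) by its closeness to mid-gray 0.5."""
    return math.exp(-((value - 0.5) ** 2) / (2 * sigma ** 2))

def fuse(exposures):
    """Blend same-sized grayscale images (lists of floats) per pixel."""
    fused = []
    for pixels in zip(*exposures):
        weights = [well_exposedness(p) for p in pixels]
        total = sum(weights)
        # Weighted average: well-exposed pixels contribute the most.
        fused.append(sum(w * p for w, p in zip(weights, pixels)) / total)
    return fused

# Three "shots" of the same 4 pixels: underexposed, mid, overexposed.
under = [0.02, 0.10, 0.30, 0.45]
mid   = [0.10, 0.40, 0.60, 0.90]
over  = [0.40, 0.80, 0.95, 0.99]

result = fuse([under, mid, over])
print([round(p, 2) for p in result])
```

Each fused pixel lands between the darkest and brightest of its three inputs, pulled toward whichever exposure captured that spot best - which is exactly the tonal compression the dog pictures below exhibit.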

Well, it was easy to test, since the standard camera built into the iPhone has the HDR feature today. Not many people know it, but there is a button in the top center of the screen that allows you to turn on HDR, which results in two pictures for each shot. One is a “normal” picture based on the compromise exposure, while the other one is improved by HDR. Here is my test, using our dog as a model:

This is a  'normal' picture with a compromise-based exposure 
This is the same picture with a high dynamic range (HDR)
Now, which shot is better? The HDR shot clearly has a better balance. The dog’s face is not just black and white; it has some tonal depth (a greater number of shades). That comes at the cost of the contrast and color richness we can see in the normal picture. The problem is that on a bright sunny day, the colors really were very bright and the contrast was very strong. And the dog is black and white, not gray. I checked.

That leads me to my point. With all the power of modern cameras and post-processing software such as Photoshop, Aperture, iPhoto, or Picasa, to name just a few, who is to decide that the picture I take should have more depth in the shadows or the highlights? Why are the colors all wrong in artificial light? Why are shots on a beach often overexposed? I understand all the technical issues behind it, but what I want is a picture that looks exactly the way reality did.

Back in the late 60s, the electronics industry came up with a notion of high fidelity (hi-fi) which meant that the music you heard from your record, tape, radio, or CD was supposed to sound exactly as in the studio. What I want is hi-fi for pictures. I want the assurance that my picture will look the way I see the real world in that very moment.

Sure, there will be artists - and consumers - who will want to distort reality for special effect, just like there are artists who feel that they have to add a dramatic sky to every picture. That is the right of any author, and it should always remain that way. But 99% of all pictures taken are not meant to be art. They are meant to visually capture reality. The real reality. My tiny little (yet incredibly powerful) camera is full of features that can alter the picture - from color accent to a fish-eye effect. But there is no button for “authentic picture”.

Go ahead and test the HDR functionality on your iPhone. Features like HDR are important because they allow us to push the boundaries. Some of the innovations in the world of photography are just incredible and I can’t wait to use them. When I read the article mentioned above, I got very excited about Lytro and other technologies. The impact of these technologies could be amazing. Just like the change that digital cameras brought upon us when they replaced film.

As for my test above, I prefer the “normal” picture because it looks more like the actual scene I remember.