Thursday, December 27, 2012

Are We Teaching Obsolete Skills?

This blog post is about education, yet it touches on technology in many ways. The issue at hand is our current educational system and its misplaced focus on teaching obsolete skills.

Indeed, most of our education is wasted on learning hard skills like calculus and memorizing the formulas for amino acids. Sure, those skills are critical to the few of us who go on to become mathematicians or chemists, but that's only a tiny minority. On the other hand, all of us need soft skills such as effective communication, public speaking, negotiation, and leadership almost daily. With a few exceptions, these skills are not taught today at any level of education.

Every engineer needs to know how to use one of these, right?
It’s ironic, if you ask me. We get all this education to be able to live more prosperous lives yet prosperity rarely comes from spelling or algebra. Prosperity usually comes from the ability to sell yourself, negotiate a decent salary and communicate well about what you have accomplished. Sure, you need to know some spelling and algebra to avoid looking like an idiot but we should accept the fact that most of us work in front of a computer all day long. That computer provides plenty of assistance for spelling and multiplication. However, the computer can’t take over conflict resolution, problem analysis or financial planning. Yet we all need those skills every day!

By the way, how is our current education doing at teaching us to use that computer? Well, not that great. In fact, most knowledge workers are expected to pick up those skills along the way, on their own. The result is knowledge workers with no concept of data structures, drowning in information overload, and, in general, suffering from some degree of technophobia. Today, the basics of PowerPoint design are a much more marketable skill than calculus.

I watch my children learning their spelling and multiplication every day - years of hard training that could be cut in half. Technology is changing what we need to learn, and yet our education system is adapting far too slowly. Nobody teaches the abacus or the slide rule anymore, even though those skills were considered essential some 50 years ago. Yet our kids spend years learning to write in cursive, which hardly anybody uses anymore.

Russian abacus.
There is a difference in education between the continents today. Europe, and particularly Asia, are putting even more emphasis on hard skills, producing brilliant engineers who struggle to land a job. America is at least a small step ahead in teaching soft skills. In general, Americans appear much more at ease with public speaking than their European or Asian counterparts. Guess what: the US educational system teaches this soft skill from kindergarten, starting with “show-and-tell” - something European kids rarely do.

As much as America is worried about losing its edge on the international scene, at least its educational system is a little more relevant. No wonder American universities are consistently among the most prestigious in the world. Clearly, teaching soft skills is not just an American challenge - other countries face it to an even greater degree.

My point is that we are teaching skills today that were relevant back in the 1950s - or even the 1850s. Our education system has to keep up with the technical innovation of the present. In fact, to be truly effective, we should be teaching a curriculum now that will be relevant when our kids actually enter the workforce. Today, they start their first jobs with academic skills that are largely irrelevant to actually doing the job!

Images: Wikipedia Creative Commons and public domain.

Sunday, December 16, 2012

Can We Solve the Security Dilemma?

I recently wrote a blog post about the need to strike the right balance between security and convenience. In this post, I'd like to examine ways to find that balance amid ever-rising security requirements. The challenge lies in the fact that traditional security measures such as strong passwords are becoming increasingly insufficient. The computing power available to every hacker today is simply so immense that brute-force attacks are rather easy to execute. Note: a brute-force attack is a code-breaking technique that uses vast computing power to quickly try every possible combination of characters until the right key is found.

So, how do we overcome this problem?

The solution isn’t easy, particularly given the security and convenience trade-off. Strong password policies force us to use longer passwords and passphrases that must include a combination of letters, numbers, special characters, and so on - nothing that can be found in a dictionary. We all know that such passwords are less convenient, particularly when entered on a smartphone, but the benefit of this trade-off is higher security. Alas, not much higher, as even strong passwords can still be broken with brute-force attacks.
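To put some rough numbers behind that claim, here is a back-of-the-envelope sketch. The guess rate below is my own illustrative assumption (roughly what a 2012-era GPU rig could manage against a fast hash), not a measured figure, but the shape of the arithmetic is what matters: length and alphabet size grow the keyspace exponentially.

```python
def brute_force_years(length, alphabet_size, guesses_per_second):
    """Rough upper bound on the time to exhaust a password keyspace."""
    keyspace = alphabet_size ** length          # every possible combination
    seconds = keyspace / guesses_per_second
    return seconds / (60 * 60 * 24 * 365)

# Assumed rate: ~10 billion guesses/second (hypothetical cracking rig).
rate = 10_000_000_000

print(brute_force_years(8, 26, rate))   # 8 lowercase letters: gone in seconds
print(brute_force_years(8, 94, rate))   # 8 printable ASCII chars: days
print(brute_force_years(12, 94, rate))  # 12 printable ASCII chars: over a million years
```

The point of the exercise: an 8-character password, even a "strong" one, sits within practical brute-force range, while a few extra characters push the cost into absurdity - which is exactly why attackers prefer dictionary and leaked-hash shortcuts over raw brute force.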

Multi-factor authentication takes things to the next level by combining passwords with another authentication mechanism such as one time passcodes or tokens. My bank, for example, gave me a one-time passcode generator the size of a credit card that I use for some of the more important transactions. I don’t need it to check my account balance but I do need it for money transfers. That, by the way, is a good example of the security-convenience balance in a practical use case.
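Card-sized passcode generators like the one from my bank typically implement something along the lines of the OATH one-time-password standards (HOTP, RFC 4226, and its time-based variant TOTP, RFC 6238). I don't know what my bank's token runs internally, but a minimal sketch of the standard mechanism looks like this:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password, per RFC 6238 (HMAC-SHA1 variant)."""
    counter = int(at // step)                         # current 30-second window
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"12345678901234567890"   # shared secret provisioned into the token
print(totp(secret, time.time()))   # a fresh 6-digit code every 30 seconds
```

Server and token share only the secret and a clock, so the code never travels ahead of time - which is what makes it a useful second factor on top of a password.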

The next level of security can be provided by biometrics. Today, retina scans are how the government identifies travelers at border crossings who use the Global Entry or NEXUS programs. It seems to work, and for a long time I thought this would solve the authentication problem for good. However, biometric signatures can be falsified and even stolen, which not only compromises security but also introduces a new identity theft challenge. No, I am not talking about stolen fingers and eyeballs like we see in the movies - I am talking about the series of data points that biometric scanners look for. The same is true for DNA-based authentication, by the way. I am not aware of any practical DNA authentication use cases outside of science fiction today, but the signature files for DNA samples could be falsified or stolen just like any password.

Biometric security could be even more vulnerable as a result of genetic research. There are various initiatives underway today to build an open source library of decoded human genomes for the purposes of genetic research. That is a great cause which I fully support. However, there may be a dark side to it - as there usually is with any scientific discovery. I am not a genetic scientist but I wonder if the human genome could be used to reproduce biometric features such as fingerprints, retinas, or DNA samples. After all, a lot of the genomic research is aimed at the ability to reproduce vital human organs...

One day, we might be voluntarily or involuntarily implanting chips into the human body for the purposes of strong, fast, and secure authentication. Some of this is already happening today. We are chipping our pets to find them when they get lost. We are tagging prisoners under home confinement. We are traveling with passports containing our biometric data. A chip using some type of RFID technology could transmit our identity to various applications to identify us. The chip could do so frequently - perhaps every few seconds - to continuously validate the identity of the user. That is, until someone finds a way to falsify the chip signature...

Clearly, solving the security dilemma is not easy. Just like any high stakes game, there may never be a perfect solution. Instead, it will be a race. We will keep inventing better authentication while trying to stay a step ahead of the bad guys. Every time the good guys invent a new security measure, the crooks will find a way to beat it. Hence a new level of security has to be invented - without completely sacrificing convenience. And so it will keep going round after round.

Sunday, December 9, 2012

2012 Predictions Scorecard

It’s the end of the year, the time when many pundits like to publish their predictions for 2013. I have already started working on mine but since I am not an industry analyst, I like to first revisit how I did with my Content Management Predictions for 2012. So, here is the scorecard for my 2012 predictions:

1. Big Data will be the hype of the year
Boy, did I get this one right! Hardly a day went by without some article about the Big Data revolution. Throughout 2012, Big Data was the solution to the problem - any problem. Ask 10 experts and you’ll get 10 definitions of Big Data. In reality, most people said Big Data when they just meant ‘data,’ or when they meant ‘understanding the data,’ which really means analytics. Yet no conversation could go on and no press article could be written without mentioning Big Data. Big Data became the hype of the year.
Verdict: Hit, Score: 1/1

2. “Social” becomes a feature
This prediction has also come true. Salesforce had already released Chatter last year, now SAP has Jam, and Oracle has various social offerings integrated with its respective applications: Oracle Social Relationship Management, Oracle Social Network, Oracle Social Marketing, etc. OpenText (my employer) today ships OpenText Tempo Social as well as capabilities such as Social BPM, a social-based decision-making step in a business process. The stand-alone social software market is being rapidly consolidated, with players such as Yammer acquired by Microsoft and the once red-hot Jive trading below its level from 12 months ago.

My prediction that SharePoint 15 - now called SharePoint 2013 - would be the catalyst for this featurization of social software has also come true. Well, at least that was the message about Yammer that Microsoft offered at the SharePoint Conference 2012.
Verdict: Hit, Score: 2/2

3. SharePoint will solve every problem, again
My prediction was that Microsoft would freeze the market in 2012 with aggressive marketing of the not-yet-shipping SharePoint 2013. That’s what happened with every previous version of SharePoint, and it was not a stretch to expect it would happen again. Yet Microsoft had a different idea. They bet the farm on Office 365, Windows 8, and Surface. SharePoint didn’t get anywhere near the attention of years past. In fact, Microsoft recently increased the price of SharePoint by 15%, which makes me speculate that they have reached the point of market saturation. This move suggests that Microsoft concluded that new features no longer help add new customers. I’ve failed on this prediction, as SharePoint is obviously no longer a strategic priority for Microsoft (I’m sure the SharePoint product team will disagree with me, but hey - my blog, my opinion... Besides, I’m losing a point here, OK?)
Verdict: Miss, Score: 2/3

4. Rise of the hybrid cloud
Throughout 2012, it became apparent that the cloud is the way to go. Many of the original concerns about cloud deployments, such as security, have been put to rest. That said, customers are in no rush to move their existing applications, and certainly not their existing data, into the cloud. That ultimately leads to discussions about what information should reside in the cloud and what should remain on premises. A private cloud is a popular alternative when concerns about issues such as legal discovery and data sovereignty arise - public cloud services are usually fairly ignorant of such issues. Finally, I also see that some of the mature cloud vendors have developed many on-premises add-ons and integrations - just see how Salesforce is being integrated with on-premises ERP and marketing automation software. That mix of public, private, and on-premises deployments is basically the idea behind the hybrid cloud.
Verdict: Hit, Score: 3/4

5. Cloudy outlook for open source
My argument here was that the cloud would obscure the open source question - if I’m running my software in the cloud, who cares whether it is open source or proprietary, right? On one hand, I stand behind my prediction. Customers using cloud services such as Evernote or Dropbox don’t care whether those services are built on open source software or proprietary code. That said, many of the clouds have been heavy adopters of open source technology, primarily motivated by the need to keep costs as low as possible. That actually promoted open source to some degree in 2012. Also, my point above about integrating cloud applications with on-premises software makes open source cloud applications interesting for developers again. Hence, this one is a tie.
Verdict: Tie, Score: 3.5/5

6. Consumerization is here to stay
Oh yes, consumerization has taken hold in the enterprise. The new term is “bring your own device,” or BYOD. If Big Data was the top buzzword of 2012, BYOD was a close second. Consumerization arrived and it is wreaking havoc in the enterprise. The plethora of mobile devices in the enterprise is actually a much smaller problem than the consumer-class services being used by employees with no regard for corporate policies, regulations, legal exposure, or compliance. I expect that fixing this issue will be a major source of my paycheck over the next ten years.
Verdict: Hit, Score: 4.5/6

7. End of convergence
My argument was that all those electronic gadgets will not be replaced by your smartphone. This is one prediction many pundits might disagree with. I’ve been reading about how smartphones are replacing cameras and GPS devices. Yes, they do - when you don’t have a camera handy and forget to bring your GPS! Similarly, the iPad didn’t replace my laptop, and I always have my little Canon camera with me. The Swiss Army knife is very cool and every guy wants one, but it doesn’t replace your bread knife, butter knife, and carving knife.
Verdict: Hit, Score: 5.5/7

8. HTML5 won’t kill apps
On November 19th, Apple reportedly passed 1 million apps submitted to the App Store. Those are native apps. There is nothing wrong with HTML5, and it will gain huge popularity, but no, it hasn’t replaced native apps in 2012.
Verdict: Hit, Score: 6.5/8

9. Tipping point for analytics
Analytics enjoyed a big buzz in 2012, mostly because of Big Data - analytics seem to be the universal cure for all aches related to Big Data. In fact, when people say Big Data, they usually mean “understanding the data,” and that’s where analytics comes in. Analytics are hot and a lot of innovation occurred in 2012. At OpenText, we released Auto-Classification - a new product based on powerful content analytics technology. Other vendors are following suit. Yet analytics has not quite entered the mainstream as I had predicted. It’s happening, but it is taking longer, so I’ll call it a tie.
Verdict: Tie, Score: 7/9

10. ECM, what’s next?
I had predicted that the industry’s quest to find a replacement term for ECM would continue but that we would stick with ECM yet again. We did. The vendors tried various terms. AIIM’s “systems of record” and “systems of engagement” terminology actually stuck, but it didn’t replace ECM. In fact, even the hip new vendors like Box are now talking about Content Management. OpenText introduced its new positioning leading with Enterprise Information Management (EIM), but ECM remains a key EIM category. ECM is still the term that rules.
Verdict: Hit, Score: 8/10

Well, that’s it. The score of 8 out of 10 is not bad, is it? This has been an exciting year. The convergence of many technology trends continued and their impact on the enterprise started to take shape. 2013 will be even more interesting, I’m sure! I plan to publish my 2013 predictions in the first week of the new year. Until then, Merry Christmas and a happy new year!

Sunday, December 2, 2012

Microsoft at an Inflection Point

The year 2012 will go down in history as a major milestone for Microsoft. The once-dominant company started the year under tremendous pressure after continuously losing users, market share, and hipness to a new breed of vendors driving a perfect storm of change. This change is based on the shift to mobility, social software, and cloud computing, and companies such as Apple, Google, LinkedIn, Dropbox, and Amazon are Microsoft's new arch-enemies.

Microsoft wasn't sitting idly by watching the market forces unfold, though. In fact, it is hard to find any blemish on Microsoft's execution in 2012 - the company delivered on all key battlegrounds. With Office 365, Microsoft demonstrated that it is all-in on the cloud. With the purchase of Yammer (and Skype before it), it is making great strides toward becoming a force in social software. The Windows 8 operating system is attractive, innovative, and differentiated. With the help of Nokia, Microsoft delivered some impressive smartphones, and the Microsoft Surface tablet is receiving good reviews (well, until the pricing was announced earlier this week).

Measured by product execution, Microsoft turned things around in 2012 and the company is cool again. The problem, however, is that Microsoft continues losing users and market share. According to comScore, Microsoft's mobile OS market share has continued declining and remains in irrelevant territory at 3.2% (IDC gives them 3.6%, which is about the same - really bad). Gartner predicts that 90% of enterprises will skip Windows 8, and consumers are wishing for Apple devices under the Christmas tree. Finally, early indicators suggest that the Surface is not selling well either.

Why is that, you wonder? Microsoft designed a perfect combination of operating system, devices, social tools, cloud, and even enterprise applications. All of it is beautifully integrated and does (almost) everything you need. The problem is that it only works with Microsoft.

That's right. Just like in the old days when the Wintel architecture used to dominate the market with well over 90% market share, Microsoft continues building products that assume we live in a Microsoft-only world. Do you want to access any of the Microsoft consumer services? Well, you need a Microsoft email account like Hotmail. Do you want to work with a Microsoft application from an iPad or an Android device? Tough luck, you will always be a second class user at best. Wanna search? Get used to Bing!

Microsoft's continued insistence on this puritanical approach to architecture is creating an interesting dilemma. A Microsoft-only environment may work great but you will only find such an environment on the Microsoft campus. If you use any non-Microsoft platforms, the appeal of Microsoft’s closely-integrated architecture diminishes fairly quickly. That includes pretty much everyone as over 96% of users today have a mobile device running something other than Windows.

Microsoft is finding itself in completely unfamiliar territory. The world is not all about Microsoft anymore. Do you want to share documents with friends? Chances are much higher that they have a Dropbox account rather than a Microsoft SkyDrive account. Do you want to create a professional community of interest? Everybody has a LinkedIn account while Microsoft Live has...does it still exist? Do you want to use SharePoint from a Mac or an iPad? Tough luck! You may be better served by another ECM vendor.

Microsoft finds itself at an inflection point. While it is delivering some very competitive products, those products have been built for a Microsoft-only world that no longer exists. To address this problem, Microsoft will have to open up and mandate that all of its groups go multi-platform. That might be its only chance to start gaining market share again. It is a tough pill to swallow for a company that has single-vendor architecture in its DNA. It is a particularly difficult move given that Microsoft's top competitor, Apple, persevered through decades of single-digit market share to become the world’s largest company - based on an equally puritanical single-vendor architecture!

Sunday, November 25, 2012

Security and Convenience - The Balance Matters

In our world, where information is the ultimate strategic resource, security is important. Very important. But security usually stands in the way of productivity and convenience.

Take something like strong passwords and the need to change them regularly. We could significantly increase system security if we mandated very long, 256-character passwords and required them to be changed every day. The data would be very safe behind such passwords. Of course, remembering them would be highly inconvenient, if not impossible, and changing them daily would be maddening. Want even higher security? How about 1024-character passwords that have to be changed every hour?
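A quick calculation shows just how absurd that hypothetical mandate would be. Measuring a random password's strength in bits of entropy (length times log2 of the alphabet size, assuming characters are chosen uniformly):

```python
import math

def entropy_bits(alphabet_size: int, length: int) -> float:
    """Entropy of a uniformly random password, in bits."""
    return length * math.log2(alphabet_size)

# A typical 8-character password vs. the hypothetical 256-character mandate,
# assuming the ~94 printable ASCII characters:
print(entropy_bits(94, 8))    # ~52 bits: within reach of serious cracking rigs
print(entropy_bits(94, 256))  # ~1678 bits: vastly beyond any conceivable attack
```

Somewhere between those extremes lies the practical sweet spot: enough entropy to defeat attackers, short enough that people can actually use it without writing it on a sticky note.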

Practical security today has to strike a compromise: a balance between security and convenience. We have to keep pushing the boundaries of security without annoying users so much that they either give up or develop behavior that compromises security altogether. In my password example above, people would lose productive time every day and would likely have no choice but to write the password down every morning on a piece of paper kept right next to their monitor. All of those passwords lying around would severely compromise the security of the system - achieving exactly the opposite of the intended result. If you are interested in learning more about password-related challenges, I recommend the recent Wired article titled Kill the Password: Why a String of Characters Can’t Protect Us Anymore.

Clearly, there is a constant trade-off to make between security and convenience. However, not every organization is the same in terms of how strong its security needs to be and how much inconvenience it can impose on its employees. I often meet customers at very different points on that spectrum, from very casual to utterly paranoid.

Of course nobody will admit that they have a casual attitude towards security. However, consider the differences between retail, manufacturing, and, yes, many technology companies which often get by with relatively simple security (I know, there are always exceptions) versus organizations such as military installations, intelligence agencies, and nuclear facilities. These operate on a completely different security level and have no choice but to impose a lot of inconvenience on their employees.

Think about all of the employees working at Internet startups in Silicon Valley and about how much security hassle you could put them through - not much! James Bond, on the other hand, never tires of opening the cafeteria doors using his palm and voice print. Apparently, high security standards come with some jobs (or companies).

What’s important is that one size doesn’t fit all when it comes to security. Different organizations face different security problems, and their solutions have to be adjustable. For example, two-factor authentication may be appropriate in some environments, while biometrics-based authentication is a good fit in others. Getting the balance right between security and convenience is important - the balance matters!

Sunday, November 18, 2012

Social Is Now a Noun

Inspired by the tremendous growth of Facebook and its endless ability to compel users to share, communicate and engage, companies have been trying to convince their employees to do the same at work. They hope that the same type of technology will help employees to share, communicate and engage inside the enterprise just as they do in the consumer world. Sometimes it works, although many companies are learning that just because you build it, they won’t necessarily come.
Social software is a very hot space right now.
What’s interesting, though, is how the industry struggles to find the right way to describe what it is we are doing here. The idea is not new, actually. Collaboration has been around for well over a decade, and the benefits this new breed of social software offers are very similar to what collaboration delivered back in the early days of eRoom and OpenText Livelink. Heck, Lotus Notes has been called collaboration - the history of collaboration goes back to 1989! But of course we can’t use the old name ‘collaboration’ for this new, hip social software, can we?
Dedicated collaboration/social software is becoming rare
So the industry went on a long journey in search of the right term. We started with extended collaboration, extended enterprise collaboration, collaboration software for the enterprise, team collaboration, and content collaboration, and apparently that wasn’t cool enough, even though all these terms are still in use by various vendors. Then we borrowed the term social networking, since that was how we referred to the thing we did on Facebook back then. That didn’t last very long, and new terms came along, including social software, social communities, social workplace, social business, and social collaboration. At some point, the industry even briefly toyed with the idea of seriously calling this software category the Facebook for the Enterprise.

That, thankfully, didn’t take hold and so the journey continues. The latest trend is using just the word ‘social’. Yeah, I know, it is an adjective but old rules like grammar shouldn’t stand in the way of progress and world domination. And so, social became a noun.

More and more, social capabilities are built into enterprise applications
Well, maybe the search will be over soon. It is becoming increasingly apparent that ‘social’ is a feature rather than an industry. Social capabilities are increasingly being integrated into other enterprise software - from content management, business process management, and customer experience management to CRM and ERP. So perhaps we don’t have to worry about what to call the space, because it is not a space at all - it is an integral part of enterprise applications.

Wednesday, November 7, 2012

The Only Hope for Privacy?

In his interview with TechCrunch in early 2010, Facebook founder and CEO Mark Zuckerberg famously proclaimed that privacy is no longer the social norm. Well, not so fast, Mark. Some of us still think that privacy is important. But Mr. Zuckerberg has a point too. Protecting privacy is becoming increasingly difficult in the Facebook era.

It’s not just Facebook and the information that we voluntarily disclose. We are being increasingly tracked, often without knowing about it. From the websites we visit, our physical location via smartphone tracking, to the ubiquitous TV cameras on city streets - our moves are being recorded and the volume of information about us continues to grow.

So it appears that our future will be - just as Mr. Zuckerberg predicted - devoid of any privacy. Every one of us will always be monitored by the modern incarnation of the Orwellian telescreen, which will keep collecting huge quantities of information about us. Yet the growing volume of information may be our best hope for keeping some privacy after all. Let me explain.

From the film adaptation of Orwell's 1984
Powerful computers can be used by governments and corporations - the good guys and bad guys alike - to weed through all the information collected about you. Monitoring anyone in particular is relatively easy, but monitoring everyone to find someone or something in particular is becoming increasingly difficult. There is just so much information! Finding anything is becoming a tough chore that requires serious computing power. In other words, collecting a ton of information about you without the capacity to decode and analyse it is pointless.

In addition, the information is increasingly encrypted and comes in formats that are not easy to search and analyse. We all know that any encryption can - at least in theory - be decoded using a brute force attack. But we also know that the higher the level of encryption we apply, the harder it is to decode the data using brute force. This has been an ongoing cat-and-mouse game in which the larger and larger volume of data with increasingly stronger encryption demands more and more computing power to decode and analyse it.

Back in August 2011, I wrote about how the massive amount of recorded video surveillance actually made it harder to apprehend the suspects after the summer 2011 riots in London. Contrast that with the famous scene from the Philip Kaufman movie The Unbearable Lightness of Being, where the secret police incriminate people based on a handful of photographs after the Prague Spring of 1968. A couple of photos were relatively easy to analyse, while terabytes of video have proven practically impossible.

Today, there are a few key choke points on the Internet, such as the intercontinental submarine cables, and it is feasible that a hostile foreign government could tap into them to capture and decode all the data. Back in 2010, China allegedly re-routed and hijacked a large portion of US Internet traffic. But to do anything meaningful with all that data, they’d need to build a really powerful supercomputer. By the time it’s built, that supercomputer will likely become obsolete - the volume of data is simply growing so quickly that the brute computing power is having a tough time keeping up.

So, as it turns out, the growth of information volume could become an effective defense against spying and monitoring. Perhaps that works on a smaller scale, too. One ‘bad picture’ on Facebook might cause you trouble for years to come, particularly if it’s the only picture of you there is. However, if it is one of 10,000 pictures of you, chances are the compromising one will not surface during a cursory background check, provided that most of them are “good”.

This approach might even provide an effective defense strategy in an eDiscovery case, where the court subpoenas all information relevant to a given lawsuit. When complying with the subpoena results in a body of evidence comprising 10 documents, the opposing party will have an easy time finding what they need. If the court request, however, yields 10 million documents, the opposing party may need to reconsider whether they really want to pay their lawyers $500 per hour to review all of that evidence.
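The arithmetic of linear document review makes the point starkly. The review rate below (50 documents per hour per lawyer) is my own illustrative assumption; only the $500 hourly rate comes from the example above:

```python
def review_cost(documents, docs_per_hour=50, rate_per_hour=500):
    """Hypothetical cost of a linear eDiscovery review:
    hours of attorney time multiplied by the hourly rate."""
    return documents / docs_per_hour * rate_per_hour

print(f"${review_cost(10):,.0f}")          # 10 documents
print(f"${review_cost(10_000_000):,.0f}")  # 10 million documents
```

Ten documents cost a hundred dollars to review; ten million cost a hundred million. At that scale, the opposing party has little choice but to narrow the request or lean on automated culling.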

Perhaps privacy does stand a chance after all - when we drown the surveillance in a sea of data.

Thursday, October 25, 2012

Managing Paper in the Enterprise

Today, we observe World Paper-Free Day to remind ourselves that we are all on a mission to get rid of paper waste in the enterprise. I am a strong proponent of reducing the use of paper, even if I sometimes struggle myself. I read most of my books and magazines on my iPad, I use tools such as Evernote to take notes, and I pretty much never have any cash on me. If you looked around my office, you’d probably score me an 8 out of 10 on being paper-free - except for my bookcase full of books. I really like books...
Many of our customers, however, struggle to go paper-free. Indeed, flipping the switch overnight can be a little daunting. In reality, the transition has to be made easy to be realistic - it is more often a paper evolution than a revolution. That brings to mind some of the scenarios in which our customers manage paper in the enterprise today:

1. Inbound
This is the most obvious situation where our customers deal with paper - the front line in the war on paper. Many of our customers still receive paper-based information via snail mail and fax. They use our capture software to scan the paper documents right in the mailroom and to automatically extract as much information from the scans as possible via optical character recognition (OCR) and data extraction, which recognizes important data in the document (e.g., address, date, PO number) to populate the metadata. The same happens with faxes, which are captured using our fax software, where the same OCR technique can be applied.
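To illustrate the data-extraction step, here is a toy sketch. Real capture products use trained recognition models and zonal OCR rather than hand-written patterns, and the field names and regular expressions below are my own invention, but the underlying idea is the same: scan the raw OCR text for known fields and promote whatever is found into metadata.

```python
import re

# Hypothetical field patterns - real capture software is far more sophisticated.
PATTERNS = {
    "po_number": re.compile(r"\bPO[-#\s]*(\d{4,10})\b", re.IGNORECASE),
    "date": re.compile(r"\b(\d{1,2}/\d{1,2}/\d{2,4})\b"),
}

def extract_metadata(ocr_text: str) -> dict:
    """Scan OCR output for known fields and return whatever was found."""
    found = {}
    for field, pattern in PATTERNS.items():
        match = pattern.search(ocr_text)
        if match:
            found[field] = match.group(1)
    return found

scan = "Invoice dated 12/27/2012 referencing PO #4500012345."
print(extract_metadata(scan))  # {'po_number': '4500012345', 'date': '12/27/2012'}
```

Every field extracted automatically is one less field a mailroom clerk has to key in by hand, which is where the business case for capture software comes from.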

2. Outbound
At the tail end of many business processes is a new piece of content that has been produced to communicate to the stakeholders. This communication comes in two forms:
- Publishing - which is a form of communication using the same content asset(s) for a given target audience (more than one person). Publishing can occur online, on a portal, via mobile devices, email, etc. but it can also happen using paper - for example as a book or a marketing brochure.
- Customer Communication Management (also known as Output Management), which is communication that has been personalized for a single individual. An example of such communication is a utility bill, which contains data about your monthly charges but can also include useful, personalized tips on how to lower your next bill. This type of communication can again occur via multi-channel delivery, one channel of which is often paper.

3. Physical Records
Managing records often involves the capability to manage physical records as well. Physical records - usually pieces of paper, but sometimes objects such as police evidence - need to be kept the same way as electronic records, except that they don't fit into a digital content repository. With physical records, the cost of storage is a major issue, and records disposition usually means freeing up space on a shelf in a warehouse where the boxes of physical records are stored. Warehouse space is a major cost factor, and many customers are approaching us today with projects to convert existing physical records stores into electronic records en masse.

4. Paper Processes
Yes, I know that the main idea of business process management (BPM) is to route information quickly from step to step and task to task - which is ideally done in electronic form. But a few of our customers have to live with paper-based processes for now, and yet they still find it efficient to use BPM to track the status of each process instance. The workers complete their tasks on paper and then "check off" the task in the BPM system to alert the next person that a task is coming. I know, I know...this is not the kind of BPM I usually recommend to our customers, but I've seen it happen. And this approach still delivers many of the BPM benefits: the manager can monitor the status of all the workers and processes, the processes can be optimized, bottlenecks can be identified, and work teams can be re-aligned as needed. Those are some really cool benefits of BPM. Still, the plan is usually to add capture software eventually and get rid of the paper altogether!
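The "check off" tracking described above can be sketched in a few lines. The class and step names here are illustrative, not an actual BPM product API:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ProcessInstance:
    """One running instance of a paper-based process tracked in a BPM tool."""
    steps: List[str]                           # ordered task names from the process model
    completed: List[str] = field(default_factory=list)

    def check_off(self, task: str) -> Optional[str]:
        """Mark the current paper task as done; return the next task, if any."""
        expected = self.steps[len(self.completed)]
        if task != expected:
            raise ValueError(f"out of order: expected {expected!r}, got {task!r}")
        self.completed.append(task)
        done = len(self.completed)
        # The returned task is what the system would alert the next worker about.
        return self.steps[done] if done < len(self.steps) else None

invoice = ProcessInstance(steps=["receive", "review", "approve", "file"])
print(invoice.check_off("receive"))  # → review
```

Even this trivial state tracking gives the manager a live view of where every paper instance sits - which is where the monitoring and bottleneck benefits come from.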

These are some of the use cases where our customers deal with paper - often as an intermediate step on the way to a paper-free enterprise. The paper-free vision is a great one, but we will probably be dealing with paper for a long time. Any step that moves us in the right direction deserves credit.

Here is to a Paper-Free World! 

Tuesday, October 16, 2012

These Filler Words

We marketers live by making up names - names of markets, products, and technologies. In the English language, names are easily created by chaining words together. New terms can be created very easily: Mountain Standard Time (MST), automated teller machine (ATM), and Securities and Exchange Commission (SEC) are just a few examples of how nouns and adjectives can be strung together in English to create smart-sounding new terms and names. Acronyms such as NBC, CIA, NFL, LAX, JFK, SAT, BTW, CEO, and USA are part of our everyday language.
Technology marketing often resembles alphabet soup.
The problem is that technology marketers tend to fall in love with three-letter acronyms. Consequently, the terms and names they coin have to consist of three words: Random Access Memory, Content Distribution Network, and Subscriber Identity Module are just a few examples. As a result, three-letter acronyms such as RAM, CDN, and SIM dominate our technology language. We love them so much that we even have an acronym for the term 'three-letter acronym': TLA.

It seems that sometimes we even add an unnecessary word just to make a name consist of three words. For example, I think Enterprise Content Management could do without the word 'enterprise'. Since there is no Consumer Content Management (unless you count Picasa and iTunes), we could easily get by with just Content Management. Similarly, I don't see much difference between Business Process Management and plain Process Management. Indeed, the words 'enterprise' and 'business' are often added without much reason. We say 'business ethics' where just 'ethics' would do perfectly fine.

But there are even worse transgressions of this kind. When I hear Advanced Case Management, I have to chuckle. 'Advanced' as opposed to what - Basic Case Management? Or, how about Extended Data Processing? 'Extended' as opposed to Limited Data Processing? And then there is the omnipresent word 'Management'. Marketing Automation Management? Hmm... Marketing Automation would probably do. Customer Experience Management? I vote for Customer Experience!

Don't get me wrong - I am not saying that all three-word names are wrong. Supply Chain Management is a perfectly good term, and none of the three words can be dropped. Similarly, there is a difference between Asset Management and Digital Asset Management. There is definitely a place for three- and even four-word names. But what I am suggesting is that we should examine the meaning before we get carried away by the language's rhythm, melody, or whatever it is that makes us construct such ridiculous-sounding names.

Yet, there is hope. I see examples of new industry terms that consist of just two words or even one. Cloud Computing, Virtualization, Analytics - these are some very new industry terms we have settled on without messing them up. I'm sure we could have coined Advanced Virtualization Management or Extended Information Analytics, but we didn't. Simplicity and logic have prevailed.

I know, it's too late to reverse the course of history. Ten-year-old terms such as Enterprise Content Management or Business Process Management will hardly be changed at this point. Although, if you follow my blog, I usually write just 'content management' - yes, in lower case, because back in school I learned that only proper names should be capitalized. (Sigh...) Anyway, let's create names and terms for new technologies that are simple, easily understandable, and free of redundant words!

Here is to good marketing!

Tuesday, October 9, 2012

Abstinence is not a Solution

Facebook just announced 1 billion active users last week. One BILLION is an incredible number. There is hardly any other product in the world with 1 billion customers. OK, maybe Coca-Cola, but I don't drink Coke - I do Facebook...
We all know that Facebook and other social media have revolutionized social interaction. Social media enable us to be much more connected with friends and to create relationships with new people. Facebook makes possible and easy things that simply weren't possible before. The announcement, however, prompted me to think about a neglected factoid: there are 7 billion people on the planet, and if only 1 billion are on Facebook, what the heck are the other 6 billion doing?

Yes, sure, among those 6 billion are many babies and other folks who are not using any computer. But still, there must be at least 3 billion people who have so far resisted Facebook. WHY???

I have many such Facebook abstainers among my friends, and no amount of encouragement has moved the needle so far. Usually, they tell me that they don't want to waste their time on frivolous conversations about what people had for dinner tonight. Another common argument is the narcissistic nature of many Facebook posts - they are more about advertising yourself than any other cause. Yet the most frequent argument against Facebook is security and privacy: disclosing any kind of information in such a public forum will ultimately compromise your privacy and security. Won't it?

Well, I have some news for you, dear Facebook abstainers. One of the greatest security threats you can expose yourself to is not being on Facebook. Surprised? I'm serious!

If you don’t claim your own identity on Facebook (and other social sites), you expose yourself to someone else doing it in your name. It is really easy today to join Facebook as John Smith with a validated email address on Gmail or Yahoo Mail. The bad guys can establish a pretty decent profile with pictures, engage with your colleagues from remote locations, and collect a lot of personal data about you from others. Before you know it, they can pretend to be you. This is social engineering in the social media world.

With Facebook Connect, it gets even scarier. Facebook Connect is increasingly the preferred method of authentication to many other sites and services. Now, the perpetrator who has stolen your identity on Facebook gets access to many other sites on the Internet - as you!

The moral of the story is very simple. As much as you may not be a fan of Facebook, it is important to establish your own identity online. Because if you don't do it, someone else might do it for you...

Sunday, September 30, 2012

The Family Album of the Facebook Generation

When I was a kid, my parents had a small camera and, like most parents, they took many pictures of their offspring. As a result, there are a couple of family albums and a shoebox full of family pictures somewhere in the basement. Among those pictures are a couple hundred photographs of me.

Now, fast forward to the present time. I have literally thousands of pictures of my children. Those pictures are easily shared with other family members on flash drives and via Dropbox and often uploaded to Facebook or Flickr. When our children are grown, they will live in a world where their lives are well documented in pictures. Really well.

Yep, I have thousands of pictures of my kids...and those pics last forever!
What’s more, all those pictures will be fairly broadly distributed - our kids will have limited control over where their pictures are used. The pictures will be in many hands - many people will have a copy. Being camera shy just won’t fly.

This development is the result of two major events. First, the advent of digital photography made taking pictures significantly less expensive - almost free. Pictures are not entirely free, as we pay for storage and often for transmission, but compared to what they used to cost, they are pretty much free today. Back in the days of negatives and prints, each picture had an explicit price. A roll of 35mm film cost about $8, and developing it with 36 prints cost about $12 - that means each picture came to roughly $0.55. That made even the most avid photographer quite selective about when to press the shutter!

The second event was the convergence of cameras and mobile phones. For years now, most mobile phones and smartphones have included a camera, and since pretty much everybody has a mobile phone today, everybody is a photographer. There are over 6 billion mobile phones out there, and a significant portion of them - at least half - have a built-in camera. On top of that, millions of digital cameras, from point-and-shoots to fancy digital SLRs, are sold every year. In the days of film cameras, there were only a few photographers in any group of people - at weddings, on group trips, or at sports events. Today, everybody is taking pictures at all times. Some events in front of large audiences (e.g. concerts) have been completely transformed by the constant flashes from thousands of cameras.

All of a sudden, photography is free and ubiquitous, and the result is predictable: our lives are being documented like never before. Approximately 250 million pictures are uploaded to Facebook every day, which is almost 25% of all pictures taken worldwide. In 2011, an estimated 375 billion pictures were taken in the world. That's over 52 pictures for every single human being each year - at least one picture a week. Given that picture taking is likely concentrated in a smaller percentage of the world population, the real number per photographer is much higher. Every one of the 950 million Facebook users uploads almost 100 pictures per year. That's right, "uploads", not "takes". I'm guessing that if one out of every 10 pictures taken ends up on Facebook, the average Facebook user might be taking about 1,000 pictures a year. That's roughly 18,000 pictures before a child has a chance to escape to college from the parental picture taking.
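The back-of-the-envelope arithmetic above is easy to verify; the inputs below are just the estimates quoted in the text, so treat the outputs as rough orders of magnitude:

```python
# Back-of-the-envelope check of the estimates in the text (2011/2012 figures).
pictures_per_year = 375e9       # estimated pictures taken worldwide in 2011
world_population = 7e9
fb_uploads_per_day = 250e6      # pictures uploaded to Facebook daily
fb_users = 950e6                # active Facebook users at the time

per_person = pictures_per_year / world_population           # pictures/person/year
fb_share = fb_uploads_per_day * 365 / pictures_per_year     # Facebook's share of all pictures
uploads_per_user = fb_uploads_per_day * 365 / fb_users      # uploads/user/year
taken_per_user = uploads_per_user * 10                      # if 1 in 10 pictures is uploaded

print(f"{per_person:.1f} pictures per person per year")     # ~53.6
print(f"{fb_share:.0%} of all pictures land on Facebook")   # ~24%
print(f"{uploads_per_user:.0f} uploads per Facebook user")  # ~96
print(f"{taken_per_user:.0f} pictures taken per user")      # ~961, i.e. about 1,000
```

The numbers hang together: roughly a quarter of the world's pictures end up on Facebook, and the average user plausibly takes on the order of 1,000 pictures a year.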

That's a pretty big shoebox. Our lives are documented way more than those of any generation before. And we need to learn to live with it.

Tuesday, September 18, 2012

Darwin Meets the Innovator's Dilemma - in the Cloud

In his book Dealing with Darwin, Geoffrey Moore - of Crossing the Chasm fame - explains the difference between complex systems and volume operations. According to this concept, technology vendors fall into one of two categories. Complex systems vendors focus on a relatively small number of high-value, high-touch transactions that are delivered in the form of sophisticated, customized solutions, usually integrated with other systems.
Geoffrey Moore's model for Complex Systems vs Volume Operations
Volume operators do exactly the opposite. They deliver relatively simple, inexpensive solutions through low-touch transactions - no direct sales force, but resellers, retailers, or online sales. These solutions come with no customization, no integration with other systems, and a limited feature set - one size fits all. While there are many scenarios in between (e.g. small-business offerings), Geoffrey Moore suggests that the more a vendor is focused on one or the other extreme, the more effective its business model. IBM and Oracle are examples of complex systems vendors, while Apple and Google are volume operators.

The most important point Moore makes is that a vendor's business model becomes so optimized for one or the other business architecture that crossing from one side to the other becomes practically impossible. Having started on one side of the model, the vendor's business processes and key performance metrics are completely hard-wired to that architecture.

Geoffrey Moore at an AIIM project
Now, let's mesh Moore's model with another one - the Innovator's Dilemma by Clayton Christensen. Professor Christensen suggests that disruptive innovations always attack the incumbents from the bottom up - by providing low-end solutions for less demanding customers and thus flying under the radar of the incumbent market leaders - until they gain the critical mass and sufficient functionality to challenge those incumbents.

Clayton Christensen's Innovator's Dilemma model 
OK, time to put the two models to work - in enterprise software. The established vendors, including IBM, Microsoft, and Oracle, are supposedly being challenged by disruptors coming from the lower end of capabilities - just as the Innovator's Dilemma predicts. Those disruptors are companies such as Salesforce, Google, Dropbox, and others. They all have one thing in common: they are cloud-based. But how do they do it when we look through the Geoffrey Moore lens?

Salesforce is a cloud-based disruptor that initially targeted the sales force automation (SFA) market and later the customer relationship management (CRM) market. Salesforce clearly started as a complex systems vendor from day one and has continued evolving in that direction. Its initial customer base was mostly smaller companies and departments, but it kept focusing on complex systems - evolving towards more valuable and more complex deployments. Salesforce never had to shift from one side of the Moore model to the other. Today, a typical Salesforce deployment involves integration with marketing automation and enterprise resource planning systems.

Microsoft started as a complex systems vendor with enterprise on-premise offerings such as Exchange and SharePoint (note: I'm discussing enterprise software here, not the Xbox business). To take on the cloud challenge seriously, Microsoft created Office 365 - a cloud-based offering that is clearly moving in the direction of volume operations on the Moore model. That actually explains why Microsoft uses different branding for the cloud-based solution and why it is not particularly worried about integration between Office 365 and the on-premise offerings. While Moore's model says Microsoft shouldn't be able to switch from complex systems to volume operations, the company is applying its considerable financial resources to power through those challenges, ignoring the business-model constraints altogether.

Clayton Christensen during his visit in Waterloo, ON
Google and Dropbox started as cloud-based offerings focused purely on volume operations - on consumers. The consumer focus and free price helped them grow their user bases quickly, often infiltrating the enterprise. But the offerings have clearly been designed as consumer software, aiming to attract as many eyeballs as possible at the least possible cost. That means a basic feature set, no customizations, no integrations, no direct sales force - simply a one-size-fits-all service.

While vendors such as Google and Dropbox - and also Apple, Amazon, Evernote, etc. - have a good formula for driving user adoption and even penetrating the enterprise, their business model has been designed to cater to the consumer, not the enterprise. Enterprises need - I repeat, "need" - customization and integration with other systems. Just think of managing user lists and groups. Sharing content on Dropbox with your friends might be easy; sharing something with all the employees in Sales or Marketing at your company is much less trivial. You can't manage all the user groups by hand, so you need to integrate with existing systems - e.g. directory services and the HR management system. Enterprise software can do that. Consumer software can't.
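The group-sharing point can be illustrated with a sketch; the in-memory dictionary below is a hypothetical stand-in for a directory service such as LDAP or Active Directory, which is exactly the integration consumer tools lack:

```python
# Hypothetical directory data; in an enterprise this would be resolved live
# against LDAP/Active Directory rather than maintained by hand.
DIRECTORY = {
    "Sales":     ["ann@acme.com", "bob@acme.com"],
    "Marketing": ["cid@acme.com", "ann@acme.com"],  # people can be in several groups
}

def share_with_groups(document: str, groups: list) -> set:
    """Return the de-duplicated set of users who get access to the document."""
    recipients = set()
    for group in groups:
        recipients.update(DIRECTORY.get(group, []))
    return recipients

print(sorted(share_with_groups("Q4-forecast.pdf", ["Sales", "Marketing"])))
# → ['ann@acme.com', 'bob@acme.com', 'cid@acme.com']
```

When the directory changes - someone joins Sales, someone leaves Marketing - the shared document's audience updates automatically, which is precisely what hand-maintained sharing lists in consumer tools cannot do.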

The consumer vendors might be penetrating the enterprise but today, they don’t have any enterprise offerings.

PS: This post has been inspired by a spirited discussion during the last AIIM Board meeting. I love these conversations with my fellow Board members!