Monday, December 30, 2019

Has Martech Killed the CMO?

Video killed the radio star and marketing technology might have "killed" your CMO. Let me explain. 

Ten years ago, Marketing was still more art than science, but with the emergence of marketing automation technology from companies such as Marketo (Adobe), Eloqua (Oracle), and ExactTarget (Salesforce), marketing changed. No longer some creative voodoo, Marketing is now all about processes with very measurable metrics. CMOs finally have a way to justify their budgets by measuring every step of the pipeline-building process with metrics around MQLs, SALs, SQLs, and their conversion ratios. Marketing is now responsible for something tangible rather than the logo, color palette, and business card design. Marketing now lives and dies with the pipeline!
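To make the point concrete, here is a minimal sketch of the kind of funnel math those platforms automate; every stage name and conversion rate below is a hypothetical illustration, not a benchmark.

```python
# Hypothetical funnel: 10,000 leads at the top, illustrative conversion rates.
conversion_rates = [
    ("MQL", 0.20),          # lead -> marketing-qualified lead
    ("SAL", 0.50),          # MQL -> sales-accepted lead
    ("SQL", 0.60),          # SAL -> sales-qualified lead
    ("Opportunity", 0.40),  # SQL -> opportunity
]

count = 10_000
print(f"Leads: {count:,}")
for stage, rate in conversion_rates:
    count = int(count * rate)
    print(f"{stage}: {count:,} ({rate:.0%} conversion)")
```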

Except that Marketing doesn’t own the pipeline. Sure, Marketing is responsible for the “top of the funnel”, the earliest stages where we gather a list of prospects and filter them down based on their likelihood to buy. But when those prospects become opportunities, the Sales team takes over. Simply put, Marketing owns the early stages of the pipeline, Inside Sales owns the middle stages, and Field Sales owns the final stages:

[Figure: Stages of the go-to-market pipeline]
While all the MQLs and SQLs are useful metrics, what the business really cares about is revenue, win rate, and the number of new accounts. The most meaningful pipeline metric is the revenue forecast, and the CMO doesn’t own that. Who does? The head of Sales. Not only does he/she own all the middle and late stages but, in most companies, Marketing supplies only somewhere between 30% and 70% of the leads. The remaining leads are generated by Sales, channel partners, and strategic partners – all of which are usually run by the head of Sales. That’s perhaps one of the forces behind the recent emergence of a new role, the Chief Revenue Officer (CRO), who oversees the entire go-to-market effort and thus truly owns the pipeline. The CRO is almost always a sales leader; hardly ever a marketing person.

And so, many CMOs now find themselves reporting to the CRO rather than to the CEO. With that, the CMO might be a “Chief Officer” in title, but he/she is a peer to the head of Inside Sales and the various Regional Vice Presidents of Sales. That doesn’t mean CMOs are no longer needed, but neither they nor their Marketing teams see being absorbed into Sales as a promotion. In a way, the new marketing technology that finally allowed them to measure their contribution to the company’s bottom line led to a restructuring that moved them down a step.

But what gets lost in this restructuring are all the other things that CMOs used to do before over-pivoting towards the pipeline. At a high level, Marketing has three main roles, and the go-to-market activities are just one of them. The other two are awareness and product marketing, and there is huge value in these activities that goes far beyond the scope of the CRO. Unfortunately, Marketing doesn’t have a single system of record for those activities, and their contribution is very difficult to measure. The only system of record that measures Marketing’s performance is the Marketo/Salesforce combination that tracks the effectiveness of its contribution to the go-to-market effort.

The awareness activities put the company on the map and increase its brand recognition. Marketing still needs to run PR, thought leadership, advertising, and influencer programs, but because those programs don’t directly contribute to the pipeline, they are not top-of-mind for the CRO. To compensate, some companies have created a separate Chief Communications Officer role, reporting to the CEO, which diminishes the role of the CMO even further.

The myopic focus on the pipeline also tends to neglect the product marketing functions. Sure, the product marketing team is probably still creating the content assets for go-to-market campaigns, but all those infographics, case studies, and videos tend to be very transactional. Meanwhile, so many great companies struggle to articulate what they actually do and how it is different from everybody else. This is not about product-level messaging, and it is also not about “selling benefits, not features”. It’s about the corporate narrative. Without a compelling corporate narrative, the messaging is usually watered down to generic benefits around growing revenue or improving margins. Unfortunately, everybody claims to do that. Yet a marketing machine driven by pipeline goals has little time to spend on corporate messages, and the people who could craft them are too far down in the hierarchy to influence the corporate narrative.

The same can be said for other strategic marketing roles such as segmentation, branding, pricing, and even analyst relations – the CRO will say they are important, but since they have only an indirect impact on the next quarter’s revenue, they will not be a priority. You’ll have an all-day meeting with a Gartner analyst, and both the CRO and the CMO, bored to death, leave early to spend their time on some pipeline-related activity. And do you know who notices that? The Gartner analyst!

Marketing technology has completely revolutionized Marketing over the last decade. What used to be a function that had to consistently justify its budget and existence is now seen as a critical part of the go-to-market effort with a very measurable contribution. This transformation yielded much better alignment with Sales, greater predictability of revenue, and usually a higher budget, as Marketing can show a direct causality between budget and pipeline. All of that is fantastic, but looking at Marketing only as a tactical pipeline-generating function omits significant strategic contributions to the company that should really matter to the CEO. Because Marketing is not just about the pipeline.

Thursday, October 31, 2019

IoT, Digital Twins, and the Search for Recurring Value

Just a year ago, the Internet of Things (IoT) was all the buzz. Every analyst, vendor, and pundit was debating the massive potential of connecting not millions but billions of machines and devices to the Internet. After all, the idea that all machines will become smart, connected, and generate a lot of valuable data promises to revolutionize how we operate and service all those ‘things’. The IoT trend became so powerful that it spawned a slew of sub-trends, including Home Automation, Smart Cities, and Industry 4.0 – each with its own vision and ecosystem of vendors, experts, and conferences.

Fast forward 12 months and the IoT hype has faded. The mainstream media is now talking about artificial intelligence, privacy, and augmented reality, and IoT doesn’t even make the list of the top technology trends. Once-hot IoT platform vendors such as Uptake, C3, and GE Digital have gone quiet, and the big vendors such as Salesforce, Oracle, and SAP have reprioritized their IoT initiatives. Where IoT was the leading story of many conference keynotes last year, it is hardly mentioned this year.

What happened?

The obvious answer is that IoT was, just like many other over-hyped trends, ahead of its time. Adoption lags well behind the vendors’ narrative, and sometimes the technology isn’t quite doing what the marketing messages promise yet. A few POC projects that fall flat can quickly pour cold water over a hot trend. However, there is something more fundamental about the IoT problem, and it isn’t the complexity or the maturity of the technology.

The problem also doesn’t lie in a lack of awareness - in fact, most companies would kill for the level of buzz IoT has been getting as a category. IoT adoption is actually very rapid in the consumer world. Just count all the smart speakers, thermostats, switches, smoke detectors, doorbells, and cameras in your house. 

The greatest challenge for IoT adoption in the industrial world has to do with the analysis of the data. With today’s state of technology, it’s relatively easy to connect the machines and collect petabytes of sensor data. Sure, most equipment out there doesn’t have any sensors or connectivity, but retrofitting this equipment with smart electronics is not that difficult and is getting less and less expensive. The problem lies in those huge volumes of data. What do we do with it all?

This type of big data is rather expensive to store, manipulate, and analyze. The expectation for IoT applications is to provide real-time or near real-time analysis, which is not that simple given the massive data volumes. Many companies really need to spend time designing their data management architecture and, in particular, deciding what data should stay in the cloud and what should be stored on-premises. Yes, I am a cloud believer, but not everything will happen in the cloud. The elasticity of the cloud is useful for handling workload peaks, but the cost can add up very quickly. This is where a hybrid architecture can make a lot of sense.
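As a sketch of what such a hybrid split can look like, here is a toy routing rule that keeps full-resolution readings on-premises and ships only cheap aggregates to the cloud; the aggregation choices and destinations are assumptions made purely for illustration.

```python
# Sketch of a hybrid data-routing rule: keep raw, high-frequency sensor
# readings on-premises and forward only downsampled aggregates to the cloud.
# The aggregate fields and the two destinations are illustrative assumptions.
from statistics import mean
from typing import Iterable


def route_readings(readings: Iterable[float]) -> dict:
    """Return what would be stored locally vs. sent to the cloud."""
    readings = list(readings)
    return {
        "on_premises": readings,          # full-resolution history stays local
        "to_cloud": {                     # cheap-to-store summary goes up
            "count": len(readings),
            "mean": mean(readings),
            "min": min(readings),
            "max": max(readings),
        },
    }


print(route_readings([71.2, 71.4, 73.9, 70.8]))
```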

Also, most of the data is only useful when viewed as a trend over time, and storing and analyzing time-series data is not trivial. Most traditional databases are designed to capture a single value for each field, while time-series databases need to capture multiple values for that field, each with a timestamp. Managing the timestamp/value pairings efficiently is what makes time-series databases particularly useful for analyzing trends, which is critical for IoT applications. But such database systems are often complex and expensive.
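For illustration, here is what that timestamp/value pairing looks like in a few lines of Python, with a naive moving average standing in for the trend analysis a real time-series database would optimize; the sensor values are made up.

```python
# A toy time series: (timestamp, value) pairs for one sensor field,
# plus a naive moving average to expose the trend. Purely illustrative.
from datetime import datetime, timedelta

series = []  # list of (timestamp, value) pairs
start = datetime(2019, 10, 1)
for i, temp in enumerate([70.1, 70.4, 70.9, 71.8, 73.2, 75.1]):
    series.append((start + timedelta(hours=i), temp))


def moving_average(points, window=3):
    values = [v for _, v in points]
    return [
        sum(values[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(values))
    ]


print(moving_average(series))  # rising averages reveal the upward trend
```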

Finally, the trend analysis itself is perhaps the greatest challenge. Sure, the basic idea is simple: your sensor measures the temperature of a particular component and if that reaches a certain threshold, you sound an alarm. But let’s face it, this example is trivial. Your machine already does that without any IoT infrastructure - just think of all the warning lights in your car. To get some value out of your IoT investment, you need to raise the bar on the data analytics.

What you need is a digital model of your machine where you can analyze the machine holistically, combining data from multiple sensors and examining how they influence each other. You can call it a digital twin, digital simulator, digital avatar, or cyber object - but you need it. You need to build this model to analyze your sensor data in a way that yields a recurring benefit that justifies the IoT investment. You will end up with a specific model for every type of machine, and to build it you need a data scientist but also someone who really, really understands the machine and its inner workings. And that’s the challenge. That’s why there aren’t that many digital twin models available.
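To make the idea tangible, here is a toy digital twin for a hypothetical pump that combines three sensor streams and encodes one cross-sensor rule; the thresholds and the bearing-wear heuristic are invented for illustration and would, in reality, come from someone who knows the machine.

```python
# A toy "digital twin" that combines several sensor streams and encodes
# domain knowledge about how they relate. All thresholds are hypothetical.
class PumpTwin:
    def __init__(self):
        self.state = {"temperature_c": None, "vibration_mm_s": None, "rpm": None}

    def update(self, **readings):
        self.state.update(readings)

    def health_checks(self):
        t = self.state["temperature_c"]
        v = self.state["vibration_mm_s"]
        rpm = self.state["rpm"]
        alerts = []
        # Cross-sensor rule: high vibration at low rpm suggests bearing wear,
        # something no single-sensor threshold would catch on its own.
        if v is not None and rpm is not None and v > 4.5 and rpm < 1200:
            alerts.append("possible bearing wear")
        if t is not None and t > 85:
            alerts.append("overheating")
        return alerts


twin = PumpTwin()
twin.update(temperature_c=78.0, vibration_mm_s=5.1, rpm=1100)
print(twin.health_checks())  # -> ['possible bearing wear']
```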

The digital twin is not just about pointing a machine learning system at a data lake to see what you can learn from the data. Sure, you will be able to discover new, previously unknown patterns or relationships this way, but that will likely yield a one-time benefit. For example, you might discover a particular vulnerability in a specific component, which can be extremely valuable. But once you have redesigned the part and fixed the problem, the IoT data no longer delivers value. The value of all that IoT investment was a one-time benefit. What you need is recurring value, because without recurring value, nobody will pay any recurring cost for your IoT solution.

Ultimately, the recurring benefit yields the ROI that justifies the substantial cost of your IoT investment. For example, the recurring benefit can come in the form of a predictive maintenance application that determines when and what type of service should be performed to prevent any unplanned downtime or performance degradation. Now, that can save a lot of money but only if you have a digital twin model that can make such predictions from all that IoT data.
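As a minimal sketch of that idea, the snippet below fits a linear trend to a made-up wear signal and estimates how many days remain before it crosses a hypothetical failure threshold; real predictive maintenance models are far more sophisticated than a straight line.

```python
# Minimal predictive-maintenance sketch: fit a linear trend to a degradation
# signal and estimate days remaining before it crosses a failure threshold.
# The data, threshold, and linear assumption are all illustrative.
import numpy as np

days = np.arange(10)  # observation times (days)
wear = np.array([1.0, 1.1, 1.3, 1.4, 1.6, 1.7, 1.9, 2.1, 2.2, 2.4])
threshold = 3.5       # hypothetical failure threshold

slope, intercept = np.polyfit(days, wear, 1)
days_to_threshold = (threshold - intercept) / slope
print(f"Schedule service in about {days_to_threshold - days[-1]:.1f} days")
```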

I remain extremely bullish on IoT. The analysts are estimating that there are already 7 billion IoT devices worldwide. That’s more than PCs! IoT can bring the transformational power of the internet to a huge number of end-nodes, creating an amazing benefit. But we are not quite there yet.

Tuesday, September 17, 2019

How to Estimate TAM


Assessing the Total Addressable Market (TAM) is a key element of every business plan. TAM should, however, not be confused with the actual current market size, as I explained in my previous post, How To Size a Market. In short, TAM represents the maximum potential market: if everyone who could possibly buy your product bought it, multiplied by the price of each unit, that is the total addressable market.

[Figure: Think big when calculating your TAM]
TAM calculations are, in general, not concerned with your ability to execute or with the market’s readiness. That means that when calculating your TAM, you are not worried about your geographical presence, your competitive win rate, your ability to avoid discounting, the strength of your brand, or your products’ maturity and quality. All those factors (and more) are what ultimately reduce the TAM down to your current revenue.

Between those two data points lie other metrics that are sometimes used, such as Serviceable Available Market (SAM) or Serviceable Obtainable Market (SOM). Those metrics are basically derived by applying some of the execution constraints to the TAM like a set of filters. While interesting, those metrics become very subjective and specific to each organization. For instance, your absence in certain geographies reduces your SAM, but that absence is a result of your recent decision-making and it may or may not be easily revisited (e.g., by entering geographies such as Japan, China, or Africa). That’s why TAM is the most common metric – it avoids all such nuances and recent strategic and tactical business decisions.

Finding TAM

So, how do you determine your TAM? First, search around and see whether it already exists. In established markets, one of your competitors, an industry analyst, or an investment bank might have already published a TAM, whether or not they disclose how they came up with it. If you find such a data point, it’s your lucky day. Senior management and investors love to quote Gartner or Goldman Sachs, and their numbers are hardly ever questioned.

There are a few other possible data sources including large system integrators (Deloitte, Accenture, etc.) as well as some of the online collections of useful and less useful statistics such as eMarketer and Statista. Obviously, publications such as The Economist, The Wall Street Journal, and Business Insider also frequently quote useful data points. It’s a good habit to collect the articles with relevant data points for when you might need them.

But, let’s face it, you are probably reading this article because you have not been successful finding your market’s TAM and you are stuck. Well, if you can’t find it, you have to calculate it.

Calculating TAM

There are multiple methods of calculating TAM. Each one involves certain judgment calls and educated estimates. This is key – to calculate TAM, you will have to rely on your market expertise and make some estimates. Just like I discussed in my previous blog post on calculating the market size, your educated estimates will be much better than no data at all. After all, that’s exactly what the analysts at Gartner, Deloitte, and Goldman Sachs do.

The three methods I will discuss here are:

1. Bottom-Up Calculation
The bottom-up method is based on your licensing model and estimates the maximum number of licenses available in the world.

2. Top-Down Estimate
The top-down method is based on the share of wallet from the overall worldwide spending in a given sector.

3. Economic Impact Estimate
This method is based on an estimate of the economic impact of your product and what companies might be willing to spend to capture that benefit.

I’m sure there are other methods out there, but I have found these three the most practical. So, let’s take a closer look:

Bottom-Up Calculation

This is my favorite method of estimating TAM because it tends to be the most accurate and relies on data that is hopefully available with some level of accuracy. Simply put, this method counts the maximum number of licenses that you could possibly sell. If your licensing is by household, you count the number of households. If you are licensing by salesperson, you count the number of salespeople. And if your licensing is based on the number of wind turbines, you count all the wind turbines out there.

Let’s take a specific example. Let’s say that you are manufacturing a black box device for aircraft. Your licensing is basically per aircraft, and hence you need data on the number of aircraft in the world. That data exists – it won’t take you long to find a number of data sources providing the annual production for key manufacturers, the active fleet for each airline, etc. You, as the expert in this space, should know those data sources and be able to assess their validity.

Now that you have the maximum possible number of licenses, you multiply it by the price of your unit and voila, that’s your TAM. Of course, you can get more granular. Let’s say that your product is only for commercial aircraft and not for private jets – you can calculate your TAM based on that. Or maybe the long-haul jets require two units – you can adjust your TAM accordingly.
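Here is what that bottom-up arithmetic looks like as a quick calculation; the fleet counts and unit price are made-up placeholders, not real aviation data.

```python
# Bottom-up TAM sketch for the black-box example.
# All counts and the unit price are hypothetical placeholders.
unit_price = 20_000                       # hypothetical price per black box
fleet = {
    "commercial_narrow_body": 23_000,     # assume one unit each
    "commercial_long_haul": 5_000,        # assume two units each
}

units = fleet["commercial_narrow_body"] * 1 + fleet["commercial_long_haul"] * 2
tam = units * unit_price
print(f"TAM: ${tam:,}")  # -> TAM: $660,000,000
```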

But remember, as you are adjusting your TAM to fit your specific product, you might cross the line from TAM to SAM or SOM because some of these filters are based on your business decisions that have reduced your addressable market. For example – not selling to private jets might be a smart GTM focus but they need your black boxes just as much as the commercial jets and your TAM should reflect that. While it is important to put some boundaries around your company’s or product’s opportunity, don’t restrict that opportunity based on tactical thinking.

The challenge with this method is that the number of possible licenses might not be available or might not be precise enough. Let’s say you license by the number of salespeople, but your product is only relevant to those who are on the road every day. Or maybe it’s only for salespeople selling insurance. Finding that data may prove much more difficult. Still, there are many sources worth checking out: Gallup, Nielsen, Pew Research Center, US Bureau of Economic Analysis, US Bureau of Labor Statistics, US Census Bureau, Data.gov, Reuters Data Dive, and many others.

Another way to get to relevant data is through your own customer base. Let’s say that you have a product licensed for your customers’ IT helpdesk, and so you need to know the number of IT helpdesk workers in the world. Analyzing your own customer data, you might be able to determine that your customers have, on average, 8% of their employees in IT and, of those, 25% work the helpdesk. Knowing this ratio is extremely useful, and if you have at least a few hundred customers, it is very accurate.

With that ratio, you will need to establish the employee population in your target market. If you sell to Financial Services, it’s easy to find out that there are 6.3 million workers in that sector. If 2% of them work the IT helpdesk, that’s 126,000 potential seats; multiply that by your price per seat and you have your TAM.
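Spelled out as a quick calculation (the per-seat price is a hypothetical placeholder; the ratios and workforce figure come from the example above):

```python
# Ratio-based TAM sketch using the helpdesk example from the text.
sector_employees = 6_300_000   # Financial Services workforce (from the example)
it_share = 0.08                # share of employees working in IT
helpdesk_share = 0.25          # share of IT staff working the helpdesk
price_per_seat = 500           # hypothetical annual license price

helpdesk_seats = sector_employees * it_share * helpdesk_share
tam = helpdesk_seats * price_per_seat
print(f"{helpdesk_seats:,.0f} seats -> TAM ${tam:,.0f}")
# -> 126,000 seats -> TAM $63,000,000
```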

If you can’t find the employee population in your target market, you can get it from the number of companies and their respective employee numbers. That data is available from data sources such as Dun & Bradstreet, Lightning Data (Salesforce), LinkedIn, NAICS.com, and others.

A similar process works for other licensing models – number of vehicles, terabytes of data, or megawatts of energy produced. Sure, sometimes you may need to estimate some of the data points. This is where your own expertise comes in. It’s OK to estimate, but always document your estimates and data sources. That way, anyone who doesn’t agree with any of your decisions can follow your logic and adjust accordingly if they think they know better.

Top-Down Estimate

The top-down estimate is based on a share-of-wallet calculation for your product. The basic idea is that from a macroeconomic standpoint, there is a finite amount of money spent on certain goods or services. That spend is often well documented, as several analyst firms publish the total annual spend on markets such as IT, retail, travel, advertising, etc.

I’ll stick to the IT sector, since this is a blog on technology and that’s what I know. Here, firms like Gartner, Forrester, and IDC regularly publish data on worldwide IT spend. Let’s take Gartner – they forecast it for the next five years in their quarterly IT Spending Forecast. That number is not going to grow because of your product, no matter how amazing it is. That means you need to take some share of wallet from all the other products. In other words, you’ll need to convince the IT buyers to spend some of their precious budget on your product at the expense of all the other IT spend. You need to take over some share of their wallet.

The Gartner forecast gives you some amount of granularity, using a two-layer taxonomy for software and providing the data for each of the categories and sub-categories. For example, under CRM, Gartner forecasts the following sub-categories: Customer Service and Support, Digital Commerce Platforms, Marketing, and Sales. This is very helpful for narrowing your available share of wallet down to the respective sub-category.

Your product will likely occupy an even narrower category, and you will need to estimate the share your category will take of the Gartner taxonomy. Let’s say your product is a CPQ (Configure, Price, Quote) solution, which clearly falls under the Sales sub-category of CRM. Now, the good news is that because this is a category recognized by Gartner (with its own Magic Quadrant), there is likely more data available, including the current market size and maybe some breakdowns by geography and vendor. Gartner doesn’t publish this data, but they track it and will share it if you have the right subscription.

If that data doesn’t exist, you need to estimate. List all of the solution types in a given sub-category and assign them percentages based on your best judgment. Again, your educated estimate is going to be better than no data at all. In general, though, this method is more useful for market sizing and forecasting than for estimating TAM. Still, understanding the overall market spend and its taxonomy is useful for a TAM calculation.
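As a sketch, the top-down arithmetic looks like this; every number below is a made-up placeholder rather than actual Gartner data.

```python
# Top-down TAM sketch: start from a published category spend and narrow it
# with estimated shares. All numbers are hypothetical placeholders.
crm_spend = 50_000_000_000        # hypothetical worldwide CRM spend
sales_subcategory_share = 0.20    # estimated share of CRM spend on Sales tools
cpq_share_of_sales = 0.10         # estimated share of Sales spend on CPQ

cpq_tam = crm_spend * sales_subcategory_share * cpq_share_of_sales
print(f"Estimated CPQ TAM: ${cpq_tam:,.0f}")  # -> $1,000,000,000
```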

Economic Impact Estimate

The economic impact estimate tries to assess the value your solution has for customers and estimate how much they would be willing to pay for that value. The basic logic is that if a particular product lowers a customer’s cost by 10%, that customer would be willing to pay, say, 1% of that cost to realize the benefit. This rationale makes a lot of sense economically; however, I consider this method rather unreliable.
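In calculation form, that logic looks roughly like this, with a hypothetical addressable cost base standing in for real sector data.

```python
# Economic-impact TAM sketch of the 10% / 1% logic described above.
# The cost base and both percentages are hypothetical placeholders.
addressable_cost_base = 200_000_000_000   # total spend your product affects
cost_reduction = 0.10                     # product lowers that cost by 10%
willingness_to_pay = 0.01                 # customers pay ~1% of the base

savings = addressable_cost_base * cost_reduction
tam = addressable_cost_base * willingness_to_pay
print(f"Savings delivered: ${savings:,.0f}; implied TAM: ${tam:,.0f}")
```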

First, companies are really mistrustful of any promise of hard cost savings or revenue growth. All the ROI calculators in the world are usually met with severe skepticism. On top of that, companies are very reluctant to promise you a share of their savings or growth. What they want is a predictable and elastic operating expense that they can throttle up and down as needed. That’s why all the software is moving to the cloud – it shifts any risk towards an operating expenditure.

Still, a TAM calculation based on the economic impact estimate makes sense at a macro level. For example, you can estimate the impact of the entire IT infrastructure on a particular sector – e.g., how much of a difference technology makes in banking or in retail – or perhaps the impact of a major industry trend such as mobility or IoT. But as you get more granular, it’s hard to defend the claim that your particular product made all of the contribution to the bottom line. That’s why using this methodology can lead to unreasonable TAM estimates.

As you can see, there is more than one way to go, and if you are serious about estimating your TAM, I recommend you try them all. Don’t expect the results to be the same, but hopefully they will at least land within the same order of magnitude. If they differ by orders of magnitude, you might have to revisit some of your estimates.

Good luck estimating and…trust your judgment!