Thursday, October 31, 2019

IoT, Digital Twins, and the Search for Recurring Value

Just a year ago, the Internet of Things (IoT) was all the buzz. Every analyst, vendor, and pundit was debating the massive potential of connecting not millions but billions of machines and devices to the Internet. After all, the idea that all machines will become smart, connected, and generate valuable data promises to revolutionize how we operate and service all those ‘things’. The IoT industry trend became so powerful that it spawned a slew of sub-trends, including Home Automation, Smart Cities, and Industry 4.0 - each with its own vision and ecosystem of vendors, experts, and conferences.

Fast forward 12 months and the IoT hype has faded. The mainstream media is now talking about artificial intelligence, privacy, and augmented reality, and IoT doesn’t even make the list of top technology trends. Once-hot IoT platform vendors such as Uptake, C3, and GE Digital have gone quiet, and the big vendors such as Salesforce, Oracle, and SAP have reprioritized their IoT initiatives. Where IoT was the leading story of many conference keynotes last year, it is hardly mentioned this year.

What happened?

The obvious answer is that IoT was, just like many other over-hyped trends, ahead of its time. Adoption lags well behind the vendors’ narrative and sometimes, the technology doesn’t yet quite do what the marketing messages promise. A few POC projects that fall flat can quickly pour cold water over a hot trend. However, there is something more fundamental about the IoT problem, and it isn’t the complexity or the maturity of the technology.

The problem also doesn’t lie in a lack of awareness - in fact, most companies would kill for the level of buzz IoT has been getting as a category. IoT adoption is actually very rapid in the consumer world. Just count all the smart speakers, thermostats, switches, smoke detectors, doorbells, and cameras in your house. 

The greatest challenge related to IoT adoption in the industrial world has to do with the analysis of the data. With today’s state of technology, it’s relatively easy to connect the machines and collect petabytes of sensor data. Sure, most equipment out there doesn’t have any sensors or connectivity, but retrofitting this equipment with smart electronics is not that difficult and is becoming less and less expensive. The problem lies in those huge volumes of data. What do we do with it?

This type of big data is rather expensive to store, manipulate, and analyze. The expectation for IoT applications is to provide real-time or near real-time analysis, which is not simple given the massive data volumes. Many companies need to really spend time designing their data management architecture and in particular, decide what data should stay in the cloud and what should be stored on-premises. Yes, I am a cloud believer but not everything will happen in the cloud. The elasticity of the cloud is useful to handle workload peaks but the cost can add up very quickly. This is where a hybrid architecture can make a lot of sense.

Also, most of the data is only useful when viewed as a trend over time, and storing and analyzing time series data is not trivial. Most traditional databases are designed to capture a value for each field while time-series databases need to capture multiple values for that field, each with a timestamp. Managing the timestamp/value pairings efficiently makes time-series databases particularly useful for analyzing trends, which is critical for IoT applications. But such database systems are often complex and expensive.
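
To make the contrast concrete, here is a minimal Python sketch (the sensor name and readings are invented for illustration) of the timestamp/value pairing a time-series store maintains, and the kind of trend query it enables:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# A traditional record keeps one current value per field...
machine_record = {"bearing_temp_c": 74.8}

# ...while a time-series store keeps many (timestamp, value) pairs per field,
# which is what makes trend analysis over time possible.
series = defaultdict(list)
start = datetime(2019, 10, 1)
for minute, temp in enumerate([68.0, 69.5, 71.2, 74.8]):
    series["bearing_temp_c"].append((start + timedelta(minutes=minute), temp))

# A simple trend query: how fast is the temperature rising?
(t0, v0), (t1, v1) = series["bearing_temp_c"][0], series["bearing_temp_c"][-1]
slope = (v1 - v0) / ((t1 - t0).total_seconds() / 60)
print(f"temperature rising at {slope:.2f} degrees/min")
```

Real time-series databases add compression, retention policies, and downsampling on top of this pairing, which is where the complexity and cost come from.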

Finally, the trend analysis itself is perhaps the greatest challenge. Sure, the basic idea is simple: your sensor measures the temperature of a particular component and if that reaches a certain threshold, you sound an alarm. But let’s face it, this example is trivial. Your machine already does that without any IoT infrastructure - just think of all the warning lights in your car. To get some value out of your IoT investment, you need to raise the bar on the data analytics.

What you need is a digital model of your machine where you can analyze the machine holistically, combining data from multiple sensors, and examining how they influence each other. You can call it a digital twin, digital simulator, digital avatar, or cyber object - but you need it. You need to build this model to analyze your sensor data in a way that yields a recurring benefit that justifies the IoT investment. You will end up with a specific model for every type of machine and to build it, you need a data scientist but also someone who really, really understands the machine and its inner workings. And that’s the challenge. That’s why there are not that many digital twin models available.
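
As a toy illustration only (the component, thresholds, and sensor names are all hypothetical), a digital twin rule in Python might combine several sensors where a single-sensor alarm would stay silent:

```python
def bearing_health(temp_c: float, vibration_mm_s: float, rpm: float) -> str:
    """Classify bearing state from several sensors viewed together."""
    # Either signal alone crossing its limit is the trivial, single-sensor case.
    if temp_c > 90 or vibration_mm_s > 7.0:
        return "alarm"
    # A moderately hot AND shaky bearing at low speed suggests early wear -
    # a combined pattern that no single-sensor threshold would catch.
    if temp_c > 75 and vibration_mm_s > 4.0 and rpm < 1200:
        return "schedule maintenance"
    return "ok"

print(bearing_health(72, 3.1, 1500))   # normal operation
print(bearing_health(78, 4.5, 1100))   # combined signal: early wear
```

A real digital twin would encode far richer physics and learned relationships, which is exactly why it takes both a data scientist and someone who deeply understands the machine.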

The digital twin is not about just pointing a machine learning system at a data lake to see what you can learn from the data. Sure, you will be able to discover new, previously unknown patterns or relationships this way, but that will likely yield a one-time benefit. For example, you might discover a particular vulnerability in a specific component, which can be extremely valuable. But once you have redesigned the part and fixed the problem, the IoT data no longer delivers value. The value of all that IoT investment was a one-time benefit. What you need is recurring value because without recurring value, nobody will pay any recurring cost for your IoT solution.

Ultimately, the recurring benefit yields the ROI that justifies the substantial cost of your IoT investment. For example, the recurring benefit can come in the form of a predictive maintenance application that determines when and what type of service should be performed to prevent any unplanned downtime or performance degradation. Now, that can save a lot of money but only if you have a digital twin model that can make such predictions from all that IoT data.

I remain extremely bullish on IoT. The analysts are estimating that there are already 7 billion IoT devices worldwide. That’s more than PCs! IoT can bring the transformational power of the internet to a huge number of end-nodes, creating an amazing benefit. But we are not quite there yet.

Tuesday, September 17, 2019

How to Estimate TAM


Assessing the Total Addressable Market is a key element of every business plan. TAM should, however, not be confused with the actual current market size, as I explained in my previous post called How To Size a Market. In short, TAM represents the maximum potential market. So, take everyone who could possibly buy your product, multiply by the price of each unit, and that is the total addressable market.

Think big when calculating your TAM
TAM calculations are in general not concerned with your ability to execute or with the market’s readiness. That means, when calculating your TAM, you are not worried about your geographical presence, your competitive win rate, your ability to avoid discounting, the strength of your brand, or your products’ maturity and quality. All those factors (and more) ultimately reduce the TAM down to your current revenue.

Between those two data points lie other metrics that are sometimes used, such as Serviceable Available Market (SAM) or Serviceable Obtainable Market (SOM). Those metrics are basically derived by applying some of the execution constraints to the TAM, like a set of filters. While interesting, those metrics are very subjective and specific to each organization. For instance, your absence in certain geographies reduces your SAM, but that absence is a result of your recent decision-making and it may or may not be easily revisited (e.g. by entering geographies such as Japan, China, or Africa). That’s why TAM is the most common metric: it avoids all such nuances and recent strategic and tactical business decisions.

Finding TAM

So, how do you determine your TAM? First, search around and see whether it already exists. In established markets, one of your competitors, an industry analyst, or an investment bank might have already published a TAM, whether or not they disclose how they came up with it. If you find such a data point, it’s your lucky day. Senior management and investors love to quote Gartner or Goldman Sachs and their numbers are hardly ever questioned.

There are a few other possible data sources including large system integrators (Deloitte, Accenture, etc.) as well as some of the online collections of useful and less useful statistics such as eMarketer and Statista. Obviously, publications such as The Economist, The Wall Street Journal, and Business Insider also frequently quote useful data points. It’s a good habit to collect the articles with relevant data points for when you might need them.

But, let’s face it, you are probably reading this article because you have not been successful finding your market’s TAM and you are stuck. Well, if you can’t find it, you have to calculate it.

Calculating TAM

There are multiple methods of calculating TAM. Each one involves certain judgment calls and educated estimates. This is key – to calculate TAM, you will have to rely on your market expertise and make some estimates. Just like I discussed in my previous blog post on calculating the market size, your educated estimates will be much better than no data at all. After all, that’s exactly what the analysts at Gartner, Deloitte, and Goldman Sachs do.

The three methods I will discuss here are:

1. Bottom-Up Calculation
The bottom-up method is based on your licensing model and estimates the maximum number of licenses available in the world.

2. Top-Down Estimate
The top-down method is based on the share of wallet from the overall worldwide spending in a given sector.

3. Economic Impact Estimate
This method is based on the estimate of the economic impact of your product and what companies might be willing to spend to capture that benefit.

I’m sure that there are other methods out there but these three I have found most practical. So, let’s take a closer look:

Bottom-Up Calculation

This is my favorite method of estimating the TAM because it tends to be the most accurate and relies on data that is hopefully available with some level of accuracy. Simply put, this method counts the maximum number of licenses that you could possibly sell. If your licensing is by household, you count the number of households. If you are licensing by sales person, you count the number of sales people. And if your licensing is based on number of wind turbines, you count all the wind turbines out there.

Let’s take a specific example. Let’s say that you are manufacturing a black box device for aircraft. Your licensing is basically per aircraft, and hence you need data on the number of aircraft in the world. That data exists – it won’t take you long to find a number of data sources providing the annual production for key manufacturers, the active fleet for each airline, etc. You, as the expert in this space, should know those data sources and be able to assess their validity.

Now that you have the maximum possible number of licenses, you multiply it by the price of your unit and voila, that’s your TAM. Of course, you can get more granular. Let’s say that your product is only for commercial aircraft and not for private jets. You can calculate your TAM based on that. Or maybe long-haul jets require two units – you can adjust your TAM accordingly.
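
A minimal sketch of that bottom-up arithmetic, with made-up fleet counts and prices purely for illustration:

```python
# Bottom-up TAM for the black-box example (all figures hypothetical).
commercial_aircraft = 25_000    # assumed worldwide commercial fleet
long_haul_share = 0.30          # assumed share of jets carrying two units
unit_price = 20_000             # assumed price per unit, in dollars

# Long-haul jets need a second unit, so they add one extra license each.
max_licenses = commercial_aircraft * (1 + long_haul_share)
tam = max_licenses * unit_price
print(f"Max licenses: {max_licenses:,.0f}, TAM: ${tam:,.0f}")
```

Swap in the real fleet numbers from your data sources and the structure stays the same.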

But remember, as you are adjusting your TAM to fit your specific product, you might cross the line from TAM to SAM or SOM because some of these filters are based on your business decisions that have reduced your addressable market. For example – not selling to private jets might be a smart GTM focus but they need your black boxes just as much as the commercial jets and your TAM should reflect that. While it is important to put some boundaries around your company’s or product’s opportunity, don’t restrict that opportunity based on tactical thinking.

The challenge of this method is that the number of possible licenses might not exist or that it is not precise enough. Let’s say you license by number of sales people, but your product is only relevant to those who are on the road every day. Or maybe it’s only for sales people selling insurance. Finding that data may prove much more difficult. Still, there are many sources worth checking out: Gallup, Nielsen, Pew Research Center, US Bureau of Economic Analysis, US Bureau of Labor Statistics, US Census Bureau, Data.gov, Reuters Data Dive, and many others.

Another way to get to relevant data is your own customer base. Let’s say that you have a product licensed for your customers’ IT helpdesk and so you need to know the number of IT helpdesk workers in the world. Analyzing your own customer data, you might be able to determine that your customers have on average 8% of their employees in IT and out of those, 25% work the helpdesk. Knowing this ratio is extremely useful and if you have at least a few hundred customers, it is very accurate.

With that ratio, you will need to establish the employee population in your target market. If you sell to Financial Services, it’s easy to find out that there are 6.3 million workers in that sector. If 2% of them work the IT helpdesk, multiply that count by your price per license and you have your TAM.
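
Working that helpdesk example through in Python (the 6.3 million workers and the 8%/25% ratios come from the text above; the per-seat price is a hypothetical placeholder):

```python
sector_employees = 6_300_000   # Financial Services workers
it_share = 0.08                # 8% of employees work in IT
helpdesk_share = 0.25          # 25% of IT staff work the helpdesk

helpdesk_seats = sector_employees * it_share * helpdesk_share  # the 2% overall
price_per_seat = 500           # hypothetical annual price per seat, in dollars
tam = helpdesk_seats * price_per_seat
print(f"{helpdesk_seats:,.0f} seats -> TAM ${tam:,.0f}")
```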

If you can’t find the employee population in your target market, you can get it from the number of companies and their respective employee numbers. That data is available from data sources such as Dun & Bradstreet, Lightning Data (Salesforce), LinkedIn, NAICS.com, and others.

A similar process works for other licensing models - number of vehicles, terabytes of data, or megawatts of energy produced. Sure, sometimes you may need to estimate some of the data points. This is where your own expertise comes in. It’s OK to estimate, but always document your estimates and data sources. That way, anyone who doesn’t agree with any of your decisions can follow your logic and adjust accordingly if they think they know better.

Top-Down Estimate

The top-down estimate is based on a share-of-wallet calculation for your product. The basic idea is that from a macroeconomic standpoint, there is a finite amount of money spent on certain goods or services. That spend is often well documented, as several analyst firms publish the total annual spend for markets such as IT, retail, travel, advertising, etc.

I’ll stick to the IT sector, since this is a blog on technology and that’s what I know. Here, firms like Gartner, Forrester, and IDC regularly publish data on the worldwide market spend for IT technologies. Let’s take Gartner – they forecast it for the next five years in their quarterly IT Spending Forecast. That number is not going to grow because of your product, no matter how amazing it is. That means you need to take some share of wallet from all the other products. In other words, you’ll need to convince the IT buyers to spend some of their precious budget on your product at the cost of all the other IT spend. You need to take over some share of their wallet.

The Gartner forecast gives you some amount of granularity, using a two-layer taxonomy for software and providing the data for each of the categories and sub-categories. For example, under CRM, Gartner forecasts the following sub-categories: Customer Service and Support, Digital Commerce Platforms, Marketing, and Sales. This is very helpful to further narrow your available share-of-wallet down to the respective sub-category.

Your product will likely occupy an even narrower category and you will need to estimate the share your category will take from the Gartner taxonomy. Let’s say your product is a CPQ solution (Configure, Price, Quote), which clearly falls under the Sales sub-category under CRM. Now, the good news is that because this is a category recognized by Gartner (with its own Magic Quadrant), there is likely more data available, including the current market size and maybe some breakdowns by geography and vendors. Gartner doesn’t publish this data, but they track it and will share it if you have the right subscription.

If that data doesn’t exist, you need to estimate. List all of the solution types in a given sub-category and assign them percentages based on your best judgment. Again, your educated estimate is going to be better than no data at all. But in general, this method is more useful for market sizing and forecasting than for estimating the TAM. Still, understanding the overall market spend and its taxonomy is useful for the TAM calculation.

Economic Impact Estimate

The economic impact estimate tries to assess the value your solution has for the customers and estimate how much they would be willing to pay for that value. The basic logic is that if a particular product lowers your cost by 10%, customers would be willing to pay, say, 1% of that cost to realize that benefit. This rationale makes a lot of sense economically; however, I consider this method rather unreliable.

First, companies are really mistrustful of any promise of hard cost savings or revenue growth. All the ROI calculators in the world are usually met with severe skepticism. On top of that, companies are very reluctant to promise you a share of their savings or growth. What they want is a predictable and elastic operating expense that they can throttle up and down as needed. That’s why all the software is moving to the cloud – it shifts the risk and turns the cost into an operating expenditure.

Still, the TAM calculation based on the economic impact estimate makes sense on a macro level. For example, you can estimate the impact of the entire IT infrastructure on a particular sector – e.g. how much of a difference technology makes in banking or in retail. Or perhaps the impact of a major industry trend such as mobility or IoT. But as you start getting more granular, it’s hard to defend the claim that your particular product made all of the contribution to the bottom line. That’s why using this methodology can lead to unreasonable TAM estimates.

As you can see, there is more than one way to go and if you are serious about estimating your TAM, I do recommend you try them all. Don’t expect the results to be the same, but they should hopefully be at least within the same order of magnitude. If they differ by orders of magnitude, you might have to revisit some of your estimates.

Good luck estimating and…trust your judgment!


Friday, August 16, 2019

How to Size a Market

Sizing the market opportunity is a key component of any business plan. It’s logical that the senior management, the board, the VCs and other stakeholders want to see how big the market opportunity is before investing any dollars in your plan. Ideally, they will want to see the opportunity size, growth, and also breakdowns by various factors such as industry, geography, and company size.

The expectation is that you “do your homework” and find the relevant Gartner report with hard data that you will be able to quote to management so they can make their tough business decisions. However, no matter how hard you look, you discover that neither Gartner nor any other analyst covers your particular product category. And forget about slicing it by geography or vertically!

Market Sizing Available from Industry Analysts

Gartner publishes regular reports and spreadsheets such as the forward-looking Forecast: Enterprise Software Markets and the backward-looking Market Share: All Software Markets. Both do a pretty good job estimating the market size, growth, and the 5-year compound annual growth rate (CAGR) for the main software categories. If you are a Gartner customer, you should definitely get ahold of these reports! If you are not, search around. You might find them or you might find some of the data points relevant to you. Just about every Magic Quadrant states the market size and market growth and those are available for free from many vendors.

Gartner also sizes the sub-markets of their main software categories but chooses not to publish those numbers. For example, while they publish the data for the enterprise content management (ECM) market, they also track subcategories such as document management, digital asset management, and web content management. If you have the right type of subscription, they will share the data with you. They have it for software subcategories, geographies, and to a limited degree, as market shares by company.

Since 2017, Gartner has stopped attributing revenue to industry verticals. Knowing how impossible a task that is, I don’t blame them. I plan to write a separate post about that issue. So unless you are lucky and happen to work with some boutique firm that publishes vertical numbers, you will be out of luck. Of course, that won’t stop your bosses from asking for it, so keep reading, because I’ll explain how to move forward.

The Gartner methodology seems to be primarily based on tracking all players in a given category and collecting/estimating their revenue in said market. I find the Gartner methodology plausible and the data credible. The same is true for IDC and their Software Tracker. There are a few other firms that provide some market data, including Forrester, ARC and others, but only Gartner and IDC have teams of analysts who are primarily responsible for market sizing, as far as I know.

I am somewhat skeptical of the market sizing conducted by firms such as Markets and Markets, Market Research Future, Intense Research, Eminent Market, Orbis Research, etc. I actually suspect that there is a single organization behind all these names, but I admit that I haven’t bothered to investigate. Those companies all seem to stem from the India/China region and they try to sell me their reports every day. However, none of them have ever spoken to me or to any of my colleagues. I don’t know their analysts, I am not familiar with their methodology, and I am doubtful of their data.

TAM versus Actual Market Size

Now, when talking about market sizing, there are two main data points that are often being confused:

1. The actual current market size, and
2. The total addressable market (TAM)

The actual market is the sum total of all revenue generated by all market players over the last 12 months. This is what Gartner reports and they are specifically counting all [on premises] license revenue, cloud-based subscription revenue, maintenance, and support revenue. They do not count the professional services revenue in any market as that revenue is often difficult to attribute to a specific software category and the money made by the major players such as Accenture, Deloitte, and Capgemini would skew the numbers dramatically. This is part of the market sizing methodology by Gartner that makes sense to me.
Note: I realize that many companies obsess more about their bookings than revenue by product line, but I don’t think that Gartner is concerned with that distinction, so let’s ignore it for now.

What Gartner doesn’t report is the TAM. The TAM is the potential market, independent of your company’s ability to capture it. It doesn’t consider your market presence, the amazing appeal of your solution, or your competitive win rate. Basically, if everyone out there purchased everything they possibly could from you, that’s your TAM. This data point is very useful for business planning, and executives, VCs, and other investors are very interested in it. Sometimes, other data points are used, such as the serviceable addressable market (SAM) or the serviceable obtainable market (SOM, aka target market), but if you have the TAM and the actual market size, you are in good shape as far as your business planning goes.

Estimating Market Size

Now, what do you do when the data you are looking for is simply not available? Yes, you’ve checked with Gartner, with IDC, and with all the other analyst firms in your space and you are coming up empty. Well, it’s time to do your own research and come up with your own data. You think you can’t just make it up? You worry that your data won’t be as credible as Gartner’s? Well, perhaps not. But you are an expert in your field. You know more about it than 99% of the people out there. The No. 1 rule is that an educated estimate from an expert is much better than flying blind. How do you think Gartner comes up with their data?

So, let’s estimate the actual market size. First, list all the market players in your space. That’s usually not that hard. You might even have additional insight, such as which vendors play in the enterprise end of the market and who is focused on SMB. Add a bucket for “Others” because there is no way that you are not forgetting some players; for example, vendors only active in certain geographies. Depending on how mature your market is, the Others group will occupy somewhere between 20% and 50% of the overall market size. Estimate that percentage and calculate the value based on the overall market.

Now, start estimating the revenue for each of the vendors. Start with yourself - that’s the one hard data point you have. You usually know who’s bigger and who’s smaller than you in your space. Estimate by how much – give each vendor a number. If you can’t even dare to estimate, do some more research. Google around – big companies share a lot of data points and hints in their quarterly earnings calls while the small ones like to brag about themselves at conferences. Ask co-workers who might have worked at some of your competitors. If you have a competitive intelligence function, check with them. Collect data points and tidbits. Over time, you’ll get a pretty good feel for what the market looks like. At the end of the day, estimate. Yes, guess. Estimate the split between new business, upsell, and recurring revenue. Trust me, your guess will be better than you think and certainly better than no data at all.

Now, to be clear, while guessing is OK, do attribute your sources and document your methodology, including your own estimates. In the end, what makes any market-sizing model credible is the methodology. If you share the methodology with your management, they are free to adjust any estimate as they like. You will be off by some margin…and so is Gartner! Ultimately, it makes relatively little difference for your business plan whether your market is $3.9 bln or $4.4 bln. That’s a small enough range and your estimate will be within or close to that range. Add it all together, don’t forget to account for the Others, and voilà, you have market size and market share!
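
The whole exercise fits in a few lines of Python; every revenue figure below is a made-up placeholder for your own estimates:

```python
# Estimated annual revenue per named vendor, in $M (hypothetical figures).
vendors = {"You": 120, "Competitor A": 300, "Competitor B": 90, "Competitor C": 45}

named_total = sum(vendors.values())
others_share = 0.30   # estimate: "Others" hold 30% of the whole market

# If the named vendors make up 70% of the market, the whole market is:
market_size = named_total / (1 - others_share)

for name, revenue in vendors.items():
    print(f"{name}: {revenue / market_size:.0%} market share")
print(f"Estimated market size: ${market_size:,.0f}M")
```

The spreadsheet version works just as well; what matters is that every input is documented so anyone can swap in their own estimate.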

Calculating Market Growth

The next data point that you will need is the market growth. This gets trickier, as the more you drill down, the more inaccuracy you’ll be adding. To do that, go through the same exercise but estimate each company’s revenue as of 12 months ago. You will find some actual numbers through research, and your expertise will help you gauge who’s been growing faster and who more slowly. Getting actual data from 2-3 companies will significantly improve your ability to estimate. You know how much you’ve grown and you should have accurate data about your win rate against each of the competitors. Who’s hot right now? Who’s been very visible at every event? Who’s milking a massive installed base? Who’s showing up with a different reference at every webinar and who keeps using the same customer for every occasion? All that considered will give you some understanding of the relative growth of each vendor.
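
The same sketch extends to growth by repeating the estimate for 12 months ago; again, every figure below is hypothetical:

```python
# Estimated revenue per vendor now and 12 months ago, in $M (hypothetical).
revenue_now = {"You": 120, "Competitor A": 300, "Competitor B": 90}
revenue_year_ago = {"You": 100, "Competitor A": 280, "Competitor B": 95}

# Per-vendor year-over-year growth.
for name in revenue_now:
    growth = revenue_now[name] / revenue_year_ago[name] - 1
    print(f"{name}: {growth:+.0%} year over year")

# Overall market growth follows from the two totals.
market_growth = sum(revenue_now.values()) / sum(revenue_year_ago.values()) - 1
print(f"Overall market growth: {market_growth:+.1%}")
```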

And just like that, you have the market size, market share, and market growth overall and for each vendor. That’s pretty good! It might feel rough but it’s way better than an empty slide.

Understanding the Limitations

Remember, the more granularity you want the less accuracy you’ll have to accept. That’s why I wouldn’t use this methodology to estimate CAGR over 5 years. This is where Gartner and IDC will beat you – they have been doing it regularly for years and they have a much more accurate history and a much better feel for future growth.

I’d also be careful using this methodology to estimate the market size and revenue for a specific geography or vertical. You can try to leverage some of the country growth data from Gartner, combined with the specific market expertise within your company. But you will be much less confident in the result, and rightfully so. You are making estimates based on estimates and your accuracy will suffer. Again, this is where Gartner and other analysts who’ve been doing it year after year will beat you.

Being Proactive to Be Credible

Ultimately, your greatest challenge will not be coming up with the numbers but establishing the credibility of your data. It’s easy to dismiss your estimates. But chances are very high that the right chart doesn’t exist and you will have no choice but to come up with your very own take on what the market looks like. Don’t leave this for the night before the business plan is due. This exercise takes time. Start building it today. Iterate on it and fine-tune it over time. Set up a workshop and involve your boss and other key stakeholders. That’s the best way to proactively address the data credibility issue. Remember, an educated estimate is better than no data at all!

In the next post, I will discuss how to estimate the total addressable market, TAM. In the meantime, here’s some recommended reading: Forbes: How To Effectively Determine Your Market Size