Sunday, June 29, 2025

The Killer App for Generative AI

Exactly two years ago, I published an article about The Importance of Killer Apps. At the time, a new platform technology had just emerged: spatial computing. Remember the Apple Vision Pro? That was the hottest tech until generative AI stole the spotlight. I wrote then that no killer app had emerged for the Vision Pro, and that without one, market success was unlikely.

Today, spatial computing is all but forgotten amid the generative AI gold rush. But the killer-app question applies just as much to this new platform.

Yes, AI qualifies as a platform—it enables countless applications solving a wide range of business and personal problems. But a successful platform needs at least one killer app that drives platform adoption across many audiences. The truly transformative platforms tend to have several killer apps. Think of the mobile phone: its killer apps include the camera, media players, messaging, and social media. You’re willing to spend $1,000 on an iPhone not because of apps like Asana or Conga, but because you no longer need a separate camera, music player, DVD player, or monthly supply of postage stamps.

So, what’s the killer app for generative AI?

Your first instinct might be to say, “There are thousands!” But if none of them clearly rises to the top, that could be a problem. It suggests that while the new technology is powerful, we're still unclear on why we truly need it or why we should pay for it. Some signs suggest that this might be the current state of the market. Many organizations are experimenting with generative AI out of fear of missing out (FOMO), yet very few compelling business use cases have emerged so far.


The leading contenders

Chatbots are an obvious candidate. Yet those powered by generative AI remain as frustrating to help-seeking customers as the pre-ChatGPT ones. Their primary function is still to deflect inquiries rather than to serve customers, which doesn’t really feel like a “killer app.”

AI-powered search and summarization is another strong area. These tools let us ask natural-language questions instead of typing keywords, and they can synthesize answers across sources. It’s one of my top use cases, and Google should be worried. But is it “killer” enough? We’ve never paid for search; would we pay for AI search? I’m not sure.

Content creation also gets mentioned frequently, but it’s too broad to be considered an application. I’m also concerned about the rise of AI slop: mass-produced, low-quality images and videos, generated from questionable training data, that are flooding the web. The same applies to text. Don’t get me wrong: I use gen AI for content creation and love it, but it’s becoming clear that we’re being buried in AI-generated noise. I won’t get into the legal implications of implicit and explicit copyright violations, but those issues may soon face their judgment day.

Content generation alone isn’t specific enough to qualify as a killer app. Real apps solve specific business problems. I see a lot of promise in use cases like marketing content (blogs, email sequences, sales collateral), customer and prospect communication, memo drafting, business planning, and brainstorming. These are undeniably useful, especially when AI is used as a creative assistant, but quality degrades quickly when the AI is left unsupervised. And a creative assistant is a tool, not an application. People pay for tools, but tools don’t become killer apps.

Naturally, I asked the AI what it thought could become the killer app. In addition to the areas above, it added a few more: code generation for software development, legal and compliance document summarization, and medical/clinical documentation and decision support.

I like these suggestions because they address real business needs. That’s what defines an app, and you can’t be a killer app without being an actual app. These are industry-specific use cases, and I believe the future of AI will be shaped by domain-specific applications. Still, it’s a stretch to say that the legal or medical sectors are driving generative AI adoption the way a killer app would. Code generation might be the closest we’ve come, but the millions of ChatGPT users who aren’t coding would probably disagree.

In summary, I don’t believe the killer app for generative AI has emerged yet. But I’m confident it will. Perhaps a new category will evolve, like "personal AI assistants" in the spirit of HAL 9000 or J.A.R.V.I.S. Of course, we’re nowhere near that level of capability. As long as prompt engineering is still a thing, we’re not getting close. Luke Skywalker doesn't have to think about the right prompt structure when he talks to C-3PO.

Until the killer app (or apps) emerges, generative AI will remain in the Trough of Disillusionment on the Gartner Hype Cycle. The Plateau of Productivity seems distant, but the technology is evolving rapidly.

What do you think will be the killer app for gen AI?

Gartner Hype Cycle for Generative AI 2024



Sunday, March 30, 2025

Enterprise Content and AI Security

You can’t talk to a business customer today without AI coming up. While most people seem to have embraced the power of public generative AI tools like OpenAI’s ChatGPT or Google’s Gemini, there’s a lot of hesitation when it comes to using generative AI on enterprise content. The one concern that comes up again and again? 

Security.

Rightfully so. Public AI tools don’t have to worry about security. They’re gobbling up all the data on the public internet with the motto: “Train first, worry about intellectual property rights later.” Technically, nothing is stopping them from doing that, and their models are fed by scrapers and crawlers that grab everything they can find.

In an enterprise, however, that doesn’t work. Enterprise data is privileged, confidential, and subject to privacy laws. It cannot be shared with everyone, and AI models must respect that. That means two users must receive different answers to the same question, depending on the data each is authorized to access.
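
To make that concrete, here is a minimal sketch of permission-aware retrieval. Everything in it is hypothetical (the data model, the group names, the helper function), not any vendor’s API; the point is simply that the authorization filter runs before the model ever sees the content.

```python
# Hypothetical sketch: the same question yields different context,
# and therefore different answers, depending on the user's entitlements.

DOCUMENTS = [
    {"id": 1, "text": "Q3 revenue forecast: $42M", "acl": {"finance"}},
    {"id": 2, "text": "The office is closed on Friday", "acl": {"everyone"}},
]

USER_GROUPS = {
    "alice": {"finance", "everyone"},
    "bob": {"everyone"},
}

def retrieve(user: str, question: str) -> list[str]:
    """Return only documents the asking user is authorized to read.
    Relevance ranking is omitted for brevity."""
    groups = USER_GROUPS[user]
    return [d["text"] for d in DOCUMENTS if d["acl"] & groups]

# Same question, different context for the model to answer from:
print(retrieve("alice", "What is our Q3 outlook?"))  # includes the forecast
print(retrieve("bob", "What is our Q3 outlook?"))    # public info only
```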

The problem is that if your content isn’t well secured and governed in the first place, AI will expose those holes quickly. You may have been able to hide some data behind cryptic file names, but that won’t stop an AI model. Solid data governance with granular, clean permissions is imperative. Otherwise, it’s “bad security in, bad security out,” to paraphrase Fuechsel’s Law (“garbage in, garbage out”).
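
As a toy illustration of what “clean permissions” look like in practice, a governance audit over the corpus before any AI pipeline ingests it might flag exactly the holes AI would otherwise expose. The paths, groups, and rules below are entirely made up:

```python
# Hypothetical pre-indexing audit: flag permission holes before any
# AI pipeline ingests the content. Paths, groups, and rules are made up.

DOCUMENTS = [
    {"path": "hr/salaries_2025.xlsx", "acl": {"everyone"}},  # far too broad
    {"path": "hr/handbook.pdf", "acl": {"everyone"}},        # fine
    {"path": "fin/q3_forecast_tmp.docx", "acl": set()},      # orphaned
]

SENSITIVE_PREFIXES = ("hr/salaries", "fin/")

def audit(docs):
    """Return documents whose permissions look wrong: sensitive content
    readable by everyone, or content with no ACL (unowned, ungoverned)."""
    findings = []
    for doc in docs:
        if not doc["acl"]:
            findings.append((doc["path"], "no ACL at all"))
        elif "everyone" in doc["acl"] and doc["path"].startswith(SENSITIVE_PREFIXES):
            findings.append((doc["path"], "sensitive file readable by everyone"))
    return findings

for path, issue in audit(DOCUMENTS):
    print(f"{path}: {issue}")
# hr/salaries_2025.xlsx: sensitive file readable by everyone
# fin/q3_forecast_tmp.docx: no ACL at all
```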

It also means you need to bring AI tools to your content rather than bring your content to the AI tools. Securing your content where it lives is hard enough; copying a snapshot of it into a separate container for AI would obliterate that security.

Don’t expect public AI vendors like OpenAI, Google, Anthropic, Meta, or DeepSeek to solve this problem. Enterprise content is a different animal—one they neither understand nor care to understand. None of these vendors has any enterprise DNA. Security is not their concern, and their models aren’t built with the assumption that data access should vary by user.

To illustrate this point, let me remind you of what happened with enterprise search. Web search, which we all use many times a day, is based on an index created by crawlers that scour the internet to deliver the best content match for your keywords—the same results for everyone. That’s what Google does in simplest terms. But in the enterprise, that approach doesn’t work. Enter enterprise search.

About 20 years ago, Google—the heavyweight search champion—entered the enterprise search space with a bright yellow, rack-mounted Google Search Appliance, drawing a lot of attention with its promise that managing content wasn’t necessary: “Wherever it is, you can find it with Google.” Or something like that.

It sounded great—except it didn’t work. Google eventually discontinued the product after a decade of trying. Interestingly, other major players in enterprise search met similar fates. There was FAST, which Microsoft acquired in 2008—only to discover a year later that FAST had been cooking the books. And then there was Autonomy, which HP acquired in 2011, only to eventually sue the CEO and CFO for—you guessed it—cooking the books. The Hollywood-worthy Autonomy saga ended with the CFO in jail and the CEO dying in a freak boating accident. (I described that story in more detail last year in “Mike Lynch, Autonomy, and Incredible Coincidences”.)

Today, enterprise search is provided by the companies that own the data. It’s hard, and usually only the company that built the repository has a chance of doing it well. On the web, Google finds content that wants to be found—literally. Millions of companies spend billions of dollars each year on SEO to make their content easily discoverable. And there’s no security to worry about.

Enterprise content is different. It’s not optimized for search engines, and security is not optional. This is hard to get right. Eventually, the open-source Apache Lucene solved the problem well enough, and that’s what many enterprise applications use today. Still, you rarely hear anyone say, “Wow, this search is amazing”—because it doesn't match Google’s web search, which sets the expectations bar.

Now, let’s come back to AI. The vector databases at the heart of enterprise AI applications must respect data security, just as search indexes do. That’s incredibly difficult for anyone other than the companies that hold the data, because only they understand the data structures, the users, and their permissions. It’s possible for an external application, but very hard to get right. If you don’t believe me, think back to enterprise search.
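
For illustration, here is what respecting security at the retrieval layer might look like. This is a sketch under my own assumptions (a toy in-memory store, ACLs carried as chunk metadata, made-up names); real systems vary, but the essential move is the same: filter on permissions before similarity ranking, not after.

```python
import numpy as np

# Toy vector store: each chunk carries an embedding plus the ACL
# copied from the source repository. All names are illustrative.
CHUNKS = [
    {"text": "Merger term sheet, draft 3", "emb": np.array([0.9, 0.1]), "acl": {"legal"}},
    {"text": "Cafeteria menu, week 12", "emb": np.array([0.2, 0.8]), "acl": {"everyone"}},
]

def secure_search(query_emb, user_groups, top_k=5):
    """Apply the ACL filter BEFORE similarity ranking; filtering after
    ranking can still leak the existence or scores of restricted content."""
    visible = [c for c in CHUNKS if c["acl"] & user_groups]
    def cosine(c):
        return float(np.dot(query_emb, c["emb"]) /
                     (np.linalg.norm(query_emb) * np.linalg.norm(c["emb"])))
    return [c["text"] for c in sorted(visible, key=cosine, reverse=True)[:top_k]]

print(secure_search(np.array([1.0, 0.0]), {"legal", "everyone"}))  # sees the term sheet
print(secure_search(np.array([1.0, 0.0]), {"everyone"}))           # does not
```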

AI in the context of enterprise data will be extremely valuable, with the potential to dramatically boost productivity—whether it’s through assistants, agents, or whatever comes next. But in an enterprise, the first rule will always be: respect the data’s security. 

And that makes it hard.