Use GAI to Solve Problems, Not Impress
The flashiest demo isn’t always the win: how to prioritize what to build, and when
Over the past few months, I’ve been working with a mix of startups and mature organizations all grappling with the same challenge: how to approach Generative AI (GAI) with clarity and purpose. Most aren’t sure where to begin, what questions to ask their teams, or how to separate high-value opportunities from the noise. Some have built flashy pilots without a path to production. Others are stuck in analysis paralysis, overwhelmed by hype and uncertainty. After thirty years of leading product strategy through turnarounds, scale-ups, and IPO prep, I’ve learned that the key to leveraging any emerging technology isn’t speed; it’s deliberate focus.
This article is the first in a series. It provides a foundational product strategy, with questions to help teams get started: what to build, what to defer, and how to evaluate ideas through a practical lens. The next piece will focus on companies that are already deep into GAI efforts but finding themselves stuck, particularly around people, technology, and infrastructure. I’ll dive into how to hire for a robust data science and engineering team, examine the impact of Agentic AI, and explore common technical issues around vector databases, ML training pipelines, and scaling beyond the prototype.
Image by gpointstudio on Freepik
Where Most Companies Begin (and Why It’s Not Enough)
When companies first explore GAI, the ideas typically fall into three categories:
Build something flashy to attract customers, excite investors, or drive short-term revenue
Speed up internal work like coding, content creation, research, or development
Dig through messy data to find hidden patterns and share insights
All of these have value. I’ve worked with teams that found success in each of them, but not all at once. What separates successful GAI efforts from stalled ones is a clear framework for decision-making, not enthusiasm.
How to Prioritize: Start with a Simple 2x2 Grid
I love to start simple when looking at anything new, GAI or otherwise. I recommend asking two core questions:
Will this drive meaningful business impact?
How complex is it to implement?
From that, you get this simple matrix:
I also use five criteria to score each idea from 0 to 5:
Business impact: strong alignment with business goals and clear value creation (a higher score means more impact)
Feasibility: ease of implementation with existing tools and talent (a higher score means easier to implement)
Risk: governance, legal, or operational risk (a higher score means lower risk; a 5 means no risk)
Organizational readiness: the organization has the data, team structure, and buy-in to support it (a higher score means more ready)
Strategic value: offers learning, future unlocks, or positions the company competitively (a higher score means higher strategic value)
This gives you a 0–25 score for your ideas. If a project scores above 20, prioritize it. Below 15, reconsider.
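The rubric above is simple enough to put in a spreadsheet, but for teams who prefer code, here is a minimal sketch of the scoring and thresholding logic. The criterion names and the 20/15 cutoffs come from the article; the example idea and its scores are hypothetical.

```python
# Minimal sketch of the 0-25 scoring rubric described above.
# Criterion names follow the article; the example scores are hypothetical.

CRITERIA = {"business_impact", "feasibility", "risk", "readiness", "strategic_value"}

def score_idea(scores: dict) -> int:
    """Sum the five 0-5 criterion scores into a 0-25 total."""
    assert set(scores) == CRITERIA, "score every criterion exactly once"
    assert all(0 <= v <= 5 for v in scores.values()), "each score must be 0-5"
    return sum(scores.values())

def verdict(total: int) -> str:
    """Apply the article's thresholds: above 20 prioritize, below 15 reconsider."""
    if total > 20:
        return "prioritize"
    if total < 15:
        return "reconsider"
    return "evaluate further"

# Hypothetical example: a document-summarization pilot
idea = {"business_impact": 4, "feasibility": 5, "risk": 4,
        "readiness": 4, "strategic_value": 4}
total = score_idea(idea)
print(total, verdict(total))  # 21 prioritize
```

The point isn’t the arithmetic; it’s forcing every idea through the same five questions so that comparisons across proposals are apples to apples.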
Evaluating Real-World Use Cases
Here’s a breakdown I’ve used with teams across industries for the top use cases. This represents real GAI initiatives many mid-sized companies are considering right now:
Takeaways:
Marketing automation is often a fast, effective win, but brand integrity matters
Personalization pays off, but only with well-organized customer data and killer UX
Data pattern detection can be powerful, but it’s often exploratory at first, not an immediate solution
Document summarization helps reduce time spent, but requires strong human review to get right
First Steps and What to Ask Before You Start
Before committing resources to a new Generative AI initiative, it’s important to slow down and ask the right foundational questions. Many projects fail not because the idea is bad, but because the assumptions around ownership, data readiness, or success metrics were never clarified. This is especially true in mid-sized organizations, where bandwidth is limited and internal silos can slow adoption.
Below are some straightforward recommended first steps for the top GAI use cases, along with key questions leadership teams should ask before greenlighting any project. These questions aren’t just about feasibility; they’re about ensuring alignment, accountability, and a clear definition of value from the start.
Don't Overlook These GAI Success Factors
Even the smartest use cases fail without these considerations:
Is your team ready to adopt it, not just test it?
Can you govern outcomes like bias or inaccuracy?
Do you know how to measure success…and failure?
Are you starting small, or building something too broad too early?
At one point, I worked with an AI product team that had a strong idea but no boundaries. The pilot became a prototype, then a roadshow, then a distraction. We had to pull it back, slice it in half, and relaunch with a smaller scope. That version worked, and it scaled.
But What Will We Track?
As discussions move from exploration to implementation, tracking the right metrics becomes critical. Each team, whether product, engineering, marketing, or customer support, plays a different role in ensuring that AI initiatives deliver real business value. Without clear KPIs, it's easy to mistake activity for progress. The table below outlines practical, team-specific metrics that organizations can use to measure adoption, performance, and impact as GAI features are rolled out. These indicators help leadership identify what’s working, where to course-correct, and how to align teams around shared outcomes.
In the next article, I’ll dive deeper into the engineering side of GAI, with a focus on how to measure and improve model performance using metrics like precision, recall, F1 score, and inference time. These KPIs are essential for teams that have moved past the pilot phase and need to ensure their models are not just functional, but reliable and scalable.
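As a small preview of those model metrics, precision, recall, and F1 all fall out of three counts: true positives, false positives, and false negatives. Here is a hedged sketch; the counts in the example are made up purely for illustration.

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple:
    """Compute precision, recall, and F1 from raw prediction counts.

    tp: true positives, fp: false positives, fn: false negatives.
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # of flagged items, how many were right
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # of real items, how many were caught
    # F1 is the harmonic mean of precision and recall
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Hypothetical counts from evaluating a classifier on a labeled test set
p, r, f1 = precision_recall_f1(tp=80, fp=20, fn=40)
print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")
# precision=0.80 recall=0.67 f1=0.73
```

Even at this level of simplicity, the metrics tell different stories: this hypothetical model rarely cries wolf (high precision) but misses a third of real cases (lower recall), which is exactly the kind of trade-off leadership should understand before a rollout.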
A CXO’s Checklist for Generative AI
You don’t need to be an AI expert to lead an AI initiative. But you do need to be consistent.
Here’s what that looks like:
Ask for a business case before approving a budget
Build cross-functional teams—not tech-only task forces
Treat pilots like experiments, not product launches
Make explainability and risk part of every conversation
Don’t be afraid to say no and ask hard questions—even when the demo looks great
I once helped a company’s CEO calm a panicking board simply by saying, “We’re first building a system to learn before we scale.” That mindset didn’t just reduce pressure; it kept the company focused.
TL;DR
Generative AI is only as useful as the problems it’s asked to solve. The most successful initiatives don’t start with technology; they start with clarity.
If you skipped down to this from the top, take this away from the article:
Start with structure. Use a simple impact vs. complexity matrix to evaluate ideas, and score them with clear business, feasibility, and readiness criteria.
Focus on fundamentals. Before building anything, clarify ownership, data availability, and what success looks like. Skip this, and even good ideas stall.
Measure what matters. Product, engineering, marketing, and support all need their own KPIs to track whether AI efforts are creating real value—not just output.
Product teams are uniquely positioned to lead these efforts. Product understands the user, the problem, and the path to execution. PMs, don’t wait for permission. Drive the conversation. Define the value. And help your organization build something that actually ships—and sticks.
Attribution and Inspiration
Image by gpointstudio on Freepik
Identifying and Prioritizing Artificial Intelligence Use Cases for Business Value Creation, Adnan Masood, Medium, March 2025
34 AI KPIs: The Most Comprehensive List of Success Metrics, Multimodal, August 2024
AI and Product Strategy, Roman Pichler, Medium, April 2025
KPIs for Gen AI: Why Measuring Your New AI is Essential to Its Success, Google Cloud, January 2024
How GenAI is Rewriting the Rules of Product Management, Economic Times, May 2025