AI agents are rapidly emerging as one of the most powerful tools in the enterprise AI toolkit. For organizations looking beyond experimentation, however, the real challenge lies in deploying them effectively and scaling them to deliver lasting value. How do enterprises unlock that value quickly? And how can they trust automation enough to make it a core part of their operations?
Jack Kennedy, CTO and co-founder of Whippy, joined The AI Forecast to discuss the evolution of Whippy from a scrappy consulting operation to a fully productized platform, the surprising advantages of building with niche programming languages, and why speed—not just scale—is a critical differentiator in enterprise AI.
Here are some key takeaways from that conversation.
Paul: The pace of change in AI has been rapid, and as a CTO, part of your role is managing that, deciding when to lean in versus when to hold back. So, how do you make good decisions about where to apply different models? When do you replace what you’ve already built to make it better, and when do you just say this is good enough for now?
Jack: My principle from the start has been to stay on top of what’s new, but remember it’s better to be right than first. It doesn’t matter what you do as long as the last thing you do is correct. That mindset helps us avoid chasing hype and stay focused on building things that actually work.
Paul: How do you figure out what’s worth scaling versus where a human is still needed? What’s your framework for deciding what AI should take on and what it shouldn’t?
Jack: We work with a lot of staffing and recruiting agencies. There’s a role like a phone recruiter that churns at 125 percent a year. That’s a great candidate for automation. AI is better than a human in that case; it speaks every language, works 24/7, and documents everything directly into your CRM. But there are parts of the process, like convincing a candidate to take the job, where the human touch still matters. That’s not something we try to automate. You have to be selective about where you scale and where you don’t. Not everything should scale, even if it can.
Paul: AI has changed fast. How do you think customers react, especially regarding fear versus excitement?
Jack: There are three buckets of people — the Dreamers, the Pragmatists, and the Skeptics.
First, the Dreamers. They want to automate everything and sit on a beach while the business runs. They’ll ask, “Can we do this?” and I’ll say, you can automate it, but should you? Technically, yes, you can automate many job responsibilities. However, once we discuss what it takes to maintain that kind of system, and how valuable the human touch still is in many parts of the business, they usually start to rethink their approach.
Then there are the Pragmatists. These are the people who handle ten tasks a day and recognize that two of them could be automated. They’ve already experimented with tools like ChatGPT and understand the potential. They are practical, open to change, and often become internal champions who help set the stage for broader adoption.
These are the people we focus on first. They’re the ones who help you scale AI responsibly. If you empower the Pragmatists and guide the Dreamers toward reality, you create a solid foundation. That makes it much easier to bring along the third group, the Skeptics.
Paul: What advice do you have for people considering building AI apps, especially given the many tech stack options and the pressure to move quickly?
Jack: If you’re in a company where people are either scared of AI or eager to automate everything, you need to be the voice of reason in the middle. As mentioned earlier, the third bucket of users is the Skeptics. These are the ones who are hesitant or cautious about adopting AI. For them, the best approach is to start with one low-risk, clearly worthwhile project.
One client was getting 100 after-hours calls in Spanish that no one returned. We suggested capturing those voicemails, translating them, and emailing transcripts to the team. It was a low-risk change that didn’t interfere with existing systems, but it immediately improved the customer experience and helped bring the skeptics on board. It showed that AI could create real value without forcing a big leap.
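For illustration only, here is a minimal sketch of the kind of pipeline that project involved: capture a voicemail, translate it, and email the transcript to the team. The function names (transcribe, translate_to_english, email_team) are hypothetical placeholders for whatever speech-to-text, translation, and email services a team already uses; this is not the client's actual system.

```python
# Illustrative sketch of an after-hours voicemail pipeline.
# All functions below are placeholders standing in for real services.

def transcribe(audio_path: str) -> str:
    """Placeholder: return the voicemail text (e.g., from a speech-to-text service)."""
    return "Hola, llamo por el puesto publicado..."

def translate_to_english(text: str) -> str:
    """Placeholder: return an English translation (e.g., from a translation service)."""
    return "Hi, I'm calling about the posted position..."

def email_team(subject: str, body: str) -> None:
    """Placeholder: forward the transcript to the team's inbox."""
    print(f"To: recruiters@example.com\nSubject: {subject}\n\n{body}")

def handle_after_hours_voicemail(audio_path: str) -> None:
    original = transcribe(audio_path)
    english = translate_to_english(original)
    email_team(
        "After-hours voicemail (translated)",
        f"Original:\n{original}\n\nEnglish:\n{english}",
    )

handle_after_hours_voicemail("voicemail_0142.wav")
```

The point of a project like this is that it sits alongside existing systems rather than replacing them, which is why it works as a first step for skeptical teams.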
Paul: What should leaders consider regarding long-term flexibility when choosing between APIs and models?
Jack: We’ve been provider-agnostic since day one, and that has been important. We separate data storage from model usage. We store the data we pass into the model, store the data it gives back, and log where it came from. That way, we can rerun something if it fails, switch providers when better ones come along, and reuse the same data without rebuilding everything.
I’d be cautious about tools that don’t let you take your data out or track what’s going on. Those are the risky ones. There’s a whole wave of LLM proxies now that let you plug multiple providers into one dashboard, which is useful. But even with tools like that, you still need your own logging and architecture. If you own your data and track everything, you’ll be in a strong position no matter where the ecosystem goes.
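The pattern Jack describes (store what goes into the model, store what comes back, and log which provider produced it) can be sketched as a thin wrapper around any provider client. The code below is an illustration of that idea, not Whippy's implementation; LoggedLLM, EchoProvider, and the llm_calls.jsonl log file are assumed names, and a real setup would replace EchoProvider with an adapter around whichever provider SDK is in use.

```python
# Minimal sketch: a provider-agnostic wrapper that logs every model call
# (input, output, provider, timestamp) to a local JSONL file.
import json
import time
import uuid
from dataclasses import dataclass
from typing import Protocol


class Provider(Protocol):
    name: str

    def complete(self, prompt: str) -> str:
        """Return a completion for the prompt."""
        ...


@dataclass
class EchoProvider:
    """Stand-in provider for the sketch; swap in an adapter for a real client."""
    name: str = "echo"

    def complete(self, prompt: str) -> str:
        return f"[echo] {prompt}"


class LoggedLLM:
    """Wraps any provider and appends every call to a local JSONL log."""

    def __init__(self, provider: Provider, log_path: str = "llm_calls.jsonl"):
        self.provider = provider
        self.log_path = log_path

    def complete(self, prompt: str) -> str:
        record = {
            "id": str(uuid.uuid4()),
            "ts": time.time(),
            "provider": self.provider.name,
            "input": prompt,
        }
        try:
            record["output"] = self.provider.complete(prompt)
            record["ok"] = True
        except Exception as exc:  # log failures so the call can be rerun later
            record["output"] = None
            record["ok"] = False
            record["error"] = str(exc)
            raise
        finally:
            with open(self.log_path, "a", encoding="utf-8") as f:
                f.write(json.dumps(record) + "\n")
        return record["output"]


if __name__ == "__main__":
    llm = LoggedLLM(EchoProvider())
    print(llm.complete("Summarize this candidate's voicemail."))
```

Because each call is appended to the log with its own id and provider name, failed calls can be rerun and the same logged prompts can be replayed against a different provider later without rebuilding anything.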
Catch the full conversation with Jack Kennedy on The AI Forecast on Spotify, Apple Podcasts, and YouTube.