What I Actually Look at Before I Recommend Any AI Tool
Everyone wants to know which AI tool to use. That's usually the wrong first question. Here's the six-question framework I work through before recommending anything.
The question I get most often is some version of "which AI tool should we be using?"
It's a reasonable question. It's also usually the wrong one to start with.
Before I recommend anything, I work through the same six questions every time. They're not complicated. But most businesses skip them — and that's why so many AI projects underdeliver.
1. Is there a real process here to enhance?
AI works best when it's applied to something that already exists: a recurring task, a defined workflow, a repeatable decision. If the answer to "what exactly would we be automating?" is vague or inconsistent, that's a signal the problem isn't ready for AI — it's ready for a process conversation first.
Not every inefficiency is an AI problem. Some are just workflow problems. Getting that distinction right from the start saves a lot of wasted effort.
2. How well-defined is that process?
A test I use: could you explain this process clearly enough to train a new employee in a day? If the answer is no — if it lives in someone's head, varies depending on who's doing it, or has never been written down — AI isn't going to help. It's going to inherit the inconsistency and run with it.
AI is a multiplier. It amplifies what's already there. If the process is clean and documented, AI makes it faster and more consistent. If it's messy and undefined, AI just makes the mess move faster.
Before any tool gets selected, the process has to be documented. That's not optional.
3. Can AI handle the whole thing — or just part of it?
There's a meaningful difference between AI fully handling a task and AI assisting with one. Both are valuable. But they require different setups, different oversight, and different expectations.
Full automation — AI handles it start to finish with no human review — only makes sense for low-risk, high-volume tasks where the output is easy to verify. Anything more complex usually calls for an assist model: AI does the heavy lifting, a person reviews and approves before anything goes out.
Misreading this distinction is one of the most common reasons AI deployments fail: the expectations don't match what the tool can actually do unsupervised.
4. What data does this process touch — and what does the vendor actually do with it?
This is the question almost nobody asks. It's also the one I care most about.
Before any AI tool gets connected to your business, you need to know what data it's going to see. Customer records, financial information, internal communications, proprietary processes — all of it is potentially in scope depending on what you're automating.
The follow-up matters just as much: what does the vendor do with that data? A recognizable brand name is not a substitute for a data policy. Read the terms. Understand the data retention and training practices. Know whether opting out of data sharing is even possible.
For small businesses, this is especially important. You're often handling information your clients trust you with. That trust doesn't automatically extend to a third-party AI vendor you signed up for last Tuesday.
5. What does failure look like?
Every AI deployment has a failure mode. The question is whether it's acceptable.
A draft email that needs editing before it goes out — low stakes, easy to catch, fine. A miscalculated invoice sent to a client, or a customer-facing message that's factually wrong — that's a different conversation entirely.
Before recommending any tool, I want to understand the blast radius if something goes wrong. That shapes how much human review stays in the loop, what guardrails need to be built in, and whether the tool is even appropriate for the use case at all.
A high-risk failure mode doesn't mean don't use AI. It means design the workflow accordingly.
6. Will your team actually use it?
The best tool in the world accomplishes nothing if people don't adopt it.
This part consistently gets treated as an afterthought — and it's exactly why so many projects that look good on paper never deliver. Tools get rolled out. Nobody changes how they work. Six months later the subscription is still running and nothing has actually changed.
Adoption isn't a training problem. It's a change management problem. Who's championing this internally? How does it fit into how people already work, rather than piling on top of it? What does success look like for the people actually using it every day?
If those questions don't have clear answers, the rollout isn't ready.
These six questions won't make AI simple. But they'll tell you quickly whether you're ready to move forward, what needs to happen first, and what oversight to build in once you do.
That's the difference between an AI deployment that delivers and one that quietly gets abandoned three months in.
If you want to work through these questions for your business, reach out. It's where every engagement I do starts.
Want to talk through your situation?
Every business is different. Book a free call and we'll figure out where technology can make the biggest difference for yours.