What 81,000 People Told Anthropic — And What It Means If You Run a Racquet Sports Business
- Apr 10
- 4 min read

Picture this.
A club director sits down on a Saturday morning with coffee and opens his laptop. He's got four decisions in front of him — a programming calendar, a pricing review, a capital ask for two new courts, and a staffing gap he hasn't solved yet.
He opens an AI tool, types a few prompts, and spends the next hour getting answers that sound confident, logical, and organized.
He makes his calls. Moves on.
What he doesn't know is that over 80,000 people around the world just told Anthropic exactly what they think about that process — and some of what they said should stop him cold.
The research nobody in this industry is reading.
Anthropic's research team spent a week in December conducting open-ended interviews with over 80,000 people — 81,000 is the figure most coverage cites — across 159 countries and 70 languages. HR leaders are citing it. Educators are citing it. I haven't seen anyone in racquet sports even mention it.
That's a gap worth closing.
Because the patterns inside it map directly to the pressure points of running or growing a tennis club, a pickleball facility, or a padel operation right now.
One thing up front: this isn't an AI victory lap. Some of what they found should give every operator pause. That's actually what makes it worth your time.
The #1 thing people want from AI isn't what most people assume.
Nearly 1 in 5 respondents — the single largest group in the entire study — said they want AI to handle the routine so they can focus on work that actually matters.
Not to replace people. Not to automate everything in sight. To get out from under the administrative weight so they can think.
Club directors buried in scheduling and member communications. Association staff managing a dozen moving pieces when they should be making strategic calls. Operators who got into this business because they love the sport, spending the majority of their week on everything but the sport.
The demand is real. For that specific use case, AI is delivering — and that's worth building into how you run your operation.
81% said AI had already moved them toward their vision.
Not in theory. Already.
The most common experience: raw productivity. Getting meaningful work done faster. The second most common: using AI as a thinking partner to pressure-test ideas and work through hard problems before committing to them.
For an operator running lean, having a tool that compresses research time, drafts first versions, and helps you think through options before you move — that's a real advantage.
But here's what the data also showed: the people getting the most out of AI weren't just the heaviest users. They were the ones who came in with sharper questions.
Which brings up something from Anthropic's broader research that didn't make as many headlines.
Most people aren't directing AI clearly enough to get what they actually want.
In Anthropic's analysis of real user conversations — separate from the 81,000-person study but consistent with its findings — only about 30% of people ever tell the AI how they want it to behave. The other 70% are typing and hoping for the best. Some researchers are calling this the "prompting gap."
Most people are getting back answers that sound right but aren't actually solving the real problem — because they haven't articulated the real problem clearly enough to begin with.
Garbage in, garbage out has a new address.
If you can't tell a tool what you actually need, you probably haven't gotten sharp enough on the question yet. That's worth sitting with.
Here's the tension most people are skipping past.
The single most common concern in the entire study — raised by more than 1 in 4 respondents — was unreliability.
AI sounds confident. It is sometimes wrong. And when you're making a real business decision, confident and wrong doesn't just waste time. It costs you.
Researchers and practitioners consistently describe a version of this that's easy to miss in the moment — answers that stay internally consistent while drifting further from reality, compounding quietly until the decision is already made.
Read that twice if you're using AI to inform decisions about capital, pricing, staffing, or growth strategy.
Educators responding to this research raised something that landed equally hard: a consistent fear that students leaning too hard on AI are losing the ability to work through hard problems on their own. They're getting the answer without building the muscle.
That pattern doesn't stay in the classroom.
What this actually means for racquet sports business operators.
This industry is in an expansion moment. Courts are going in. New brands are entering. Capital is moving.
The operators I talk to are more pressed for time, mental bandwidth, and decision clarity than at any point in recent memory. The growth is real, and so is the complexity underneath it.
AI can help with the load. It can accelerate research, surface options you hadn't considered, and help you get to a sharper question faster. Use it for that. It's genuinely good at it.
What it can't do is replace the judgment that comes from knowing your market, your members, your team, and your numbers — and being accountable for what happens next.
The racquet sports business leaders who win the next five years aren't the ones who use AI the most. They're the ones who stay clear on which decisions are theirs to make — and don't hand that clarity to a tool that has no stake in the outcome.
AI can be in the room. It probably should be. But it doesn't get a vote.
If you want a real human to look at where your business actually stands —
The Growth Pressure Diagnostic is 15 questions across three domains: Market Position, Execution, and Alignment.
No algorithm scores your answers. No AI generates your results. I review every response myself and send back a plain-language read on what's working, what's drifting, and where the real pressure is coming from.
It's free. It takes about five minutes.
What are you actually using AI for in your business right now? Genuinely curious what's working and what's not.



