Training Analytics Agents Like New Hires

September 12, 2026

Training analytics agents isn't that different from working with someone new to the team or unfamiliar with the data. If you hand them vague tasks like "pull sales numbers," you'll probably get back something incomplete, messy, or misaligned with what you actually need. But if you give them context, spell out the task, and set expectations on tone and output, you'll usually get something useful on the first try.

Training here means doing whatever it takes to make the agent work in the real world, whether that's prompting, shaping the data model, instructing the semantic layer, or granting database access and API tokens.

While recently building an analytics agent that worked for us, I realized that the way we train teammates works just as well for AI agents. Clear instructions, good examples, and feedback loops work wonders.


Data Models: Be Specific

Someone new to the dataset won’t know which data source is correct. They might make assumptions that don’t align with how the data is structured. Same with agents: they have access to many tables (1000+ in our case), but unless you point them to the right one and explain how to use it, you’ll get nonsense.

A good rule of thumb is to use descriptive table names, spell out join conditions, and define filters clearly. Don’t assume they’ll figure it out from context. Just like you wouldn’t tell an intern “grab the sales data” without saying which table to use, don’t leave your agent wandering through ambiguous names like sales1.

Instead of “show Q2 revenue,” say:

“Using quarterly_sales_report_cte, summarize Q2 2025 revenue by region. Join to budget_targets on region_id. Show actual vs budget.”

Another example: instead of leaving aggregation assumptions vague, be explicit. Sometimes total users means summing all rows, other times it’s counting unique IDs. Say:

“Using user_activity, calculate monthly active users by counting distinct user_id. Compare to total login rows using SUM.”
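To see why that distinction matters, here's a minimal sketch with made-up login rows (the data and field names are invented for illustration; only `user_activity` and `user_id` come from the prompt above):

```python
# Made-up activity rows: each row is one login event, so the same user
# can appear more than once in a month.
user_activity = [
    {"user_id": "u1", "month": "2025-06"},
    {"user_id": "u1", "month": "2025-06"},
    {"user_id": "u2", "month": "2025-06"},
]

# "Total users" by distinct IDs vs. by raw row count give different answers.
monthly_active_users = len({row["user_id"] for row in user_activity})  # COUNT(DISTINCT user_id)
total_login_rows = len(user_activity)                                  # SUM over all rows

print(monthly_active_users, total_login_rows)  # 2 vs. 3
```

An agent left to guess between these two will pick one silently, and you won't notice until the numbers disagree with someone else's report.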


Business Context

When you work with someone new to the project or industry, you don't just throw them a vague ask; you explain the why. Who's the audience? What's the purpose? Should it be a one-pager for executives or a detailed backup for finance? AI agents need the same framing. Tell them the role they're playing, the task at hand, and the intended reader. Without it, they'll default to something generic, which might miss the point entirely.

For example:

"You are the financial controller preparing a Q2 revenue update for the board. Highlight key trends and variances with bullet points. Keep it under 300 words."
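That framing can be packaged programmatically. Here's a minimal sketch of a helper that folds role, audience, and constraints into a system message; the message structure mirrors common chat APIs, but the helper name and the actual client call are my own invention:

```python
# Hypothetical helper: fold role, audience, and constraints into a
# system message so the framing travels with every request.
def frame_request(role: str, task: str, audience: str, constraints: str) -> list[dict]:
    system = f"You are {role}. Your audience is {audience}. {constraints}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": task},
    ]

messages = frame_request(
    role="the financial controller",
    task="Prepare a Q2 revenue update for the board.",
    audience="the board of directors",
    constraints="Highlight key trends and variances with bullet points. "
                "Keep it under 300 words.",
)
```

Keeping the framing in one place means every request the agent sees carries the same role and audience, instead of relying on whoever wrote the prompt that day to remember.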

That's very different from just "Write a summary," and the results will reflect it.


Technical Prompt: Define the Task Clearly

When training someone new, you’ll often say, “First do this, then check that, then present it in this format.” Agents benefit from the same explicit structure. Spell out the scope, reasoning steps, and output format. Otherwise, you risk getting either a flood of irrelevant details (thanks to how LLMs work) or something half-baked. A simple “think step by step” before answering often improves reasoning, just like telling a teammate to walk through their assumptions out loud.

Example:

“Using quarterly_sales_report_cte, calculate total Q2 2025 revenue by region. Break down the steps: first filter for Q2, then SUM revenue by region, then compare to the budget table budget_targets. Present results in a Markdown table with columns for Region, Actual, Budget, and Variance.”


Personality & Tone: Model the Voice You Want

When a new hire gives you project updates, tone is often the difference between a good update and one that can cause panic. Agents are no different: if you don't specify tone, they might default to something too casual ("things look bad!") or too dramatic ("sales collapsed!"). Always define how you want it to sound, and give examples of phrasing that works. This prevents misinterpretation and builds trust with whoever reads the output.

Example:

"Write a professional and concise client update. Use neutral language—for example, frame results as an 'adjustment' rather than a 'crisis.' Keep the tone constructive, informative, and credible, suitable for an executive audience."

Sometimes I just refer to a well-known person and ask the agent to follow their tone. Richard Feynman, the famous physicist and teacher, is one of my favorites.

"Use the tone of Richard Feynman and always find a silver lining if the results are not optimal."


Keep Coaching

Even the best employees need coaching and iteration. You don’t expect someone to nail a report on their first day. You test, give feedback, and refine. Same with agents: don’t just run a prompt once and give up if it’s not perfect. Save the versions that work, tighten the constraints, and use examples to show what “good” looks like. Visual aids, like checklists or flowcharts, also help when the task has many steps.
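Saving the versions that work can be as simple as a small versioned prompt library. Here's a minimal sketch; the structure, function names, and example notes are all my own invention:

```python
# Hypothetical prompt library: keep winning prompts versioned, with notes
# on what each revision fixed, so refinements build on each other.
prompt_library: dict[str, dict[int, dict]] = {}

def save_prompt(name: str, version: int, text: str, notes: str) -> None:
    prompt_library.setdefault(name, {})[version] = {"text": text, "notes": notes}

def latest(name: str) -> str:
    versions = prompt_library[name]
    return versions[max(versions)]["text"]

save_prompt("q2_revenue", 1,
            "Summarize Q2 revenue.",
            "Too vague: agent guessed the wrong table.")
save_prompt("q2_revenue", 2,
            "Using quarterly_sales_report_cte, summarize Q2 2025 revenue by region.",
            "Named the table and grain; output matched expectations.")
```

The notes matter as much as the prompts: they record what “good” looked like and why an earlier version failed, which is exactly the feedback you'd give a teammate.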


Final Thought

If you treat an AI agent like a junior team member, you’ll naturally get better results. Be clear about the data, explain the context, define the task, and set the right tone. And just like with people, iterate, give feedback, refine instructions, and build a library of what works.

With all the cool demos and fluffy websites out there, spending time with agents is the true secret to making them work in real environments.