How Tavus Tool Calls Work
Tool calls allow Tavus agents to interact with external systems such as APIs, databases, and internal services. The high-level flow is:

- You define tools in the persona configuration.
- When the LLM decides a tool is needed, a `conversation.tool_call` event is emitted to your app.
- Your frontend executes the corresponding logic.
- You inject the result back into the conversation.

Tavus does not execute tool calls on the backend. You must implement event listeners in your frontend to handle `conversation.tool_call` events and execute your own logic when a tool is invoked. For detailed implementation instructions, see the Tool Calling for LLM documentation.

1. Keep Tool Schemas Clear and Explicit
Tool definitions should be as clear and specific as possible. Ambiguous parameters make it harder for the model to choose and populate tools correctly. Prefer narrow tools with explicit parameters. Bad: a single catch-all lookup tool that accepts a free-form query string. Good: a dedicated tool such as one for checking order status that takes an explicit order ID parameter.

2. Separate Read Tools from Write Tools
Tools generally fall into two categories. Read tools retrieve information and are safe to call frequently. Examples:

- retrieving account data
- searching knowledge bases
- checking order status

Write tools perform actions with side effects and should be triggered more carefully. Examples:

- creating support tickets
- sending emails
- updating records
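One way to enforce this distinction in your frontend dispatcher is to tag each tool as read or write and gate write tools behind an explicit confirmation flag. This is a minimal sketch; the registry, tool names, and `userConfirmed` flag are illustrative app-side state, not part of the Tavus API.

```javascript
// Illustrative registry: tag each tool as "read" or "write".
const toolRegistry = {
  check_order_status: { kind: "read",  run: (args) => ({ status: "shipped" }) },
  send_email:         { kind: "write", run: (args) => ({ sent: true }) },
};

// Gate write tools behind explicit confirmation from the conversation.
function dispatchTool(name, args, userConfirmed) {
  const tool = toolRegistry[name];
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  if (tool.kind === "write" && !userConfirmed) {
    return { deferred: true, reason: "awaiting explicit user confirmation" };
  }
  return tool.run(args);
}
```

Read tools run immediately; a write tool called before the user has confirmed returns a deferred result instead of executing.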
3. Keep Tool Results Small
Echo interactions are often the output of a tool call, and they are injected back into the model’s context. Large payloads increase token usage and can degrade conversational quality. Keep `conversation.echo` interactions small: return only the fields needed for the next response.
Example:
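A minimal sketch: instead of echoing a raw API response, project it down to the few fields the model needs for its next turn. The record and field names here are hypothetical.

```javascript
// Hypothetical raw API response: far more than the model needs.
const rawOrder = {
  id: "ord_1042",
  status: "shipped",
  carrier: "UPS",
  eta: "2024-06-03",
  internalNotes: "customer called twice",
  auditTrail: [],            // imagine hundreds of entries here
  customer: { id: "c_9", email: "a@b.com" },
};

// Keep the echo payload small: only what the next response needs.
function trimForEcho(order) {
  return { status: order.status, carrier: order.carrier, eta: order.eta };
}

const echoPayload = trimForEcho(rawOrder);
```

A three-field object is injected into context instead of the full record, keeping token usage low.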
4. Avoid Triggering Tools Too Early
Tavus agents operate in real-time conversations where users may interrupt or revise their requests. If a tool executes too early, it may perform the wrong action. The LLM does not truly know when intent is clear, so instruct it to:

- wait until the user’s intent is clear
- avoid executing write actions mid-sentence
- allow the conversation to stabilize before triggering tools

You make “intent is clear” operational by defining concrete criteria such as:

- required slots are present (for example, `email`, `issue_type`, etc.)
- no unresolved ambiguity (for example, “today or tomorrow?”)
- the user gave explicit confirmation for write actions (for example, “yes, submit it”)
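The “intent is clear” criteria can also be enforced in code before a write tool fires. A minimal sketch, assuming a simple slot map and an explicit confirmation flag (both illustrative app-side state):

```javascript
// Decide whether a write tool may fire, per the criteria above:
// all required slots filled, and explicit user confirmation given.
function intentIsClear(slots, requiredSlots, userConfirmed) {
  const missing = requiredSlots.filter((s) => slots[s] == null);
  if (missing.length > 0) return { ok: false, missing };
  if (!userConfirmed) return { ok: false, missing: [], reason: "needs confirmation" };
  return { ok: true };
}
```

For example, `intentIsClear({ email: "a@b.com" }, ["email", "issue_type"], false)` reports that `issue_type` is still missing, so the write action is held back.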
Recommended system prompt addendum (copy-paste)
Add a policy based on the guidance above to your persona’s system prompt to improve tool-call quality and safety.

5. Log Tool Calls for Observability
Production systems should log tool activity so issues can be debugged easily. In addition to backend execution logs, listen to Tavus app events (Daily `app-message` events) for end-to-end observability. This lets you trace the full lifecycle:
- when `conversation.tool_call` was emitted
- what payload was received
- what your app executed
- what response event (`conversation.echo`, `conversation.respond`, or `conversation.append_llm_context`) was sent back

At minimum, log:

- conversation_id
- tool_name
- parameters
- incoming event_type
- execution result
- outgoing response event_type
- timestamp
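A sketch of a structured log record covering those fields; the helper name and record shape are illustrative, not a Tavus API.

```javascript
// Build one structured log entry per tool-call lifecycle.
function buildToolCallLog({ conversationId, toolName, parameters, incomingEventType, result, outgoingEventType }) {
  return {
    conversation_id: conversationId,
    tool_name: toolName,
    parameters,
    incoming_event_type: incomingEventType,
    execution_result: result,
    outgoing_event_type: outgoingEventType,
    timestamp: new Date().toISOString(),
  };
}

const entry = buildToolCallLog({
  conversationId: "c_123",
  toolName: "check_order_status",
  parameters: { order_id: "ord_1042" },
  incomingEventType: "conversation.tool_call",
  result: { status: "shipped" },
  outgoingEventType: "conversation.echo",
});
```

Emitting one such record per tool call lets you correlate the incoming event, your execution, and the outgoing response event in a single line of your logs.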
6. Inject Results Back into the Conversation
After executing a tool call, you can send results back to the conversation so the LLM can use them to generate a response when needed. In some workflows, you may execute a tool-side action without sending the result back into conversation context. There are three primary methods, covered below.

When the LLM triggers a tool call, you’ll receive an event whose `properties.name` field contains the tool name and whose `properties.arguments` field is a JSON string that must be parsed to access the tool parameters.
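That event shape can be sketched as follows. The field values are made up, and the `message_type`/`conversation_id` framing is assumed from the Tavus interactions protocol; `properties.name` and the JSON-encoded `properties.arguments` match the description in this section.

```javascript
// Illustrative conversation.tool_call event (values are made up).
const sampleEvent = {
  message_type: "conversation",
  event_type: "conversation.tool_call",
  conversation_id: "c_123",
  properties: {
    name: "check_order_status",
    // arguments arrive as a JSON string, not an object:
    arguments: '{"order_id": "ord_1042"}',
  },
};

// Parse the arguments string before using the parameters.
const toolName = sampleEvent.properties.name;
const toolArgs = JSON.parse(sampleEvent.properties.arguments);
```

Forgetting the `JSON.parse` step is a common bug: `properties.arguments` looks like an object in logs but is a string on the wire.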
Using conversation.echo
The most common approach is to use the conversation.echo event to send tool results as text that the replica will speak and incorporate into its response:
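A sketch of building such an echo; the `buildEcho` helper is illustrative, and the message framing is assumed from the event fields used elsewhere in this document. Sending happens over your Daily call object in the frontend.

```javascript
// Build a conversation.echo interaction carrying the tool result as text
// the replica will speak.
function buildEcho(conversationId, text) {
  return {
    message_type: "conversation",
    event_type: "conversation.echo",
    conversation_id: conversationId,
    properties: { text },
  };
}

// Sent via the Daily call object, e.g.:
// call.sendAppMessage(buildEcho("c_123", "Your order shipped and arrives Friday."), "*");
```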
Using conversation.append_llm_context
For more structured data or when you want to inject context without the replica speaking it aloud, use conversation.append_llm_context:
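A sketch of the silent variant. The `properties.context` field name is an assumption inferred from the event's name; verify it against the Tavus events reference before relying on it.

```javascript
// Build a conversation.append_llm_context interaction: the replica does not
// speak this text, but the LLM can draw on it in later turns.
function buildAppendContext(conversationId, context) {
  return {
    message_type: "conversation",
    event_type: "conversation.append_llm_context",
    conversation_id: conversationId,
    properties: { context },  // field name assumed, see lead-in
  };
}

// e.g. call.sendAppMessage(buildAppendContext("c_123", "Order ord_1042 status: shipped"), "*");
```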
Using conversation.respond
You can use the conversation.respond event to send tool results as text that the LLM will treat as if the user had spoken it, causing the replica to generate and speak a response. This is useful when you want the LLM to process the tool result and generate a natural language response.
Important: Format your respond event in such a way that it can be replied to as if spoken by a user. The text should be phrased as something the user might say.
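A sketch following that note: the text is phrased as something the user might say, so the LLM can reply to it naturally. The helper is illustrative.

```javascript
// Build a conversation.respond interaction: the LLM treats the text as user
// speech and generates a spoken reply.
function buildRespond(conversationId, text) {
  return {
    message_type: "conversation",
    event_type: "conversation.respond",
    conversation_id: conversationId,
    properties: { text },
  };
}

// Phrase the text as if the user said it:
// call.sendAppMessage(buildRespond("c_123", "Can you tell me what you found about my order?"), "*");
```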
- Use `conversation.echo` when you want the replica to acknowledge and speak about the tool result directly
- Use `conversation.append_llm_context` when you want to silently add context that informs future responses without explicit mention
- Use `conversation.respond` when you want the LLM to “hear” new information and generate a natural language response to it
Example Implementation
For a complete working example of how to implement tool calling with Tavus, see the official example repository: https://github.com/Tavus-Engineering/tavus-examples/tree/main/examples/cvi-tool-calling

This example demonstrates how to:

- Define tools in the persona configuration
- Listen for `conversation.tool_call` events
- Execute backend logic when a tool is triggered
- Inject results back into the conversation

