# Tavus Tool Calling

> Key guidelines for implementing reliable tool calls in Tavus conversational agents

<Tip>
  **New to tool calling?** Start with the [Tool Calling for LLM](/sections/conversational-video-interface/persona/llm-tool) guide to learn how to set up and configure tools. This page focuses on best practices for building reliable tool integrations.
</Tip>

## How Tavus Tool Calls Work

Tool calls allow Tavus agents to interact with external systems such as APIs, databases, and internal services. Here's the high-level flow:

```
User speaks
    ↓ (to Tavus)
Tavus LLM triggers `conversation.tool_call` event
    ↓ (Tavus → Your app)
Your app receives `conversation.tool_call` event
    ↓ (Your app executes)
Execute your backend logic (API calls, DB queries, etc.)
    ↓ (Your app → Tavus)
Send results via `conversation.echo` or `conversation.append_llm_context`
    ↓ (Tavus → User)
LLM generates natural language response to user
```

<Note>
  Tavus does not execute tool calls on the backend. You must implement event listeners in your frontend to handle `conversation.tool_call` events and execute your own logic when a tool is invoked. For detailed implementation instructions, see the [Tool Calling for LLM](/sections/conversational-video-interface/persona/llm-tool) documentation.
</Note>
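The flow above can be sketched as a minimal listener. This assumes you have joined the conversation with a Daily call object (for example via `@daily-co/daily-js`); `parseToolCall` and `executeTool` are hypothetical names, not part of the Tavus API:

```javascript theme={null}
// Returns the parsed tool call if the message is a tool-call event, else null.
// Note that properties.arguments arrives as a JSON string and must be parsed.
function parseToolCall(message) {
  if (
    message?.message_type === 'conversation' &&
    message?.event_type === 'conversation.tool_call'
  ) {
    return {
      name: message.properties.name,
      args: JSON.parse(message.properties.arguments),
    };
  }
  return null;
}

// Wiring sketch (requires a live Daily call object):
// call.on('app-message', async (event) => {
//   const toolCall = parseToolCall(event.data);
//   if (toolCall) {
//     const result = await executeTool(toolCall.name, toolCall.args);
//     // ...send the result back via conversation.echo (see section 6)
//   }
// });
```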

Because Tavus agents operate in **live conversational environments**, tool design should prioritize reliability, clarity, and conversational continuity.

Below are the six most important principles for building effective tool integrations.

***

## 1. Keep Tool Schemas Clear and Explicit

Tool definitions should be as clear and specific as possible. Ambiguous parameters make it harder for the model to choose and populate tools correctly.

Prefer narrow tools with explicit parameters.

Bad:

```json theme={null}
{
  "name": "lookup_customer",
  "parameters": {
    "query": "string"
  }
}
```

Better:

```json theme={null}
{
  "name": "lookup_customer_by_email",
  "parameters": {
    "customer_email": "string"
  }
}
```

Clear schemas reduce incorrect tool usage and improve consistency.

***

## 2. Separate Read Tools from Write Tools

Tools generally fall into two categories.

**Read tools** retrieve information and are safe to call frequently.

Examples:

* retrieving account data
* searching knowledge bases
* checking order status

**Write tools** modify system state.

Examples:

* creating support tickets
* sending emails
* updating records

Write tools should only run when user intent is clear and parameters are validated.
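This split can also be enforced in code. A sketch, assuming a hypothetical registry where each tool is tagged `read` or `write`; write tools are refused unless an explicit confirmation flag is set:

```javascript theme={null}
// Hypothetical tool registry: each entry is tagged as a read or write tool.
const TOOL_KINDS = {
  get_order_status: 'read',
  search_knowledge_base: 'read',
  create_support_ticket: 'write',
  send_email: 'write',
};

// Gate execution: read tools run freely; write tools require confirmation.
function authorizeToolCall(name, { userConfirmed = false } = {}) {
  const kind = TOOL_KINDS[name];
  if (!kind) return { allowed: false, reason: `unknown tool: ${name}` };
  if (kind === 'write' && !userConfirmed) {
    return { allowed: false, reason: 'write tool requires explicit user confirmation' };
  }
  return { allowed: true };
}
```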

***

## 3. Keep Tool Results Small

Tool results are typically returned to the conversation as echo interactions, which are injected back into the model's context. Large payloads increase token usage and can degrade conversational quality.

Keep `conversation.echo` interactions small: return only the fields needed for the next response.

Example:

```json theme={null}
{
  "message_type": "conversation",
  "event_type": "conversation.echo",
  "conversation_id": "<conversation_id>",
  "properties": {
    "modality": "text",
    "text": "Customer Jane Doe is on the Enterprise plan."
  }
}
```

This keeps conversations efficient and improves response quality.
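In practice this means picking out only the fields the next response needs, rather than echoing an entire record. A sketch with a hypothetical customer record shape:

```javascript theme={null}
// Builds a small conversation.echo payload from a (possibly large) tool
// result. The record shape and field names here are hypothetical.
function buildEchoFromCustomer(conversationId, customer) {
  const summary = `Customer ${customer.name} is on the ${customer.plan} plan.`;
  return {
    message_type: 'conversation',
    event_type: 'conversation.echo',
    conversation_id: conversationId,
    properties: { modality: 'text', text: summary },
  };
}
```

Everything else in the record (IDs, addresses, billing history) stays out of the model's context.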

***

## 4. Avoid Triggering Tools Too Early

Tavus agents operate in real-time conversations where users may interrupt or revise their requests.

If a tool executes too early, it may perform the wrong action.

The LLM cannot determine on its own that intent is clear. Make "intent is clear" operational by defining concrete criteria, such as:

* required slots are present (for example, `email`, `issue_type`, etc.)
* no unresolved ambiguity (for example, "today or tomorrow?")
* user gave explicit confirmation for write actions (for example, "yes, submit it")

Best practice:

* wait until the user's intent is clear
* avoid executing write actions mid-sentence
* allow the conversation to stabilize before triggering tools
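The criteria above can be made concrete with a simple slot check, run before any tool fires. A sketch; the slot names are illustrative:

```javascript theme={null}
// Checks whether a tool is ready to fire: all required slots are filled,
// and write actions additionally require explicit user confirmation.
function isReadyToExecute({ requiredSlots, filledSlots, isWrite, userConfirmed }) {
  const missing = requiredSlots.filter((slot) => !(slot in filledSlots));
  if (missing.length > 0) return { ready: false, missing };
  if (isWrite && !userConfirmed) {
    return { ready: false, missing: [], needsConfirmation: true };
  }
  return { ready: true, missing: [] };
}
```

When `ready` is false, the right move is a follow-up question (to fill `missing` slots or obtain confirmation), not a tool call.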

***

### Recommended system prompt addendum (copy-paste)

Add the following policy to your persona's system prompt to improve tool-call quality and safety:

```
Tool invocation policy:
- Only call write tools when user intent is explicit and all required parameters are present.
- If any required parameter is missing, ask a follow-up question instead of calling a tool.
- If the user's wording is ambiguous, ask for clarification before calling a tool.
- For irreversible/state-changing actions (create, update, send, submit, charge, delete), require explicit user confirmation immediately before calling the tool.
- Do not call the same write tool repeatedly for the same request unless the user explicitly asks to retry.
- Read-only tools may be called without confirmation when they directly answer the user's request.
- Keep tool results small; if you need the replica to speak them, summarize succinctly before using conversation.echo.
```

***

## 5. Log Tool Calls for Observability

Production systems should log tool activity so issues can be debugged easily.

In addition to backend execution logs, listen to Tavus app-events (Daily `app-message` events) for end-to-end observability. This lets you trace the full lifecycle:

* when `conversation.tool_call` was emitted
* what payload was received
* what your app executed
* what response event (`conversation.echo`, `conversation.respond`, or `conversation.append_llm_context`) was sent back

At minimum, log:

* conversation\_id
* tool\_name
* parameters
* incoming event\_type
* execution result
* outgoing response event\_type
* timestamp

This helps identify duplicate calls, incorrect parameters, or unexpected behavior during conversations.
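A minimal sketch of a structured log record covering the fields above; in production you would ship this to your logging pipeline rather than just building the object:

```javascript theme={null}
// Builds a structured log record for one tool-call lifecycle.
// The shape mirrors the minimum-fields checklist above.
function buildToolCallLogRecord({
  conversationId,
  toolName,
  parameters,
  incomingEventType,
  result,
  outgoingEventType,
}) {
  return {
    conversation_id: conversationId,
    tool_name: toolName,
    parameters,
    incoming_event_type: incomingEventType,
    execution_result: result,
    outgoing_event_type: outgoingEventType,
    timestamp: new Date().toISOString(),
  };
}
```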

***

## 6. Inject Results Back into the Conversation

After executing a tool call, you can send results back to the conversation so the LLM can use them to generate a response. In some workflows, you may execute a tool-side action without sending the result back into conversation context. There are three primary methods for injecting results:

**Sample tool call event:**

When the LLM triggers a tool call, you'll receive an event like this:

```json theme={null}
{
  "message_type": "conversation",
  "event_type": "conversation.tool_call",
  "conversation_id": "<conversation_id>",
  "properties": {
    "name": "get_current_time",
    "arguments": "{\"location\": \"New York\"}"
  }
}
```

The `properties.name` field contains the tool name, and `properties.arguments` is a JSON string that needs to be parsed to access the tool parameters.

### Using `conversation.echo`

The most common approach is to use the `conversation.echo` event to send tool results as text that the replica will speak and incorporate into its response:

```javascript theme={null}
if (message.message_type === 'conversation' && message.event_type === 'conversation.tool_call') {
  const toolCall = message.properties;
  
  // Execute your tool logic
  const result = await executeTool(toolCall.name, JSON.parse(toolCall.arguments));
  
  // Send result back via echo
  const responseMessage = {
    message_type: "conversation",
    event_type: "conversation.echo",
    conversation_id: message.conversation_id,
    properties: {
      modality: "text",
      text: `Tool result: ${JSON.stringify(result)}`
    }
  };
  
  call.sendAppMessage(responseMessage, '*');
}
```

### Using `conversation.append_llm_context`

For more structured data or when you want to inject context without the replica speaking it aloud, use `conversation.append_llm_context`:

```javascript theme={null}
const contextMessage = {
  message_type: "conversation",
  event_type: "conversation.append_llm_context",
  conversation_id: message.conversation_id,
  properties: {
    context: `Tool execution result: ${JSON.stringify(result)}`
  }
};

call.sendAppMessage(contextMessage, '*');
```

### Using `conversation.respond`

You can use the `conversation.respond` event to send tool results as text that the LLM will treat as if the user had spoken it, causing the replica to generate and speak a response. This is useful when you want the LLM to process the tool result and generate a natural language response.

**Important:** Format your respond event in such a way that it can be replied to as if spoken by a user. The text should be phrased as something the user might say.

```javascript theme={null}
const respondMessage = {
  message_type: "conversation",
  event_type: "conversation.respond",
  conversation_id: message.conversation_id,
  properties: {
    text: `The current time in New York is ${result.time}`
  }
};

call.sendAppMessage(respondMessage, '*');
```

**When to use each:**

* Use `conversation.echo` when you want the replica to acknowledge and speak about the tool result directly
* Use `conversation.append_llm_context` when you want to silently add context that informs future responses without explicit mention
* Use `conversation.respond` when you want the LLM to "hear" new information and generate a natural language response to it
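The three options above can be wrapped in one helper that builds the right event for a chosen mode. A sketch; the mode names are illustrative, while the event shapes follow the examples above:

```javascript theme={null}
// Builds the appropriate response event for a tool result, given the mode:
// 'echo' (replica speaks it), 'context' (silent context injection), or
// 'respond' (treated as if the user had said it).
function buildToolResponse(conversationId, mode, text) {
  const base = { message_type: 'conversation', conversation_id: conversationId };
  switch (mode) {
    case 'echo':
      return { ...base, event_type: 'conversation.echo', properties: { modality: 'text', text } };
    case 'context':
      return { ...base, event_type: 'conversation.append_llm_context', properties: { context: text } };
    case 'respond':
      return { ...base, event_type: 'conversation.respond', properties: { text } };
    default:
      throw new Error(`unknown mode: ${mode}`);
  }
}
```

The resulting object is sent the same way in every case, e.g. `call.sendAppMessage(buildToolResponse(conversationId, 'echo', summary), '*')`.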

***

## Example Implementation

For a complete working example of how to implement tool calling with Tavus, see the official example repository:

[https://github.com/Tavus-Engineering/tavus-examples/tree/main/examples/cvi-tool-calling](https://github.com/Tavus-Engineering/tavus-examples/tree/main/examples/cvi-tool-calling)

This example demonstrates how to:

* Define tools in the persona configuration
* Listen for `conversation.tool_call` events
* Execute backend logic when a tool is triggered
* Inject results back into the conversation
