LLM tool calling follows OpenAI’s Function Calling format and is configured in the persona’s llm layer. It allows AI agents to trigger functions based on user speech during a conversation.

Defining a Tool

Top-Level Fields

Field | Type | Description
type | string | Must be "function" to enable tool calling.
function | object | Defines the function that can be called by the LLM. Contains metadata and a strict schema for arguments.
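Every tool entry nests these two fields in the same outer shape. The skeleton below uses placeholder values; the function object is broken down in the tables that follow.

{
  "type": "function",
  "function": {
    "name": "...",
    "description": "...",
    "parameters": { ... }
  }
}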

function

Field | Type | Description
name | string | A unique identifier for the function. Must be in snake_case. The model uses this to refer to the function when calling it.
description | string | A natural language explanation of what the function does. Helps the LLM decide when to call it.
parameters | object | A JSON Schema object that describes the expected structure of the function’s input arguments.

function.parameters

Field | Type | Description
type | string | Always "object". Indicates the expected input is a structured object.
properties | object | Defines each expected parameter and its corresponding type, constraints, and description.
required | array of strings | Specifies which parameters are mandatory for the function to execute.

Each parameter should be included in the required list, even if it might seem optional in your code.

function.parameters.properties

Each key inside properties defines a single parameter the model must supply when calling the function.

Field | Type | Description
<parameter_name> | object | Each key is a named parameter (e.g., location). The value is a schema for that parameter.

Optional subfields for each parameter:

Subfield | Type | Description
type | string | Data type (e.g., string, number, boolean).
description | string | Explains what the parameter represents and how it should be used.
enum | array | Defines a strict list of allowed values for this parameter. Useful for categorical choices.
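For instance, a categorical parameter can combine all three subfields. The sketch below defines a hypothetical unit parameter (the same pattern appears in the weather tool at the end of this guide) and lists it in required, as recommended above:

"parameters": {
  "type": "object",
  "properties": {
    "unit": {
      "type": "string",
      "description": "The temperature unit to use in the reply",
      "enum": ["celsius", "fahrenheit"]
    }
  },
  "required": ["unit"]
}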

Example Configuration

Here’s an example of tool calling in the llm layer:

Best Practices:

  • Use clear, specific function names to reduce ambiguity.
  • Add detailed description fields to improve selection accuracy.
LLM Layer
"llm": {
  "model": "tavus-llama",
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_current_time",
        "description": "Fetch the current local time for a specified location",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {
              "type": "string",
              "description": "The name of the city or region, e.g. New York, Tokyo"
            }
          },
          "required": ["location"]
        }
      }
    },
    {
      "type": "function",
      "function": {
        "name": "convert_time_zone",
        "description": "Convert time from one time zone to another",
        "parameters": {
          "type": "object",
          "properties": {
            "time": {
              "type": "string",
              "description": "The original time in ISO 8601 or HH:MM format, e.g. 14:00 or 2025-05-28T14:00"
            },
            "from_zone": {
              "type": "string",
              "description": "The source time zone, e.g. PST, EST, UTC"
            },
            "to_zone": {
              "type": "string",
              "description": "The target time zone, e.g. CET, IST, JST"
            }
          },
          "required": ["time", "from_zone", "to_zone"]
        }
      }
    }
  ]
}

How Tool Calling Works

Tool calling is triggered during an active conversation when the LLM needs to invoke a function. Here’s how the process works:

The steps below walk through the get_current_time function from the example configuration above.

1. Input Detected

The AI processes real-time speech input.

Example: The user says, “What time is it now in New York?”

2. Tool Matching

The LLM analyzes the input and identifies that the user’s question matches the purpose of the get_current_time function, which expects a location argument.

3. Event Broadcast

Tavus broadcasts a tool call event over the active Daily room.

Your app can listen for this event, handle the tool call (e.g. by calling an API), and return the result to the AI for use in its response:
“It’s currently 2:43 PM in New York”
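The broadcast event identifies which function the model wants to run and the arguments it extracted from the user’s speech. The exact envelope and field names are defined by Tavus’s conversation event schema, so treat the sketch below as illustrative only; it follows the OpenAI convention of passing the arguments as a JSON-encoded string.

{
  "name": "get_current_time",
  "arguments": "{ \"location\": \"New York\" }"
}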

Modify Existing Tools

You can update tool definitions using the Update Persona API.

curl --request PATCH \
  --url https://tavusapi.com/v2/personas/{persona_id} \
  --header 'Content-Type: application/json' \
  --header 'x-api-key: <api-key>' \
  --data '[
    {
      "op": "replace",
      "path": "/layers/llm/tools",
      "value": [
        {
          "type": "function",
          "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
              "type": "object",
              "properties": {
                "location": {
                  "type": "string",
                  "description": "The city and state, e.g. San Francisco, CA"
                },
                "unit": {
                  "type": "string",
                  "enum": ["celsius", "fahrenheit"]
                }
              },
              "required": ["location", "unit"]
            }
          }
        }
      ]
    }
  ]'
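Note that a JSON Patch replace on /layers/llm/tools overwrites the entire tools array, so include every tool the persona should keep in value, not just the one you are changing.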