
We recommend using Tavus’s Full Pipeline in its entirety for the lowest latency and the most optimized multimodal experience. Integrations such as LiveKit Agents or Pipecat provide rendering only, while the Full Pipeline includes perception, turn-taking, and rendering for complete conversational intelligence. The LiveKit integration also does not support interactions (“app messages”) such as echo messages.
Tavus enables AI developers to create realistic video avatars powered by state-of-the-art speech synthesis, perception, and rendering pipelines. Through the LiveKit Agents integration, you can add conversational avatars to real-time voice AI systems: LiveKit runs your voice assistant in the room while Tavus renders the avatar. You create a persona whose transport layer uses transport_type: livekit (Step 2), then start a tavus.AvatarSession with your replica_id and persona_id (Step 3).

Prerequisites

Make sure you have the following before starting:
  • LiveKit Voice Assistant Python App

Integration Guide


Step 1: Setup and Authentication

  1. Install the plugin from PyPI:
pip install "livekit-agents[tavus]~=1.0"
  2. Set TAVUS_API_KEY in your .env file to your Tavus API key (see Authentication).
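Your .env file needs a single entry (placeholder value shown):

```
TAVUS_API_KEY=your-tavus-api-key
```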

Step 2: Configure Replica and Persona

  1. Create a persona with LiveKit support using the Tavus API:
curl --request POST \
  --url https://tavusapi.com/v2/personas \
  --header "Content-Type: application/json" \
  --header "x-api-key: <api-key>" \
  --data '{
    "persona_name": "Customer Service Agent",
    "pipeline_mode": "echo",
    "layers": {
      "transport": {
        "transport_type": "livekit"
      }
    }
  }'
  • Replace <api-key> with your actual Tavus API key. You can generate one in the Developer Portal. See Authentication for how requests are authorized.
  • Set pipeline_mode to echo. This is the persona’s pipeline mode for the LiveKit integration; it is unrelated to CVI conversation.echo app messages, which this integration does not support.
  • Set transport_type to livekit.
  2. Save the persona_id from the API response.
  3. Choose a replica from the Stock Library or browse available options on the Developer Portal.
We recommend using Phoenix-3 PRO Replicas, which are optimized for low-latency, real-time applications.
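If you prefer to create the persona from Python rather than curl, a minimal sketch using only the standard library is below. The endpoint, headers, and body mirror the curl command above; the function names (build_persona_payload, create_persona) are illustrative, not part of any Tavus SDK.

```python
import json
from urllib import request

TAVUS_PERSONAS_URL = "https://tavusapi.com/v2/personas"

def build_persona_payload(persona_name: str) -> dict:
    # Mirrors the curl body from this step: echo pipeline + LiveKit transport.
    return {
        "persona_name": persona_name,
        "pipeline_mode": "echo",
        "layers": {"transport": {"transport_type": "livekit"}},
    }

def create_persona(api_key: str, persona_name: str) -> dict:
    # POST the payload with your Tavus API key; the JSON response
    # includes the persona_id you save for Step 3.
    req = request.Request(
        TAVUS_PERSONAS_URL,
        data=json.dumps(build_persona_payload(persona_name)).encode(),
        headers={"Content-Type": "application/json", "x-api-key": api_key},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```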

Step 3: Add AvatarSession to AgentSession

In your LiveKit Python app, create a tavus.AvatarSession alongside your AgentSession:
from livekit import agents
from livekit.agents import AgentSession, RoomOutputOptions
from livekit.plugins import tavus

async def entrypoint(ctx: agents.JobContext):
    await ctx.connect()

    session = AgentSession(
        # Add STT, LLM, TTS, and other components here
    )

    avatar = tavus.AvatarSession(
        replica_id="r90bbd427f71",
        persona_id="pcb7a34da5fe",
        # Optional: avatar_participant_name="Tavus-avatar-agent"
    )

    await avatar.start(session, room=ctx.room)

    await session.start(
        room=ctx.room,
        room_output_options=RoomOutputOptions(
            audio_enabled=False  # Tavus handles audio separately
        )
    )

# Run the worker so LiveKit can dispatch jobs to this entrypoint
if __name__ == "__main__":
    agents.cli.run_app(agents.WorkerOptions(entrypoint_fnc=entrypoint))
Parameters:
  • replica_id (string): ID of the Tavus replica to render and speak through.
  • persona_id (string): ID of the persona with the correct pipeline and transport configuration.
  • avatar_participant_name (string, optional): Display name for the avatar participant in the room. Defaults to Tavus-avatar-agent.
Try out the integration using this sample app.