Overview
Define how your persona behaves, responds, and speaks by configuring layers and modes.
Personas are the ‘character’ or ‘AI agent personality’ and contain all of the settings and configuration for that character or agent. For example, you can create a persona for ‘Tim the sales agent’ or ‘Rob the interviewer’.
Personas combine identity, contextual knowledge, and CVI pipeline configuration to create a real-time conversational agent with a distinct behavior, voice, and response style.
Persona Customization Options
Each persona includes configurable fields. Here’s what you can customize:
- Persona Name: Display name shown when the replica joins a call.
- System Prompt: Instructions sent to the language model to shape the replica’s tone, personality, and behavior.
- Conversational Context: Background knowledge or reference information provided to the persona’s language model.
- Pipeline Mode: Controls which CVI pipeline layers are active and how input/output flows through the system.
- Default Replica: Sets the digital human associated with the persona.
- Layers: Each layer in the pipeline processes a different part of the conversation. Layers can be configured individually to tailor input/output behavior to your application needs.
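The fields above can be thought of as one persona configuration object. The sketch below shows what such a configuration might look like; the exact field names (`persona_name`, `system_prompt`, `context`, `pipeline_mode`, `default_replica_id`, `layers`) and accepted values are assumptions for illustration, so check the API reference for the authoritative schema.

```python
import json

# Hypothetical persona configuration sketch -- field names and values
# are illustrative assumptions, not the authoritative API schema.
persona = {
    "persona_name": "Tim the sales agent",     # display name shown in the call
    "system_prompt": (                         # shapes tone, personality, behavior
        "You are Tim, a friendly sales agent. Keep answers short and upbeat."
    ),
    "context": (                               # background knowledge for the LLM
        "Tim sells CRM software to small businesses."
    ),
    "pipeline_mode": "full",                   # which pipeline layers are active
    "default_replica_id": "r_abc123",          # hypothetical replica identifier
    "layers": {},                              # optional per-layer configuration
}

payload = json.dumps(persona)  # body you would send when creating the persona
```

A request body like this would typically be sent to the persona-creation endpoint; the layer-specific options go inside the `layers` field.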
Layers
Explore our in-depth guides to customize each layer to fit your specific use case:
Perception Layer
Defines how the persona interprets visual input like facial expressions and gestures.
STT Layer
Transcribes user speech into text using the configured speech-to-text engine.
LLM Layer
Generates persona responses using a language model. Supports Tavus-hosted or custom LLMs.
TTS Layer
Converts text responses into speech using Tavus or supported third-party TTS engines.
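Each of the four layers maps to its own configuration block inside the persona. The sketch below is a minimal, assumed shape: the layer keys mirror the layers listed above, but the option names and engine values inside each block are hypothetical placeholders, not confirmed API values.

```python
# Hypothetical per-layer configuration sketch. The four top-level keys
# mirror the CVI layers; the nested option names and values are
# illustrative assumptions only.
layers = {
    "perception": {"enabled": True},            # visual input: expressions, gestures
    "stt": {"engine": "default"},               # speech-to-text transcription
    "llm": {"model": "tavus-llama"},            # response generation
    "tts": {"engine": "default", "voice": "en-US-1"},  # speech synthesis
}

# Only the layers you want to override need to be present; omitted
# layers would fall back to the pipeline mode's defaults.
overrides = {k: v for k, v in layers.items() if k in ("llm", "tts")}
```

Configuring layers individually like this is what lets one persona use, say, a custom LLM while keeping the default perception and STT behavior.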
Pipeline Mode
Tavus provides several pipeline modes, each with preconfigured layers tailored to specific use cases:
Full Pipeline Mode (Default & Recommended)
The default and recommended end-to-end configuration optimized for real-time conversation. All CVI layers are active and customizable.
- Lowest latency
- Best for natural humanlike interactions
We offer a selection of LLMs, including Llama 3.3 and OpenAI models, that are fully optimized for Full Pipeline Mode.
CVI quickstart
Custom LLM / Bring Your Own Logic
Use this mode to integrate a custom LLM or a specialized backend for interpreting transcripts and generating responses.
- Adds latency due to external processing
- Does not require an actual LLM: any endpoint that returns a compatible chat completion format can be used
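To make the last point concrete, here is a minimal sketch of a backend that speaks the chat completion format without containing any LLM at all. The endpoint path, response fields, and echo logic are assumptions modeled on the widely used OpenAI-style schema; adapt them to whatever format your integration expects.

```python
# Hypothetical sketch: a "bring your own logic" backend that returns
# chat-completion-shaped responses without a real LLM behind it.
import json
import time
import uuid
from http.server import BaseHTTPRequestHandler, HTTPServer

def build_completion(reply_text: str, model: str = "my-custom-backend") -> dict:
    """Wrap arbitrary reply text in an OpenAI-style chat completion shape."""
    return {
        "id": f"chatcmpl-{uuid.uuid4().hex[:12]}",
        "object": "chat.completion",
        "created": int(time.time()),
        "model": model,
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": reply_text},
            "finish_reason": "stop",
        }],
    }

class ChatHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/v1/chat/completions":   # assumed endpoint path
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        request = json.loads(self.rfile.read(length) or "{}")
        user_turn = request.get("messages", [{}])[-1].get("content", "")
        # Any business logic can replace this echo -- rules, retrieval,
        # or a call out to a specialized model.
        body = json.dumps(build_completion(f"You said: {user_turn}")).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To run the server locally (blocks the process):
# HTTPServer(("0.0.0.0", 8000), ChatHandler).serve_forever()
```

Because the persona only needs a compatible response shape, the extra latency in this mode comes from the network round trip to your endpoint plus whatever processing your logic performs.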