LLM Model Options: Choose between GPT-3.5-turbo and GPT-4-turbo, with additional models coming soon. Available through both our API and dashboard.
Interruption Sensitivity Slider: Adjust how easily users can interrupt the agent. This feature is now accessible in our API and dashboard.
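Both options can be set when creating or updating an agent programmatically. Below is a minimal sketch using Python's `requests`; the endpoint path and the `llm_model` / `interruption_sensitivity` field names are illustrative assumptions rather than the exact API schema, so check the API reference for the real names.

```python
# Illustrative sketch only: the endpoint path and field names below
# (llm_model, interruption_sensitivity) are assumptions, not the exact
# Retell API schema. Check the API reference before using.
import requests

API_KEY = "your_api_key"    # placeholder
AGENT_ID = "your_agent_id"  # placeholder

payload = {
    "llm_model": "gpt-4-turbo",       # or "gpt-3.5-turbo"
    "interruption_sensitivity": 0.7,  # hypothetical 0-1 scale; higher = easier to interrupt
}

resp = requests.patch(
    f"https://api.retellai.com/update-agent/{AGENT_ID}",  # assumed endpoint
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```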
Pricing Update: We've updated our pricing structure to be clearer and more modular. A worked cost example follows the list below.
Conversation Voice Engine API
- With OpenAI / Deepgram voices ($0.08/min)
- With Elevenlabs voices ($0.10/min)
LLM Agent
- Retell LLM - GPT 3.5 ($0.02/min)
- Retell LLM - GPT 4.0 ($0.20/min)
- Custom LLM (No charge)
Telephony
- Retell Twilio ($0.01/min)
- Custom Twilio (No charge)
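For example, a call that uses Elevenlabs voices, the Retell LLM with GPT 4.0, and Retell Twilio adds up to $0.10 + $0.20 + $0.01 = $0.31 per minute, while OpenAI / Deepgram voices with GPT 3.5 and a custom Twilio setup come to $0.08 + $0.02 = $0.10 per minute.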
Dashboard Updates: The History tab now includes a public log, which is essential for debugging and shows your agent's current state, tool interactions, and more.
Enhanced API Responses: Our get-call API now reports latency tracking for LLM responses and WebSocket roundtrip times.
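As a rough sketch of reading these new fields with Python's `requests`: the endpoint path and the `llm_latency_ms` / `websocket_roundtrip_ms` field names below are assumptions for illustration, not the exact response schema.

```python
# Illustrative sketch: the endpoint path and latency field names below
# are assumptions, not the exact get-call response schema.
import requests

API_KEY = "your_api_key"  # placeholder
CALL_ID = "your_call_id"  # placeholder

resp = requests.get(
    f"https://api.retellai.com/get-call/{CALL_ID}",  # assumed endpoint path
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=10,
)
resp.raise_for_status()
call = resp.json()

# Hypothetical names for the newly exposed latency metrics
print("LLM latency (ms):", call.get("llm_latency_ms"))
print("WebSocket roundtrip (ms):", call.get("websocket_roundtrip_ms"))
```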
IP Verification: Ensure the authenticity of incoming requests with our new IP verification feature. Authorized Retell server IPs are: 13.248.202.14, 3.33.169.178.
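Here is a minimal sketch of what this looks like on the receiving end, assuming a Flask webhook or custom LLM server; the framework choice and how you obtain the client IP (direct connection vs. X-Forwarded-For behind a proxy) are deployment details of your own setup, not part of the Retell API.

```python
# Minimal sketch: reject requests that don't come from Retell's server IPs.
# Flask is an assumed framework choice; if you sit behind a proxy or load
# balancer, derive the client IP from X-Forwarded-For instead of remote_addr.
from flask import Flask, abort, request

RETELL_SERVER_IPS = {"13.248.202.14", "3.33.169.178"}

app = Flask(__name__)

@app.before_request
def verify_retell_ip():
    # remote_addr is the direct peer; adjust for proxies as noted above.
    if request.remote_addr not in RETELL_SERVER_IPS:
        abort(403)

@app.route("/webhook", methods=["POST"])
def webhook():
    # Handle the Retell event payload here.
    return {"status": "ok"}
```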
Enhancements for Custom LLM Users
Web Call Frontend Upgrades
SDK Improvement: The updated SDK maintains backward compatibility, so existing integrations continue to work without code changes.