Changelog

1. Batch Calling

Batch Calling is now live! This feature allows you to make multiple calls simultaneously by simply uploading an Excel sheet.
Here’s how it works:

  • Send Now: Dispatch all calls immediately.
  • Scheduled Time: Send out calls at a scheduled time.

Batch Calling queues your calls without hitting concurrency limits, ensuring a seamless experience.
Pricing: $0.005 per dial (20k calls for $100)
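The quoted bundle is easy to sanity-check. A minimal sketch (helper name is illustrative, not part of any Retell SDK):

```python
# Sanity-check the quoted Batch Calling dial pricing ($0.005 per dial).
PRICE_PER_DIAL = 0.005  # dollars

def batch_dial_cost(num_calls: int) -> float:
    """Return the dial cost in dollars for a batch of calls."""
    return num_calls * PRICE_PER_DIAL

# 20,000 calls at $0.005 each comes to $100, matching the quoted bundle.
cost = batch_dial_cost(20_000)
```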

2.  New Docs

We’ve completely overhauled our documentation to make it more user-friendly and comprehensive. We’ll continue updating it regularly based on your needs.
Have suggestions? Join us on Retell Discord and share your thoughts.
Want to find answers more easily? Join the Retell Discord and ask questions in the #AI Evy channel.

See Docs

3. Expanded Latency Metrics

We’ve added TTS (Text-to-Speech) latency details in call history and the Get-call API.
If you notice higher-than-usual TTS latency, switch to another TTS provider directly. (Please note that older latency fields are now deprecated.)

See Docs

4. Bug Fixes and Improvements

  • Resolved real-time API recording pitch issues.
  • Enhanced functionality for calling multiple functions at once.
  • Updated Knowledge Base API SDK code.
  • Drastically reduced the latency added by voice speed adjustment (from ~100ms to ~5ms).

1. Branded Call

We’ve added the Branded Call feature!

Now, you can enable branded call functionality on each of your phone numbers. It’s a great way to build trust with your outbound calls and significantly improve conversion rates.

Once activated, the recipient will see your business name when you call.

Read related blog

2. Advanced Filters

You can now monitor your call records more effectively with these powerful new filters!

For example:

  • Spot high-latency calls.    
  • Check long-duration calls.
  • Listen to unsuccessful calls.    
  • Search for calls from specific customers.    
  • Review all transferred calls.    
  • Use your custom post-call analysis filters.

3.  Usage Dashboard

A detailed dashboard now shows your daily call costs and costs by provider.

Easily track your spending at a glance.

4.  Purchase Concurrency

You can now purchase additional concurrency directly on the dashboard.

Additionally, you can see your live concurrent call usage in the bottom-left corner of the dashboard.

5. Knowledge Base API

You can now access the Knowledge Base via API.

Go to docs

6.   Structured Output

If you are using OpenAI’s LLM, we’ve added a Structured Output setting.

When enabled, it ensures responses follow your provided JSON Schema.

Note: This feature may increase the time required to save or update functions.
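To make the Structured Output setting concrete, here is an illustrative JSON Schema and a minimal stdlib-only conformance check. The field names (`customer_name`, `sentiment`) are examples we made up, not a Retell-mandated shape:

```python
import json

# Illustrative JSON Schema: the model must return a name and a sentiment.
# These field names are an example, not part of any Retell API.
schema = {
    "type": "object",
    "properties": {
        "customer_name": {"type": "string"},
        "sentiment": {"type": "string", "enum": ["positive", "neutral", "negative"]},
    },
    "required": ["customer_name", "sentiment"],
}

def conforms(response_text: str) -> bool:
    """Minimal structural check against the schema above (stdlib only)."""
    try:
        data = json.loads(response_text)
    except json.JSONDecodeError:
        return False
    if not isinstance(data, dict):
        return False
    for key in schema["required"]:
        if key not in data:
            return False
    enum = schema["properties"]["sentiment"]["enum"]
    return isinstance(data.get("customer_name"), str) and data.get("sentiment") in enum

ok = conforms('{"customer_name": "Ada", "sentiment": "positive"}')
bad = conforms('{"customer_name": "Ada"}')  # missing required "sentiment"
```

With the setting enabled, the model's output is constrained to match your schema, so checks like the one above should always pass for well-formed responses.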

7.   Claude 3.5 Haiku LLM

We’ve integrated Claude 3.5 Haiku with a pricing of $0.02/min.

Others:

  1. API Update: llm_websocket_url is now deprecated; use the new response_engine field.
  2. Live Transcription: Available on WebCall.
  3. API Key Rotation: Enhanced security for API keys.
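For the `llm_websocket_url` deprecation above, a hypothetical before/after agent payload sketch follows. The exact nested shape of `response_engine` is an assumption; consult the Retell API reference for the authoritative schema:

```python
# Hypothetical payloads for the llm_websocket_url -> response_engine
# migration. Field shapes are assumptions, not the confirmed API schema.
old_agent = {
    "agent_name": "Support Agent",
    "voice_id": "some-voice-id",                    # placeholder value
    "llm_websocket_url": "wss://example.com/llm",   # deprecated field
}

new_agent = {
    "agent_name": "Support Agent",
    "voice_id": "some-voice-id",                    # placeholder value
    "response_engine": {                            # replaces llm_websocket_url
        "type": "custom-llm",
        "llm_websocket_url": "wss://example.com/llm",
    },
}
```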

1. Knowledge Base

You can now equip your Voice AI agents with your company’s knowledge in three simple ways:

  • Scrape from Webpages: Upload your website’s sitemaps
  • Upload Files: We support all document formats.
  • Manually Add Text: You can also copy and paste the text.

For the "Scrape from Webpages" method, you can select auto-sync every 24 hours or manually sync anytime. No more manual updates!

Pricing for Knowledge Base:

  • You will be charged an additional $0.005 per minute for using the Knowledge Base.
  • Every workspace includes 10 free knowledge bases. Additional knowledge bases will be charged at $8/month, billed at the end of the month.
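The Knowledge Base pricing above can be sketched as a small cost estimator (the helper is illustrative, using only the numbers quoted in this section):

```python
# Estimate monthly Knowledge Base charges from the quoted pricing:
# $0.005 per call minute with a knowledge base attached, 10 free knowledge
# bases per workspace, then $8/month for each additional one.
KB_PER_MINUTE = 0.005
FREE_KBS = 10
EXTRA_KB_MONTHLY = 8.0

def kb_monthly_cost(call_minutes: float, num_knowledge_bases: int) -> float:
    extra_kbs = max(0, num_knowledge_bases - FREE_KBS)
    return call_minutes * KB_PER_MINUTE + extra_kbs * EXTRA_KB_MONTHLY

# e.g. 2,000 KB-enabled minutes and 12 knowledge bases:
cost = kb_monthly_cost(2_000, 12)  # 2000 * 0.005 + 2 * 8
```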

Go to dashboard

2.  Verified Phone Numbers

A must-have for outbound campaigns.

If your calls are being marked as spam or blocked by carriers, this verification process will prevent that from happening.

Simply submit your business profile, and after review, if your business is legitimate, you're good to go!

3.  OpenAI Realtime API Integration

We’ve added the OpenAI Realtime API to our platform. The average latency is 600-1000ms, but pricing is currently $1.50/minute. We expect pricing to come down soon.

Watch a video from Brendan

4. Display Transferee’s Number

If you're using the call transfer feature and want the next agent to receive the caller’s number (instead of the Retell number), adjust the settings.

5.   Workspace

We’ve added the Workspace feature.

  • For companies: you can now invite your teammates.
  • For agencies: you can now create separate organizations for your clients.

Others:

  1. Improved Webhook Verification for Python: https://github.com/RetellAI/retell-custom-llm-python-demo/commit/131ae64f86e38debb5fdc87168cb7240649e385f (sample code: https://docs.retellai.com/features/webhook#sample-code)
  2. Dark Mode.
  3. Pricing & Latency details.

1. Dashboard Overhaul

We’ve completed a full dashboard overhaul.

  • A more user-friendly and intuitive agent builder.
  • Added more functionality to the history tab.
  • Overhauled the billing portal.
  • A cleaner and more consistent UI.

Go to dashboard

2.  Warm Transfer

We’ve added a warm transfer feature.

If you need to provide background information and hand off the call to the next agent, this feature allows you to set up a prompt or static message for smooth transitions.

Watch the tutorial

3.  Disable Transcript Formatting

We’ve added a toggle to disable transcript formatting. This can help resolve the ASR (Automatic Speech Recognition) errors we recently discovered:

  • Phone numbers were being misinterpreted as timestamps.
  • Double numbers were not being accurately captured.

If you encounter issues related to number transcription, try out this toggle.

4.   Cal.com Custom Fields

If you’ve added custom fields in Cal.com, you can now use them in Retell.

When using Cal.com functions, you can instruct the agent to collect specific information, and it will automatically display the collected data in the booking event.

Watch the tutorial

Bug fixes and Improvements

  • Boosted keywords
  • Custom function timeout
  • Maintaining Consistent Voice Tonality

1. More Languages

We now support more languages:

  • zh-CN (Chinese - China)
  • ru-RU (Russian - Russia)
  • it-IT (Italian - Italy)
  • ko-KR (Korean - Korea)
  • nl-NL (Dutch - Netherlands)
  • pl-PL (Polish - Poland)
  • tr-TR (Turkish - Turkey)
  • vi-VN (Vietnamese - Vietnam)

Simply change the language in the settings panel on the agent creation page.

2.  Max Call Duration

You can now set the maximum duration for calls in minutes to prevent spam.

3.  Extended Voicemail Detection

You can set the duration for detecting voicemail. In some B2B use cases, there are welcome messages before going to voicemail. Setting a longer voicemail detection time can solve this issue.

4.   LLM Temperature

You can adjust the LLM Temperature to get more varied results. The default setting is more deterministic and provides better function call results.

5.  Agent Voice Volume

You can now control the volume of the agent’s voice.

Important Notification

  • We will fully shut down the Audio WebSocket and V1 call APIs on 12/31/2024.

Bug fixes and Improvements

  • DTMF issue is now fixed.
  • Failed inbound calls will now have an entry in history.
  • Addressed inaudible speech.
  • Added retry for inbound dynamic variable webhook.
  • Added timeout for Retell webhook.
  • Fixed a bug where custom voice was accidentally overwritten.
  • Improved voicemail detection performance.

Community Videos

Rish - AI Business Automation

1. DTMF (Navigate IVR)

Guide the voice agent through IVR systems with button presses (e.g., “Press 1 to reach support”).

See Doc

2.  Voicemail Handling

  • You can now set up a voicemail message that plays when a call reaches voicemail.
  • Dashboard only; not yet available via API.

3. SIP Trunking Integration

You can now integrate Retell AI with your telephony providers, using your own phone numbers (e.g., Twilio, Vonage). This works with both Retell LLM and Custom LLM.

Integration options:

  • Elastic SIP trunking
  • Dial to SIP Endpoint

See Doc

4. Multilingual Agent (English & Spanish)

You can now create a multilingual agent that speaks both English and Spanish on the same call.

See Doc

5. Pronunciation

You can also control how certain words are pronounced. This is useful when you want to make sure certain uncommon words are pronounced correctly.

See Doc

6. Voice model selector

We’ve added new settings for voice model selection:

  • Turbo v2.5: Fast multilingual model
  • Turbo v2: Fast English-only model with pronunciation tag support
  • Multilingual v2: Rich emotional expression with a nice accent.

1. Audio Infrastructure Update

We have upgraded our audio infrastructure to WebRTC, moving away from the original websocket-based system. This change ensures better scalability and reliability:

  • Web Calls: All web calls are now on WebRTC.
  • Phone Calls: Migration to WebRTC is in progress, pending resolution of some SIP blockers.

2. Call API V2

We've introduced Call API V2, which separates phone call and web call objects and includes a few field and API changes:

See Doc

3. Concurrency Enhancements

  • Default Limits: The default concurrency limit for all users has been increased to 20.
  • Concurrency API: A new API to check your current concurrency and limit is now available here.

4. Separation of Inbound and Outbound

  • Agent Separation: Our APIs now support separate inbound and outbound agents, with the option to disable either as needed.
  • Nickname Field: Easily find specific numbers with the addition of a nickname field for better organization.

5. Bug Fix and Reliability Improvement

  • Enhanced all modules with a smarter retry and failover mechanism.
  • Resolved issues with audio choppiness and looping.
  • Corrected the display of function call results in the LLM playground.
  • Addressed the scrolling issue in the history tab.

6. Usage Limits

In response to abuse and misuse of our platform, we've added the following usage limits:

  • Scam Detection: Implemented to safeguard users.
  • Call Length Limit: Maximum of 1 hour.
  • Token Length Limit: Maximum of 8192 tokens for Retell LLM. For multi-state prompts, this includes the longest state plus the general prompt.
  • Please contact us if you need exceptions.
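The multi-state token rule above (longest state plus the general prompt) can be sketched as a quick pre-flight check. Token counts below are illustrative; use your model's tokenizer for real numbers:

```python
# The 8192-token Retell LLM limit for multi-state prompts counts the
# longest state prompt plus the general prompt.
TOKEN_LIMIT = 8192

def effective_prompt_tokens(general_tokens: int, state_tokens: list[int]) -> int:
    """Effective length = general prompt + the single longest state prompt."""
    longest_state = max(state_tokens) if state_tokens else 0
    return general_tokens + longest_state

def within_limit(general_tokens: int, state_tokens: list[int]) -> bool:
    return effective_prompt_tokens(general_tokens, state_tokens) <= TOKEN_LIMIT

ok = within_limit(3000, [2000, 4000, 1000])   # 3000 + 4000 = 7000 <= 8192
too_big = within_limit(5000, [2000, 4500])    # 5000 + 4500 = 9500 > 8192
```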

1. SOC 2 Type 1 Certification

We've obtained the Vanta SOC 2 Type 1 certification and are currently awaiting the SOC 2 Type 2 certification.

2. Debugging mode

Click on "Test LLM" to enter debugging mode. It works with both single prompts and stateful multi-prompt agents. Now, you can test the LLM without speaking. You can create, store, and edit the conversation.

Pro tip:
For multi-state prompt agents, you can change the starting point to a middle state and test from there.

3. TTS Fallback

Your stability is our top priority. We've added the capability to specify a fallback for TTS. In case of an outage with one provider, your agent can use another voice from a different provider.

4. GPT-4o and pricing

The OpenAI GPT-4o LLM is now available on Retell. The voice interface API has not been released yet, but we plan to integrate it as soon as it becomes available. Stay tuned!

GPT-4o is an optional model, priced at $0.10 per minute.

See Pricing

5. Add Pronunciation Input

You can now guide the model to pronounce a word, name, or phrase in a specific way. For example: "word": "actually", "alphabet": "ipa", "phoneme": "ˈæktʃuəli".

This feature is currently available only via the API but will soon be added to the dashboard.
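Using the field names shown above ("word", "alphabet", "phoneme"), a pronunciation entry could look like the following. The surrounding list shape is an assumption; see the API reference for the exact request format:

```python
# A pronunciation entry using the documented field names. The outer list
# shape is an assumption, not the confirmed request format.
pronunciation_dictionary = [
    {
        "word": "actually",          # the word to override
        "alphabet": "ipa",           # phonetic alphabet used
        "phoneme": "ˈæktʃuəli",      # transcription the voice should speak
    },
]
```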

See Doc

6. Normalize for speech option

Normalize parts of the text (numbers, currency, dates, etc.) to their spoken form for more consistent speech synthesis.

See Doc

7. End Call After User Silence

You can now end the call automatically if the user stays silent for a set period after the agent speaks.

The minimum value allowed is 10,000 ms (10 seconds). By default, this is set to 600,000 ms (10 minutes).
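The bounds quoted above (10,000 ms floor, 600,000 ms default) can be sketched as a small resolver. The helper name is illustrative, not an SDK function:

```python
# Sketch of the silence-timeout bounds: minimum 10,000 ms, default
# 600,000 ms (10 minutes).
MIN_SILENCE_MS = 10_000
DEFAULT_SILENCE_MS = 600_000

def resolve_silence_timeout(requested_ms):
    """Return the effective end-call silence timeout in milliseconds."""
    if requested_ms is None:
        return DEFAULT_SILENCE_MS
    # Values below the floor are clamped up to the minimum.
    return max(requested_ms, MIN_SILENCE_MS)

timeout = resolve_silence_timeout(5_000)  # below the floor, clamped up
```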

See Doc

8. Miscellaneous Updates

  1. Voice Lists API: You can now get available voices via API. API References
  2. Ambient Sound: New ambient sound for the call center and ambient sound volume control.
  3. Asterisks Fix: We noticed that OpenAI models recently started generating asterisks, causing some problems. We have applied a fix to stop this.
  4. SDK Updated

Article

Retell AI lets companies build ‘voice agents’ to answer phone calls (TechCrunch)

1. Enhanced Call Monitoring

Call Analysis:  We've introduced metrics like Call Completion Status, Task Completion Status, User Sentiment, Average End-to-End Latency, and Network Latency for comprehensive monitoring. You can access these directly on the dashboard or through API.

Disconnection Reason Tracking: Get insights into call issues with the addition of "Disconnection Reason" in the dashboard and "get-call" object. For more details, refer to our Error Code Table.

Function Call Tracking: Transcripts now include function call results, offering a seamless view of when functions were triggered and what outcomes they produced. Available in the dashboard and get-call API. Custom LLM users can use the tool call invocation and tool call result events to pass function calling results to us, so the results are woven into the transcript and visible in the dashboard when a function is triggered.

2. New Features

Reminder Settings: You can now configure reminder settings to define the duration of silence before an agent follows up with a response. Learn more.

Backchanneling: Backchannel is the ability for the agent to make small noises like “uh-huh”, “I see”, etc. during user speech, to improve engagement of the call. You can set whether to enable it, how often it triggers, what words are used. Learn more.
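As a sketch of the three backchannel knobs described above (on/off, trigger frequency, word list), the settings might look like the following. The field names are assumptions; check the linked docs for the exact parameters:

```python
# Hypothetical backchannel settings — field names are assumptions, not
# confirmed API parameters.
backchannel_settings = {
    "enable_backchannel": True,                  # turn the feature on
    "backchannel_frequency": 0.8,                # relative trigger frequency
    "backchannel_words": ["uh-huh", "I see"],    # short acknowledgements
}
```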

“Read Numbers Slowly”: Optimize the reading of numbers (or anything else) by making sure it is read slowly and clearly. How to Read Slowly.

Metadata Event for Custom LLM: Pass data from your backend to the frontend during a call with the new metadata event. See API reference.

3. Major Upgrade to Python Custom LLM (Important)

Improved async OpenAI performance for better latency and stability. Highly recommended for existing Python Custom LLM users to upgrade to the latest version.

See Doc

4. Webhook Security

Improved webhook security with the signature "verify" function in the new SDK. Find a code example in the custom LLM demo repositories and in the documentation.

Additionally, the webhook includes a temporary recording for users who opt out of storage; please note that this recording will expire in 10 minutes.
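For illustration, webhook signature checks typically follow the HMAC pattern below. This is a generic sketch, not necessarily Retell's exact algorithm; prefer the SDK's built-in "verify" function in real integrations:

```python
import hashlib
import hmac

# Generic HMAC-SHA256 webhook verification (illustrative; use the SDK's
# verify helper for the real signature scheme).
def verify_webhook(raw_body: bytes, signature_hex: str, api_key: str) -> bool:
    expected = hmac.new(api_key.encode(), raw_body, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, signature_hex)

body = b'{"event": "call_ended"}'
good_sig = hmac.new(b"secret", body, hashlib.sha256).hexdigest()
valid = verify_webhook(body, good_sig, "secret")
invalid = verify_webhook(body, "deadbeef", "secret")
```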

See Doc

This week’s video

We got a shout-out in the latest episode of Y Combinator’s podcast, Lightcone.

1. Retell LLM Updates

LLM Model Options: Choose between GPT-3.5-turbo and GPT-4-turbo, with additional models coming soon. Available through both our API and dashboard.

Interruption Sensitivity Slider: Adjust how easily users can interrupt the agent. This feature is now accessible in our API and dashboard.

2. Pricing Updates

We've updated our pricing structure to be clearer and more modular.

Conversation voice engine API

- With OpenAI / Deepgram voices ($0.08/min)

- With Elevenlabs voices ($0.10/min)

LLM Agent

- Retell LLM - GPT-3.5 ($0.02/min)

- Retell LLM - GPT-4 ($0.20/min)

- Custom LLM (No charge)

Telephony

- Retell Twilio ($0.01/min)

- Custom Twilio (No charge)

See Detail

3. Monitoring & Debugging Tools

Dashboard Updates: The history tab now includes a public log, essential for debugging and understanding your agent's current state, tool interactions, and more.

Enhanced API Responses: Our get-call API now provides latency tracking for LLM and websocket roundtrip times.

See Doc

4. Security Features

Ensure the authenticity of requests with our new IP verification feature. Authorized Retell server IPs are: 13.248.202.14, 3.33.169.178.
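The allowlist check can be as simple as the sketch below, using the two server IPs listed above. How you obtain the client IP (e.g. from your proxy's forwarding headers) depends on your stack:

```python
# Allowlist check against the published Retell server IPs.
AUTHORIZED_RETELL_IPS = {"13.248.202.14", "3.33.169.178"}

def is_from_retell(client_ip: str) -> bool:
    """Return True only if the request originated from a Retell server IP."""
    return client_ip in AUTHORIZED_RETELL_IPS

trusted = is_from_retell("13.248.202.14")
untrusted = is_from_retell("203.0.113.7")  # RFC 5737 example address
```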

5. Other Improvements

Enhancements for Custom LLM Users

  • You can now turn off interruption for each response (no_interruption_allowed in doc)
  • Ability to let agent interrupt / speak when no response is needed (doc)
  • Config event to control whether to enable reconnection, and whether to send call details at the beginning of the call
  • New ping-pong events and a reconnection mechanism in the LLM websocket: reconnects the websocket if the connection is lost, and also tracks server roundtrip latency (available in the get-call API)

Web Call Frontend Upgrades

  • Frontend SDK now contains a lot more events that are helpful for animation
    • "audio": real time audio played in the system
    • "agentStartTalking", "agentStopTalking": track whether agent is speaking, not applicable when ambient sound is used
  • "enable_audio_alignment" option to get audio buffer and text alignment in frontend. Not supported in frontend SDK.

SDK improvement: Our updated SDK maintains backward compatibility, ensuring smooth transitions and consistent performance.

🌟 This week’s Demo: Introducing Retell LLM

1️⃣ Retell LLM (Beta)

Low Latency, Conversational LLM with Reliable Function Calls

Experience lightning-fast voice AI with an average end-to-end latency of just 800ms with our LLM, mirroring the performance featured in the South Bay Dental Office demo on our website. Our LLM has been fine-tuned for conciseness and a conversational tone, making it perfect for voice-based interactions. It is also engineered to reliably initiate function calls.

Single-Prompt vs. Stateful Multi-Prompt Agents

We provide two options for creating an agent. The Single-Prompt Agent is ideal for straightforward tasks that require a brief input. For scenarios where the agent's prompt is lengthy and the tasks are too complex for a single input to be effective, the Stateful Multi-Prompt Agent is recommended. This approach divides the prompt into various states, each with its own prompt, linked by conditional edges.

User-Friendly UI for Agent Creation and API for Programmatic Agent Creation

Our dashboard allows you to quickly create an LLM agent using prompts and the drag-and-drop functionality for stateful multi-prompt agents. You can seamlessly build, test, and deploy agents into production using our dashboard or achieve the same programmatically via our API.

Pre-defined Tool Calling Abilities such as Call Transfer, Ending Calls, and Appointment Booking

Leverage our pre-defined tool calling capabilities, including ending calls, transferring calls, checking calendar availability (via Cal.com), and booking appointments (via Cal.com), to easily build real-world actions. We also offer support for custom tools for more tailored actions.

Maintaining Continuous Interaction During Actions That Take Longer

To address delays in actions that require more time to complete, you can activate this feature. It enables the agent to maintain a conversation with the user throughout the duration of the function call. This ensures the voice AI agent keeps the interaction smooth and avoids awkward silences, even when function calls take longer.

2️⃣ SDK Update v3.4.0 Announcement

Please note, the previous SDK version will be phased out in 60 days. We encourage you to transition to the latest SDK version.

3️⃣ Status Page

Stay informed with system status on our new status page.

Status Page

4️⃣ Public Log in get-call API

To streamline your troubleshooting process, we've introduced a public log within our get-call API. This new feature aids in quicker issue resolution and smoother integration, detailed further at the link below.

See doc

1️⃣ More Affordable Premium Voices

Thanks to recent cost reductions in our premium voice service, we're passing the savings on to our customers: the premium voice service is now just $0.12 per minute, down from $0.17. Enterprise pricing will see similar reductions (please contact us at founders@retellai.com for more information).

Please note: The adjusted pricing will take effect from March 1st, and billing will be charged at the end of this month.

2️⃣ Customizable Dashboard Settings

Gain more control over your voice output with new dashboard settings.

  • Ambient Sound
  • Responsiveness
  • Voice Speed
  • Voice Temperature
  • Backchanneling

Tailor your voice interactions to suit your precise needs and preferences for a truly personalized experience.

3️⃣ Secure Webhooks for Enhanced Security

Boost your communication security with our new webhook signatures. This feature enables you to confirm that any received webhook genuinely comes from Retell, providing an additional layer of protection.

See doc

4️⃣ Launch of Multilingual Support

We're excited to announce the launch of our multilingual version, now supporting German, Spanish, Hindi, Portuguese, and Japanese. Access and set your preferred language through our dashboard.

While this feature is currently available via API, we're working on extending support to our SDKs shortly.

See doc

5️⃣ Opt-Out of Transcripts and Recordings Storage

Based on user feedback, we've introduced an opt-out option for storing transcripts and recordings. This feature, available in our API and the Playground, gives you more control over your data and privacy.

Dear Retell Community,

We are excited to share several updates and new features with you. Our goal is to continually improve our offerings to better meet your needs. Here's what's new:

1️⃣ Enterprise Plan and Discounts Now Available

We're excited to announce the availability of our discounted enterprise tiered pricing. For more information on that, please contact our team at founders@retellai.com.

2️⃣ Enhanced Conversation with Lower Latency

We've launched improvements to further reduce latency (by approximately 30%). Try our demo on the website again and experience the magical speed.

3️⃣ New Agent Control Parameters

We've introduced additional control parameters for agents for greater customization and control. Including:

  • Responsiveness: Adjust how responsive your agent is to utterances.
  • Voice Speed: Control the speech rate of your agent to be faster or slower.
  • Boost Keywords: Prioritize specific keywords for speech recognition.

These parameters have been added to our API. Documentation is being updated, and we are also working on incorporating these features into the SDKs. For more details, visit Create Agent API Reference.

4️⃣ New Call Control Parameter: - end_call_after_silence_ms

This parameter enables the automatic termination of calls following a specified duration of user inactivity. It's designed to streamline operations and improve efficiency.

5️⃣ Word-Level Timestamps in Transcripts

To enhance the utility of our transcripts, we are now including word-level timestamps. This feature is pending documentation updates, so stay tuned for more information at Audio WebSocket API Reference.

6️⃣ [Auto-reconnection] Web Call Updates - Client JS SDK 1.3.0

For users utilizing web calls, our latest client JavaScript SDK (version 1.3.0) now supports auto-reconnection of the socket in case of network disconnections. This ensures a more reliable and uninterrupted service.

We are dedicated to providing you with the best possible service and experience.

We welcome your feedback and are here to support you in making the most out of these new features.

Best regards,

Retell AI team 💛

1️⃣ Domain changed

Please note that our domain has changed. Make sure to update your bookmarks and records to stay connected with us seamlessly.

2️⃣ New TTS provider: Deepgram

We've introduced Deepgram as our new TTS provider. Explore it on the dashboard and discover your favorite voice! The price is still $0.10/minute ($6/hour).

Also, we've added more voice choices from 11labs, ensuring more stable and diverse voice options for your projects.

3️⃣ New Control Parameters: Voice Temperature

Gain control over the stability and variability of your voice output, allowing for more tailored and dynamic audio experiences.

See doc

4️⃣ New Agent ability: Back channeling

Enhance interactions with the ability for the agent to backchannel, using phrases like "yeah" and "uh-huh" to express interest and engagement during conversations.

See doc

5️⃣ Python Backend Demo Now in FastAPI

By popular demand, our Python backend demo has transitioned to FastAPI. It includes Twilio integration and a simple function calling example, providing a more robust and user-friendly experience.

See demo

6️⃣ New Version of Web Frontend SDK

Our updated web frontend SDK makes integration easier and improves performance, allowing you to access live transcripts directly on your web frontend.

See SDK

7️⃣ Improved Performance in Noisy Environments

Our product now offers improved performance even in noisy settings, ensuring your voice interactions remain clear and uninterrupted.

1️⃣ New pricing tier released

Dear Retell Community,

We are thrilled to announce a new and significantly more affordable pricing tier featuring OpenAI's TTS. Effective immediately, you can take advantage of our state-of-the-art voice conversation API with OpenAI TTS at the new rate of $0.10 per minute.

This adjustment reflects our commitment to providing you with exceptional value and enhancing your voice interaction experience.

We believe this new pricing will make our product more accessible and allow you to leverage our technology for a wider range of applications.

2️⃣ SDK updated

We've updated our SDK; update your Retell SDK to stay current.
- https://www.npmjs.com/package/retell-sdk
- https://pypi.org/project/retell-sdk/

We added a frontend js SDK to abstract away the details of capturing mic and setting up playback.
- https://www.npmjs.com/package/retell-client-js-sdk

We've updated our documentation at https://docs.re-tell.ai/guide/intro to help people integrate.

See SDK documentation

3️⃣ Open-source demo repo

We've open-sourced the LLM and Twilio code that powers our dashboard as demos:

Node.js demo:

https://github.com/adam-team/retell-backend-node-demo

Python demo:

GitHub - adam-team/python-backend-demo

We open sourced the web frontend demo:

React demo using SDK :

GitHub - adam-team/retell-frontend-reactjs-demo

React demo using native JS:

GitHub - adam-team/retell-frontend-reactjs-native-demo

See Opensource demo repo doc

API Changes & New Features

Dear Retell Community,

In our quest to deliver a human-level conversation experience, we've made a strategic decision to refocus our efforts on voice conversation quality, while scaling back on certain other nice-to-haves. The current API will be phased out after this Wednesday at 12:00 PM. We warmly invite you to adopt our new API, designed to continue providing you with a magical AI conversation experience long-term.

🌟 Key Changes:

  • LLM Open Sourced: Our LLM will no longer be included in the API. Instead, use the "Custom LLM" feature to integrate your own LLM into the conversation pipeline. Our LLM will remain accessible on the dashboard for demo purposes.
  • Twilio and Phone Call Features Open Sourced: These features are removed from the API but remain accessible on the dashboard for demo purposes.
  • Custom LLM Integration: Our API now exclusively supports the integration of your own LLM via a websocket, requiring a specified websocket URL for agent creation.
  • SDK Updates: We're updating our Node.js SDK to align with these changes, with the Python SDK update to follow soon.

🌟 New Features:

  • LIVE Transcript Feature: Leverage real-time transcription for more informed LLM responses.
  • Open Sourced Repositories: Gain more customizability with our open-sourced LLM voice agent implementation and Twilio and phone call features.
  • Reduced Pricing: Enjoy our service at a 15% discount, now priced at $0.17 per minute.

We understand that this transition may require adjustments in your current setup, and we are here to support you through this change. Please feel free to reach out to us for any assistance or further information regarding the new API.

Book a meeting with founders

Thank you for your understanding and continued support.

Best regards,

Retell AI Team 💛

Thank you for your understanding and continued support.

Best regards,

Retell AI Team 💛