
Cekura (formerly Vocera)

Testing & Observability for AI voice agents

Are you building AI voice agents such as receptionists, customer support reps, or sales reps? Do you find yourself manually testing your bot by calling it? We faced the same challenges, especially working in a regulated industry. That's why we built Cekura - a solution that automates testing for your voice agents. With Cekura, you can prove your reliability before going live, test every update seamlessly, and scale your operations efficiently. Replicating a real-world conversation is hard, so our AI simulates these scenarios using workflows, personas, and past conversations. We already talk to AI to order food, book appointments, and even do interviews. The market is rapidly filling with AI voice agents built by thousands of companies. We make them dependable.
Active Founders
Tarush Agarwal, Founder
Shashij Gupta, Founder
Sidhant Kabra, Founder
Jobs at Cekura (formerly Vocera)
IN / Remote (IN) · $10K - $25K · 1+ years
Cekura (formerly Vocera)
Founded: 2024
Batch: F24
Team Size: 5
Status: Active
Location: San Francisco
Primary Partner: Nicolas Dessaigne
Company Launches
Cekura: Automated Testing & Monitoring for Voice and Chat AI Agents

Hi Everyone! We’re Sidhant, Shashij, and Tarush, co-founders of Cekura👋

TL;DR: Cekura helps companies ship and scale reliable voice & chat AI agents by providing end-to-end testing and observability.

Watch our demo video here

Problem: Making conversational AI agents reliable is hard. Manually calling your agents or listening through thousands of calls is slow, error-prone, and does not provide the coverage you need.

Our Solution: At Cekura, we work closely with you at each step of the agent-building journey and help you improve and scale your agents 10 times faster.

Key Features:

Testing:

  • Scenario Generation: Automatically create varied test cases from your agent's description for comprehensive coverage (a simplified sketch follows this list).
  • Evaluation Metrics: Track custom and AI-generated metrics. Check for instruction following, tool calls, and conversational metrics (interruptions, latency, etc.).
  • Prompt Recommendation: Get actionable insights to improve each of the metrics.
  • Custom Personas: Emulate diverse user types with varied accents, background noise, and conversational styles.
  • Production Call Simulation: Simulate production calls to ensure all the fixes have been incorporated.
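
To make the testing flow concrete, here is a minimal, hypothetical sketch of scenario generation and metric evaluation. None of the names below come from Cekura's SDK; the personas, the fake_agent stub, and the metric checks are illustrative assumptions only.

# Hypothetical sketch only: these classes and functions are illustrative
# assumptions, not Cekura's actual SDK.
import random
import time
from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    accent: str
    style: str          # e.g. "terse", "chatty", "interrupts often"

@dataclass
class Scenario:
    persona: Persona
    goal: str

def generate_scenarios(agent_description: str, n: int = 3) -> list[Scenario]:
    """Derive varied test scenarios from a plain-language agent description.
    A real system would feed the description to an LLM; here goals are hard-coded."""
    personas = [
        Persona("Alex", "US", "terse"),
        Persona("Priya", "Indian", "chatty"),
        Persona("Marta", "Spanish", "interrupts often"),
    ]
    goals = ["book an appointment", "ask about pricing", "cancel an order"]
    return [Scenario(random.choice(personas), goal) for goal in goals[:n]]

def fake_agent(utterance: str) -> str:
    """Stand-in for the agent under test (normally an API call or a phone call)."""
    time.sleep(0.05)    # pretend model/telephony latency
    return f"Sure, I can help you {utterance}."

def run_scenario(scenario: Scenario) -> dict:
    """Simulate one conversation turn and collect simple evaluation metrics."""
    start = time.perf_counter()
    reply = fake_agent(scenario.goal)
    latency_ms = (time.perf_counter() - start) * 1000
    followed = scenario.goal.split()[0] in reply.lower()   # crude instruction check
    return {
        "persona": scenario.persona.name,
        "goal": scenario.goal,
        "latency_ms": round(latency_ms, 1),
        "followed_instructions": followed,
    }

if __name__ == "__main__":
    for scenario in generate_scenarios("A receptionist agent that books appointments"):
        print(run_scenario(scenario))

In practice the fake_agent stub would be replaced by a real call to the agent under test (an API endpoint or an outbound phone call), and the metric checks would be far richer than a single keyword match.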

Observability:

  • Conversational Analytics: Track customer sentiment, interruptions, latency, and call analytics such as ringing duration, success rate, and call volume trends.
  • Instruction Following: Identify instances where agents fail to follow instructions.
  • Drop-off Tracking: Analyze when and why users abandon calls, highlighting areas for improvement.
  • Custom Metrics: Define unique metrics for personalized call analysis.
  • Alerting: Get proactive notifications of critical issues like latency spikes or missed function calls (a simplified sketch follows this list).
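
Likewise, here is a minimal, hypothetical sketch of the observability side: aggregating call analytics and flagging latency spikes. The call records, field names, and the 2-second alert threshold are assumptions for illustration, not Cekura's actual data model or API.

# Hypothetical sketch only: the call records, field names, and threshold
# below are illustrative assumptions, not Cekura's data model.
from statistics import mean

calls = [
    {"id": "c1", "latency_ms": 820,  "interruptions": 1, "completed": True,  "sentiment": 0.7},
    {"id": "c2", "latency_ms": 2400, "interruptions": 4, "completed": False, "sentiment": -0.2},
    {"id": "c3", "latency_ms": 950,  "interruptions": 0, "completed": True,  "sentiment": 0.5},
]

LATENCY_ALERT_MS = 2000  # assumed alerting threshold

def summarize(calls: list[dict]) -> dict:
    """Aggregate basic conversational analytics across production calls."""
    return {
        "avg_latency_ms": round(mean(c["latency_ms"] for c in calls), 1),
        "success_rate": sum(c["completed"] for c in calls) / len(calls),
        "avg_sentiment": round(mean(c["sentiment"] for c in calls), 2),
        "drop_offs": [c["id"] for c in calls if not c["completed"]],
    }

def alerts(calls: list[dict]) -> list[str]:
    """Flag calls that breach the latency threshold so someone is notified."""
    return [c["id"] for c in calls if c["latency_ms"] > LATENCY_ALERT_MS]

if __name__ == "__main__":
    print(summarize(calls))
    print("latency alerts:", alerts(calls))

A real pipeline would stream call records continuously and route alerts to email, Slack, or an on-call system rather than printing them.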

Ready to make your Voice & Chat Agents Reliable?

The Team:

We met over eight years ago during our undergraduate studies at IIT Bombay.

Tarush comes from quantitative finance, where he worked on simulations for ultra-low latency trading strategies (think nanoseconds!).

Shashij previously did NLP research at Google Research and, during his time at ETH Zurich, was first author of a paper on reliably testing AI systems that has 50+ citations.

Sidhant comes from a consulting background, advising CEOs of Fortune 500 companies in FMCG and medical devices. He managed a P&L at a conversational AI company with 10M+ ARR.
