Building tools for verifying information consumed by AI agents
We are building an AI agent for verifying user credentials.
Users log in to a website, and the AI agent looks at the page and identifies the required information on it. If the required information is on a different page, the agent detects that and prompts the user to navigate to that page.
For example, if a user wants to prove that they study at Stanford, we'd ask them to log in to stanford.edu. Once they log in, the AI agent will identify their name, field of study, and graduation year from their profile page.
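To make that flow concrete, here is a minimal sketch of the per-page step, assuming a single extraction call per page. Every name in it (`extractFromPage`, `StepResult`, the field names) is a hypothetical illustration, not our actual implementation.

```typescript
// Illustrative sketch of the per-page agent step; all names here
// are hypothetical, not our real API.

type StepResult =
  | { status: "found"; fields: Record<string, string> }
  | { status: "navigate"; userPrompt: string };

// Stand-in for the model call that reads the current page and either
// extracts the required fields or tells the user where to go next.
async function extractFromPage(
  pageText: string,
  requiredFields: string[]
): Promise<StepResult> {
  // A real system would send the page content plus the required-field
  // list to an LLM; stubbed here for illustration.
  if (!pageText.includes("Graduation Year")) {
    return { status: "navigate", userPrompt: "Please open your profile page." };
  }
  return {
    status: "found",
    fields: { name: "Jane Doe", fieldOfStudy: "CS", graduationYear: "2026" },
  };
}

async function runStep(pageText: string): Promise<void> {
  const result = await extractFromPage(pageText, [
    "name",
    "fieldOfStudy",
    "graduationYear",
  ]);
  if (result.status === "found") {
    console.log("Extracted:", result.fields); // ready to attach to the proof
  } else {
    console.log("Prompt user:", result.userPrompt); // wait for the next page load
  }
}
```

The key design point is that each step returns one of exactly two outcomes: the fields were found, or the user needs to navigate somewhere else.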
We support over 50k websites from which users can verify their credentials: university portals, employment portals, HR/payroll systems, government ID portals, e-commerce websites, fintech apps, etc.
This is a hard technical AI problem, and one we have built expertise in. But the real test comes when the rubber hits the road: does the AI perform as expected across the long tail of websites we support? To build confidence, we need to keep testing and improving our AI. That means we need real users to test our product by logging into the websites we support, trying to generate a proof, and reporting any issues they encounter. We want to do this proactively, so that issues are identified before a user experiences them in prod.
For that reason, we're starting a new team to annotate the performance of our AI. It is an ongoing, ever-changing role: the types of annotations we need, and the demographics we need them from, will keep changing. Today, we might need thousands of students to log in and test our university-page AI agents. Tomorrow, we might need PhD candidates to review our AI's intermediate reasoning steps.
This role exists to make sure the AI we're building is continuously benchmarked on real-world use, and to help improve the agent.
We build tools that allow enterprises to verify information about a user before processing it - whether it is processed by a human or an AI agent.
Traditionally, these verifications - including but not limited to background checks - cost ~$10 each. Using our advancements in AI and cryptography, we're able to bring that cost down to a few cents.
This lets enterprises do verification at scale, earlier in their workflows, and enables new kinds of businesses that previously skipped these verifications because of prohibitively high costs.
For example, it makes no sense that a recruiter runs a candidate's document verification and background checks AFTER the candidate has cleared all the interviews. They do it that way because verification is cumbersome and expensive: they can't afford to verify every candidate at the top of the funnel. Many such workflows are ripe for disruption.
Another example of a product that was not possible before: a dating app uses our tools to verify users' information, including their employment, education, and financial background. This would have been too expensive at a $10 price point when the ARPU is only $8.
Not only that, we're designing the tools from the ground up so that AI agents can use these verifications before ingesting information, ensuring they act on truthful information.
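As a rough sketch of what that looks like from the agent's side: a verification check gates the data before it ever enters the agent's context. The `Proof` shape and `verifyProof` below are hypothetical stand-ins under that assumption, not our actual API.

```typescript
// Hypothetical sketch of verification-gated ingestion. The Proof shape
// and verifyProof are illustrative placeholders, not a real API.

interface Proof {
  claim: Record<string, string>; // e.g. { employer: "Acme", title: "Engineer" }
  signature: string;             // cryptographic attestation over the claim
}

// Stand-in for the cryptographic check that the claim was attested by
// the expected source and has not been tampered with.
async function verifyProof(proof: Proof): Promise<boolean> {
  return proof.signature.length > 0; // placeholder, not a real verification
}

// The gate: unverified data never reaches the agent's context.
async function ingestIfVerified(proof: Proof): Promise<void> {
  if (!(await verifyProof(proof))) {
    throw new Error("Rejected: claim could not be verified");
  }
  // Safe to act on: the agent now reasons over attested facts only.
  console.log("Ingesting verified claim:", proof.claim);
}
```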
We have 3 tools: