
A SaaS platform combining automatic and human evaluations for LLMs, with dashboards, real-time metrics, management of expert evaluators, and APIs for enterprise integration.

Scouted 5 hours ago

7.3 / 10
Overall score



Score breakdown

Urgency: 8.0
Market size: 8.0
Feasibility: 7.0
Competition: 6.0

Pain point

Automatic LLM evaluations fail to reach the 95% accuracy that enterprise customers require; expert human feedback is needed for customer-support use cases.

Who'd pay for this

Companies deploying AI chatbots for customer support, ML/AI teams at corporations, and providers of automated customer-support solutions.

Source signal

"Automatic Evals are not enough to get the required 95% accuracy for our Enterprise customers. Automatic Evals are efficient, but still often miss nuances that only human expertise can catch."

Original post

Show HN: Paramount – Human Evals of AI Customer Support

https://github.com/ask-fini/paramount

Hey HN, Hakim here from Fini (YC S22), a startup focused on providing automated customer support bots for enterprises that have a high volume of support requests.

Today, one of the largest use cases of LLMs is automating support. As the space has evolved over the past year, there has been a corresponding need for evaluations of LLM outputs, and a sea of LLM evals packages have been released. "LLM evals" refers to the evaluation of large language models: assessing how well these AI systems understand and generate human-like text. These packages have recently relied on "automatic evals," where algorithms (usually another LLM) automatically test and score AI responses without human intervention.

In our day-to-day work, we have found that automatic evals are not enough to reach the 95% accuracy our enterprise customers require. Automatic evals are efficient, but they still often miss nuances that only human expertise can catch. They can never replace the feedback of a trained human who is deeply knowledgeable about an organization's latest product releases, knowledge base, policies, and support issues. The key to solving this is to stop ignoring the business side of the problem and start involving knowledgeable experts in the evaluation process.

That is why we are releasing Paramount, an open-source package that incorporates human feedback directly into the evaluation process. By simplifying the step of gathering feedback, ML engineers can pinpoint and fix accuracy issues (prompts, knowledge-base problems) much faster. Paramount provides a framework for recording LLM function outputs (ground-truth data) and facilitates human-agent evaluations through a simple UI, reducing the time to identify and correct errors.

Developers can integrate Paramount with a Python decorator that logs LLM interactions into a database, followed by a straightforward UI for expert review. This process aids the debugging and validation phase of launching accurate support bots. We'd love to hear what you think!
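To make the decorator-based logging concrete, here is a minimal sketch of the pattern the post describes: wrap an LLM-calling function so that each call's inputs and output are recorded as a ground-truth record for later human review. The names (`record_llm_call`, `RECORDS`, `answer_ticket`, `human_verdict`) are illustrative assumptions, not Paramount's actual API, and an in-memory list stands in for the database.

```python
import functools
import time

# In-memory stand-in for the database a tool like Paramount would log to.
RECORDS = []

def record_llm_call(fn):
    """Log each call's arguments and output as a ground-truth record
    that a human expert can later review and score."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        output = fn(*args, **kwargs)
        RECORDS.append({
            "function": fn.__name__,
            "args": args,
            "kwargs": kwargs,
            "output": output,
            "timestamp": time.time(),
            "human_verdict": None,  # filled in later during expert review
        })
        return output
    return wrapper

@record_llm_call
def answer_ticket(question: str) -> str:
    # Placeholder for a real LLM call.
    return f"Thanks for reaching out about: {question}"

reply = answer_ticket("How do I reset my password?")
```

A review UI would then iterate over `RECORDS`, show each input/output pair to a knowledgeable agent, and store their verdict back into the record, turning raw logs into labeled evaluation data.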