Heidi Health is a Series A health tech startup based in Australia that builds an AI medical scribe for clinicians.

[Header graphic: "AI medical scribe for all clinicians"]

Problem

Founded in 2019, Heidi Health has always aimed to transform healthcare by making medical documentation effortless, so clinicians can spend more time focusing on patients and less on paperwork. Recent advances in LLMs have made that goal far more attainable.

Kieran McLeod, a medical doctor and prompt engineer at Heidi Health, is responsible for all things prompt engineering. Initially, Kieran and his team of Medical AI Residents stored their prompts in Google Sheets and tested them in Jupyter notebooks. As is a common story, the team quickly outgrew this setup: the workflow was clunky, and not everyone was comfortable working in notebooks.

Kieran then tried a prompt testing tool (not PromptHub) but quickly found it unreliable and awkward to use. This sent him and his team searching for a better solution for testing prompts.

They needed a stable product with a simple, intuitive UI that even their non-technical medical doctors would be comfortable using. They also needed a solution that would let them batch test prompts across different models, store test data for reuse, and easily tweak model parameters.

Solution

The shortcomings of their Google Sheets and Jupyter notebook setup had Kieran and the other Medical AI Residents looking for a solution with more advanced testing capabilities. That’s when they found PromptHub.

After a quick demo, it was clear to Kieran that PromptHub was a noticeable upgrade over the status quo. The UX was clean and simple enough for anyone to use, technical or not. Being able to seamlessly batch test prompts against different models helped Kieran understand the trade-offs between output quality, price, and latency.

[Testimonial graphic: quote from Kieran]

Additionally, the ability to store and reuse data in PromptHub via variables and datasets made it easy to ensure prompts worked at scale.

Lastly, testing how changes to parameters affected output quality enabled the team to get extremely granular with their evaluations.

This has all led to major efficiency gains and better outputs from LLMs, as Kieran notes:

[Testimonial graphic: a second quote from Kieran]

Siddharth Krishnakumar, a Medical AI Resident at Heidi Health, noted that the speed of the PromptHub team has been a huge win. New models are added to the platform as soon as they are released, so the Heidi Health team can test them right away, keeping the model selection process extremely fast.

[Testimonial graphic: quote from Siddharth]

Going forward

The team has seen great success with PromptHub, with their prompt testing speed increasing 3x. Their prompts are becoming better and more robust, leading to improved product experiences for their users.

Looking ahead, the team is excited to take advantage of more of PromptHub’s offerings and deepen its integration into their product. As they continue to scale, we’ll be there to support them every step of the way.

Dan Cleary
Founder