Anthropic’s Claude adds a prompt playground to quickly improve your AI apps

Prompt engineering became a hot job last year in the AI industry, but it seems Anthropic is now developing tools to at least partially automate it.

Anthropic released several new features on Tuesday to help developers create more useful applications with the startup’s language model, Claude, according to a company blog post. Developers can now use Claude 3.5 Sonnet to generate, test and evaluate prompts, using prompt engineering techniques to create better inputs and improve Claude’s answers for specialized tasks.

Language models are fairly forgiving about how a task is phrased, but small changes to the wording of a prompt can sometimes lead to big improvements in the results. Normally you’d have to figure out that wording yourself, or hire a prompt engineer to do it, but this new feature offers quick feedback that could make finding those improvements easier.

The features are housed within Anthropic Console under a new Evaluate tab. Console is the startup’s test kitchen for developers, created to attract businesses looking to build products with Claude. One of its features, unveiled in May, is Anthropic’s built-in prompt generator, which takes a short description of a task and constructs a much longer, fleshed-out prompt using Anthropic’s own prompt engineering techniques. While these tools may not replace prompt engineers altogether, the company said they would help new users get started and save time for experienced prompt engineers.
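
The generator itself lives in the Console UI, but the same idea can be roughly sketched with the Anthropic Python SDK by asking Claude to expand a one-line task description into a fuller prompt template. The model string, task description, and instructions below are illustrative assumptions, not Anthropic’s own generator.

```python
# Rough approximation of the prompt-generator idea: ask Claude to expand a
# one-line task description into a fuller prompt template. This is not the
# Console's built-in generator, just an illustrative sketch.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

task = "Classify customer emails as billing, technical, or sales inquiries."

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # assumed model string
    max_tokens=500,
    messages=[{
        "role": "user",
        "content": (
            "Write a detailed, reusable prompt template for this task, "
            "including a role, an output format, and a few edge-case rules:\n\n"
            + task
        ),
    }],
)
print(response.content[0].text)
```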

Within Evaluate, developers can test how effective their AI application’s prompts are across a range of scenarios. They can upload real-world examples to a test suite or ask Claude to generate an array of test cases for them. Developers can then compare how effective various prompts are side by side and rate sample answers on a five-point scale.
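
The Evaluate tab handles this workflow in the Console UI. As a rough programmatic analogue, the sketch below runs two prompt variants over the same small test suite with the Anthropic Python SDK so their outputs can be compared side by side; the prompts, test cases, and model string are assumptions for illustration only.

```python
# Minimal sketch of the side-by-side idea behind the Evaluate tab:
# run two prompt variants over the same test cases and compare the outputs.
# The model name, prompt templates, and test cases are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

PROMPT_VARIANTS = {
    "v1_short": "Summarize the following support ticket:\n\n{ticket}",
    "v2_detailed": (
        "Summarize the following support ticket in 3-5 sentences, "
        "naming the product, the problem, and the requested action:\n\n{ticket}"
    ),
}

TEST_CASES = [
    "My Model X router drops Wi-Fi every hour; please send a replacement.",
    "Billing charged me twice for the June invoice; I want a refund.",
]

def run_prompt(template: str, ticket: str) -> str:
    """Fill the template with one test case and return Claude's answer."""
    response = client.messages.create(
        model="claude-3-5-sonnet-20240620",  # assumed model string
        max_tokens=300,
        messages=[{"role": "user", "content": template.format(ticket=ticket)}],
    )
    return response.content[0].text

# Print the variants side by side so a human can rate them (the Console
# uses a five-point scale; here the rating step is left to the reader).
for ticket in TEST_CASES:
    print(f"--- Test case: {ticket}")
    for name, template in PROMPT_VARIANTS.items():
        print(f"[{name}] {run_prompt(template, ticket)}\n")
```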

A prompt being fed generated data to find good and bad responses.
Image Credits: Anthropic

In an example from Anthropic’s blog post, a developer found that their application was giving answers that were too short across several test cases. The developer was able to tweak a line in their prompt to make the answers longer and apply that change to all of their test cases at once. That could save developers a lot of time and effort, especially those with little or no prompt engineering experience.

In an interview at Google Cloud Next earlier this year, Anthropic CEO and co-founder Dario Amodei said prompt engineering is one of the most important factors in widespread enterprise adoption of generative AI. “It sounds simple, but 30 minutes with a prompt engineer can often make an application work when it wasn’t before,” said Amodei.


