NIST releases a tool for testing AI model risk

The National Institute of Standards and Technology (NIST), the U.S. Commerce Department agency that develops and tests tech for the U.S. government, companies and the broader public, has re-released a testbed designed to measure how malicious attacks — particularly attacks that “poison” AI model training data — might degrade the performance of an AI system.

Called Dioptra (after the classical astronomical and surveying instrument), the modular, open source web-based tool, first released in 2022, seeks to help companies training AI models — and the people using these models — assess, analyze and track AI risks. Dioptra can be used to benchmark and research models, NIST says, as well as to provide a common platform for exposing models to simulated threats in a “red-teaming” environment.

“Testing the effects of adversarial attacks on machine learning models is one of the goals of Dioptra,” NIST wrote in a press release. “The open source software, available for free download, could help the community, including government agencies and small to medium-sized businesses, conduct evaluations to assess AI developers’ claims about their systems’ performance.”

A screenshot of Dioptra’s interface. Image Credits: NIST

Dioptra debuted alongside documents from NIST and NIST’s recently created AI Safety Institute that lay out ways to mitigate some of the dangers of AI, like how it can be abused to generate nonconsensual pornography. It follows the launch of the U.K. AI Safety Institute’s Inspect, a toolset similarly aimed at assessing the capabilities of models and overall model safety. The U.S. and U.K. have an ongoing partnership to jointly develop advanced AI model testing, announced at the U.K.’s AI Safety Summit in Bletchley Park in November 2023.

Dioptra is also the product of President Joe Biden’s executive order (EO) on AI, which mandates (among other things) that NIST help with AI system testing. The EO, relatedly, also establishes standards for AI safety and security, including requirements for companies developing models (e.g. Apple) to notify the federal government and share results of all safety tests before they’re deployed to the public.

As we’ve written about before, AI benchmarks are hard — not least because the most sophisticated AI models today are black boxes whose infrastructure, training data and other key details are kept under wraps by the companies creating them. A report out this month from the Ada Lovelace Institute, a U.K.-based nonprofit research institute that studies AI, found that evaluations alone aren’t sufficient to determine the real-world safety of an AI model, in part because current policies allow AI vendors to selectively choose which evaluations to conduct.

NIST doesn’t assert that Dioptra can completely de-risk models. But the agency does propose that Dioptra can shed light on which sorts of attacks might make an AI system perform less effectively, and quantify this impact on performance.
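To make the idea of "quantifying the impact" of a poisoning attack concrete: Dioptra itself is a full web-based testbed, but the measurement it automates can be illustrated with a toy sketch. The following self-contained Python example (it does not use Dioptra or its API — the classifier, data and attack here are all invented for illustration) injects a handful of mislabeled training points into a simple nearest-centroid classifier and compares test accuracy before and after:

```python
import random

random.seed(0)

def centroids(data):
    """Per-class mean of (value, label) training pairs."""
    sums, counts = {}, {}
    for x, y in data:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def accuracy(cents, data):
    """Fraction of points assigned to the class with the nearest centroid."""
    correct = sum(
        1 for x, y in data
        if min(cents, key=lambda c: abs(x - cents[c])) == y
    )
    return correct / len(data)

# Two well-separated classes: class 0 near 0.0, class 1 near 5.0.
train = [(random.gauss(0, 1), 0) for _ in range(200)] + \
        [(random.gauss(5, 1), 1) for _ in range(200)]
test  = [(random.gauss(0, 1), 0) for _ in range(200)] + \
        [(random.gauss(5, 1), 1) for _ in range(200)]

clean_acc = accuracy(centroids(train), test)

# Poisoning attack: inject a few far-off points mislabeled as class 0,
# dragging that class's centroid toward class 1 and shifting the
# decision boundary.
poison = [(20.0, 0)] * 40
poisoned_acc = accuracy(centroids(train + poison), test)

print(f"clean accuracy:    {clean_acc:.2f}")
print(f"poisoned accuracy: {poisoned_acc:.2f}")
```

Just 40 poisoned points out of 440 measurably lower test accuracy — the gap between the two printed numbers is exactly the kind of quantified degradation a testbed like Dioptra is designed to surface, though against far more realistic models and attack recipes.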

In a major limitation, however, Dioptra only works out-of-the-box on models that can be downloaded and used locally, like Meta’s expanding Llama family. Models gated behind an API, such as OpenAI’s GPT-4o, are a no-go — at least for the time being.
