The UK’s national standards body is to unveil a new standard for companies that independently audit artificial intelligence tools in a bid to weed out rogue players in the growing market in AI verification.
“Hundreds” of “unchecked” groups offer audits, claiming to assess whether companies using AI models such as those in self-driving cars and cancer-detecting programs do so reliably, fairly and safely, the British Standards Institution warned on Monday.
Many of the groups that sell AI audits also develop their own AI technologies, “raising concerns about independence and rigour”, the BSI told the Financial Times.
The BSI said the standard, due to be launched on July 31, was the first international set of requirements to standardise how assurance firms check whether companies are following AI management standards.
Demand for new AI assurance services is feeding off businesses’ concerns about the array of potential harms AI can cause, as well as the need to comply with international regulations, such as the EU AI Act.
AI assurance is increasingly seen as crucial for businesses wanting to adopt the technology. Boutique companies have sprung up to take advantage of the demand, racing against larger contenders, including the Big Four accountancy firms.
The fledgling AI assurance market generates gross value added of more than £1bn in the UK, but the government has warned of a lack of standardisation in the sector. Some “assurance” can be light-touch advice, or limited to checking whether the AI complies with one particular piece of legislation.
Mark Thirlwell, global digital director at the BSI, said: “There is a risk of a ‘wild west’ of unchecked providers and the potential for radically different levels of assessment.
“Businesses need to be sure that when their AI management system is being assessed, it is being done in a robust, coherent and consistent manner.”
The standard would help regulators, customers and investors distinguish AI that has been assured by a certified assurance provider from AI that has not, he added, “supporting responsible AI innovation”.
Philip Dawson, head of AI policy at AI assurance and insurance company Armilla AI, said the BSI standard would raise the bar for companies such as his own that provide AI assessments, as well as for other certification bodies.
It was a “pivotal step forward for the AI assurance ecosystem”, he added, because it clarified which companies were qualified to certify AI systems against ISO standards.
Many assurance firms used proprietary systems to audit AI, said Inioluwa Deborah Raji, a researcher at UC Berkeley who specialises in AI audits and evaluations.
While companies “will pay to evaluate against [proprietary] standards”, she warned that “we do not necessarily have any way of externally vetting the quality of these standards.
“We don’t really know how inclusive or comprehensive those proprietary standards are.”