AI systems ‘subject to new types of vulnerabilities,’ British and US cyber agencies warn
2023-11-28 06:1:23 Author: therecord.media(查看原文) 阅读量:10 收藏

British and U.S. cybersecurity authorities published guidance on Monday about how to develop artificial intelligence systems in a way that will minimize the risks they face from mischief-makers through to state-sponsored hackers.

“AI systems are subject to new types of vulnerabilities,” the 20-page document warns — specifically referring to machine-learning tools. The new guidelines have been agreed upon by 18 countries, including the members of the G7, a group that does not include China or Russia.

The guidance classifies these vulnerabilities within three categories: those “affecting the model’s classification or regression performance”; those “allowing users to perform unauthorized actions”; and those involving users “extracting sensitive model information.”

The document sets out practical steps to “design, develop, deploy and operate” AI systems while minimizing the cybersecurity risk.

“We know that AI is developing at a phenomenal pace and there is a need for concerted international action, across governments and industry, to keep up,” said Lindy Cameron, chief executive of the U.K.’s National Cyber Security Centre (NCSC).

Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency (CISA), described the release of the guidelines as “a key milestone in our collective commitment — by governments across the world — to ensure the development and deployment of artificial intelligence capabilities that are secure by design.”

The NCSC in August warned about “prompt injection attacks” as an apparently fundamental security flaw affecting large language models (LLMs) — the type of machine learning used by ChatGPT to conduct human-like conversations.

“Research is suggesting that an LLM inherently cannot distinguish between an instruction and data provided to help complete the instruction,” the agency’s previous paper stated.

The new guidance focuses on addressing potential cybersecurity vulnerabilities arising directly from the use and integration of AI tools with other systems, rather than their misuse by bad actors.

Monday’s guidance sets out how developers can secure their systems by considering the cybersecurity risks specific to the technologies that make up AI, including by providing effective guardrails around the outputs these models generate.

Composed on the heels of the AI Safety Summit, the guidance was developed with input from the NCSC and CISA’s sister agencies in 17 other countries — from New Zealand to Norway and Nigeria — as well as over a dozen organizations currently developing the technology, including Microsoft, Google and OpenAI.

The NCSC wrote in a press release that “agencies from 17 other countries have confirmed they will endorse and co-seal the new guidelines” as a “testament to the UK’s leadership in AI safety.”

Jonathan Berry, the Viscount Camrose — an aristocrat who inherited his seat in Britain’s unelected House of Lords before being appointed as the Minister for AI and Intellectual Property by Prime Minister Rishi Sunak — described the guidance as “only the start of the journey to secure AI” during a launch event at NCSC’s headquarters on Monday.

Berry said the British government did not immediately plan to legislate to improve AI security. He said the Department for Science, Innovation and Technology (DSIT) was currently developing a “voluntary code of practice” regarding AI development that would subsequently be scrutinized by a public consultation, with the hope of one day establishing an international standard.

Get more insights with the

Recorded Future

Intelligence Cloud.

Learn more.

No previous article

No new articles

Alexander Martin is the UK Editor for Recorded Future News. He was previously a technology reporter for Sky News and is also a fellow at the European Cyber Conflict Research Initiative.


文章来源: https://therecord.media/ai-subject-to-new-vulnerabilities
如有侵权请联系:admin#unsafe.sh