A recent study by the Rand Corporation (a US-based research institute)hhas discovered that artificial intelligence (AI) models, which form the backbone of modern chatbots, could potentially be used to plan a biological weapons attack.
These AI models, known as large language models (LLMs), are trained on extensive internet data and are the driving force behind chatbots like ChatGPT. The researchers found that these LLMs could provide guidance that could be used in planning and executing a biological attack. However, it’s important to note that these models did not generate explicit instructions for creating biological weapons.
The report shed light on the fact that previous attempts to weaponize biological agents have been unsuccessful due to a lack of understanding about bacteria. AI has the potential to bridge this knowledge gap swiftly.
In one of the test scenarios, an LLM identified potential biological agents, including those causing smallpox, anthrax, and plague, and discussed their relative chances of causing mass death.
The LLM also evaluated the possibility of obtaining plague-infested rodents or fleas and transporting live specimens. It further mentioned that the scale of projected deaths depended on factors such as the size of the affected population and the proportion of cases of pneumonic plague, which is deadlier than bubonic plague.
With AI becoming increasingly prevalent in technology, there is growing concern about its potential misuse or overuse. To address these threats related to AI and to make the technological space safer, an AI safety summit will be held in the UK next month.
In July, Dario Amodei, CEO of AI firm Anthropic, warned that AI systems could help create bioweapons in two to three years’ time. This revelation by a US think-tank ahead of the summit has raised concerns about advanced AI-based chatbots’ potential role in planning an attack with a biological weapon.
The researchers accessed the models through an application programming interface (API), but did not specify which LLMs were tested. The findings from this study underscore the need for careful consideration and regulation of AI technologies to prevent their misuse.
Image Credits: Unsplash