AI could accelerate scientific fraud as well as progress

Hallucinations, deepfakes and simple nonsense: there are plenty of risks


  • February 1, 2024
  • in Science and technology

IN A meeting room at the Royal Society in London, several dozen graduate students were recently tasked with outwitting a large language model (LLM), a type of AI designed to hold useful conversations. LLMs are often programmed with guardrails designed to stop them giving replies deemed harmful: instructions on making Semtex in a bathtub, say, or the confident assertion of “facts” that are not actually true.

The aim of the session, organised by the Royal Society in partnership with Humane Intelligence, an American non-profit, was to break those guardrails. Some results were merely daft: one participant got the chatbot to claim ducks could be used as indicators of air quality (apparently, they readily absorb lead). Another prompted it to claim health authorities back lavender oil for treating long covid. (They do not.) But the most successful efforts were those that prompted the machine to produce the titles, publication dates and host journals of non-existent academic articles. “It’s one of the easiest challenges we’ve set,” said Jutta Williams of Humane Intelligence.
