It doesn’t take much to make machine-learning algorithms go awry

The rise of large-language models could make the problem worse


  • April 5, 2023
  • in Science and technology

The algorithms that underlie modern artificial-intelligence (AI) systems need lots of data on which to train. Much of that data comes from the open web which, unfortunately, makes the AIs susceptible to a type of cyber-attack known as “data poisoning”. This means modifying or adding extraneous information to a training data set so that an algorithm learns harmful or undesirable behaviours. Like a real poison, poisoned data could go unnoticed until after the damage has been done.

Data poisoning is not a new idea. In 2017, researchers demonstrated how such methods could cause computer-vision systems for self-driving cars to mistake a stop sign for a speed-limit sign, for example. But how feasible such a ploy might be in the real world was unclear. Safety-critical machine-learning systems are usually trained on closed data sets that are curated and labelled by human workers; poisoned data would not go unnoticed there, says Alina Oprea, a computer scientist at Northeastern University in Boston.
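To see how little tampering is needed, consider a toy illustration (not from the article, and far simpler than the attacks it describes): a minimal sketch in Python in which an attacker flips the labels of a fraction of a classifier’s training examples. The data set, model and `poison_labels` helper are all hypothetical stand-ins chosen for brevity.

```python
# Minimal sketch of label-flipping data poisoning (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# A toy binary-classification data set standing in for scraped training data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(labels, fraction, rng):
    """Flip the labels of a random fraction of training examples."""
    poisoned = labels.copy()
    n_poison = int(fraction * len(labels))
    idx = rng.choice(len(labels), size=n_poison, replace=False)
    poisoned[idx] = 1 - poisoned[idx]  # flip 0 <-> 1
    return poisoned

# Train on progressively more poisoned data and measure the damage.
for fraction in (0.0, 0.1, 0.3):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, poison_labels(y_train, fraction, rng))
    print(f"poisoned fraction {fraction:.0%}: "
          f"test accuracy {clf.score(X_test, y_test):.3f}")
```

Even this crude attack measurably degrades test accuracy as the poisoned fraction grows; real attacks, such as the stop-sign example above, are subtler and target specific behaviours rather than overall accuracy.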
