Generative AI technologies and tools have opened a new era in the technology industry. Experts describe it as the next revolution, with some comparing it to the one spawned by the advent of the Internet itself. Major technology companies and startups are therefore racing to deliver technologies and products suited to this new stage, hoping to capture a share of the growing artificial intelligence market.
Microsoft and Google have released chatbots built on large language models (LLMs) and are now working to integrate their capabilities into their products.
AI has even reached government: Romania appointed an interactive chatbot called ION as its first artificial intelligence adviser, tasked with a government mission, and scientists are using AI programs to study how animals communicate and how we might communicate with them.
The rapid adoption of AI technology and the rush to put new products in front of users has alarmed AI experts, who warn about the web data used to train these tools.
How does web data pose a threat to AI?
Artificial intelligence and machine learning experts have warned that data poisoning attacks could compromise the huge datasets commonly used to train the deep learning models behind many AI services.
Data poisoning refers to injecting meaningless or malicious data into a training set with the intent of degrading the performance of machine learning models and the many AI algorithms that depend primarily on data quality.
Data poisoning occurs when an attacker tampers with the training data used to build a deep learning model, or manipulates the data fed to it during the production phase, in ways that are difficult to trace back to their source. The corrupted data skews the model's decision making, causing it to make inaccurate predictions.
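To make the mechanics concrete, here is a minimal sketch of label-flipping poisoning, using scikit-learn and a synthetic dataset (both illustrative choices, not anything from the research discussed in this article):

```python
# Minimal sketch of label-flipping data poisoning (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real training corpus.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

def train_and_score(labels):
    model = LogisticRegression(max_iter=1000).fit(X_tr, labels)
    return model.score(X_te, y_te)

print("accuracy on clean labels:   ", train_and_score(y_tr))

# The attacker quietly flips the labels of 5% of the training examples.
rng = np.random.default_rng(0)
poisoned = y_tr.copy()
idx = rng.choice(len(poisoned), size=int(0.05 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]

print("accuracy on poisoned labels:", train_and_score(poisoned))
```

Random flips like these degrade a simple model only mildly; flips targeted at particular inputs or decision boundaries are generally far more damaging, which is part of why even tiny poisoned fractions matter.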
How effective are data poisoning attacks?
Data poisoning attacks can be very powerful, because an AI that learns from wrong data can make bad decisions with disastrous consequences, and all the attacker has to do is quietly alter the source data used to train the machine learning algorithms.
There is currently no evidence of real-world attacks that poison web-scale datasets, but a team of AI and machine learning researchers from Google, Nvidia, ETH Zurich, and Robust Intelligence has shown that such poisoning attacks are feasible against the existing datasets used to train the most popular machine learning models.
The researchers cautioned that even a small amount of misleading data in a training set can be enough to deliberately introduce errors into an AI model's behavior.
Using a technique they developed that exploits the way web datasets are collected, the researchers showed they could poison 0.01% of prominent deep learning datasets with little effort and at low cost.
The researchers warn that 0.01% may sound low, representing only a small part of a dataset, but it is enough to poison an AI model. The attack is known as split-view poisoning because the curator indexes web resources rather than storing their content: an attacker who later gains control of some of those resources can serve different, corrupted content to anyone who downloads the dataset, making the collected data inaccurate and potentially degrading the entire algorithm.
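One commonly proposed mitigation for split-view poisoning is integrity checking: record a cryptographic hash of each resource when the dataset index is built, and verify it again at download time. The sketch below illustrates the idea; the URL and recorded hash are hypothetical placeholders:

```python
# Sketch of an integrity check against split-view poisoning: hashes are
# recorded when the dataset index is built, then re-verified at download
# time. The URL and recorded hash below are hypothetical placeholders.
import hashlib
import urllib.request

def sha256_of_url(url: str) -> str:
    with urllib.request.urlopen(url, timeout=30) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

# Hypothetical index: URL -> SHA-256 recorded at index time.
index = {
    "https://example.com/image-001.jpg":
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

for url, recorded in index.items():
    try:
        current = sha256_of_url(url)
    except OSError as err:
        print(f"UNREACHABLE {url}: {err}")
        continue
    # A mismatch means the resource changed since it was indexed and
    # should be dropped from the training set.
    print(("OK " if current == recorded else "MISMATCH ") + url)
```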
Data poisoning attacks could be used to rewrite the linguistic tendencies of chatbots like ChatGPT and Bard so that they speak differently or use offensive language, to convince a sentiment-analysis algorithm that some company is doing something wrong, or to trick AI-based virus and malware scanners into misclassifying files as safe or malicious. These are just a few examples of how AI is used and how poisoning can disrupt its operation.
Because AI models learn a variety of skills for many types of applications, the ways hackers can poison training data are as diverse as the applications themselves.
How is a data poisoning attack performed?
One way attackers achieve this is by simply purchasing expired Internet domain names whose pages were once collected to train AI models; the attacker can then host arbitrary content there and use it to poison huge amounts of data gathered from the Internet.
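As a defensive illustration, a dataset maintainer could audit which domains referenced by the dataset's URL list still resolve; an unresolvable domain may simply be offline, but it is also a candidate for exactly this kind of takeover. A rough sketch, with a hypothetical URL list:

```python
# Defensive audit sketch: check which domains in a dataset's URL list
# still resolve. The URL list here is a hypothetical placeholder.
import socket
from urllib.parse import urlparse

dataset_urls = [
    "https://example.com/img/cat-001.jpg",
    "https://no-longer-registered.example/photo.png",
]

for url in dataset_urls:
    host = urlparse(url).hostname
    try:
        socket.getaddrinfo(host, 443)
        print(f"resolves: {host}")
    except socket.gaierror:
        # May simply be offline, or expired and available for an
        # attacker to re-register and fill with poisoned content.
        print(f"DOES NOT RESOLVE (possible takeover risk): {host}")
```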
The researchers also describe a second type of attack, which they call frontrunning poisoning. Here the attacker does not have lasting control over a given resource, but can accurately predict when a trainer will access it to collect data for an AI model, and can poison the dataset by inserting misleading data just before the snapshot is recorded.
Even if the content is restored to its original, unchanged form a few minutes later, the faulty record captured by the crawler persists in the stored snapshot.
The researchers cited Wikipedia as an example of a resource commonly used for machine learning training data. By design, anyone can edit a Wikipedia page at any time, and according to the researchers, attackers can poison a Wikipedia-derived training set by making malicious edits that force models to collect inaccurate data.
Wikipedia produces its data dumps according to documented protocols, which means an attacker can predict with high accuracy when a given article will be snapshotted, intervene with malicious edits just beforehand, and force models to collect inaccurate data that is permanently preserved in the dump.
The researchers estimate that this approach could poison Wikipedia pages with a success rate of up to 6.5%. That percentage may not sound high, but the sheer number of Wikipedia pages and the extent to which they feed machine learning datasets mean that large numbers of models could ingest inaccurate information.
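One way a defender might spot frontrun edits is to compare an article's text in the training snapshot against the live revision and review any page that differs. The sketch below uses the public MediaWiki API; the snapshot text and page title are placeholders, and a mismatch only means the page deserves review, since most differences are legitimate edits:

```python
# Sketch: flag snapshot/live divergence for a Wikipedia page. The
# snapshot text and page title are placeholders; a mismatch only means
# the page warrants a review of its edit history.
import hashlib
import json
import urllib.parse
import urllib.request

def live_wikitext(title: str) -> str:
    """Fetch the current revision's wikitext via the MediaWiki API."""
    params = urllib.parse.urlencode({
        "action": "query", "titles": title, "prop": "revisions",
        "rvprop": "content", "rvslots": "main",
        "format": "json", "formatversion": "2",
    })
    req = urllib.request.Request(
        f"https://en.wikipedia.org/w/api.php?{params}",
        headers={"User-Agent": "poisoning-audit-sketch/0.1"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        data = json.load(resp)
    return data["query"]["pages"][0]["revisions"][0]["slots"]["main"]["content"]

snapshot_text = "...the page text as it appears in the training dump..."
title = "Data (computing)"  # hypothetical page of interest

digest = lambda s: hashlib.sha256(s.encode()).hexdigest()
if digest(snapshot_text) != digest(live_wikitext(title)):
    print(f"'{title}' differs from the live revision; review its history.")
```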
The researchers alerted Wikipedia to the potential attacks and to possible countermeasures, and said the purpose of their paper was to encourage others in the security field to pursue their own research on protecting AI and machine learning systems from malicious attacks.
"Our work is just a starting point for the community to better understand the risks of modeling from existing data on the Internet," they said.
Is there any solution?
Interestingly, manipulating AI models in this way mirrors a problem cybersecurity professionals already face with personnel training: attackers often rely on employee ignorance to infiltrate organizations, targeting untrained staff with phishing scams. The same logic applies to AI data poisoning.
Since the threat is still in its infancy, cybersecurity professionals are still learning how best to defend against data poisoning attacks. One way to prevent it, according to Bloomberg, is for the scientists who develop AI models to regularly validate all of the labels in their training data.
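A simple version of that validation step might look like the following sketch: train with cross-validation, flag every example whose recorded label disagrees with the out-of-fold prediction, and route those examples to human review. The dataset and model here are illustrative:

```python
# Sketch of routine label validation: flag training examples whose
# recorded label disagrees with a cross-validated prediction, then send
# them for human review. Dataset and model choices are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)

# Out-of-fold predictions: each example is scored by a model that
# never saw it during training.
out_of_fold = cross_val_predict(LogisticRegression(max_iter=1000), X, y, cv=5)

suspect = np.flatnonzero(out_of_fold != y)
print(f"{len(suspect)} of {len(y)} labels disagree with cross-validated "
      f"predictions; first candidates for review: {suspect[:10]}")
```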
Other experts also recommend using open-source data sparingly. Open data has clear advantages, giving access to more material to enrich existing resources and making it easier to develop accurate models, but it also makes the trained models an easy target for scammers and hackers.
Penetration testing can also offer a solution, because it can find the vulnerabilities that allow hackers to reach the data used to train models. Some researchers are also considering a second layer of AI and machine learning that identifies potential errors in the training data itself.
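As an illustration of that second-layer idea (one possible approach, not a method named by the researchers), an unsupervised anomaly detector can score each training example and quarantine the most anomalous ones before the main model ever sees them:

```python
# Sketch of a second-layer screen over the training data itself: an
# unsupervised anomaly detector quarantines the most unusual examples
# before the main model is trained. Illustrative approach only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import IsolationForest

X, y = make_classification(n_samples=2000, n_features=20, random_state=2)

detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = detector.predict(X)  # -1 marks suspected outliers

quarantined = np.flatnonzero(flags == -1)
print(f"quarantined {len(quarantined)} suspicious training examples")

# Train the main model only on the remaining data.
X_clean = np.delete(X, quarantined, axis=0)
y_clean = np.delete(y, quarantined)
```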
Finally:
There is no doubt that AI has brought many benefits to the world, but it also raises serious security concerns: data poisoning attacks are difficult to detect and even harder to stop, because hackers can lurk unnoticed inside a targeted infrastructure for ever longer periods, and as AI and machine learning technologies advance, attackers may gain access to large databases and control over the data mined from them.
This may not pose a huge threat to individuals, but it could become a security issue on a global scale, given the reliance on data mining in many fields, most importantly mass-market finance, healthcare, and others.