Machine learning applications have appeared in most fields in recent years, even though the principles of machine learning were developed about half a century ago. With the growth of computing power, computers first learned to distinguish objects in pictures and to play the Chinese board game Go better than humans, then to draw pictures from text descriptions; now they can hold coherent conversations with humans.
Since 2021, advances in artificial intelligence have become accessible to everyone. For example, you can register on MidJourney and create the image you want from just a text description, or use ChatGPT to quickly get an answer to your question.
But if we look at the foundation ChatGPT is built on, we see that it is a large language model (GPT) trained on a huge amount of text collected from the Internet: words, sentences, paragraphs, and the relationships between them. Through various technical tricks and additional rounds of training with human feedback, the model evolved until it was able to hold natural-language conversations with humans.
And because you can find almost anything online, the model can naturally carry on a conversation on almost any topic: from fashion and art history to programming and quantum physics.
Scientists, journalists, and casual hobbyists keep finding new uses for ChatGPT. The Awesome ChatGPT Prompts list contains prompts that make ChatGPT switch roles: respond in the style of Gandalf or other literary characters, write Python code, draft business letters and resumes, or even emulate a Linux terminal.
However, ChatGPT is still just a language model, and like many others it can sometimes be caught confidently making things up, for example by citing non-existent scientific studies in its answers. So always treat ChatGPT's output with caution.
But none of this changes the fact that ChatGPT, even in its current form, is useful in many practical processes and areas. Below, we explore how this new generation of chatbots poses a significant threat to cybersecurity, and how it can also be used to identify potential risks and even develop countermeasures.
1- Creating malware:
Cybercriminals are already reporting on underground forums how they use ChatGPT to create new Trojans. Because the bot can write code from a brief description of the required functionality (for example, sending collected data via HTTP to server Y), a simple infostealer can be produced without any programming skills.
But this does not worry cybersecurity professionals much: if code written by a chatbot is actually deployed, security solutions will detect and neutralize it as quickly and efficiently as any human-written malware before it. Moreover, if the code is not reviewed by an experienced programmer, the malware may contain subtle bugs and logic errors that reduce its effectiveness.
For now at least, chatbots can only compete with inexperienced virus writers.
2- Malware analysis:
When infosec analysts investigate a suspicious new application, they reverse-engineer it and examine its code, or the pseudocode produced by a decompiler, to see how it works. In this task, chatbots can help: they can very quickly explain what a particular section of code does.
Ivan Kwiatkowski, a security researcher on Kaspersky's Global Research and Analysis Team, has developed an IDA Pro plugin that does just that. The plugin was tested with the davinci-003 language model, also developed by OpenAI and related to the model behind ChatGPT, and in many cases it can automatically assign meaningful names to functions and identify encryption algorithms used in the code along with their parameters.
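To make the workflow concrete, here is a minimal sketch (my own illustration, not the internals of Kaspersky's plugin) of how a decompiler helper might prepare a query for a language model: it wraps the decompiled pseudocode in an instruction prompt that asks for an explanation and a function name. The prompt wording and the 4000-character truncation limit are assumptions for illustration.

```python
def build_explain_prompt(pseudocode: str, max_chars: int = 4000) -> str:
    """Wrap decompiled pseudocode in an instruction for a language model.

    Long functions are truncated so the request stays within the model's
    context window (the 4000-character limit is an illustrative assumption).
    The resulting string would then be sent to a completion API.
    """
    snippet = pseudocode[:max_chars]
    return (
        "Explain what the following decompiled C pseudocode does, "
        "then suggest a short, descriptive name for the function.\n\n"
        f"```c\n{snippet}\n```"
    )

if __name__ == "__main__":
    # Typical decompiler output: an auto-named function with obscure logic.
    code = "int sub_401000(char *a1) { return strlen(a1) ^ 0x5A; }"
    print(build_explain_prompt(code))
```

The model's reply (an explanation plus a suggested name such as a checksum or hashing routine) would then be written back into the disassembler's function list, which is essentially what the plugin automates.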
3- Looking for vulnerabilities:
A chatbot can read the pseudocode of a decompiled application and identify places that may contain security vulnerabilities. Moreover, the chatbot can provide Python proof-of-concept (PoC) code designed to exploit the vulnerabilities it finds. Of course, chatbots make all kinds of mistakes both when looking for vulnerabilities and when writing PoC code, but even in its current form the tool is useful to both attackers and defenders.
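As a toy illustration (my own sketch, not taken from the article) of the kind of dangerous pattern a chatbot is asked to spot, here is a naive scanner that flags a few classic risky calls in source or pseudocode. A language model does this far more flexibly and with real context, but the goal is the same: point a human reviewer at suspicious lines.

```python
# Mapping of classically dangerous calls to the reason they are risky.
RISKY_CALLS = {
    "strcpy": "unbounded copy - possible buffer overflow",
    "gets":   "reads without a length limit - buffer overflow",
    "system": "shell execution - possible command injection",
    "eval":   "evaluates arbitrary code - code injection",
}

def flag_risky_lines(source: str) -> list:
    """Return (line number, call, reason) for each risky call found."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for call, reason in RISKY_CALLS.items():
            if call + "(" in line:
                findings.append((lineno, call, reason))
    return findings

if __name__ == "__main__":
    sample = "char buf[8];\ngets(buf);\nsystem(buf);\n"
    for lineno, call, reason in flag_risky_lines(sample):
        print(f"line {lineno}: {call} - {reason}")
```

A chatbot goes well beyond such keyword matching, since it can reason about data flow and even draft the PoC, but it also produces false positives and misses, which is why the article stresses that both findings and PoC code need expert review.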
4- Providing security tips:
Since ChatGPT knows what people write about cybersecurity online, its advice on the subject sounds convincing. But, as with any chatbot output, you never know exactly where a given suggestion came from, so one tip in ten may be worthless. Still, the advice shown in the sample screenshots below is all sound:
5- The evolution of phishing attacks:
Writing persuasive text is a strength of ChatGPT and the GPT-3 language model it runs on, which makes it well suited to spear phishing. The main problem with mass phishing messages is that they don't read right: generic text that doesn't address the recipient directly. Spear phishing, where cybercriminals write emails addressed to an individual victim, has until now been very costly for attackers.
The ChatGPT bot is a game-changer for phishing attacks because it allows attackers to create persuasive, personalized email messages at scale.
Moreover, sophisticated phishing attacks often consist of a series of emails, each of which gradually gains more of the victim's trust. For the second, third, and nth emails, ChatGPT saves cybercriminals a great deal of time: because the chatbot remembers the context of the conversation, it does an excellent job of generating follow-up emails from very short prompts.
In addition, the victim's replies can easily be fed back into the model, producing a compelling follow-up in seconds. And among the tools available to attackers is stylized mailing: given a single sample message, a chatbot can apply its style to other messages, generating convincing fake emails that appear to pass from one employee to another.
All of this means the number of successful phishing attacks will grow as chatbot-written messages become as persuasive as human-written ones. So what is the solution?
Content-analysis experts are actively developing tools to detect text written by chatbots such as ChatGPT, but it will take time to determine how effective these tools are. Until then, Kaspersky security experts offer just two suggestions: increased vigilance and cybersecurity awareness.
There is one more tip: learn to recognize bot-written text yourself. True, thanks to sophisticated natural-language models you often can't tell the difference with the naked eye, but bots still leave small stylistic quirks and minor inconsistencies that humans don't. Play this game to practice distinguishing human-written text from machine-written text.
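As a crude sketch (my own illustration, not any detector mentioned in the article) of one stylistic signal such tools examine, consider sentence-length variation, sometimes called "burstiness": human writing tends to mix very short and very long sentences, while model output is often more uniform. Real detectors combine many far stronger signals; this only shows the idea.

```python
import re
import statistics

def sentence_lengths(text: str) -> list:
    """Split on sentence-ending punctuation and count words per sentence."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) for s in sentences]

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths; higher suggests more
    human-like variation. Returns 0.0 when there is too little text."""
    lengths = sentence_lengths(text)
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

if __name__ == "__main__":
    uniform = "One two three. One two three. One two three."
    varied = "Wait. That last quarterly report you sent me raised several questions."
    print(burstiness(uniform), burstiness(varied))
```

A single number like this is nowhere near reliable on its own, which is why the article's practical advice remains vigilance and awareness rather than trusting any automated verdict.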
In the end, there is no doubt that ChatGPT, like any technology, is a double-edged sword: it can benefit or harm cybersecurity depending on how it is used.