Since its debut in November, ChatGPT has become the internet's new favorite plaything. The AI-powered natural language processing tool quickly amassed more than a million users, who have turned to the web-based chatbot for everything from wedding speeches and hip-hop lyrics to academic essays and computer code.
ChatGPT's human-like abilities have not only taken the internet by storm, but have also put a number of industries on edge: New York City schools banned ChatGPT over concerns that students could use it to cheat, copywriters are already being replaced, and, according to reports, Google is so wary of ChatGPT's capabilities that it has declared a "code red" to protect the viability of the company's search business.
The cybersecurity industry, a community that has long been skeptical about the potential impact of modern AI, also appears to be paying attention, amid concerns that ChatGPT could be abused by hackers with limited resources and little technical knowledge.
Just weeks after ChatGPT's debut, Israeli cybersecurity firm Check Point demonstrated that the web-based chatbot, when used in conjunction with OpenAI's code-writing system Codex, could craft a working phishing email capable of carrying a malicious payload. Sergey Shykevich, threat intelligence group manager at Check Point, told TechCrunch that he believes use cases like this show that ChatGPT has the "potential to significantly alter the cyber-threat landscape," adding that it represents "another step forward in the dangerous evolution of increasingly sophisticated and effective cyber capabilities."
TechCrunch was also able to use the chatbot to generate a legitimate-looking phishing email. When we first asked ChatGPT to craft a phishing email, the chatbot declined the request. "I am not programmed to create or promote malicious or harmful content," it replied. But with a slight rewording of the request, we were able to get around that refusal.
Many of the security experts TechCrunch spoke to believe that ChatGPT's ability to write legitimate-sounding phishing emails (ransomware's number one attack vector) will see the chatbot widely embraced by cybercriminals, particularly those who are not native English speakers.
Chester Wisniewski, principal researcher at Sophos, said it’s easy to see ChatGPT being used for “all kinds of social engineering attacks.”
"At a basic level, I have been able to create some excellent phishing lures with it, and I expect it could be used to hold more realistic interactive conversations," Wisniewski told TechCrunch.
The idea that the chatbot can write compelling text and realistic interactions isn't all that far-fetched. "You can tell ChatGPT to pretend to be a GP surgery, for example, and it will generate lifelike text within seconds," Hanah Darley, head of threat research at Darktrace, told TechCrunch. "It's not hard to imagine how attackers might use this as a force multiplier."
Check Point also recently raised alarm bells about the chatbot's apparent ability to help cybercriminals write malicious code. The researchers say they witnessed at least three instances in which hackers with no technical skills bragged about how they had leveraged ChatGPT's AI smarts for nefarious purposes. One hacker on a dark web forum showcased code written by ChatGPT that allegedly stole files of interest, compressed them, and sent them across the web. Another user posted a Python script, which they claimed was the first script they had ever created. Check Point noted that while the code appears harmless, it could "easily be modified to encrypt someone's machine completely without any user interaction." The same forum user had previously sold access to hacked company servers and stolen data, according to Check Point.
How hard would that be?
Dr. Suleyman Ozarslan, a security researcher and co-founder of Picus Security, recently demonstrated to TechCrunch how he used ChatGPT to create a World Cup-themed phishing lure and ransomware code targeting macOS. Ozarslan asked the chatbot to write code in Swift, the programming language used to develop apps for Apple devices, that could find Microsoft Office documents on a MacBook, send them over an encrypted connection to a web server, and then encrypt the Office documents on the MacBook.
“There is no question that ChatGPT and other tools like this will democratize cybercrime,” said Ozarslan. “It’s bad enough that ransomware code was already available for purchase ‘off-the-shelf’ on the dark web, but now virtually anyone can create their own.”
Unsurprisingly, the news that ChatGPT can create malicious code raised eyebrows across the industry. It has also seen some experts move to debunk concerns that an AI chatbot could turn would-be hackers into full-blown cybercriminals. In a post on Mastodon, independent security researcher The Grugq ridiculed Check Point's claim that ChatGPT will be a boon to cybercriminals who are not good at programming.
"They have to register domains and maintain infrastructure. They need to update websites with new content and test that software which barely works still works on a slightly different platform. They need to monitor the health of their infrastructure, check what is happening in the news, and make sure their campaign isn't in an article about the 'top 5 most embarrassing phishing scams,'" The Grugq said. "Actually getting and using malware is just part of the shit that goes into being a bottom cybercriminal."
Some, however, believe that ChatGPT's ability to create code could have an upside for defenders.
"Defenders can use ChatGPT to generate code to simulate adversaries or to automate tasks to make their work easier. However, it should be noted that it is risky to fully trust the output of the text and code that ChatGPT generates. The code ChatGPT generates may contain security issues and vulnerabilities, and the generated text may also contain outright factual inaccuracies," said Laura Kankaala, threat intelligence lead at F-Secure, casting doubt on the reliability of code generated by the chatbot.
ESET's Jake Moore said that as the technology evolves, "if ChatGPT learns enough from its input, it may soon be able to analyze potential attacks on the fly and create positive suggestions to enhance security."
Security experts aren't the only ones conflicted over the role ChatGPT will play in the future of cybersecurity. We were also curious to see what ChatGPT itself had to say, so we put the question to the chatbot.
“It is difficult to predict exactly how ChatGPT and other technologies will be used in the future, as it depends on implementation and user intent,” the chatbot replied. “Ultimately, the impact of ChatGPT on cybersecurity depends on how it is used. It is important to be aware of potential risks and take appropriate steps to mitigate them.”