When OpenAI released ChatGPT in November, programmers were surprised to discover that the artificial intelligence-powered chatbot could not only imitate a wide variety of human voices but also write code. Within days of the release, programmers posted wide-eyed examples of ChatGPT churning out reasonably competent code. From connecting cloud services to porting Python to Rust, ChatGPT showed remarkable proficiency in at least some basic programming tasks.
But when it comes to ChatGPT, it’s not easy to separate the hype from reality. Its coding prowess has inspired a series of breathless headlines, including “ChatGPT is a bigger threat to cybersecurity than most people realize,” many of them fixated on its ability to create malware, and veteran hackers have begun asking how large language models can really be used for malicious hacking.
Marcus Hutchins, the black hat hacker turned white hat who made headlines for stopping the spread of the WannaCry ransomware in 2017, is one of those taking an interest in ChatGPT’s powers, and his experience building banking Trojans in a previous life made him well placed to test them. He wondered: Could the chatbot be used to create malware?
The results were disappointing. “I was literally a malware developer for 10 years, and it still took me three hours to get code that worked. And this was Python,” Hutchins, known online by his popular handle MalwareTech, said in an interview with CyberScoop.
After hours of trial and error, Hutchins was able to get ChatGPT to generate one component of a ransomware program – a file-encrypting routine – but when he tried to combine that component with the other functions needed to build fully fledged malware, ChatGPT failed in crude ways, botching the order of basic file operations. More generally, ChatGPT tended to fail whenever it was asked to stitch the various components together.
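(A note on scale here: a standalone file-encrypting routine of the kind Hutchins describes is routine, heavily documented code. The sketch below is purely illustrative – a generic example assuming Python and the widely used cryptography package, not Hutchins’s prompt or ChatGPT’s actual output – and it suggests why emitting one isolated function is a far lower bar than assembling a complete, correctly ordered program.)

    # Illustrative only: a generic single-file encryption routine built on the
    # "cryptography" package's documented Fernet recipe. This is not Hutchins's
    # code or working malware; it is the kind of isolated, well-documented
    # component a language model can reproduce from its training data.
    from cryptography.fernet import Fernet

    def encrypt_file(path: str, key: bytes) -> None:
        """Encrypt a single file in place with a symmetric Fernet key."""
        fernet = Fernet(key)
        with open(path, "rb") as f:        # read the plaintext bytes first...
            plaintext = f.read()
        with open(path, "wb") as f:        # ...then overwrite with ciphertext
            f.write(fernet.encrypt(plaintext))

    if __name__ == "__main__":
        key = Fernet.generate_key()        # a real tool would manage this key; here it is discarded
        encrypt_file("example.txt", key)   # assumes example.txt exists alongside the script

Snippets like this appear throughout tutorials and package documentation, which is exactly why a model can reproduce them; the sequencing and integration mistakes Hutchins ran into live in the glue code that no tutorial spells out.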
These kinds of fundamental ordering problems illustrate the shortcomings of generative AI systems such as ChatGPT. Large language models can produce content that closely resembles the data they were trained on, but they often lack the error-correction tools and contextual knowledge that constitute real expertise. And amid the excited reaction to ChatGPT, the tool’s limitations are often lost.
If you believe the hype, there is little that ChatGPT won’t disrupt. Everything from white-collar jobs to college essays to professional exams is supposedly on the brink of obsolescence, and, of course, malware development is about to land in the hands of ordinary hackers. But that hype masks how tools like ChatGPT are likely to be deployed as an adjunct to, rather than a replacement for, human expertise.
In the weeks since ChatGPT’s release, cybersecurity companies have put out a flurry of reports claiming that the bot can be used to write malicious code, spawning catchy headlines about its ability to create things like “polymorphic malware.” But these reports tend to overstate what the model can do on its own and, importantly, obscure the role that their expert authors play in shaping and correcting the code the model generates.
In December, Check Point researchers showed that ChatGPT could potentially be used to build a malware campaign from start to finish, from crafting the phishing email to writing the malicious code. But generating fully featured code required the researchers, themselves expert programmers, to repeatedly prompt the model, encouraging it to add functionality such as sandbox detection and to consider the kinds of things an expert would think of, like checking whether code is open to SQL injection.
“Just saying ‘write the malware code’ doesn’t really do anything useful,” said Check Point researcher Sergey Shykevich.
For hackers like Hutchins, knowing what questions to ask is half the battle in building software, and much of the press coverage of ChatGPT as a programming tool misses just how much expertise researchers bring to the conversation when they ask the model for help with software development, or “dev.”
“People who understand development are demonstrating it, but they don’t realize how much they’re contributing,” Hutchins said. “People with no programming experience don’t even know what prompt to give it.”
For now, ChatGPT remains one of many tools in the malware development kit. In a report published last week, threat intelligence firm Recorded Future found more than 1,500 references on the dark web and in closed forums to the use of ChatGPT for malware development and proof-of-concept code. But much of that code is already publicly available, and the report notes that the company expects ChatGPT to be most useful to threat actors it describes as “script kiddies, hacktivists, fraudsters and spammers, credit card fraudsters,” and others engaged in similarly low-level, disreputable forms of cybercrime.
“For newcomers to the space, ChatGPT may offer marginal help,” the report concludes.
Overall, the benefits to malicious hackers are marginal. The introductory hacking tips ChatGPT provides are more accessible than ever, but they are also easy to find with a Google search. As CyberScoop reported in December, the ability of ChatGPT and other large language models to create original code, malicious or otherwise, may improve as the models mature. Until then, tools like ChatGPT are likely to play a supporting role rather than generate malware from scratch.
ChatGPT, for example, offers a compelling way to create more effective phishing emails. For Russian-speaking hackers who may struggle to write clickable messages in English (or another target language) that is not their native tongue, ChatGPT can sharpen their writing. “The majority of attacks come through email, and the majority of email attacks are not malware attacks. They are trying to trick users into giving up their credentials or transferring money,” said Cidon, who predicts that this will become much easier now.
But this represents an incremental change rather than a revolution in hacking. Attackers can already produce high-quality phishing emails on their own or with the help of translators hired through gig-work platforms; what ChatGPT adds is the ability to produce them at scale. ChatGPT “reduces the investment required,” Cidon argues.
In a more exotic approach, an attacker with access to a company’s email archive could use it to fine-tune a large language model to replicate the CEO’s writing style. An LLM trained to write like the boss would make it easier to trick employees, Cidon said.
But when evaluating ChatGPT’s impact on cybersecurity more generally, experts say it is important to keep the big picture in view. Using an LLM in a targeted attack is an interesting use case, but against most targets ChatGPT probably won’t improve an attacker’s chances of success. After all, as Drew Lohn, a researcher at Georgetown’s Center for Security and Emerging Technology, observes, “Phishing is already so successful that it may not make much of a difference.”
Overall, tools like ChatGPT have the potential to increase the number of capable attackers. Speaking about ChatGPT, Lohn said: “… there are a lot of open source tools and bits of malware that are just floating around or packaged up. I worry that more people will use ChatGPT.”
Besides, given how fast the field is progressing, “wait a week and everything might change,” he said.