Hackers Exploit ChatGPT Tool To Write Malicious Code

As hackers become smarter and more advanced, the cybersecurity industry must become more resourceful in order to combat AI-powered exploitation.

Artificial intelligence (AI) has undoubtedly brought many benefits to humanity. However, the bad guys benefit from it as well: hackers are now using the popular ChatGPT tool to write malicious code.

According to Palo Alto Networks, cyber attackers can now instruct ChatGPT to write malicious code, with astounding results. Sean Duca, the company’s regional vice-president and chief security officer for Asia Pacific and Japan, said that AI has always been a double-edged sword.

“AI tools can be trained to mimic human behaviour based on the inputs they learn from. If no rules are in place to prevent it, writing and curating malicious code can be one of the technology’s side effects,” he said.

“Malicious code written by AI tools has the potential to be more harmful than code written by humans. While the developers of ChatGPT have stated unequivocally that the AI-powered tool can challenge incorrect premises and reject inappropriate requests, it is expected to produce some false negatives and false positives for the time being. Criminals intent on breaking the rules can work around these safeguards by exploiting the gaps in the AI’s judgement.”

One of the most serious risks of AI tools capable of creating malicious code is how dramatically they improve the efficiency of building dangerous tooling. According to Duca, even the most experienced hackers can spend up to an hour developing a script that infiltrates a target through a software vulnerability.

“This, however, can be accomplished in a matter of seconds using OpenAI’s ChatGPT. Like other forms of automation, this has the potential to increase the volume of attacks these threat actors can mount.” Almost every industry has increased its use of AI to automate its software supply chain.

“While cybersecurity providers use AI to identify and filter malicious code and phishing links, threat actors use similar technologies to increase their efficiency and ensure their ‘business’ is profitable,” Duca explained. Because of how simple it has become to create malware, he believes the cybersecurity sector will be disrupted in a variety of ways.
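To make the defensive half of that dynamic concrete, the sketch below shows roughly how a machine-learning phishing-link filter of the kind Duca alludes to might work. It is a minimal illustration, not any vendor’s actual product: the lexical features, the toy training URLs, and the scoring threshold are all assumptions made for the example.

```python
# A minimal, illustrative sketch of an ML phishing-URL filter.
# Features, training data, and labels below are hypothetical toy examples.
import numpy as np
from sklearn.linear_model import LogisticRegression

def url_features(url: str) -> list[float]:
    """Extract a few crude lexical features from a URL."""
    return [
        float(len(url)),                        # phishing URLs tend to be long
        float(url.count("-")),                  # and hyphen-heavy
        float(url.count(".")),                  # with many subdomains
        float(any(c.isdigit() for c in url)),   # digits in the hostname
        float("@" in url),                      # '@' tricks in the address
    ]

# Toy labelled data: 1 = phishing, 0 = benign (hypothetical examples).
urls = [
    ("http://secure-login-verify.example-bank.accounts.xyz/@confirm", 1),
    ("http://paypa1-account-update.example.ru/login", 1),
    ("https://www.wikipedia.org/", 0),
    ("https://github.com/", 0),
]
X = np.array([url_features(u) for u, _ in urls])
y = np.array([label for _, label in urls])

model = LogisticRegression().fit(X, y)

# Score a new URL; anything above a chosen threshold would be filtered.
candidate = "http://account-verify-secure.example.tk/@reset"
score = model.predict_proba(np.array([url_features(candidate)]))[0, 1]
print(f"phishing probability: {score:.2f}")
```

A production filter would use far richer features (domain age, reputation feeds, page content) and vastly more training data, but the shape of the pipeline, featurise, train, score, filter, is the same.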

One of the key issues AI poses is its growing ability to generate and disseminate malware. As more people become capable of producing and distributing malware, it becomes harder for cybersecurity experts to detect and prevent attacks. As a result, cyberattacks may become more effective, raising the cost of responding to and recovering from such incidents.

“Users now have AI-powered security tools and products that tackle large volumes of cybersecurity incidents with minimal human intervention. However, the same technology can also allow amateur hackers to develop intelligent malware programs and execute stealthy attacks.”

“This trend is expected to continue, as ransomware-as-a-service offerings available on the dark web for as little as RM30, together with AI-based tools like ChatGPT, lower the barrier to entry for cybercriminals.”

As hackers grow smarter and more sophisticated, the cybersecurity industry must become more resourceful to combat AI-powered exploitation.

“In the long run, the industry’s vision cannot be a swarm of human threat hunters trying to patch things up haphazardly with guesswork. The need of the hour is intelligent action to counteract these evolving threats,” Duca said.

On the plus side, autonomous response deals with threats effectively without the need for human intervention. “However, as AI-powered attacks become more common, businesses, governments and individuals affected by such automated malware will increasingly rely on emerging technologies such as AI and machine learning to generate their own automated responses.”
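As a rough illustration of what such an automated response might look like in practice, here is a minimal sketch: high-confidence alerts trigger containment without a human in the loop, while everything else is queued for an analyst. The Alert structure, the demo data, and the isolate_host() placeholder are hypothetical stand-ins for whatever EDR or SOAR platform an organisation actually runs.

```python
# Illustrative sketch of autonomous response: contain high-confidence
# alerts automatically, queue the rest for human review. All names and
# data here are hypothetical placeholders, not a real platform's API.
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    score: float  # model confidence (0..1) that the host is compromised

# Assumption: the threshold is tuned to an organisation's tolerance
# for false positives; 0.9 here is arbitrary.
QUARANTINE_THRESHOLD = 0.9

def isolate_host(host: str) -> None:
    """Placeholder: a real system would call its EDR API here to cut
    the host off from the network."""
    print(f"[response] isolating {host}")

def triage(alerts: list[Alert]) -> None:
    for alert in alerts:
        if alert.score >= QUARANTINE_THRESHOLD:
            isolate_host(alert.host)  # automated containment, no human in the loop
        else:
            print(f"[triage] {alert.host} (score {alert.score:.2f}) queued for review")

if __name__ == "__main__":
    # Hypothetical alert batch, as might arrive from a detection pipeline.
    triage([
        Alert("workstation-17", 0.97),
        Alert("build-server-03", 0.42),
    ])
```

The design choice mirrors the trade-off Duca describes: automation buys speed against machine-speed attacks, while the threshold keeps lower-confidence decisions with human analysts.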
