
Repetitive tasks, labor hours sunk into mundane work, delayed responses to complex customer queries, and a lack of real-time customer engagement are becoming problems of the past, thanks to AI chatbots that help businesses automate these processes with efficiency and accuracy.
ChatGPT - An AI Bot Getting More Advanced: From Conversations to Code Generation

ChatGPT is an AI-based natural language processing tool developed to provide human-like responses and assistance. The chatbot has the potential to save businesses millions of dollars in customer-support costs and in the time and resources spent on content creation. Having crossed 100 million users, with revenue projected to reach $200 million, ChatGPT is one of the most popular and fastest-rising AI tools across industries. Initially designed for conversation, the bot has evolved rapidly, accommodating many unique features, including the generation of sophisticated code.
The Problem
The growing development capabilities of ChatGPT have also given attackers a more accessible way to exploit generated code for their own gain. This creates serious cyber threats around ChatGPT, which have been highlighted on various forums.
The British intelligence agency GCHQ has repeatedly warned that AI bots like ChatGPT may pose a security threat. Chatbot providers store sensitive queries, which makes them prone to hacking and data leaks. Some large organizations, such as JPMorgan and Amazon, have even paused the use of ChatGPT in the workplace.

Queries stored online can be leaked, hacked, or accidentally made public, and they may include user-identifiable data. Tools like ChatGPT learn from their interactions, which means they may learn, and later repeat, sensitive information.
This risk is validated by recent incidents at Samsung, where information an employee shared with the chatbot reportedly included source code responsible for measuring semiconductor equipment. In that instance, the employee was trying to get the bot's help in analyzing an error in the code. Organizations can't easily stop employees from using AI bots as work aids, which opens the gates for sensitive data and intellectual property to be shared with the bot and, potentially, exposed to a larger number of users.
Security Threats with ChatGPT - Is ChatGPT Security-Proof?
- Data theft
Hackers and cyber criminals are using increasingly advanced methods and technologies to steal confidential data. ChatGPT's ability to mimic others, write code, and create flawless text makes it vulnerable to misuse for malicious intent.
- Malware development
According to many researchers, ChatGPT can facilitate malware development. For instance, a user with only basic knowledge of malware could develop functional malware through ChatGPT. Malware authors can also use ChatGPT to develop advanced software, such as a polymorphic virus, which alters its own code to avoid detection.
- Phishing emails
Phishing emails have traditionally been easy to recognize from their grammatical mistakes and spelling errors; an official email from a bank, for instance, is likely to be written flawlessly. With ChatGPT, hackers can write phishing emails that appear identical to authentic ones.

- Impersonation / Mimicry
ChatGPT can write text in a person's exact voice and style. Its capability to impersonate or mimic high-profile personalities may lead to more prolific scams and fraud.
- Spam
Writing spam text normally takes spammers some time; ChatGPT can generate it instantly. Although most spam is harmless, some may contain malware or links to malicious websites.
- Ransomware
According to some researchers, ChatGPT can write malicious code that can encrypt a whole system in a ransomware attack.
- Misinformation
ChatGPT can also be used to spread rumors or misinformation. Bad actors can use conversational AI tools to quickly generate fake news stories, mimic celebrities' voices, and make these stories go viral online.
Minimize the ChatGPT Cyber Threats with Effective Measures
To prevent data leakage through these AI tools, security administrators need to adopt tools that provide complete visibility and control over known and unknown SaaS app usage. Application-aware firewalls and Cloud Access Security Broker (CASB) technology can cover a large number of SaaS apps and provide a detailed view of the data transactions happening in them. Security admins are advised to deploy tooling that enables them to:
- Gain visibility into the SaaS applications being used and the activities being performed in the customer environment, including these generative AI tools.
- Allow specific apps and activities while blocking all other applications, to safeguard against unintended data leakage.

- Use advanced DLP capability in conjunction with app-aware firewall and CASB capabilities to write granular policies and block sensitive data leakage with confidence.
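To make the DLP idea above concrete, here is a minimal sketch of screening outbound prompts for sensitive data before they reach a generative AI service. The pattern names and functions are hypothetical illustrations, not any vendor's API; a real deployment would rely on the pattern library and enforcement point of the firewall/CASB product.

```python
import re

# Hypothetical DLP-style patterns for illustration only; production
# tooling ships with far more comprehensive, validated pattern sets.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def allow_prompt(prompt: str) -> bool:
    """Policy decision: block the outbound request if any pattern matches."""
    return not scan_prompt(prompt)
```

In practice this kind of check runs inline at the network or proxy layer, so the policy applies to every SaaS app, not just the ones employees remember to be careful with.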
Final Thoughts
The adoption of ChatGPT is growing at an unprecedented rate, and alongside its benefits comes the potential for malicious use and costly data exfiltration. Precautions and the right policies should be adopted to prevent misuse. This requires a collaborative effort between developers, cybersecurity professionals, policymakers, and users. With responsible AI design, monitoring and moderation, education and awareness, collaboration with the cybersecurity community, and government regulation, we can ensure the ethical use of impressive, high-potential conversational AI tools like ChatGPT.
About the Author
Manish Mradul is a cloud and network security enthusiast. He works as a Director of Engineering at Palo Alto Networks and has 20+ years of experience in network and cloud security, malware and threat detection, SIEM, big data, and security analytics.
He was previously with Netskope, where he led malware and threat detection teams. He earned a bachelor's degree in Computer Science and Engineering from the National Institute of Technology, Tiruchirappalli, India. Manish is passionate about the changing security challenges that come with accelerated cloud and SaaS adoption and increasing AI and machine learning usage in business.