
Impact of Artificial Intelligence on Cybersecurity
Current advances in Artificial Intelligence (AI) are reshaping the cybersecurity industry. Companies are increasing budgets for cybersecurity engineers with experience building AI toolsets to combat Offensive AI. Bad actors have discovered that AI can morph its attack patterns on the fly, producing attacks that are nearly undetectable. Attackers are also leveraging AI to increase the efficiency of many of the common attack methods companies face. These attacks have grown in impact and, when they target critical services, can become life-threatening.
The introduction and wide-scale availability of ChatGPT changed the landscape in 2023. Job postings for AI skills have increased significantly, and new AI services appear weekly. Many large companies have identified potential AI use cases and implemented them in their infrastructure. As AI adoption accelerates, the demand for cybersecurity engineers with AI skills to secure these implementations has skyrocketed. Columbus (2023) found that “Just 24% of cybersecurity teams are fully prepared to manage an AI-related attack.” Statistics like this have drawn the attention of industry leaders and prompted calls for a moratorium on AI development. While a moratorium may slow companies that follow the law, bad actors will continue to advance their AI capabilities and train highly skilled personnel to carry out attacks.
Bad actors range from unskilled laypeople following instructions found on social media to nation-state groups that attack foreign countries as part of their daily job assignments. Columbus (2023) notes that Department 121, the cyberwarfare arm of the North Korean Army’s elite Reconnaissance General Bureau, fields approximately 6,800 cyber warriors. This single nation-state group employs more personnel than most small to medium-sized businesses. The monetary gains from cyber theft are often reinvested in the bad actors’ skill sets, leading to increasingly advanced attacks.
Offensive Artificial Intelligence
Offensive AI is a term gaining traction in the industry. It refers to using machine learning to perform offensive attack methods against a target, whether by bad actors or by white-hat penetration testers assessing an organization. Columbus (2023) advised that “current tools, techniques, and technologies in cybercriminal gangs’ AI and ML arsenal include automated phishing email campaigns, malware distribution, AI-powered bots that continually scan an enterprise’s endpoints for vulnerabilities and unprotected servers, credit card fraud, insurance fraud, generating deep fake identities, money laundering.” Automation increases the efficiency and effectiveness of each of these attack methods, and a single AI algorithm can be built to run multiple attack patterns simultaneously, which lowers the chance of detection and makes it highly difficult to build defensive AI algorithms that can combat the attacks. Columbus (2023) also found that “Data poisoning is one of the fastest-growing techniques they are using to reduce the effectiveness of AI models designed to predict and stop data exfiltration.” In a poisoning attack, adversaries inject manipulated samples into the data a defensive model trains on, corrupting the model so that it can no longer detect the attacks. AI-created malware can use deep neural networks with morphing capabilities that let the code modify itself upon detection, making it difficult for cyber defenders to keep their detection tools current. Guembe et al. (2022) found the impact of AI-driven attacks alarming for facilities that provide life-saving care, such as hospitals: an attack focused on a medical facility could hinder patient care while it unfolds.
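To illustrate the mechanics of data poisoning, the sketch below flips a fraction of training labels before a detector is trained and compares its accuracy against a clean baseline. This is a minimal illustration, not a real attack: the synthetic dataset, the scikit-learn classifier, and the 30% poisoning rate are all assumptions chosen for demonstration.

```python
# Minimal illustration of label-flipping data poisoning against a
# simple ML-based detector. Synthetic data and scikit-learn are
# assumptions for demonstration; real attacks target production
# training pipelines.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic "traffic" features: class 1 = malicious, class 0 = benign.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

def train_and_score(labels):
    """Train a detector on the given labels and score it on clean test data."""
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, labels)
    return accuracy_score(y_test, model.predict(X_test))

# Clean baseline.
print("clean model accuracy:   ", train_and_score(y_train))

# Poison 30% of the training labels by flipping them, simulating an
# attacker who can tamper with the data the defensive model learns from.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]
print("poisoned model accuracy:", train_and_score(poisoned))
```

On a typical run, the poisoned model scores noticeably worse than the clean one, which is exactly the degradation the technique aims for.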
Defensive Artificial Intelligence
The use of artificial intelligence for cyber defense is evolving quickly to keep pace with offensive AI and bad actors. Machine learning is deployed in environments to learn baseline traffic and power defensive deception technologies. Artificial neural networks identify malicious traffic and divert attacks to honeypots for further analysis. This deception strategy has even led game developers to join forces with cyber defenders to create more realistic deception capabilities.
According to Mohan et al. (2022), “machine learning has emerged as an effective technology that provides us with a wide range of applications ranging from recognition of patterns, image identification, image, and video processing, making predictions, virus or malware detection, autonomous driving, and other application scenarios.” Machine learning is gaining traction in industries that have concluded the cost of cyber-attacks justifies investment in defense in depth. It can be deployed to establish a baseline of normal activity in a company’s environment; that baseline then supports monitoring and detection of anomalies that could harm the business. ML also powers facial recognition software incorporated into CCTV loops that alert on anomalies, and these algorithms can be combined with additional capabilities to create intrusion prevention systems that respond automatically to detections. Olivares et al. (2022) found that artificial intelligence is used in the automotive sector for autonomous driving: vehicles are built with deep neural networks that process images from around the car, and driver-assistance technology uses those images to control the vehicle in unpredictable circumstances. Bad actors, in turn, are finding ways to exploit this onboard AI and disrupt the vehicle’s processing units.
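As a concrete sketch of baseline-driven anomaly detection, the example below fits an Isolation Forest to a window of “normal” traffic and flags departures from it. The traffic features (byte rate, packet rate, distinct ports) and their values are invented for illustration; a real deployment would derive features from flow logs or sensor telemetry.

```python
# Sketch of baseline anomaly detection with an Isolation Forest.
# Feature columns (bytes/sec, packets/sec, distinct ports) are
# illustrative assumptions, not a production feature set.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# "Baseline" period: normal traffic clustered around typical values.
baseline = rng.normal(loc=[500, 40, 3], scale=[50, 5, 1], size=(1000, 3))

# Fit on baseline traffic only; `contamination` is the assumed fraction
# of outliers the model should tolerate in the baseline window.
detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(baseline)

# New observations: one typical flow and one that resembles exfiltration
# (high byte rate, high packet rate, many distinct ports).
new_flows = np.array([
    [510, 42, 3],      # normal
    [9000, 300, 45],   # anomalous
])
print(detector.predict(new_flows))  # 1 = fits baseline, -1 = anomaly
```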
Combining AI algorithms can increase an application’s effectiveness. Mohan et al. (2022) report that artificial neural networks can “not only be used for IDPS (Intrusion detection and prevention system), but there are also proposals for their application in DOS, malware, worm, and spam detection systems.” Deep neural networks add logic to these models, extending an intrusion detection system’s ability to take actions and make predictions based on its inputs. An IPS can detect suspicious traffic and divert it to heavily monitored honeypots: deceptive environments designed to mimic a live production environment. Segmented from real production systems, a honeypot monitors the bad actor’s activities and yields great insight into their capabilities and potential motives. Honeypots are stocked with attractive information intended to keep attackers engaged, though they could lose their effectiveness against Offensive AI. Mohan et al. (2022) found that “researchers have developed hybrid approaches that combine reinforcement learning and game theory.” Game developers have a distinct ability to craft engaging stories, and that skill, combined with AI, is being used to build advanced deception environments that present alternate storylines tailored to the attack strategy.
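To make the honeypot concept concrete, here is a minimal low-interaction sketch: it listens on a decoy port, presents a deceptive service banner, and logs each connection attempt along with the first bytes the client sends. The port number and banner string are illustrative assumptions; production honeypots and AI-driven deception platforms are far more elaborate.

```python
# Minimal low-interaction honeypot sketch: listen on a decoy port,
# present a fake banner, and log whatever the connecting client sends.
import logging
import socket

logging.basicConfig(filename="honeypot.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

HOST, PORT = "0.0.0.0", 2222  # assumption: a decoy SSH-like port

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((HOST, PORT))
    server.listen()
    while True:
        conn, addr = server.accept()
        with conn:
            logging.info("connection from %s:%d", addr[0], addr[1])
            conn.sendall(b"SSH-2.0-OpenSSH_8.9\r\n")  # deceptive banner
            try:
                data = conn.recv(1024)  # capture the client's first bytes
                logging.info("received %r from %s", data, addr[0])
            except OSError:
                pass
```

Because the listener offers no real service, any connection to it is suspicious by definition, which is what makes even a simple honeypot a useful detection signal.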
Conclusion
Many advances in artificial intelligence are impacting cybersecurity, leading companies to incorporate machine learning and deep neural network algorithms into their defense strategies. Cybersecurity defenders with AI skills are in high demand as offensive AI raises the sophistication of bad actors’ attacks. Bad actors are building offensive AI that can morph its attack and evasion patterns, making these attacks harder to defend against. Game development strategies have enhanced defensive AI, producing deception environments, such as honeypots, designed to lure attackers and monitor their activities. The resulting threat intelligence can be shared with the community to strengthen collective defensive capabilities.
References
Columbus, L. (2023, January 3). Defensive vs. offensive AI: Why security teams are losing the AI war. VentureBeat. https://venturebeat.com/security/defensive-vs-offensive-ai-why-security-teams-are-losing-the-ai-war/
Guembe, B., Azeta, A., Misra, S., Osamor, V. C., Fernandez-Sanz, L., & Pospelova, V. (2022). The emerging threat of AI-driven cyberattacks: A review. Applied Artificial Intelligence, 36(1), 1-34. https://doi.org/10.1080/08839514.2022.2037254
Mohan, P. V., Dixit, S., Gyaneshwar, A., Chadha, U., Srinivasan, K., & Seo, J. T. (2022). Leveraging computational intelligence techniques for defensive deception: A review, recent advances, open problems and future directions. Sensors, 22(6), 2194. https://doi.org/10.3390/s22062194
Olivares, J. G., Hofmann, P., Kapsalas, P., Casademont, J., Mhiri, S., Piperigkos, N., Diaz, R., Cordero, B., Marias, J., Pino, A., Saoulidis, T., Escrig, J., Jun, C. Y., & Choi, T. (2022). Artificial intelligence-based cybersecurity for connected and automated vehicles. Now Publishers. https://directory.doabooks.org/handle/20.500.12854/95759