ChatGPT and Cybersecurity: The Dark Side of AI

Artificial intelligence (AI) plays an increasingly pivotal role in society and is being used for business purposes in industries from healthcare to entertainment. AI-powered natural language processing is exemplified by models like OpenAI's ChatGPT.

ChatGPT, trained on vast datasets and featuring the GPT-4 architecture, offers impressive conversational abilities, enabling users to interact with it in a way that mimics human conversation. (Plus, it’s really cool and fun to use!) However, as with all technology, it can be used maliciously. 

While ChatGPT makes it easy to interact with AI technology, we must remain mindful of the accompanying risks. OpenAI has taken steps to tackle specific risks, but it’s crucial to have checks and balances to minimize the potential for abuse or unintended repercussions.

Misuse of AI: ChatGPT in the Hands of Cybercriminals

1. Phishing Attacks

A prime concern is how AI like ChatGPT could be used to facilitate phishing attacks. An AI language model can transform phishing emails, typically characterized by poor grammar and spelling, into highly convincing messages. This is especially useful to adversaries who live in other countries and may not have a strong grasp of English.

AI could easily impersonate a known contact or business and use context-appropriate language to convince the victim to disclose personal information or perform a dangerous action. Appeals to sympathy, fear, or urgency are particularly effective.

2. Social Engineering

Social engineering involves manipulating people into revealing confidential information or breaching security protocols. It has always been a weapon of choice for cyber attackers, and the advent of sophisticated AI like ChatGPT has only enhanced their arsenal. 

With the ability to hold near-human conversations, AI could impersonate a trusted individual online. A malicious actor with access to enough conversation samples from a specific person could fine-tune a model like ChatGPT to mimic that person's style of communication, then use it in a social engineering attack to manipulate others into revealing sensitive information or performing unauthorized actions.

That impersonation can also be parlayed into a deepfake video or audio recording, further deceiving targets into divulging sensitive information, granting unauthorized access, or performing actions that compromise security. Social engineering isn’t about attacking technology; it’s about attacking people.

3. Automated Malicious Code Generation

ChatGPT doesn't inherently understand code or cybersecurity, and it doesn’t generate code out of thin air. However, it can be manipulated into recommending malicious code.

Because ChatGPT's output is based on its training data, a bad actor can find ways to exploit it for harmful purposes. Despite possible limitations, even partially successful attempts could yield code that is significantly more dangerous than initially anticipated. Recognizing this potential misuse is crucial for developing preventive measures.

4. Disinformation Campaigns

AI-driven disinformation campaigns could use models like ChatGPT to produce large volumes of plausible-sounding text and quickly spread false information across social media or other platforms, creating confusion, sowing panic, or manipulating public opinion. Researchers are concerned.

“This tool is going to be the most powerful tool for spreading misinformation that has ever been on the internet,” said Gordon Crovitz, co-chief executive of NewsGuard, a company that tracks online misinformation. “Crafting a new false narrative can now be done at dramatic scale, and much more frequently — it’s like having AI agents contributing to disinformation.”

Mitigating the Risks: The Future of AI and Cybersecurity

The potential misuse of AI technologies like ChatGPT raises urgent questions about the future of AI and cybersecurity. It emphasizes the need for safeguards and countermeasures.

1. Education and Training

When SOC practitioners and employees are educated about the possible misuse of AI, organizations are better equipped to detect and ward off related threats. Understanding the evolving nature of these threats should become a standard component of ongoing employee cybersecurity training. Many organizations reward employees who spot and report deceptive emails or other phishing attempts, and that practice becomes even more valuable as AI makes such threats harder to identify.

That said, ChatGPT is not all bad and can be an important tool in the cybersecurity professional’s arsenal. When training SOC and CSIRT teams, ChatGPT and other AI tools should be included in the customized, live-fire environment so those teams can practice using them safely. Additionally, an ongoing cyber readiness simulation program should include scenarios of AI-enhanced cyber attacks.

2. AI in Cybersecurity

While AI can pose cybersecurity threats, it can also bolster our defense and help combat them. AI can monitor network activity, detect unusual patterns, respond to threats in real time, automate routine security tasks, provide personalized recommendations for network security improvements, and assist in incident response and forensic analysis. Using AI to combat AI-driven attacks could become a prominent strategy in cybersecurity.
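
For illustration, the sketch below shows one simple way a machine-learning model could flag unusual network connections for analyst review. It uses scikit-learn's IsolationForest on a handful of made-up per-connection features; the feature set, traffic values, and thresholds are assumptions for demonstration only, not a production detection pipeline.

```python
# A minimal sketch of AI-assisted anomaly detection on network activity.
# Assumes illustrative per-connection features: bytes sent, bytes received,
# duration, and destination port. All values here are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" connections: [bytes_sent, bytes_received, duration_s, dst_port]
normal = np.column_stack([
    rng.normal(2_000, 500, 1_000),
    rng.normal(10_000, 2_000, 1_000),
    rng.normal(30, 10, 1_000),
    rng.choice([80, 443], 1_000),
])

# Train an unsupervised model on baseline traffic
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score a new connection; -1 flags an outlier worth analyst review
suspicious = np.array([[500_000, 200, 4, 4444]])  # large upload to an odd port
print(model.predict(suspicious))                  # e.g. [-1]
```

In practice, a result like this would feed an alert queue or automated response playbook rather than a print statement, but the core idea is the same: learn a baseline of normal activity and surface deviations for humans or automation to investigate.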

We shouldn’t be afraid to embrace new technologies like AI, an advancement that has propelled us forward in remarkable ways. However, it is crucial to approach the technology with caution and to implement checks and balances that effectively manage its risks.

As AI continues to evolve, a balanced approach will be critical: harnessing its positive possibilities, mitigating its risks, and educating people about its dangers. To enhance cybersecurity readiness, it is essential to invest in training cybersecurity practitioners with a cyber range program that emulates real-life scenarios – including AI-driven ones. Request a demo of Cloud Range to learn how we do that.

3. AI Transparency and Control Measures

It’s critical to implement effective control measures and transparency around AI usage. OpenAI, for example, has strict policies for its AI models, prohibiting any usage that infringes upon privacy rights, promotes harm, or violates ethical norms. Yet even when AI chatbots are not designed to retain or misuse data, they interact with users within environments that might be observed, logged, or analyzed by other systems or individuals – such as for help desk, quality assurance, or training purposes.

Therefore, sharing sensitive information can potentially expose that data to third parties outside of the chatbot's control. So, it's recommended that users not disclose sensitive or personal information while using such platforms to ensure their privacy and safety.
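
As a practical illustration of that recommendation, the sketch below shows one way a team might scrub obvious sensitive values from text before it is sent to any third-party chatbot. The patterns and placeholder names are illustrative assumptions, not a complete data-loss-prevention solution.

```python
# A minimal, illustrative sketch of redacting obvious sensitive values from a
# prompt before it leaves the organization. Patterns here are assumptions and
# cover only a few common cases (emails, IPv4 addresses, US SSN-style IDs).
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),           # email addresses
    (re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"), "[IP_ADDRESS]"),  # IPv4 addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),               # SSN-style IDs
]

def redact(text: str) -> str:
    """Replace obvious sensitive tokens before sending text to a third-party chatbot."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

prompt = "User jane.doe@example.com on 10.0.0.12 reported SSN 123-45-6789 exposure."
print(redact(prompt))
# -> "User [EMAIL] on [IP_ADDRESS] reported SSN [SSN] exposure."
```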

The National Telecommunications and Information Administration is working to ensure accountability in AI development and use, with a policy formulation process underway. Comments on their proposed AI Accountability Policy are open until mid-2023, after which a report focusing on AI assurance will be drafted.

Conclusion

As AI models like ChatGPT evolve, so too do the cybersecurity threats they pose. The path forward requires a commitment to ongoing education about AI threats, understanding how to defend against its use in cybersecurity attacks, and establishing clear, enforceable governance. By proactively addressing AI-induced cybersecurity threats, we can responsibly tap into AI's potential, like that of ChatGPT, ensuring its use for good doesn't compromise our defenses.

 
 