The Cybersecurity Risks of AI & How to Safeguard Sensitive Data
Experts in the cybersecurity industry are becoming increasingly concerned about artificial intelligence attacks.
A 2020 report from Forrester Consulting found that 88% of decision-makers in the security industry believed offensive AI was on the horizon, and almost two-thirds of them expected AI to lead new attacks.
Considering that the AI we see today is the worst that it will ever be, organizations need to be aware of the dangers of AI and understand the unique ways that hackers can leverage AI in their attacks.
With concerns such as these, it's no wonder that companies such as Apple restrict employees from using AI tools.
In this article, we will highlight what artificial intelligence is, how it is used by employees and threat actors, and the dangers of using AI in the workplace.
What is Artificial Intelligence?
Artificial Intelligence (AI) refers to the development and application of computer systems that can perform tasks typically requiring human intelligence. It involves the creation of intelligent machines capable of simulating and imitating human cognitive abilities, such as learning, reasoning, problem-solving, perception, and decision-making.
AI systems aim to process and analyze vast amounts of data, recognize patterns, and make predictions or take actions based on that analysis. These systems learn from experience and adjust their behavior to improve performance over time, often through machine learning algorithms.
Examples of AI include:
- Large Language Models: A large language model (LLM) uses large data sets and deep learning techniques to comprehend, generate, summarize, and predict new content. These models can be refined into specialized models by further training their internal parameters (i.e., weights) in a process known as LLM fine-tuning (see the sketch after this list).
- Machine Learning: Machine learning involves training systems to learn from data and make predictions or decisions without being explicitly programmed. Machine learning algorithms improve their performance as they are exposed to more data.
- Neural Networks: Inspired by the structure and function of the human brain, neural networks are algorithms that learn and recognize patterns. They consist of interconnected nodes (artificial neurons) that process and transmit information.
- Natural Language Processing (NLP): NLP focuses on enabling machines to understand and interpret human language, including speech and text. It involves tasks such as language generation, translation, sentiment analysis, and speech recognition.
- Computer Vision: This field focuses on giving machines the ability to understand and interpret visual information from images or videos. Computer vision applications enable tasks such as object recognition, image classification, and facial recognition.
- Robotics: AI is closely integrated with robotics, where intelligent machines are designed to interact physically with the environment. Robotics combines AI techniques with mechanical engineering and control systems to create autonomous or semi-autonomous robots.
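To make the fine-tuning concept mentioned above concrete, here is a minimal sketch using the open-source Hugging Face transformers and datasets libraries. The base model ("distilgpt2") and the training file ("domain_corpus.txt") are illustrative assumptions, not a recommendation for any particular setup:

```python
# A minimal LLM fine-tuning sketch. Model name and data file are
# illustrative assumptions for demonstration purposes only.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "distilgpt2"  # small base model chosen for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 models have no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# One line of domain-specific text per row in a plain-text file.
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# trainer.train() updates the model's internal weights on the new data;
# this weight update is the "fine-tuning" step described above.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model", num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```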
How is Artificial Intelligence Used in the Workplace?
AI has the potential to revolutionize many aspects of an organization by improving efficiency, enabling automation, and solving complex problems. It has applications across industries including healthcare, finance, marketing, manufacturing, and cybersecurity.
Cybersecurity Teams
Many organizations leverage Security Information and Event Management (SIEM) and User and Entity Behavior Analytics (UEBA) tools to detect and respond to cyber threats. These tools are infamous for overwhelming security professionals with vast amounts of data.
AI is revolutionizing cybersecurity by analyzing massive quantities of risk data to speed up response times and augment the capabilities of under-resourced security operations.
Cybersecurity professionals within security teams can use AI-powered systems to surface insights from SIEM logs. They can also orchestrate and automate hundreds of time-consuming, repetitive, and complicated response actions that previously required human intervention.
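As a simplified illustration of the kind of analysis these systems automate, the sketch below flags anomalous entries in toy log data using scikit-learn's IsolationForest. The feature set and sample values are illustrative assumptions; production SIEM/UEBA tooling is far more sophisticated:

```python
# A simplified sketch of ML-based log anomaly detection, the kind of
# analysis AI-assisted SIEM/UEBA tooling performs at a much larger scale.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [logins_per_hour, failed_logins, mb_uploaded, off_hours_flag]
events = np.array([
    [4, 0, 1.2, 0],
    [5, 1, 0.8, 0],
    [3, 0, 2.0, 0],
    [6, 1, 1.5, 0],
    [40, 25, 900.0, 1],  # burst of failed logins plus a large upload at night
])

model = IsolationForest(contamination=0.2, random_state=42).fit(events)
predictions = model.predict(events)  # -1 flags an anomaly, 1 is normal

for row, label in zip(events, predictions):
    status = "REVIEW" if label == -1 else "ok"
    print(f"{row} -> {status}")
```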
While AI cybersecurity systems are known to generate false positives, they serve as an important threat identification tool. These systems are beneficial for detecting and remediating vulnerabilities, malware, and threat actors.
Digital Marketing & Copywriting
Marketing teams have been leveraging AI to speed up the writing, design, and research process. While there are legitimate concerns that improper use of AI will result in poor quality and inaccurate information, AI technology has the potential to greatly improve the productivity and efficiency of the marketing process when used responsibly.
For example, you can use AI design tools to create presentations and other marketing assets.
Software Development
AI-based programming assistants allow developers to write code more efficiently by proactively identifying syntax errors, scaffolding basic structures, and translating natural language into the programming languages that computers understand.
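As a toy stand-in for the syntax-checking portion of that workflow, the sketch below uses Python's built-in ast module; real assistants perform far richer analysis, but the basic feedback loop looks like this:

```python
# Detecting a syntax error in a code snippet with Python's built-in parser.
import ast

snippet = '''
def greet(name)
    print(f"Hello, {name}!")
'''

try:
    ast.parse(snippet)
    print("No syntax errors found.")
except SyntaxError as err:
    # The missing colon on the def line is reported with its location.
    print(f"Syntax error on line {err.lineno}: {err.msg}")
```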
AI Course Development
AI can help make courses more engaging and effective by:
- Personalizing the content to fit individual learning styles and paces.
- Adapting the difficulty level and type of content based on responses.
- Incorporating gamification and interactive elements, like quizzes, simulations, and story-based learning journeys.
- Providing real-time feedback.
For more detailed insight on each step involved, including planning, AI tools selection, and effective delivery methods, check out this comprehensive guide on how to create a course using AI.
The Dangers of AI in the Workplace
While AI-based tools are excellent for automating repetitive tasks, the technology comes with its own set of unique risks and limitations.
This section will focus on the cybersecurity implications of artificial intelligence in the workplace. Later we will discuss cybersecurity defense strategies that companies can implement to address the risks of this technology.
Phishing & Spear Phishing
Phishing is a pervasive threat to cybersecurity. Cybercriminals use phishing attacks to break into accounts, steal company funds, and compromise sensitive data.
Thanks to generative AI tools, phishing campaigns have become much more scalable. On the low end, AI-powered chatbots such as ChatGPT and other systems supported by natural language processing can help threat actors craft well-written and convincing phishing emails.
On the upper end, AI systems can be trained to use publicly available information about individuals and their workplace to craft sophisticated spear phishing campaigns that take place over several emails.
One example comes from Jonathan Todd, a Cyber Operations Specialist for the US Army.
He demonstrated how a threat actor could use an AI image generator such as DALL·E to create fake profile pictures that cannot be found via a simple reverse image search. This increases the perceived authenticity of the fake profile.
Extrapolating beyond that, as AI video and audio generation becomes more sophisticated, it could even be used in vishing (voice phishing) campaigns.
From there, a threat actor can train an AI model on your social media history to craft an email that sounds human-generated and uniquely targeted to your interests.
In Jonathan’s example, his AI tool waited for a response to the initial email, then waited for a period of time before sending a follow-up email with a link that could have contained malicious exploits.
Leaking Sensitive Data to AI
Without the proper knowledge of AI systems, users may not realize the dangers of sharing sensitive information with AI.
For example, a bug in OpenAI’s ChatGPT leaked elements of their users’ conversation histories. OpenAI’s privacy policy also notes that user data, such as prompts and responses, may be used to continue training the model.
Another prime example of the cybersecurity risks of using AI in the workplace: Samsung employees accidentally leaked confidential data, such as the source code for a new program and internal meeting notes relating to their hardware, when attempting to use ChatGPT as a development aid.
This incident led Samsung to ban the use of the publicly available form of ChatGPT and start looking into hosting a private ChatGPT instance internally.
By default, OpenAI stores all interactions between users and ChatGPT. These conversations are collected to train OpenAI's systems and can be inspected by moderators for violations of the company's terms of service.
While tools such as ChatGPT offer settings that opt conversations out of model training, without a proper NDA, MSA, SOW, and/or SLA there is no guarantee that sensitive information input into these systems will be adequately protected.
For this reason, organizations need to ensure that their employees are not sharing sensitive information with AI models that are not within the organization’s complete control.
How to Protect Sensitive Data Against AI
Web Filtering & App Blocking Software
Organizations can use web filtering & app blocking software to proactively restrict access to unsanctioned AI tools.
For example, CurrentWare's BrowseControl includes a web content category filter with a dedicated AI category, allowing organizations to block employees from using AI in the workplace. As new AI websites are created, they are automatically added to the database.
Exceptions for authorized AI websites can be made by simply adding their URLs to BrowseControl's allowed websites list.
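The sketch below illustrates the general logic of category-based filtering with an allow-list override. The domains shown are illustrative placeholders, and this does not reflect BrowseControl's internal implementation:

```python
# A simplified illustration of category-based web filtering with an
# allow-list override. Domain lists are illustrative placeholders.
from urllib.parse import urlparse

AI_CATEGORY = {"chat.openai.com", "gemini.google.com", "claude.ai"}
ALLOWED_OVERRIDES = {"internal-llm.example.com"}  # hypothetical sanctioned tool

def is_blocked(url: str) -> bool:
    host = urlparse(url).hostname or ""
    if host in ALLOWED_OVERRIDES:
        return False  # explicitly authorized AI site
    return host in AI_CATEGORY  # block anything in the AI category

print(is_blocked("https://chat.openai.com/"))           # True  (blocked)
print(is_blocked("https://internal-llm.example.com/"))  # False (allowed)
```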
Host Artificial Intelligence Tools Locally
While many businesses will proactively decide to restrict all access to AI as a security precaution, it’s worth noting that many others actively embrace AI as a powerful tool to enhance productivity, improve decision-making, automate tasks, and gain competitive advantages.
To reduce data security risks, these AI models can be hosted locally and prevented from accessing the internet. This helps to mitigate the risk of data leaks by keeping all new data inputs within the control of the organization.
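As a minimal sketch of the locally hosted approach, the following example loads a model from an on-premises path using the open-source Hugging Face transformers library. The model path is a hypothetical placeholder for a model already downloaded in-house, and the HF_HUB_OFFLINE environment variable prevents any network fetch:

```python
# Serving an LLM entirely on local infrastructure so prompts and
# completions never leave the organization's own hardware.
import os

os.environ["HF_HUB_OFFLINE"] = "1"  # disable all Hugging Face network calls

from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="/opt/models/local-llm",  # hypothetical on-premises model path
)

result = generator("Draft a summary of our incident response plan:",
                   max_new_tokens=50)
print(result[0]["generated_text"])
```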
Employee Training
As with many other cyber threats, defending against the threat of AI technologies starts with ensuring your end-users are aware of the threat in the first place.
While employees do not need advanced threat intelligence expertise, they must understand the limitations and risks of AI, whether or not there is a sanctioned use for AI in their organization, and the types of new threats they may encounter thanks to generative AI.
The most imminent threat is the increased sophistication of phishing emails. Your phishing training strategy must be reviewed to ensure that employees are well aware of the dangers of phishing attacks and how to spot them.
AI Cybersecurity Policies
Without clear communication, it’s easy for employees to see internet-hosted AI models such as ChatGPT as just another website they can visit without officially making it part of the known supply chain.
As there is little to no control over how data is used by AI shadow IT systems, organizations must update their information security policies to reflect the unique vulnerabilities presented by AI.
Conclusion & More Resources
While artificial intelligence creates opportunities for improved productivity, there are legitimate cybersecurity threats that organizations need to carefully consider and mitigate before adopting these tools.
Technologies such as web filtering software can enforce compliance with security policies that ban the use of web-hosted AI tools. In addition, employees must be thoroughly trained and retrained to recognize phishing emails, particularly as AI is leveraged to create increasingly sophisticated campaigns.
If your organization intends to allow the use of AI in the workplace, it should consider seeking legal advice to navigate the many ethical, privacy, policy, and legal considerations that come with its use. It should also consider reviewing Microsoft's AI security risk assessment framework to learn how to audit, track, and improve the security of its AI systems.
More Resources