Your Employees are Already Using GenAI. How Will You Communicate the Security Risks?
2024-08-15 19:47:26 | Source: securityboulevard.com

Did you know that 75% of people are already using Generative AI (GenAI) at work? GenAI tools are defined as any artificial intelligence that can generate content such as text, images, videos, code, and other data using generative models, often in response to prompts. Examples include OpenAI’s ChatGPT, GitHub’s Copilot, Claude, DALL·E, Gemini, and Google Workspace’s new functionality that connects Gemini to Google apps, to name just a few.

Like any new technology, GenAI comes with a side of risk, and recent data from Cisco found that 27% of businesses have banned the use of GenAI entirely for security reasons. However, with such widespread adoption and such groundbreaking potential, closing the door on GenAI is likely to be a mistake. Instead, PwC recommends that “Demonstrating that you’re balancing the risks with the rewards of innovation will go a long way toward gaining trust in your company — and in getting a leg up on the competition.”

To make responsible use of GenAI, and to support employees in freely using these tools to boost their productivity, you need to start by understanding what the industry is dealing with.

Understanding the Potential Risk of GenAI Tools

As excited as your employees are about the productivity benefits of using GenAI tools, you can bet the attackers are feeling the same way. As teams get to grips with how AI can free up hours in the day on tasks like content creation, code writing, and design, hackers are finding innovative ways to use GenAI as a new attack surface to steal sensitive information and disrupt business operations. 

To stay one step ahead, organizational policies and employee education should evolve to take into consideration the new threats. As a starting point, security teams should speak to employees about: 

  • AI code generation: All AI-generated code needs to be tested thoroughly before it’s used, as hackers can manipulate a Large Language Model (LLM) to change its output. Untested, it could open your own customers up to risk or provide an entry point to your network. 
  • Trusting LLMs: Just because an LLM provides information, that doesn’t make it true. All LLMs carry the risk of hallucinations, producing incorrect or nonsensical information, and a manipulated LLM can produce malicious content, too. Make sure to double-check all facts and data. 
  • Sharing sensitive data: Your LLM is not a personal diary, and it won’t keep your secrets safe. GenAI tools may retain what you share in order to learn, which means that data can later be exposed to others. Employees should craft prompts free of personal information, intellectual property, trade secrets, and passwords. 
  • Copyright issues: The rules around copyright for AI-generated content are still being discussed and rolled out, but to establish ownership over what you create, employees should modify AI-generated content, both text and images, rather than using it verbatim. 
  • Customer trust: If you rely on GenAI tools to boost your productivity in customer-facing interactions, customers deserve to know whether they are talking to a bot or getting the human touch. Be transparent when using GenAI-delivered responses or content, and always disclose the source. 
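The “sharing sensitive data” guidance above can be partially automated. As a minimal sketch, a team could run every prompt through a scrubber that masks obvious secrets before the text ever reaches an external GenAI tool. The patterns and function below are hypothetical examples for illustration, not part of any product named in this post, and a real deployment would need a far broader ruleset:

```python
import re

# Hypothetical detection patterns -- a minimal, illustrative set.
# A production scrubber would cover many more secret and PII formats.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub_prompt(prompt: str) -> str:
    """Replace likely-sensitive substrings with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt
```

For example, `scrub_prompt("Reply to bob@example.com about SSN 123-45-6789")` would return the text with both the email address and the SSN masked. Pattern matching is a safety net, not a guarantee: it cannot catch trade secrets or context-dependent confidential information, so employee awareness remains the primary control.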

The Impact of GenAI on Phishing

Even without your employees independently using GenAI tools in the workplace, the risks of generative AI can still target your organization. One example is the huge impact of GenAI on the efficacy of phishing scams. 

Try this thought experiment: if you asked your employees to point out the tell-tale signs of a phishing email, what do you think they would describe? Not too long ago, markers of your average phishing scam were poor spelling and grammar, broken language, and unprofessional designs — making it easy for staff to spot a garden-variety phishing attack when it arrived in their inbox. 

With the advent of GenAI tools, hackers now have access to free online tools that allow them to spin up highly professional-looking content faster than ever before. Even videos and images of known associates can be faked using GenAI, which means employees need to be more on guard than ever. According to research published in Harvard Business Review, “Artificial intelligence changes this playing field by drastically reducing the cost of spear phishing attacks while maintaining or even increasing their success rate.” Organizations should expect “a vast increase in credible and hyper-personalized spear-phishing emails that are cheap for attackers to scale up en masse.” 

The warning from HBR is clear — “We are not yet well-equipped to handle this problem. Phishing is already costly, and it’s about to get much worse.”

This means that even if you’re one of the 27% of organizations that have banned the use of GenAI, the chances of a successful data breach or cyberattack against your organization have still increased. 

Changing your Training Approach in the Era of GenAI

The threat of GenAI comes from both directions — from unaware employees using new technology without realizing its potential threats, and from hackers leveraging these tools intentionally to launch ever more sophisticated and believable attacks of their own. 

However, the core methodology of security awareness training, using phishing simulations to reduce risk, has remained the same in principle. Organizations simply need to increase the frequency of their training, and the variety of the simulations they use, to meet the growing threat. At CybeReady, we recognize that employees don’t always feel accountable for security within an organization and that CISOs have too much to handle to be continually proactive. That’s where we come in. 

Our comprehensive SaaS awareness program continually trains 100% of your employees, with realistic simulations that reduce risk, engage users, and promote a positive culture of security awareness organization-wide. 

We also provide training materials that can be distributed to your employees to empower them to use AI for innovation and productivity purposes, without adding risk. Download your free AI training toolkit to access: 

  • Short training content decks that educate on the dark side of AI
  • Tips for identifying a phishing scam that was created by GenAI tools
  • Bite-sized digital posters displaying GenAI best practices

Download your free Cybersecurity Awareness AI Learning Kit here.

*** This is a Security Bloggers Network syndicated blog from Cyber Security Awareness Training Blog | CybeReady authored by Nitzan Gursky. Read the original post at: https://cybeready.com/awareness-training/your-employees-are-already-using-genai

