The AI Paradox: Balancing Generative AI Adoption With Cybersecurity Risks
2023-11-22 | securityboulevard.com

When new technologies and use cases are unlocked, there’s always an inherent risk that our reach may exceed our grasp. The arrival of the internet revolutionized communication and business, but it also brought about new threats such as identity fraud, data theft and cybercrime. It took years for regulations such as the Children’s Online Privacy Protection Act (COPPA) and the Digital Millennium Copyright Act (DMCA) to catch up and ensure that companies are held accountable for mitigating threats.

Similarly, when social media became mainstream, it transformed how we connected and shared information but posed challenges related to privacy, misinformation and cyberbullying. Self-regulatory measures and statutory frameworks such as the General Data Protection Regulation (GDPR) in Europe took years to evolve. The situation is even more complex in the U.S., where privacy laws are difficult to enact at the federal level, leaving states to handle their own regulations.

This pattern of technology outpacing rules and regulations is now repeating, this time with artificial intelligence (AI). Take ChatGPT, for instance. ChatGPT is a language model based on the generative pre-trained transformer (GPT) architecture, designed to understand and generate human-like text based on the input it receives. Trained on vast amounts of text from the internet, it can answer questions, generate content and assist with various tasks by repeatedly predicting the "best" next word in a sequence, allowing it to produce original and unique content. ChatGPT, then powered by GPT-3.5, took the world by storm when it launched in November 2022. Just months later, GPT-4 arrived, allowing users to engage in deeper, more context-aware conversations, automate complex tasks with greater precision and draw insights from far larger inputs. This is a form of "generative AI": a type of artificial intelligence that can create new content, such as images, code, text, art and even music, by learning patterns in existing data.
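To make the "predict the next word" mechanism concrete, here is a minimal, self-contained sketch of temperature-scaled softmax sampling over a toy score table. The candidate words and scores are invented for illustration; a real model like ChatGPT scores tens of thousands of tokens at every step.

```python
import math
import random

def sample_next_word(scores: dict[str, float], temperature: float = 0.8) -> str:
    """Sample the next word from a softmax over raw model scores."""
    # Lower temperature sharpens the distribution toward the top-scoring word;
    # higher temperature flattens it, producing more varied output.
    scaled = {w: s / temperature for w, s in scores.items()}
    peak = max(scaled.values())  # subtract the max for numerical stability
    weights = {w: math.exp(s - peak) for w, s in scaled.items()}
    total = sum(weights.values())
    words = list(weights)
    return random.choices(words, weights=[weights[w] / total for w in words], k=1)[0]

# Toy scores a model might assign to continuations of "The cat sat on the ..."
candidates = {"mat": 4.2, "sofa": 3.1, "roof": 2.5, "keyboard": 0.9}
print(sample_next_word(candidates))
```

Because the choice is sampled rather than fixed, the same prompt can yield different completions, which is part of why generated content feels original rather than copied.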

As a new technology, generative AI is unique in the sheer pace of its innovation and the speed at which it has been democratized and made available to all. According to one study, around 60% of workers currently use or plan to use generative AI in their day-to-day tasks. However, while the myriad benefits of generative AI are undeniable, they come at a cost. Its broad-scale adoption has brought challenges around ethical use, data privacy and security. As AI models become more sophisticated, which they rapidly will, the potential for misuse or unintended consequences grows, emphasizing the need for robust oversight and a proactive approach to governance.

The race between innovation and regulation is on, and the stakes have never been higher.

The Hidden Dangers of Generative AI

While much has been discussed about the potential biases and negative outcomes stemming from flawed data inputs in AI, there's a looming threat that many companies might be overlooking: heightened cybersecurity risk. AI technologies, by their very nature, can amplify the risk of sophisticated cyberattacks. Simple chatbots, for instance, can inadvertently aid phishing attacks by producing fluent, error-free lures, generate convincing fake accounts on social media platforms, and even rewrite malware in different programming languages. Moreover, the vast amounts of data fed into these systems can be stored and potentially shared with third parties, increasing the risk of data breaches.


With tools such as ChatGPT available to anybody with an internet connection and regulatory frameworks struggling to catch up, businesses are inadvertently opening themselves up to a world of unknown threats.

Where Does the Responsibility Lie?

Generative AI’s potential is still unfolding, but it has already raised pressing concerns about security, data handling and compliance. The pace of AI development has outstripped the evolution of regulatory frameworks and policy controls. This has created a void in accountability and transparency, placing the onus on businesses to be the vanguards of security controls and frameworks.

The allure of AI, especially generative AI, is potent. With easy access to such technologies, employees might inadvertently input sensitive or proprietary information into free AI tools, creating a plethora of vulnerabilities. These vulnerabilities could lead to unauthorized access or unintentional disclosure of confidential business information, including intellectual property and personally identifiable information. So, as the de facto torchbearers of generative AI, what can businesses do to ensure their data remains secure during a period of such fast-paced change?

Steps Risk Managers Can Take to Shore up Security

Businesses cannot change the rules of the game, but they can limit their exposure and decide how much they are willing to play. Here are some steps risk managers can and should take to mitigate the vulnerabilities associated with generative AI.

1. Identify AI Usage:
Risk managers should begin by pinpointing who within the organization is utilizing AI tools and for what specific purposes. This can be achieved using internal audits and surveys, as well as monitoring endpoints to see which tools are being accessed. This shouldn’t be an attempt to “catch employees out” but rather to understand the demand for AI tools and the potential value they might bring.
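As a starting point, a short script can turn an exported web proxy log into a rough inventory of who is touching which tools. The domain list, CSV layout and file name below are assumptions to adapt to your own environment; this is a sketch, not a monitoring product.

```python
import csv
from collections import Counter

# Hypothetical mapping of known generative AI domains to tool names;
# extend this with whatever your proxy actually observes.
AI_TOOL_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "bard.google.com": "Bard",
    "claude.ai": "Claude",
}

def inventory_ai_usage(proxy_log_path: str) -> Counter:
    """Count visits to known generative AI domains, grouped by user and tool."""
    usage = Counter()
    with open(proxy_log_path, newline="") as f:
        # Assumes the proxy can export a CSV with 'user' and 'domain' columns.
        for row in csv.DictReader(f):
            tool = AI_TOOL_DOMAINS.get(row["domain"])
            if tool:
                usage[(row["user"], tool)] += 1
    return usage

for (user, tool), hits in inventory_ai_usage("proxy_export.csv").most_common():
    print(f"{user}: {tool} ({hits} requests)")
```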

2. Conduct a Business Impact Analysis:
Now, it’s time to undertake a thorough analysis to determine the value of each AI use case. Assess its merits, potential security implications and privacy concerns. Ask why employees are adopting certain AI tools and what they — and the business — stand to gain from them. It may be that, with some tweaking to the tool’s data access permissions, the benefits outweigh the risks, and the tool becomes a part of the organization’s tech stack.
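One lightweight way to make these assessments comparable across use cases is a scoring rubric. The scales, weights and thresholds in this sketch are illustrative assumptions for a risk team to tune, not an established standard.

```python
def assess_use_case(benefit: int, data_sensitivity: int, vendor_risk: int) -> str:
    """Score an AI use case (each input on a 1-5 scale) and suggest a next step."""
    risk = data_sensitivity + vendor_risk  # combined risk: 2 (low) to 10 (high)
    if benefit >= 4 and risk <= 4:
        return "adopt: high value, manageable risk"
    if benefit >= risk / 2:
        return "pilot: tighten data access permissions, then re-assess"
    return "defer: risk currently outweighs demonstrated value"

# Example: a marketing copy generator that touches no customer data.
print(assess_use_case(benefit=4, data_sensitivity=1, vendor_risk=2))  # adopt
```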

3. Establish Governance:
Building on the second step, the use of AI tools should not be left to individual discretion. Instead, it should align with the company's policies and risk posture. This might involve creating controlled environments in which to test AI technologies and their associated risks. Employees shouldn't be discouraged from exploring new AI use cases, but rather than using them unsupervised, they should bring them "into the fold" for testing so they can be rolled out in a controlled way. AI output should also be closely reviewed and monitored, particularly in the early stages of deployment.
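In practice, governance like this often reduces to a tool registry plus a gate: every tool is blocked, sandboxed for testing or approved for production. The registry entries and environment names in this sketch are hypothetical.

```python
from enum import Enum

class ToolStatus(Enum):
    BLOCKED = "blocked"
    SANDBOX = "sandbox"    # cleared for testing in a controlled environment only
    APPROVED = "approved"  # cleared for production use

# Hypothetical registry; in practice this would live in a policy service.
TOOL_REGISTRY = {
    "chatgpt": ToolStatus.SANDBOX,
    "github-copilot": ToolStatus.APPROVED,
    "unvetted-summarizer": ToolStatus.BLOCKED,
}

def may_use(tool: str, environment: str) -> bool:
    """Gate usage on governance status; unknown tools are blocked until reviewed."""
    status = TOOL_REGISTRY.get(tool, ToolStatus.BLOCKED)
    if status is ToolStatus.APPROVED:
        return True
    return status is ToolStatus.SANDBOX and environment == "test"

print(may_use("chatgpt", "production"))  # False: still under evaluation
print(may_use("chatgpt", "test"))        # True: sandboxed exploration is allowed
```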

4. Promote Training and Awareness:
It's imperative to ensure that every member of the organization, technical or otherwise, comprehends the risks associated with AI technologies. Regular training sessions and workshops keep the workforce up to speed on the threats and challenges AI tools can pose to the organization as a whole, encouraging employees to look beyond their own immediate needs and gains.

5. Data Classification:
Collaborate with chief information security officers (CISOs), tech teams and enterprise risk management to classify data. This helps in determining which data sets can be used by AI tools without posing significant risks. For instance, highly sensitive data can be siloed and kept off-limits to certain AI tools, while less sensitive data can be made available for experimentation. Data classification is one of the core principles of good data hygiene and security.
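One way to operationalize this is to assign each data set a sensitivity tier and give each AI tool a ceiling: the highest tier it is cleared to process. The tier names and tool entries below are illustrative assumptions.

```python
from enum import IntEnum

class Sensitivity(IntEnum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4  # e.g., PII and trade secrets, off-limits to external tools

# Hypothetical ceilings: the highest tier each tool is cleared to process.
# Unknown tools default to the most restrictive ceiling (public data only).
TOOL_CEILING = {
    "public-chatbot": Sensitivity.PUBLIC,
    "enterprise-copilot": Sensitivity.INTERNAL,
    "self-hosted-llm": Sensitivity.CONFIDENTIAL,
}

def can_process(tool: str, data: Sensitivity) -> bool:
    """Allow a data set into a tool only if its tier is within the tool's ceiling."""
    return data <= TOOL_CEILING.get(tool, Sensitivity.PUBLIC)

print(can_process("public-chatbot", Sensitivity.CONFIDENTIAL))   # False
print(can_process("self-hosted-llm", Sensitivity.CONFIDENTIAL))  # True
```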

6. Anticipate Regulatory Changes:
Given AI’s uncharted and currently unregulated nature, businesses should anticipate inevitable regulatory oversight. Staying updated with global AI regulations and standards can help businesses adapt swiftly. Investing too heavily in a specific tool and having entire business operations wholly dependent on it is a bad idea at this stage. For now, AI tools should be regarded as business support tools rather than the driver of operations.

The integration of AI technologies into business operations is no longer a matter of if but when. While these technologies promise unprecedented benefits, they also introduce a wide array of cybersecurity challenges. But by proactively identifying and mitigating these risks and embracing new technology in a controlled and well-governed way, businesses can harness the power of AI without compromising their security posture.
