Last Thursday marked the one-year anniversary of the launch of ChatGPT. The first generative artificial intelligence (GenAI) tool available to the general public immediately sparked immense interest in artificial intelligence (AI) and machine learning (ML) and forever transformed how we work. But despite its rapid adoption, the technology has faced problems, causing organizations to reconsider whether users can safely use GenAI tools.
In a now-famous example, a Samsung engineer pasted internal source code into ChatGPT in an effort to identify errors. The engineer got improved code, but the sensitive engineering data now risks being shared with competitors, whether through the training of future models or through responses served to other users. And because the Internet is permanent, Samsung may never be able to erase that data from these models, even if ChatGPT's owners are willing to help.
One year later, it's clear that the GenAI landscape is in constant flux. To protect data and sensitive information, it's crucial that organizations integrate these widely used large language models safely. Here are five trends we're seeing around GenAI tools and how they are impacting organizations around the world:
Everyone knows ChatGPT, but dozens of other publicly available GenAI tools have hit the market in the past year. Developers can turn to GitHub Copilot, PolyCoder, and Cogram for generating code, while content creators can use DreamFusion, Jukebox, NeuralTalk2, and Pictory for generating media. No matter what you do, there's a GenAI tool to help you work more efficiently.
While the initial surge in usage of ChatGPT and other GenAI tools has tapered off, these platforms are still used frequently. A recent report shows that users visit GenAI platforms an average of 32 times per month – an impressive stickiness metric. No doubt this loyalty will drive future adoption and growth in the world of AI and large language models.
ChatGPT and other GenAI tools are game-changing technologies that are transforming the way people work. Over the past year, numerous news articles have documented the productivity gains businesses are seeing.
However, ChatGPT and its GenAI counterparts are not without ethical dilemmas. Concerns about ethical AI, privacy issues, and turmoil at OpenAI, the owner of ChatGPT, have clouded perceptions. As these tools deliver productivity gains, striking a delicate balance between their positive impacts and the ethical considerations they raise becomes crucial. Safeguarding against the misuse of training data and the potential exposure of sensitive data requires a thoughtful approach.
Samsung wasn't the only organization to raise concerns about the business risks of GenAI. Many other organizations and governments have moved to restrict or ban the use of GenAI tools.
But limiting the use of such a powerful productivity tool is likely to become a competitive disadvantage. A more nuanced strategy is essential, one that balances productivity enhancement against security risks. With such a strategy, organizations can enable the safe and ethical use of GenAI tools in the workplace.
The issue is that organizations lack guidance on how to use GenAI tools in a safe, secure, and ethical manner. Existing acceptable use, privacy, and security policies need to be amended to reflect the new reality. That means educating users, strengthening data loss prevention (DLP) policies, and gaining more visibility and control over how users interact with public GenAI tools, as the sketch below illustrates.
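To make the DLP piece concrete, here is a minimal sketch of what a prompt-level check might look like: scan text for sensitive patterns before it is submitted to a public GenAI tool. The patterns, function names, and the `corp.example.com` domain are all illustrative assumptions, not any vendor's actual implementation.

```python
import re

# Illustrative patterns only; a real DLP policy would use the
# organization's own classifiers and detection rules.
SENSITIVE_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "internal_host": re.compile(r"\b[\w.-]+\.corp\.example\.com\b"),  # hypothetical internal domain
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def submit_to_genai(prompt: str) -> None:
    """Block a prompt from reaching a public GenAI tool if it matches a rule."""
    findings = check_prompt(prompt)
    if findings:
        print(f"Blocked: prompt matches DLP rules {findings}")
    else:
        print("Prompt allowed.")  # hand off to the GenAI service here

if __name__ == "__main__":
    submit_to_genai("Why does AKIAABCDEFGHIJKLMNOP fail to authenticate?")
```

In practice, checks like these would run inline, for example in a secure web gateway or browser security layer, rather than in application code, but the principle is the same: inspect prompts before they leave the organization.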
As we all become more familiar with ChatGPT and other GenAI platforms, there are a few predictions we can make about the future of this technology:
ChatGPT and other GenAI tools are changing the world, but their rapid adoption is also creating problems at this early stage. Organizations should adjust their security strategies to enable safe and secure access to GenAI tools in the workplace. Learn more about how you can secure user access to ChatGPT and other GenAI tools.