1Password this week announced it has added a Model Context Protocol (MCP) server to Trelica, the governance platform for software-as-a-service (SaaS) applications that it acquired earlier this year.
The MCP Server for Trelica by 1Password is also being made available via the recently launched Amazon Web Services (AWS) Marketplace for artificial intelligence (AI) agents.
Nancy Wang, vice president of engineering for 1Password, said the overall goal is to make it simpler for cybersecurity and IT professionals to use natural language queries to see which end users and AI agents are accessing specific SaaS applications.
Supporting more than 350 SaaS app integrations, MCP Server for Trelica by 1Password provides read-only visibility into app usage, access permissions, and policy drift. That capability will enable IT teams to reduce SaaS sprawl, block unauthorized agent access, and ensure responsible usage of AI technologies, said Wang.
The alliance with AWS will make it simpler to acquire and deploy MCP Server for Trelica by 1Password via an online marketplace where many organizations will also be acquiring AI agents in a way that aligns with their existing AWS licensing agreements, she added.
Originally developed by Anthropic, MCP has emerged as a de facto standard for providing AI agents with access to the data they require to perform an action. An MCP server enables a platform to integrate with AI agents that have been extended using MCP client software.
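To make that pattern concrete, the following is a minimal sketch of how an MCP server exposes a read-only capability to an AI agent, written with the open source MCP Python SDK. The tool name and the inventory data shown here are hypothetical illustrations, not the actual Trelica by 1Password implementation.

```python
# Minimal sketch of an MCP server exposing a read-only tool.
# Uses the open source MCP Python SDK (pip install "mcp[cli]").
# The tool name and data below are hypothetical, for illustration only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("saas-governance-demo")

# Hypothetical in-memory stand-in for a SaaS governance inventory.
APP_ACCESS = {
    "salesforce": ["alice@example.com", "reporting-agent"],
    "github": ["bob@example.com", "ci-agent"],
}

@mcp.tool()
def list_app_access(app_name: str) -> list[str]:
    """Return the users and AI agents with access to a SaaS app (read-only)."""
    return APP_ACCESS.get(app_name.lower(), [])

if __name__ == "__main__":
    # An MCP client (for example, an AI agent) connects over stdio and can
    # invoke list_app_access in response to a natural language query such
    # as "who can access Salesforce?"
    mcp.run(transport="stdio")
```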
It’s not clear how rapidly organizations are deploying AI agents and MCP servers, but as they become available, the level of expertise required to query data will decline rapidly. As a result, more organizations should be able to track, for example, how governance policies are actually being enforced, noted Wang.
Hopefully, as more AI agents are adopted, the level of governance fatigue that cybersecurity teams experience will steadily decline. In the meantime, cybersecurity teams should also start to review how governance policies will be applied to AI agents, which are expected to significantly outnumber humans within the next few years. Each AI agent should be assigned a narrow set of permissions to automate a task. However, agents may still exceed the scope of their mission unless a set of guardrails has specifically been put in place to enforce policies. If anything has been shown consistently, it is that generative AI technologies will, for better or worse, adapt in unexpected ways as they are exposed to more data.
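As a rough illustration of that least-privilege approach, the sketch below wraps tool invocations in a per-agent allowlist check so that any call outside an agent's granted scope is rejected before it runs. The permission model, agent names, and tool names are hypothetical assumptions for the sake of the example, not a feature of any specific product.

```python
# Hypothetical sketch of a least-privilege guardrail: each AI agent
# is assigned a narrow allowlist of tools, and any call outside that
# scope is blocked before it executes. All names here are illustrative.
from typing import Any, Callable

# Hypothetical per-agent permission grants.
AGENT_PERMISSIONS: dict[str, set[str]] = {
    "reporting-agent": {"list_app_access"},
    "ci-agent": {"list_app_access", "list_policy_drift"},
}

def guarded_call(agent_id: str, tool_name: str,
                 tool: Callable[..., Any], *args: Any, **kwargs: Any) -> Any:
    """Execute a tool only if the agent's grants include it."""
    allowed = AGENT_PERMISSIONS.get(agent_id, set())
    if tool_name not in allowed:
        raise PermissionError(
            f"Agent {agent_id!r} is not permitted to call {tool_name!r}")
    return tool(*args, **kwargs)

# Example: this succeeds for reporting-agent, while a call to any tool
# outside its allowlist would raise PermissionError instead.
result = guarded_call("reporting-agent", "list_app_access",
                      lambda app: ["alice@example.com"], "salesforce")
```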
The challenge is that many teams building these AI agents have not informed cybersecurity and governance teams of their efforts, which only makes it more likely they will encounter a range of cybersecurity and governance issues that might otherwise have been avoided. Savvy cybersecurity professionals will, of course, proactively investigate the degree to which their organization is investing in AI agents because, as always, the proverbial ounce of prevention is still worth a pound of cure.