As we grapple with the ongoing generative AI (GenAI) boom, one thing we’ve consistently observed is that businesses keep interacting with data in ways their existing policies and processes never anticipated. For HR leaders who deal with that most sensitive of data types – data involving people – prioritizing data security has never been more essential. At Forcepoint, we’ve had the opportunity to be early adopters of GenAI business tools, and we’ve witnessed the importance of HR working closely with legal experts to successfully navigate AI transformation. In this blog post, we outline our cross-functional approach, which HR leaders can use as a template to safely and effectively leverage AI while protecting people and intellectual property.
Forcepoint’s HR department works in pursuit of three primary objectives: 1) to prioritize and cultivate a people- and customer-centric focus for team members; 2) to attract and retain high-performing talent; and 3) to curate the employee experience across the entire employee journey. We’ve been interested in how GenAI can help us better accomplish all three of those goals. Our ability to seize the emerging benefits of AI can provide a key competitive edge, helping us exceed customer expectations and give our people the resources they need to win.
But as a key player in the data security sector, we’re well-informed about the risks of implementing GenAI tools irresponsibly, ranging from loss of intellectual property entered into Large Language Models (LLMs) like ChatGPT, to liability related to AI trained on copyright-protected materials, to biases and errors that can skew decision-making and produce inequity. That’s why our approach to AI transformation has been marked by deliberate planning and consideration of the security and legal ramifications of each new tool that we introduce to the workplace.
Taking deliberate steps to adopt GenAI solutions for people-oriented processes requires our HR and Legal teams to work in lockstep. That’s why all decisions about introducing AI-based tools are directed to our purpose-built AI Council, a cross-functional team of experts assembled from across the organization. The council promotes company-wide understanding and consideration of a wide range of AI tools and use cases, clearly communicating the benefits of AI for certain tasks to achieve employee buy-in. It also helps maintain healthy ROI by preventing the company from adopting redundant AI tools. Every use case that might be accomplished with GenAI solutions is evaluated independently by the AI Council, and each one receives its own requirements and approvals before the decision to adopt is made.
Among other considerations, the AI Council examines potential use cases through a legal lens. We consider not just what a given AI application can provide us, but also how we can best protect our people (both employees and customers), their data and their individual rights. Preserving control over our intellectual property and trade secrets is a particularly important factor with tools that utilize LLMs. We also discuss how to identify and neutralize instances of bias that can emerge within AI systems.
Routing this decision-making process through the AI Council calls for a high degree of organization. Internal communication and education are crucial to establish awareness and alignment throughout our organization and allow us to put effective guardrails in place for AI implementation. No GenAI tool is added to our HR toolkit until we are confident that we can use it for good and avoid causing harm to individuals or society.
According to Chief People Officer Emilie McLaughlin: "We are mindful of the very real concerns around AI, especially when it comes to the people space. We evaluate the risks with eyes wide open, strategizing how we can use the tools appropriately. That involves maintaining our high level of ethical integrity and being proactive in eliminating or mitigating biases that we discover."
Implementing any new technology is bound to introduce challenges and changes that can be difficult to anticipate in their entirety. That’s why, in addition to careful planning, we’ve prioritized our GenAI use cases and aim to have each one in place before proceeding too far with the next. This step-by-step approach lets us learn from our experience with one AI application and carry those lessons into the next. In order, our first three major use cases for generative AI technology have been as follows:
Adopting tools to enhance individual productivity and automate time-consuming tasks is not the exclusive domain of HR, but it does help us achieve our objectives; in particular, giving employees the resources they want supports our goal of cultivating a people-centric focus. This is an obvious use case to start with, in no small part because we’ve observed that workers tend to adopt tools that make their jobs faster and easier, with or without corporate approval.
To limit the spread of shadow IT and prevent the exfiltration of sensitive data, including IP, we carefully examined the leading enterprise versions of LLM-based applications like ChatGPT and Microsoft Copilot to find the best fit for our company. After selecting an appropriate tool, we communicated guidelines to employees and provided mandatory training so that users understood how to use the technology safely. While this initial decision-making process informs our subsequent use cases, we will likely continue to adopt new productivity tools as they become available for different purposes.
Our next focus was how to use automation to increase the efficiency of our employee recruiting efforts and ideally widen our search net, all in support of our objective to attract high-performing talent. A major resolution of ours in this use case was not to lose sight of the human element – the “H” in HR – by leaning too hard on automation or allowing AI to make decisions on its own. We looked for a tool that could level up our recruiting operations but was designed to require and facilitate direct review from humans.
Achieving this balance between automated research and human judgement is necessary to ensure that we don’t ever allow unintentional bias to emerge and negatively impact our ability to hire a diverse and qualified workforce. It also future-proofs our operations by aligning us with recent legislation governing the corporate use of AI and ensuring that we can maintain compliance with workforce laws and regulations worldwide.
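To make the human-in-the-loop requirement concrete, here is a minimal, hypothetical sketch of the pattern we describe above; it is not the recruiting tool we selected, and the field names and scoring are illustrative assumptions. Automation may only recommend, and no decision is recorded without a named human reviewer.

```python
# Hypothetical sketch of a human-in-the-loop screening workflow:
# automation may recommend, but only a named human reviewer can decide.
# Field names, scores and values are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Candidate:
    name: str
    ai_match_score: float              # produced by automated screening
    ai_recommendation: str = "review"  # automation never outputs "hire" or "reject"
    human_reviewer: Optional[str] = None
    final_decision: Optional[str] = None

def record_decision(candidate: Candidate, reviewer: str, decision: str) -> Candidate:
    """Record a decision only when a named human reviewer signs off."""
    if not reviewer:
        raise ValueError("A named human reviewer is required for any decision.")
    candidate.human_reviewer = reviewer
    candidate.final_decision = decision
    return candidate

# Usage: the AI score informs, but never replaces, human judgment.
c = Candidate(name="A. Applicant", ai_match_score=0.82)
record_decision(c, reviewer="recruiting_lead@example.com", decision="advance to interview")
```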
With our second AI use case in the implementation phase, our next priority is setting up an employee service center in support of our objective to curate the entire employee journey. This use case aims to reduce the volume of routine HR inquiries our team handles manually and to empower our people with intuitive self-service capabilities. The employee journey starts with hiring and onboarding, when new employees need help understanding their roles and setting appropriate goals to maintain high performance. It continues through promotion, retention and succession; AI chat capabilities should be the first step toward increasing employee agency and satisfaction while reducing the workload on HR team members and freeing them to focus more attention on core duties.
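As a rough illustration of that self-service pattern, the sketch below is hypothetical and not our production service center; the FAQ entries and escalation triggers are assumptions. Routine questions get an immediate answer, while sensitive or unmatched questions go straight to a person.

```python
# Hypothetical sketch of the self-service pattern: answer routine HR
# questions automatically and escalate everything else to a human.
# The FAQ entries and escalation keywords are illustrative assumptions.
FAQ = {
    "pto": "Full-time employees accrue PTO each pay period; see the HR portal for your balance.",
    "benefits enrollment": "Benefits enrollment opens each November through the HR portal.",
    "expense report": "Submit expense reports through the finance tool within 30 days.",
}

ESCALATION_KEYWORDS = {"harassment", "complaint", "medical", "termination"}

def handle_inquiry(question: str) -> str:
    q = question.lower()
    # Sensitive topics always go to a human, never to an automated answer.
    if any(keyword in q for keyword in ESCALATION_KEYWORDS):
        return "ESCALATE: routed to an HR team member for personal follow-up."
    # Routine questions get an immediate self-service answer.
    for topic, answer in FAQ.items():
        if topic in q:
            return answer
    # Anything unmatched is also escalated rather than guessed at.
    return "ESCALATE: no confident answer found; an HR team member will respond."

print(handle_inquiry("How do I check my PTO balance?"))
print(handle_inquiry("I want to file a harassment complaint."))
```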
John Holmes, Chief Legal Officer, says: "The power and potential of generative AI to make organizations more efficient and to improve customer experience is profound, but its benefits are accompanied by commensurate risk. Legal teams provide an essential voice in the decision-making process for tool selection and implementation. Our role is to make sure we take all appropriate steps to safeguard essential data, while optimally harnessing the transformative power of AI."
The primary focus of any HR team is its people, and our GenAI adoption policies are designed to keep people firmly in the picture even as we strive to make their jobs more efficient and fulfilling. Here at Forcepoint, we started early but proceeded deliberately in adding AI to our toolkit, taking care to add new capabilities only once we were confident in the safety of our people and their data.
At Forcepoint, we have the advantage of being able to apply our own Forcepoint GenAI Security Solution to our HR functions. This technology, with its unparalleled data classification accuracy, ensures the security of our sensitive HR data across GenAI applications like ChatGPT, Copilot, Gemini and more. The centralized visibility and control offered by Forcepoint not only boosts our productivity but also simplifies operations and unifies policies. The Zero Trust-based technology reduces risk by preventing data breaches, regardless of where our HR team members are located.
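To illustrate the kind of guardrail this enables, here is a minimal, hypothetical sketch of a pre-send check that classifies a prompt and blocks sensitive HR data before it reaches an external GenAI tool. This is not the Forcepoint product's API; the labels, patterns and policy rules below are assumptions for illustration only.

```python
# Hypothetical sketch: a pre-send guardrail that classifies prompt text
# before it is forwarded to an external GenAI application.
# Labels, patterns and policy are illustrative assumptions, not the
# Forcepoint GenAI Security Solution's actual API.
import re
from dataclasses import dataclass

@dataclass
class Verdict:
    allowed: bool
    label: str
    reason: str

# Simple pattern-based classifier standing in for a real
# data-classification engine (which would use ML, fingerprinting, etc.).
SENSITIVE_PATTERNS = {
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMPLOYEE_ID": re.compile(r"\bEMP-\d{5}\b"),
    "SALARY": re.compile(r"\bsalary\b|\bcompensation\b", re.IGNORECASE),
}

BLOCKED_LABELS = {"US_SSN", "EMPLOYEE_ID"}   # hard block
COACHED_LABELS = {"SALARY"}                  # warn, but allow

def check_prompt(prompt: str) -> Verdict:
    """Classify a prompt and decide whether it may leave the company."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            if label in BLOCKED_LABELS:
                return Verdict(False, label, "Blocked: sensitive HR data detected")
            if label in COACHED_LABELS:
                return Verdict(True, label, "Allowed with warning: review before sending")
    return Verdict(True, "NONE", "Allowed: no sensitive data detected")

print(check_prompt("Draft an offer letter for EMP-12345"))
print(check_prompt("Summarize our PTO policy for new hires"))
```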
We hope that organizations looking to achieve AI transformation will follow our example of careful planning when it comes to responsibly enjoying the many benefits of AI and keeping humans at the core of the HR mission.
Chief People Officer
Emilie McLaughlin is the Chief People Officer at Forcepoint. She is responsible for leading Forcepoint’s overall human resources strategy aligned with business goals, global talent acquisition, diversity and inclusion, organizational design and cultural development, leadership development,...
Chief Legal Officer
John D. Holmes is Chief Legal Officer and Corporate Secretary at Forcepoint. As Chief Legal Officer, John leads the company’s legal and regulatory affairs, intellectual property creation and protection, litigation, M&A, ethics, and compliance...