In our previous piece in this series, we discussed the remarkable speed with which AI-powered tools have vaulted into the public consciousness — and how public enthusiasm and an extraordinarily rapid rate of adoption raise significant privacy and information security concerns.
In this follow-up, we’ll cover some of the specific threats to organizations and review some best practices for safely implementing AI and instituting policies around responsible AI use.
Powerful appeal
The accessibility and connectivity of large language model (LLM) platforms are, in and of themselves, a vulnerability. Easy access to such powerful tools is made even easier in a world where most people carry what is essentially a supercomputer in their pockets. Mobile devices, desktops, laptops, and the growing popularity of remote work all make controlling access even trickier.
Couple that accessibility with a widespread lack of basic cybersecurity understanding and the inherent vulnerabilities of LLMs like ChatGPT, and it's a recipe for potential trouble. Because information can be cut and pasted into a simple user interface, anyone can feed detailed, complex prompts into any of several popular LLM platforms in seconds.
All of this needs to be considered in the context of tools that aren't just easy to use but are also enormously powerful. The rapid growth and expanding capabilities of these platforms make it all too easy to forget that they lack the safeguards required to protect private or sensitive information.
Concerns and complexities
LLMs learn over time based on the quality and quantity of their input. That simple fact creates a dangerous dynamic: users quickly find that the more (and more detailed) information they provide, the better the result. In a sense, the systems and the people who build and use them are incentivized to pour more water into an endless, unfillable public bucket. From commercially sensitive information to personal customer data, exposure can happen easily and inadvertently.
Data privacy concerns around technology are hardly new and certainly not exclusive to AI platforms. Social media comes with its own complications. However, the majority of data shared across social media is not professionally sensitive — and ChatGPT and similar tools make no distinction between personal and work data. Early iterations of ChatGPT were expressly designed not as an enterprise solution, but as a tool for consumers. One of the most popular uses for these tools right now is idea generation, which is fine when the AI is handling personal queries like “Where should I go on vacation?” but more concerning when prompts move into the professional realm (e.g., “How do I sell my product?”).
Unsurprisingly, professionals began using these tools in myriad ways, from reviewing software code to summarizing documents. But putting source code or potentially sensitive documents into a generative AI platform is risky business, because that material is now effectively in the public realm. It doesn't help that the legal and regulatory guidelines around these fast-evolving tools are not only changing rapidly but are also enormously complex. From data privacy and consent laws to thorny moral, ethical, and legal ramifications, the landscape around responsible use of data in general and LLM-based AI tools in particular is extremely difficult to navigate.
The questions around AI tools and privacy are further complicated by the fact that the search for an enterprise equivalent to ChatGPT has produced three functionally distinct categories of AI-based tools. Each has its own advantages and disadvantages when it comes to utility and information security.
Companies like Adobe and Microsoft have products in the pipeline that are much more secure, with built-in data protections. These are not likely to be perfect, but they will almost certainly be preferable to existing platforms in terms of privacy safeguards. Other companies (especially tech companies) are using these tools in their own environments, putting a “fence” around them to prevent sensitive information from being exposed. Brands like Meta and X are starting from scratch and creating their own versions of AI platforms. For many users, understanding the protections and vulnerabilities of each of these types of tools will be challenging.
Artificial intelligence — real security
Right now, the best thing decision-makers can do to protect their data and their customers' private information, and to limit their liability and exposure through AI tools, is to take meaningful steps to create clear, effective policies and protocols around AI usage.
Here’s how:
- First, make sure you have a strong grasp of your current data policies. Form an experienced group of subject matter experts, led by someone with the skill set to drive the discussion. Bring all stakeholders to a common level of knowledge before undertaking an AI project.
- Remember that this is not a one-time discussion; it is a living, evolving process. That team should guide executive leadership as they engage in meaningful discussions and make decisions. Include the C-suite as much as possible throughout.
- Make sure this group is fully equipped to understand advanced concepts and use cases, as well as the complexities around platform integration and adaptation. Understanding the technology is one thing. Deciding what you should and shouldn't do is another. But making sure that you don't expose the company or its clients legally, commercially, morally, or ethically is where the rubber meets the road.
- Finally (and this is where many organizations fall short), establish clear guidelines and engage in productive dialogue with your team. Because individuals will move far faster than an organization can get out in front of issues like this, it's critical to be proactive about clearly articulating and communicating guidelines and guardrails.
The bottom line? Have a group, have a direction, and have a plan to stay proactively engaged at all levels in planning, process, and ongoing dialogue about these rapidly evolving issues.
Guidelines and red lines
Specific policies, priorities, and protections to consider when laying out usage protocols for ChatGPT and other similar platforms include:
Clear explanations
Explain to your workforce what you're doing and why. A lack of dialogue always does more harm than good, especially when people are highly motivated to use these tools. Let them know that you want to support them in doing so — but that you also need to ensure they can do it safely and responsibly. Keep things simple and straightforward: when making any policy, clarity and simplicity are key.
Mobile protections
Make sure that remote or work-from-home employees recognize that it doesn't matter whether they are at the office or at home — these tools make no distinction between the two. The same guidelines need to be in place regardless of whether they are accessing these platforms on a work desktop or on a mobile device at home.
Keep it personal
Let employees know that if they are using a platform like ChatGPT, they should set their account up with an email address that is not connected to their work. It's a small step that can make it harder for outside parties to build a profile and draw connections to your company.
Set bright red lines
Make sure there are bright red lines around what information employees can and cannot enter into these platforms. Everything from internal practices, procedures, and vernacular to client- or customer-related information must be strictly off-limits. Even seemingly innocuous details about how you conduct or structure your business can, directly or indirectly, be harmful. Remember, the moment one of these models “knows” something, it will tell. Anyone can ask, and it will tell them everything it knows. The psychologist is the new hacker: bad actors “interview” a tool until they get the information they seek. And once that information is out there, it's irretrievable. Those red lines don't work in retrospect. If you spill the secret sauce, you can't get it back in the bottle. Even worse, anyone and everyone can have access to the recipe.
Catch up on the conversation by reading The AI Era: Part One. Interested in enhancing your security plan for the age of AI? Pinkerton is here to help.