The wave of AI mania that has swept the globe since ChatGPT's introduction in 2022 has dramatically thrust a technology long used primarily behind the scenes into the public consciousness.
For organizations in every vertical, the implications are potentially far-reaching. AI-powered tools and platforms are alluring in their capabilities and concerning in their potential for misuse, and there are very real security and privacy questions organizations need to weigh when adopting AI into their workflows. In this three-part blog series, we'll unpack the buzz around AI and examine both the risks and the opportunities the technology presents. In parts two and three, we'll look at those risks in more detail and explain what companies can and should be doing to protect themselves from AI threats.
But first, let's consider how we got here, and why decision-makers should be extremely cautious about whether, when, and how they integrate AI-powered tools into their workflows.
Historic and unavoidable
The accessibility of AI, and specifically of large language model (LLM) platforms like ChatGPT, is a major breakthrough. Countless organizations, including familiar names like Amazon and Netflix, have long used AI in a variety of ways, and most adults have carried AI tools in their pockets through their smartphones, yet ChatGPT felt different. It made the power of AI feel accessible to the average person in a way that resonated, both because of its conversational interface and because of the way the platform “understands” and interprets data. As a result, ChatGPT saw the fastest widespread adoption of any new technology in history in terms of sheer user numbers. This is a game-changing development. AI already seems to be everywhere, but it won't be long before it is unavoidable, and as the technology continues to spread, organizations that successfully integrate AI will see competitive advantages.
Exponential growth
Ironically (or perhaps appropriately) enough, the large number of plugins and industry-specific add-ons being rolled out for platforms like ChatGPT aren't just making these tools more useful to growing numbers of professionals; they are also fueling the platforms' growth. Because of the nature of LLMs, as those uses multiply, the power, accuracy, and utility of the platform grow accordingly. With vast amounts of real-world data being fed into the system, the AI model keeps improving, becoming more prevalent and more powerful at the same time. It's a feedback loop that also makes the potential security risks more concerning.
Concerns and vulnerabilities
The inherently connected nature of LLM platforms is, in and of itself, a vulnerability. As security professionals, we spend a lot of time making sure we understand the granular details of how these tools work. Where does the data come from? Where is it stored? How is it used? What, if any, privacy protections are in place? Something as routine as an employee using an AI platform to brainstorm or answer a quick question could result in commercially sensitive or personally identifiable information ending up in a prompt.
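To make that risk concrete, here is a minimal, purely illustrative Python sketch of the kind of pre-submission check an organization might place in front of an external AI service. The patterns and function names are hypothetical, and a real deployment would rely on a proper data-loss-prevention (DLP) tool rather than a handful of regular expressions.

```python
import re

# Hypothetical patterns for a few obvious identifiers; illustrative only.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return the categories of sensitive data detected in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

prompt = "Draft an apology email to jane.doe@example.com about invoice 4417."
findings = flag_sensitive(prompt)
if findings:
    # Block or redact before the text ever leaves the organization.
    print(f"Prompt blocked; contains: {', '.join(findings)}")
else:
    print("Prompt cleared for submission.")
```

The point of the sketch is not the specific patterns but the placement of the control: the check happens before anything is sent to a third-party platform, which is exactly where the data-exposure risk described above arises.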
Because these new AI tools boost productivity, they carry a corresponding potential for misuse: they can just as readily increase the output of bad actors or the impact of a threat, sometimes in surprising ways. In one recent ransomware example, a bad actor offered to sell allegedly compromised data back to a company. The catch? The data was fake, generated by AI and convincingly made to look like real stolen data.
Accessibility is another concern. Unlike a nuclear weapon or other high-level threat, this kind of AI tool doesn't require highly specialized expertise to do great harm; one reasonably tech-savvy person could potentially wreak havoc.
Security, utility, policy
The popularity of this new generation of AI tools, and the allure of the power and potential they offer, is driving virtually every chief information officer (CIO) and decision-maker to think about how to make AI work for them. The pressure is on for organizations that feel (probably correctly) that they will need to use AI to their advantage to remain competitive, if not today, then certainly in the not-too-distant future.
Most decision-makers correctly recognize the need to adopt AI at the enterprise level rather than leaving employees to use it on their own. But how? And how can they do so responsibly, in a way that addresses security concerns? Those questions get asked about every new technology, but AI is different, both in its complexity and in the often unexpected ways users interact with it. The combination of extraordinary power and a lack of widespread understanding of AI tools' vulnerabilities is a formula for concern.
At Pinkerton, talking to our corporate clients about the risks and opportunities associated with AI tools is an increasingly important part of the services we offer. What policies do they need to put in place? What safeguards do they need? How can they take a pragmatic and informed approach to AI tech in the workplace and responsibly integrate it into their workflows? How can they protect against an employee who, trying to solve a problem or answer a question, picks up a smartphone, turns to AI, and unwittingly causes a security exposure?
What is needed is a plan with detailed guidelines about responsible usage (who, where, how, and when) and specific instructions about how that plan will be monitored and policed. In future installments in this series, we'll explore what that plan might look like, along with the other best practices organizations should be implementing for security optimization and privacy protection.
Stay tuned for the next two installments of our series, The AI Era, where we'll provide deeper insights into AI threats and protective strategies for your company.