Few topics have dominated the headlines in 2023 like artificial intelligence, or AI. For corporate security professionals, AI is both a headache and an exciting landscape of opportunity, filled with new tools and capabilities.

While there has understandably been a lot of chatter about AI (including a sizable number of “think pieces” and unproductive commentary), there has been a noticeable lack of concrete, constructive discussion about what responsible and effective use of AI in security could actually look like. In future pieces, we will begin to dig into that detail, tapping the expertise of subject matter experts and security professionals who are engaging with the challenges and opportunities that come with AI.

To set the stage, however, we need to begin with a closer look at why it’s so important for the security industry to engage with AI, and what some of the big questions facing brands, businesses, and security professionals mean in terms of real-world considerations, complications, and consequences.

Why the sudden interest in artificial intelligence?

For the casual observer, AI has seemingly burst onto the scene, making headlines and generating a lot of attention. But the reality is that AI-based tools and tech have been evolving for years and are already in wide use in countless everyday applications, from consumer tools like spell checkers and mapping apps to complex business algorithms. The attention-getting accessibility and utility of ChatGPT have simply raised the profile of AI and machine learning, making the extent of their technical progress and sophistication more evident to the general public.

Facial recognition highlights potential AI issues

To understand how complex the landscape of AI integration can be, consider one intriguing and powerful AI-powered tool already in use in many places around the world: facial recognition. While the technology is still not 100% accurate, and most facial recognition systems remain cost-prohibitive for many potential users, the appeal from a security perspective is understandable, and it will only grow as costs go down and accuracy goes up. Imagine, for example, receiving an automated alert whenever an unauthorized visitor enters a restricted area of a facility.

But decision-makers need to be ready to answer the privacy and “Big Brother” questions that will inevitably arise. The mere presence of cameras in the workplace needs to be carefully considered and discussed, regardless of their role in an AI system. Legal limitations, rules, and standards differ around the world, but generally speaking, a camera positioned in a parking lot is almost always going to be permissible and will genuinely improve security. Placing cameras inside a workplace can be something else entirely, not only from the standpoint of regulatory compliance and legality but also in terms of employee perception and collective bargaining rights.

Use of facial recognition technology can also raise other thorny issues. Consider a real-world example in which items were being taken from desks at a company. An enterprising engineer, feeling that HR and site security were not being proactive enough, set up a camera to monitor his workstation. As it turns out, the camera’s position gave it a view of the hallway that included the entrance to the women’s restroom, which is obviously problematic.

In all scenarios where new AI-powered tools are under consideration, decision-makers need to play devil’s advocate and ask detailed questions about usage and necessity. In the example above, that would include questions like: Is there a genuine need to install a camera for a facial recognition system? Is it legally permissible to install a camera in this location? Will it detect security threats and improve incident response time? How long will that camera be in place? Are there collective bargaining agreements in place that may limit the ability to use the camera? How can we be sure it isn’t in a place where it will make employees uncomfortable?

Balancing innovation urgency with thoughtful diligence

Implementing powerful new security tools always needs to be done thoughtfully; unfortunately, it often is not. The decision-making process around acquiring, implementing, and utilizing new AI tech should also be a collaborative one, conducted in conjunction with legal, HR, labor representation, IT, and other relevant parties. Security professionals and executives should be transparent about their reasoning for using the tool, the capabilities of the technology, the timelines and duration of usage, and so on. Clear and consistent communication with employees and third parties is a must, and often legally required, up to and including signage or other regular reminders or notifications about any intrusive tech. Retention and record-keeping practices also need to be reviewed and updated when adding new data sources, such as cameras.

The reality is that almost all AI tools should be approached in much the same deliberate and thoughtful fashion, especially because many of these new and emerging tools either blur or cross longstanding boundaries of privacy or propriety. Exuberance about the power of these admittedly exciting tools can lead to flawed decision-making and overly hasty deployment.

The trick is going to be balancing that caution with the drive for valuable security innovation, especially at a time when there is an arms race between bad actors and security teams. Security applications for AI can be hugely beneficial in countless ways, from scanning email for phishing attempts and other malicious software to flagging worrisome credit card charges. Some leading AI tools are collecting crime reports and 911 calls and providing uniquely detailed, nuanced assessments of crime and risk with geographical precision. But AI is ultimately agnostic about whom it serves, as we have seen in the increasing prevalence of scams that use AI-powered voice-mimicking software to trick people into sending money.

Momentum and education for security professionals

The nature of this technology and its learning capabilities mean that AI can generate its own momentum, driving both the sophistication and the spread of AI-based tools. The arc of innovation is likely to be exponential, and there will be inevitable growing pains. The characteristics of this type of tech, and the seductive nature of the power it brings, both current and potential, mean that the speed of adoption will likely outpace thoughtful examination of privacy concerns and even legal considerations.

With that in mind, corporate security teams need to be proactive, educating themselves so they can in turn educate their clients and professional partners on best practices and the constantly changing landscape of regulatory requirements, which differ by locality. The pace of advancement in this unique tech sector will be unforgiving to security professionals who fail to do so. Several universities offer AI immersion courses, many of which can be taken virtually; they cover not just what AI is and what it means for organizations but also the terms of the debates surrounding these issues. Some traditional corporate security professionals may tend to avoid especially tech-heavy topics, but they simply cannot afford to do so with AI. Because AI is already so prevalent and will only become a more pivotal piece of the risk profile and the security puzzle, staying ahead of the industry curve on this topic is not a luxury so much as a professional necessity.

Published January 24, 2024