
AI is Ever-Present. Are You Doing Enough to Stay Secure?

As we move further into the Gen AI world, it has become clear that we cannot stop users from leveraging the various flavors of chat-based LLMs (Large Language Models). Tools like ChatGPT, Copilot, Gemini, Claude, and even DeepSeek are becoming an integral part of many of our daily lives.

As security leaders approaching this new technology, we mostly see risk: another opportunity for a breach. Our users have always been the last, and potentially the weakest, line of defense in our security stack. The pressure to leverage AI is beginning to outweigh our concerns as security leaders, and that should scare us. How do we balance innovation with risk mitigation?

So, we are left with a problem that does not have a clear solution.

The Dilemma: Can You Truly Block AI?

With our existing security tools, we can simply block user access to AI tools on our corporate assets, or we can attempt to leverage a DLP engine to prevent certain data types from being uploaded to unsecured AI programs.
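
To make that concrete, here is a minimal sketch of what a DLP-style gate amounts to. The patterns and function below are purely illustrative, not any vendor's engine; the point is that the verdict is still a blunt allow-or-deny:

```python
import re

# Illustrative DLP-style check: deny an outbound upload if it appears to
# contain a sensitive data type. These patterns are hypothetical examples;
# real DLP engines are far more sophisticated, but the decision is binary.
BLOCKED_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def allow_upload(payload: str) -> bool:
    """Return False if the payload matches any blocked data type."""
    return not any(p.search(payload) for p in BLOCKED_PATTERNS.values())

print(allow_upload("Summarize our Q3 roadmap"))            # True  (allowed)
print(allow_upload("Customer SSN is 123-45-6789, help."))  # False (denied)
```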

But neither of these options really gets us what we need. Ultimately, users will find a way to use AI on their mobile devices or on their computers at home. These solutions only "hide" the problem.

Reflecting on the Past: Parallels with Old-School Firewalls

It reminds me of a time not that long ago when our firewalls were simply "allow" or "deny." We had no context; we just allowed a port or we blocked it. We could potentially specify a group of users or an IP range, but ultimately we were blind to what was happening on ports 80 and 443 if we allowed them.

Then one day – like magic – application-aware firewalls, also known as Layer 7 firewalls, entered the market and completely changed our ability to see what was happening. Suddenly, policy control was possible.

Now, I believe we’re witnessing a similar revolution with the advent of prompt inspection software.

A Paradigm Shift: Introducing the “9th Layer” of Security

In my 17-year career, I’ve only experienced one truly world-shifting, jaw-dropping moment before: the first time I saw a Palo Alto firewall dissect network traffic.

Recently, a demo of these new AI security tools sparked that same awe. This tool effectively adds a “9th layer” of security, targeting the user’s interactions with AI before any data leaves their device. (The user has always been the 8th layer of security.)

Prompt.security, Apex Security, and MagicMirror are some of the more notable tools offering a much-needed layer of security around the explosion of AI usage in organizations.

Let’s Explore an Example.

Imagine you are a bank. You want to allow your employees to leverage AI for productivity, but you have some regulated departments – such as stock traders – who simply cannot use AI to assist them in their job.

In this scenario, your instinct may be to ban AI use altogether. But rather than an outright ban, prompt inspection can allow access while filtering out specific requests, such as attempts to get stock advice, thereby enforcing granular security policies.
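
As a rough sketch of how that could look (the department labels, patterns, and verdicts below are hypothetical, not any vendor's actual policy engine), prompt inspection evaluates each request against per-department rules before it ever reaches the model:

```python
import re

# Hypothetical prompt-inspection policy: AI access is allowed by default,
# but specific request types are filtered for regulated departments.
POLICIES = {
    "trading": [
        re.compile(r"\bstock (advice|tip|pick|recommendation)s?\b", re.I),
        re.compile(r"\b(should i|help me) (buy|sell|short)\b", re.I),
    ],
}

def inspect_prompt(department: str, prompt: str) -> str:
    """Return a verdict for a prompt before it is forwarded to the LLM."""
    for rule in POLICIES.get(department, []):
        if rule.search(prompt):
            return "BLOCK"  # violates a department-level rule
    return "ALLOW"          # everything else passes through

print(inspect_prompt("trading", "Give me stock advice on ACME"))  # BLOCK
print(inspect_prompt("trading", "Draft a client status update"))  # ALLOW
print(inspect_prompt("marketing", "Suggest five blog titles"))    # ALLOW
```

The outcome is exactly the policy described above: the bank keeps AI available for productivity while the regulated requests never leave the building.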

Let’s Consider Another Scenario.

Imagine you have a team of developers using Visual Studio. Predictably, they all want to install AI helpers, and the business wants them to; these assistants help developers code faster and more efficiently. But as security leaders, we recognize that AI access to proprietary source code poses risks: exposure of intellectual property, API keys, and sensitive credentials.

So, what do we do? Even if you take a hard-line stance and disallow AI access, developers may work around you and use it on their personal machines instead. But with prompt inspection, we can now inspect what the AI helper receives before any information leaves the developer's machine. We can monitor and even redact information before it reaches the AI, providing a safeguard without stifling innovation.
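
Here is a simplified sketch of that redaction step, run client-side before the prompt is sent. The patterns are illustrative assumptions; a real tool would combine entropy checks, classifiers, and much more:

```python
import re

# Hypothetical client-side redaction pass: scrub likely secrets from a
# prompt before it leaves the developer's machine.
SECRET_PATTERNS = [
    # key = value style assignments for common credential names
    (re.compile(r"(?i)\b(api[_-]?key|secret|token|password)\b\s*[:=]\s*\S+"),
     r"\1=[REDACTED]"),
    # strings shaped like AWS access key IDs
    (re.compile(r"\bAKIA[0-9A-Z]{16}\b"), "[REDACTED_AWS_KEY]"),
]

def redact(prompt: str) -> str:
    """Return the prompt with anything resembling a credential masked."""
    for pattern, replacement in SECRET_PATTERNS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

snippet = 'Why does auth fail here? api_key = "sk_live_abc123"'
print(redact(snippet))
# -> Why does auth fail here? api_key=[REDACTED]
```

The developer still gets an answer about the failing auth flow; the credential never reaches the model.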

The Balance of Security and Innovation

For me, this feels like a big moment in time. Today, I'm reflecting on a few specific tools: Prompt.security, Apex Security, and MagicMirror. But just like Palo Alto Networks was the first of many in their category, I suspect many new prompt inspection tools will enter the market. Just take a peek at the list of AI tools at RSA Conference 2025.

More importantly, this is an opportunity for security leaders to go from the department of "no" to the department of "yes."

Let’s start a conversation. How do you see prompt inspection reshaping the cybersecurity landscape in our AI-driven future? Contact us to keep the discussion going!

Want More?

Join our Webinar, Innovation 2025: What to Look Out for at RSA Conference This Year.

Disclosure: I am not being paid by Prompt, Apex, or MagicMirror to write this or promote their technology. However, I am an employee of a consulting firm that resells these tools.

About the Author

Josh Johnson is a Pre-Sales Architect at Tevora.
