Chris Inglis on Insider Threats, Snowden, and the Power of Behavioral Analytics
When most people picture a cybersecurity discussion, the British Museum probably doesn’t come to mind. Yet, on a recent Tuesday, the historic venue hosted a press roundtable featuring Chris Inglis, former deputy director of the National Security Agency (NSA), alongside representatives from Securonix, a security intelligence platform provider. The topic? The ever-evolving landscape of insider threats—a challenge that continues to plague organizations worldwide.
Inglis, drawing on decades of experience at the NSA and reflecting on the fallout from the Edward Snowden revelations, offered a rare glimpse into how behavioral analytics can help detect and mitigate these risks. His message was clear: traditional security measures are no longer enough.
The Growing Danger of Insider Threats
According to Inglis, the digital age has amplified the potential damage any single insider can cause. “People in possession of computers and network systems today have an opportunity to cause much greater harm in a much faster period of time than they once did,” he said. This shift demands a new approach—one that moves beyond simple vetting and trust.
He argued that organizations can no longer rely solely on perimeter defenses or periodic checkpoints. Instead, they must adopt a real-time understanding of what users are doing with sensitive data. “You have to have some understanding of what’s happening to the data now, in real time,” Inglis emphasized. “That means you have to have data about data—and analytics that can make sense of it.”
Building on this, he stressed that the goal isn’t just to react or track behavior after the fact. “The goal isn’t to react well, or even to track well, it’s to anticipate; to see these things coming and step in before the disaster occurs.”
Behavioral Analytics: The Key to Early Detection
So, how can organizations spot an insider threat before it’s too late? Inglis pointed to detailed user analytics as the linchpin. By monitoring patterns—such as unusual data access, off-hours logins, or excessive downloads—companies can identify anomalies that signal malicious intent or accidental risk.
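The patterns Inglis describes—off-hours logins, excessive downloads, deviations from a user's own baseline—can be sketched as a simple per-user anomaly check. The code below is a minimal illustration, not a production detector; the event records, thresholds, and the `flag_anomalies` helper are all hypothetical, and real deployments would use far richer features and models.

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical audit events: (user, hour_of_login, megabytes_downloaded)
events = [
    ("alice", 9, 120), ("alice", 10, 95), ("alice", 9, 110),
    ("alice", 11, 130), ("alice", 3, 4800),  # off-hours, unusually large pull
    ("bob", 14, 60), ("bob", 15, 70), ("bob", 13, 65), ("bob", 14, 55),
]

def flag_anomalies(events, z_threshold=2.0):
    """Flag events that deviate sharply from the user's own baseline:
    either a high download-volume z-score, or an off-hours login with
    above-average volume."""
    by_user = defaultdict(list)
    for user, hour, mb in events:
        by_user[user].append((hour, mb))

    flagged = []
    for user, recs in by_user.items():
        volumes = [mb for _, mb in recs]
        if len(volumes) < 3:
            continue  # too little history to establish a baseline
        mu, sigma = mean(volumes), stdev(volumes)
        for hour, mb in recs:
            off_hours = hour < 6 or hour > 22
            z = (mb - mu) / sigma if sigma else 0.0
            if z > z_threshold or (off_hours and mb > mu):
                flagged.append((user, hour, mb))
    return flagged

print(flag_anomalies(events))  # only alice's 3 a.m. bulk download stands out
```

Note the design choice: each user is compared against their *own* history rather than a global norm, which is what distinguishes behavioral analytics from a blanket threshold that would penalize legitimately heavy users.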
However, this raises an uncomfortable question: When we start collecting data on employee behavior, are we crossing ethical boundaries? Inglis didn’t shy away from this. “They absolutely do,” he replied when asked if companies have an obligation to be transparent. “You can’t incur on their sense or expectation of privacy without justifying that and having a full conversation about that.”
He noted that the hardest conversation isn’t with the potential “Edward Snowdens” of the world—it’s with the 99.99% of employees who are trustworthy. “The internal population, as much as the external population, has a right to know that they are applying their time and talent to something that is properly controlled.”
Striking a Balance Between Security and Privacy
This brings us to a central tension in modern cybersecurity: how do you protect sensitive data without alienating your workforce? Inglis advocates for raising the ethical threshold. “Let’s really get at the things that are security relevant, because we are imposing on the privacy of individuals, most of whom are simply trying to make a positive difference.”
He warned against treating all employees as potential threats. “In our pursuit of the 1%, or the one in a million in Snowden’s case, we can’t abuse the 99%. We have to keep both entities in mind.” This means designing monitoring programs that encourage inspired work rather than squeezing it out.
Distinguishing Malicious Insiders from Accidental Risks
Another critical issue Inglis addressed is the difference between a malicious insider—someone who intentionally causes harm—and a user who poses a risk simply because they don’t know any better. Asked whether companies fully understand this distinction, he was blunt: “Well, not enough, clearly. Are they starting to get it? Yes—they are increasingly getting it.”
This distinction matters because the response differs. A malicious actor may require termination or legal action, while an accidental risk might benefit from training or policy changes. By leveraging behavioral analytics, organizations can tailor their responses and avoid unnecessary friction with well-meaning employees.
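That tailoring logic can be made explicit as a small triage table. The categories and responses below are illustrative assumptions, not any real incident-response policy; the `triage` function simply shows how intent and evidence strength might drive proportionate action.

```python
def triage(intent: str, evidence: str) -> str:
    """Map an insider-risk classification to a proportionate response.
    Categories and playbook entries are hypothetical examples."""
    playbook = {
        ("malicious", "confirmed"): "suspend access; escalate to legal and HR",
        ("malicious", "suspected"): "restrict access; open an investigation",
        ("accidental", "confirmed"): "retrain user; review policy gaps",
        ("accidental", "suspected"): "notify manager; continue monitoring",
    }
    # Unrecognized combinations default to the least intrusive option,
    # consistent with not treating every employee as a suspect.
    return playbook.get((intent, evidence), "log and continue monitoring")

print(triage("accidental", "confirmed"))  # retrain user; review policy gaps
```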
Lessons from the Snowden Case
The Snowden revelations remain a watershed moment for insider threat management. Inglis, who was at the NSA during that period, noted that the case highlighted systemic failures in monitoring and trust. Snowden was a privileged user with access to vast amounts of classified data—and he exploited that trust without triggering any internal alarm before going public.
Inglis’s takeaway? Organizations must continuously verify trust, not just grant it once. “You can no longer simply defend perimeters or checkpoints and assume that any mischief inside will be caught at the margins.” Real-time analytics, combined with transparent policies, offer a path forward.
Conclusion: A Call for Ethical Vigilance
As cybersecurity threats evolve, so must our defenses. Chris Inglis’s roundtable discussion underscores the importance of using insider threats as a lens to rethink security—not just as a technical challenge, but as an ethical one. By combining robust analytics with respect for employee privacy, companies can protect their data without sacrificing trust.
Ultimately, the goal is not to catch every bad actor after the fact, but to create an environment where threats are anticipated and neutralized—while the 99% continue to do their best work.