
Infosecurity

EU Cybersecurity Rules: Why Global Regulators Must Act Now on Digital Resilience


The European Union’s landmark agreement on cybersecurity rules has sent a clear signal to the world: protecting critical infrastructure is no longer optional. These EU cybersecurity rules, agreed in late 2015, mandate that companies in energy, transportation, health, and banking fortify their systems against attacks and report significant breaches. The move marks the first time the EU has directly legislated on cybersecurity, reflecting the sharp rise in cyber incidents.

What makes this regulation so significant? For one, it acknowledges that cyber threats now have physical consequences. As software and control systems become deeply integrated, a single breach can disrupt power grids, halt trains, or compromise patient data. The EU, as one of the world’s largest economies, is setting a precedent that others must follow.

The Urgent Need for Digital Resilience

Building digital resilience requires more than just identifying key operators and raising their security standards. The EU cybersecurity rules rightly emphasize notification of incidents, but reporting a breach is often too late. The real goal must be to reduce overall risk to public safety through preventive measures.

Therefore, regulators must mandate controls across the full spectrum—prevention, detection, response, and recovery. This includes requiring vendors of critical infrastructure to embed security from the ground up. Trust must be stamped into hardware and software from inception, with systems hardened and encrypted where appropriate.

Lessons from the EU for Global Cybersecurity Cooperation

The interconnected nature of digital networks means a threat to one nation is a threat to all. This is why the EU cybersecurity rules offer a positive example of what can be gained through closer alliance. However, the challenge lies in implementation. The internet was never built for security, and the field of cybersecurity law is still evolving.

As a result, any new regulations must walk a tightrope: they need to be robust enough to force action but flexible enough to keep pace with technology. For instance, the EU’s rules began as a proposal in 2013 and will only become law this year. In that time, computing power has roughly doubled, if Moore’s law is any guide. This lag highlights the need for agile regulatory frameworks.

Preventive Technologies: The Core of Cyber Threat Prevention

Effective cyber threat prevention goes beyond compliance. It requires a holistic approach that integrates cybersecurity operations with national and global regulations. Governments and companies must anticipate both current and upcoming rules, adapting them to specific needs—from executive oversight to procedural controls and technological implementation.

Moreover, reporting a security breach is only part of the battle. We need to protect the confidentiality and integrity of entire systems with preventive technologies. Should an incident occur, the response must be swift enough to remediate vulnerabilities before adversaries exploit them.

What Other Regions Can Learn from the EU

Countries in the GCC and beyond should watch the EU’s unfolding regulations closely. These rules enhance security not just for EU nations but also for trading partners. For example, DarkMatter advocates for truly integrating cybersecurity with global regulations, a stance that aligns with the EU’s approach.

In addition, regulators must consider that the internet is less than 30 years old and was never built for security. It’s only in the last two decades, as it became a platform for global commerce, that this became a fundamental concern. Therefore, the time to effect these changes is now.

To explore more on this topic, read our guide to cybersecurity trends or learn about critical infrastructure protection strategies.

Ultimately, the EU cybersecurity rules are a vital step. But they must be implemented with precision, ensuring that technology advances do not outpace the laws meant to protect us.


Chris Inglis on Insider Threats, Snowden, and the Power of Behavioral Analytics


When most people picture a cybersecurity discussion, the British Museum probably doesn’t come to mind. Yet, on a recent Tuesday, the historic venue hosted a press roundtable featuring Chris Inglis, former deputy director of the National Security Agency (NSA), alongside representatives from Securonix, a security intelligence platform provider. The topic? The ever-evolving landscape of insider threats—a challenge that continues to plague organizations worldwide.

Inglis, drawing on decades of experience at the NSA and reflecting on the fallout from the Edward Snowden revelations, offered a rare glimpse into how behavioral analytics can help detect and mitigate these risks. His message was clear: traditional security measures are no longer enough.

The Growing Danger of Insider Threats

According to Inglis, the digital age has amplified the potential damage any single insider can cause. “People in possession of computers and network systems today have an opportunity to cause much greater harm in a much faster period of time than they once did,” he said. This shift demands a new approach—one that moves beyond simple vetting and trust.

He argued that organizations can no longer rely solely on perimeter defenses or periodic checkpoints. Instead, they must adopt a real-time understanding of what users are doing with sensitive data. “You have to have some understanding of what’s happening to the data now, in real time,” Inglis emphasized. “That means you have to have data about data—and analytics that can make sense of it.”

Building on this, he stressed that the goal isn’t just to react or track behavior after the fact. “The goal isn’t to react well, or even to track well, it’s to anticipate; to see these things coming and step in before the disaster occurs.”

Behavioral Analytics: The Key to Early Detection

So, how can organizations spot an insider threat before it’s too late? Inglis pointed to detailed user analytics as the linchpin. By monitoring patterns—such as unusual data access, off-hours logins, or excessive downloads—companies can identify anomalies that signal malicious intent or accidental risk.
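The kind of pattern-based flagging Inglis describes can be sketched in a few lines. The event shape, thresholds, and rules below are illustrative assumptions for this article, not a production detection model:

```python
from collections import defaultdict
from datetime import datetime
from statistics import median

# Hypothetical access log: (user, timestamp, bytes_downloaded)
events = [
    ("alice", datetime(2016, 5, 2, 10, 15), 120),
    ("alice", datetime(2016, 5, 3, 11, 5), 140),
    ("alice", datetime(2016, 5, 4, 3, 40), 9000),   # off-hours, unusually large
    ("bob", datetime(2016, 5, 2, 9, 30), 200),
    ("bob", datetime(2016, 5, 3, 14, 10), 180),
]

def flag_anomalies(events, work_hours=(8, 18), size_multiplier=5):
    """Flag off-hours logins and downloads far above a user's own baseline."""
    by_user = defaultdict(list)
    for user, _ts, size in events:
        by_user[user].append(size)
    flags = []
    for user, ts, size in events:
        # Access outside normal working hours is worth a second look.
        if not (work_hours[0] <= ts.hour < work_hours[1]):
            flags.append((user, ts, "off-hours access"))
        # Compare each download against the user's typical volume.
        baseline = median(by_user[user])
        if len(by_user[user]) > 1 and size > size_multiplier * baseline:
            flags.append((user, ts, "excessive download"))
    return flags

alerts = flag_anomalies(events)
```

Run against the sample log, only Alice's 03:40 session is flagged, once for the off-hours login and once for the download volume. Real behavioral-analytics platforms build far richer per-user baselines, but the principle is the same: anomalies are defined relative to each user's own history.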

However, this raises an uncomfortable question: When we start collecting data on employee behavior, are we crossing ethical boundaries? Inglis didn’t shy away from this. “They absolutely do,” he replied when asked if companies have an obligation to be transparent. “You can’t incur on their sense or expectation of privacy without justifying that and having a full conversation about that.”

He noted that the hardest conversation isn’t with the potential “Edward Snowdens” of the world—it’s with the 99.99% of employees who are trustworthy. “The internal population, as much as the external population, has a right to know that they are applying their time and talent to something that is properly controlled.”

Striking a Balance Between Security and Privacy

This brings us to a central tension in modern cybersecurity: how do you protect sensitive data without alienating your workforce? Inglis advocates for raising the ethical threshold. “Let’s really get at the things that are security relevant, because we are imposing on the privacy of individuals, most of whom are simply trying to make a positive difference.”

He warned against treating all employees as potential threats. “In our pursuit of the 1%, or the one in a million in Snowden’s case, we can’t abuse the 99%. We have to keep both entities in mind.” This means designing monitoring programs that encourage inspired work rather than squeezing it out.

Distinguishing Malicious Insiders from Accidental Risks

Another critical issue Inglis addressed is the difference between a malicious insider—someone who intentionally causes harm—and a user who poses a risk simply because they don’t know any better. “Well, not enough, clearly,” he argued when asked if companies fully understand this distinction. “Are they starting to get it? Yes—they are increasingly getting it.”

This distinction matters because the response differs. A malicious actor may require termination or legal action, while an accidental risk might benefit from training or policy changes. By leveraging behavioral analytics, organizations can tailor their responses and avoid unnecessary friction with well-meaning employees.

Lessons from the Snowden Case

The Snowden revelations remain a watershed moment for insider threat management. Inglis, who was at the NSA during that period, noted that the case highlighted systemic failures in monitoring and trust. Snowden was a privileged user with access to vast amounts of classified data—and he exploited that trust for years before detection.

Inglis’s takeaway? Organizations must continuously verify trust, not just grant it once. “You can no longer simply defend perimeters or checkpoints and assume that any mischief inside will be caught at the margins.” Real-time analytics, combined with transparent policies, offer a path forward.

For more insights on managing insider risks, check out our guide on insider threat prevention strategies and learn how to implement effective behavioral analytics tools in your organization.

Conclusion: A Call for Ethical Vigilance

As cybersecurity threats evolve, so must our defenses. Chris Inglis’s roundtable discussion underscores the importance of using insider threats as a lens to rethink security—not just as a technical challenge, but as an ethical one. By combining robust analytics with respect for employee privacy, companies can protect their data without sacrificing trust.

Ultimately, the goal is not to catch every bad actor after the fact, but to create an environment where threats are anticipated and neutralized—while the 99% continue to do their best work.


IoT Deployments Often Rely on Isolated Networks and Sub-Nets: How to Secure the Expanding Attack Surface


The Internet of Things (IoT) promises to revolutionise business processes, offering unprecedented efficiency and new ways to engage customers. However, a critical challenge lurks beneath the surface: isolated networks and sub-nets are becoming the norm in IoT deployments, creating a complex landscape for network defenders. According to a recent Quocirca report covering the UK and German-speaking regions, 68% of organisations already see IoT making an impact or expect it to do so soon. Yet, as deployments grow, so does the attack surface, and traditional security measures are struggling to keep pace.

Understanding IoT Sub-Nets and Their Security Implications

In many IoT deployments, devices are not directly connected to the corporate network. Instead, they operate on IoT sub-nets—isolated segments where communication flows through a central hub. For example, a well-configured home network places all smart devices behind a secure router. In enterprise settings, however, most IoT endpoints attach directly or indirectly to the main network. This creates a headache for security teams: a rapidly expanding attack surface that is difficult to monitor.
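As a rough illustration of the segmentation idea above, a monitoring script might check each observed device address against the planned segments. The address ranges and segment names here are hypothetical, chosen only to show the mechanics:

```python
import ipaddress

# Hypothetical segmentation plan: IoT endpoints live on their own sub-net,
# reachable only through a gateway on the corporate backbone.
IOT_SUBNET = ipaddress.ip_network("10.20.0.0/24")
CORPORATE_SUBNET = ipaddress.ip_network("10.0.0.0/16")

def classify_endpoint(ip: str) -> str:
    """Place an observed device address into the segment it belongs to."""
    addr = ipaddress.ip_address(ip)
    if addr in IOT_SUBNET:
        return "iot"
    if addr in CORPORATE_SUBNET:
        return "corporate"
    # Anything outside the planned ranges deserves investigation.
    return "unknown"
```

A device answering from `192.168.1.5`, say, would come back as `"unknown"`: evidence of exactly the unmonitored sprawl the article warns about.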

Network administrators often feel confident identifying and controlling traditional devices like PCs and printers. But as more unusual IoT gadgets—sensors, smart locks, environmental monitors—join the mix, the challenge intensifies. Many of these devices run lightweight operating systems such as TinyOS or Nano-RK, designed for low energy use and limited processing power. This means they cannot support standard endpoint security agents, leaving a gap in visibility.

The Agentless Security Challenge in IoT Deployments

One of the biggest hurdles in securing isolated IoT networks is the inability to install software agents on devices. In the past, when most network-attached devices ran Windows or Linux, agent-based management was feasible. However, the rise of BYOD (bring your own device) and guest access has already pushed organisations toward agentless approaches. Now, IoT compounds the problem: fewer than 4% of survey respondents said agentless support was unimportant, yet 12% still rely on specialist agents, while a staggering 72% depend on rudimentary controls such as network passwords or Wi-Fi keys.

This unsatisfactory situation explains why 45% of organisations plan to deploy new network security technology within 18 months. Among those expecting IoT to play a larger role, that figure jumps to 54%. The need for continuous, real-time visibility of every device—known or unknown—is urgent. Fortunately, Network Access Control (NAC) technology has evolved to meet this demand.

How NAC Technology Addresses IoT Security Gaps

NAC solutions have been used for years to identify and control how traditional IT devices join corporate networks. Now, vendors are adapting NAC for the IoT era. ForeScout Technologies, which sponsored Quocirca’s latest research, claims to lead this adaptation with agentless discovery and classification, automated policy-based controls, and integration with other security products. Other key players include Cisco, Aruba (now part of HP), Pulse Secure, Bradford Networks, Trustwave, and Portnox.

These tools can enforce policies without requiring agents on IoT devices—a critical capability given the diversity of operating systems. For example, a sensor running TinyOS can be automatically quarantined if it exhibits suspicious behaviour, without any manual intervention. This is essential for maintaining agentless network security across isolated sub-nets.
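A minimal sketch of such an agentless, policy-based decision, using only attributes observable from network traffic. The device schema and rules are invented for illustration and do not reflect any vendor's actual product:

```python
def nac_decision(device):
    """Return an access decision for a device profile observed on the wire."""
    # Devices that have not been classified yet are isolated by default.
    if not device.get("classified"):
        return "quarantine"
    # Lightweight-OS sensors should only ever talk to their designated hub;
    # traffic anywhere else is treated as suspicious behaviour.
    if device["type"] == "sensor" and device.get("talks_to") != "hub":
        return "quarantine"
    # Known device behaving as expected: admit it to its segment.
    return "allow"

devices = [
    {"id": "sensor-01", "classified": True, "type": "sensor", "talks_to": "hub"},
    {"id": "sensor-02", "classified": True, "type": "sensor", "talks_to": "internet"},
    {"id": "cam-07", "classified": False},
]
decisions = {d["id"]: nac_decision(d) for d in devices}
```

The key property is that no code runs on the endpoint itself: classification and quarantine are driven entirely by what the network sees, which is why this model scales to devices running TinyOS or Nano-RK.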

Building a Future-Proof IoT Security Strategy

To prepare for the coming wave of IoT devices, organisations must act now. Start by assessing your current network architecture: identify which sub-nets host IoT endpoints and how they connect to the corporate backbone. Implement NAC technology that offers agentless visibility and policy enforcement. As the Quocirca report highlights, only 37% of firms have well-established IoT policy controls in place—meaning the majority have room for improvement.

Consider integrating your NAC solution with existing security tools, such as SIEM systems or firewalls, to create a unified defence. For more insights, explore our guide on IoT network segmentation best practices and learn how to deploy agentless security for smart devices. Remember, the time to adapt is now—before the next wave of connected devices overwhelms your defences.


Why Organizations Should Aim for a Risk-Averse Culture, Not Just Compliance


For many organizations, security training boils down to a checkbox exercise: prove that every employee completed the mandatory awareness course. However, according to John Curran, principal consultant at FTR Solutions and co-founder of Intrinsic Aware, this approach misses the mark entirely. Instead, companies should focus on cultivating a risk-averse culture — one where security is embedded in everyday behavior, not just a one-time lesson.

Curran argues that a risk-averse culture goes beyond policies and procedures. It requires shifting away from a blame-oriented mindset, where employees fear reporting mistakes, toward an environment that encourages open dialogue about security incidents. “Unfortunately, many organizations have created a blame culture, and an environment where people don’t think of the information security function as good people to talk to when something bad happens,” Curran explained during a recent presentation.

The Pitfalls of a Blame Culture in Cybersecurity

When employees are afraid to speak up, the entire security posture suffers. A blame culture discourages incident reporting, leaving vulnerabilities unaddressed. Statistics show that nearly half of all security breaches stem from human error — including phishing attacks and lost USB drives. Yet, despite this reality, organizations invest only 3-5% of their security budgets in awareness and training. This underinvestment, Curran says, is a critical oversight.

Building on this, he emphasizes that having policies in place is not the same as engaging staff. “All too often, organizations make the mistake of thinking that simply having policies and procedures in place for user awareness is sufficient. This is not the same thing as engaging your staff and ensuring they understand the company’s security needs.”

How to Foster a Risk-Averse Culture Through Training

Creating a risk-averse culture requires more than just annual training sessions. Curran outlines several goals for effective security awareness programs:

  • Employees should clearly understand what is expected of them.
  • They must learn appropriate skills and behaviors for different situations.
  • Ultimately, staff should feel willing and able to discuss or report suspected incidents. “Having a culture in which people are open to the discussion of risk and that they feel safe and able to report incidents is core,” Curran notes.

To achieve these goals, organizations need to move beyond passive learning. Curran advises using interactive methods such as testing, immediate feedback, and personalized learning pathways. For example, creating security learning pathways tailored to different roles can help employees retain information better. Additionally, providing rationale at the end of training modules reinforces why security matters.

Practical Tips for Engaging Security Training

Curran offers several actionable strategies for designing awareness courses that stick:

  • Be careful with branding when creating training materials — keep them professional yet relatable.
  • Create security learning pathways that guide employees through progressive topics.
  • Offer immediate feedback during the test process to reinforce correct answers.
  • Provide rationale at the end of each module to explain the “why” behind security rules.
  • Track performance, progress, and levels of engagement to identify areas for improvement.

He also references the Chimp Paradox model to explain why changing behavior is difficult. “Our goal in the awareness process is to keep the monkey quiet while we are talking to the human and push as much of that into the computer as possible,” Curran said. In other words, training should aim to automate good security habits so they become second nature.

The Role of Incident Reporting in a Risk-Averse Culture

One of the most critical components of a risk-averse culture is encouraging incident reporting. When employees feel safe admitting mistakes, organizations can respond faster and prevent larger breaches. Curran stresses that a blame-free environment is essential for stakeholder engagement. “People shouldn’t be afraid of reporting incidents,” he says. “It’s not conducive to stakeholder engagement.”

To build this trust, companies should celebrate reporting rather than punishing errors. For more insights on creating a positive security culture, check out our guide on building a security-first workplace.

Conclusion: Moving Beyond Compliance

In summary, organizations must shift their focus from mere compliance to cultivating a risk-averse culture. This means investing in ongoing, engaging training that empowers employees to act as the first line of defense. By addressing the root causes of human error and fostering open communication, companies can significantly reduce their risk exposure. As Curran aptly puts it, “Having a culture in which people are open to the discussion of risk and that they feel safe and able to report incidents is core.”

Ready to transform your security awareness program? Explore our best practices for security awareness training to get started.
