
CyberSecurity

DeepLoad Malware Uses AI Code and ClickFix to Evade Security

A New Breed of Stealthy Malware Emerges

Cybersecurity researchers have sounded the alarm on a sophisticated new threat. Dubbed DeepLoad, this malware campaign is actively targeting businesses by stealing user credentials and establishing a stubborn foothold on infected networks. What makes it particularly concerning is its dual-threat approach: it uses clever social engineering to get in the door and then deploys AI-assisted techniques to hide in plain sight.

First spotted on dark web forums in February, DeepLoad initially focused on pilfering cryptocurrency wallets. Its ambitions have since expanded. The malware now systematically hunts for enterprise usernames and passwords, providing attackers with a direct line into corporate networks.

The ClickFix Delivery: A Social Engineering Trap

How does DeepLoad get onto a system in the first place? The answer lies in a technique called ClickFix. This isn’t a complex software exploit. It’s a psychological trick.

Attackers lure users to a malicious website, often through a compromised site or a poisoned search engine result. Imagine an employee researching a work-related topic. They click a link that seems legitimate. The site then instructs them to run a specific command, like pasting text into a PowerShell window or a system dialog box. The user, thinking they’re fixing an error or downloading necessary software, unknowingly executes the malware themselves.

Researchers believe this is the most likely infection vector. It bypasses traditional file-based defenses because the user is the one initiating the malicious action. The barrier to entry isn’t a software vulnerability; it’s human trust.

AI-Powered Obfuscation and Hidden Persistence

Once executed, DeepLoad reveals its second, more technically advanced layer. The core malicious payload is buried under a mountain of meaningless code. We’re talking about thousands of lines of random variable assignments and redundant functions that serve no purpose other than to confuse security scanners.

The scale and consistency of this obfuscation are telltale signs. “The sheer volume of padding likely rules out a human author,” noted analysts from ReliaQuest, who first detailed the campaign. This points directly to the use of generative AI. What might have taken a human coder days to manually write and test can now be generated in an afternoon. This isn’t just about saving time; it’s about creating a dynamic threat.

The AI can be prompted to generate new, unique obfuscation layers for each attack wave. This means the malware’s digital fingerprint can change constantly, rendering static detection signatures useless almost as soon as they’re created.
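One way defenders respond to this kind of constantly mutating padding is to score scripts on structural heuristics rather than signatures. The following is a minimal illustrative sketch, not ReliaQuest's actual detection logic: it measures what fraction of assigned variables are never read again, a pattern that heavily padded, machine-generated scripts tend to exhibit. It uses Python source as a stand-in for the obfuscated payload language.

```python
import ast

def padding_ratio(source: str) -> float:
    """Fraction of assigned names that are never read again.

    Heavily padded scripts assign many throwaway variables;
    legitimate code usually reads most of what it binds.
    """
    tree = ast.parse(source)
    assigned, loaded = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                assigned.add(node.id)
            elif isinstance(node.ctx, ast.Load):
                loaded.add(node.id)
    if not assigned:
        return 0.0
    return len(assigned - loaded) / len(assigned)

# Simulated junk-padded script: 1,000 assignments, one real use.
junk = "\n".join(f"v{i} = {i}" for i in range(1000)) + "\nprint(v0)"
print(padding_ratio(junk))  # 0.999 -> overwhelmingly dead assignments
```

A high ratio alone proves nothing, but combined with other behavioral signals it is the kind of durable, signature-free heuristic the researchers advocate.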

DeepLoad doesn’t stop at hiding its code. It also hides its activity. The malware embeds itself within a Windows lock screen process, an area most security tools don’t routinely inspect. More insidiously, it sets up a hidden persistence mechanism using Windows Management Instrumentation (WMI).

Here’s the kicker: if the initial infection is found and cleaned up, this WMI subscription acts as a sleeper agent. It waits three days and then silently re-infects the machine, restoring the attacker’s access. It’s a built-in recovery system for the malware.

How to Defend Against DeepLoad and Similar Threats

This campaign signals a shift. Defenses need to move beyond just looking for bad files. They must understand behavior. ReliaQuest researchers warn that “coverage needs to be behavior-based, durable, and built for fast iteration.”

For network administrators, several immediate steps can harden defenses. Enabling PowerShell Script Block Logging provides crucial visibility into the commands being run on systems. Regularly auditing WMI subscriptions on exposed hosts can help uncover hidden persistence mechanisms like the one DeepLoad employs.
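Auditing WMI subscriptions at scale usually means exporting the event filters and consumers (for example with `Get-CimInstance` against the `root\subscription` namespace) and then triaging the results. Below is a minimal triage sketch over already-exported records; the field name `CommandLineTemplate` mirrors the real WMI consumer class, but the marker strings and sample data are illustrative assumptions, not DeepLoad-specific indicators.

```python
# Illustrative triage of exported WMI event-consumer records.
# Marker strings are generic living-off-the-land indicators,
# not signatures for any specific malware family.
SUSPICIOUS_MARKERS = ("powershell -enc", "-encodedcommand", "mshta", "wscript")

def flag_wmi_consumers(consumers: list[dict]) -> list[dict]:
    """Return consumers whose command line matches a suspicious marker."""
    hits = []
    for consumer in consumers:
        cmd = consumer.get("CommandLineTemplate", "").lower()
        if any(marker in cmd for marker in SUSPICIOUS_MARKERS):
            hits.append(consumer)
    return hits

# Hypothetical export: one benign built-in, one suspicious entry.
exported = [
    {"Name": "SCM Event Log Consumer", "CommandLineTemplate": ""},
    {"Name": "Updater", "CommandLineTemplate": "powershell -enc SQBFAFgA..."},
]
print([c["Name"] for c in flag_wmi_consumers(exported)])  # ['Updater']
```

Anything flagged this way still needs manual review, but routine sweeps like this are exactly how a three-day sleeper subscription gets caught before it fires.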

User education remains the first line of defense against ClickFix-style attacks. Training staff to be skeptical of unsolicited instructions to run commands is critical. If an infection is suspected, changing the affected user’s password is a necessary step to cut off stolen credential access.

The emergence of DeepLoad is a clear warning. Attackers are rapidly integrating AI into their toolkits, not for complex reasoning, but for generating massive, evolving layers of camouflage. The fight is no longer just against malicious code, but against the automated systems designed to make that code invisible.

Vibe Coding Security: UK NCSC Chief Calls for AI Development Safeguards

The head of Britain’s National Cyber Security Centre has issued a clear challenge to the cybersecurity industry. Speaking at the RSA Conference in San Francisco, Richard Horne argued that AI-assisted software development—often called vibe coding—represents both a massive opportunity and a significant risk. His message was straightforward: we must harness this technology’s potential while building robust protections against its dangers.

The Double-Edged Sword of AI-Generated Code

Imagine a world where software vulnerabilities become rare instead of commonplace. That’s the promise Horne sees in well-trained AI coding tools. The current reality of manually produced software is bleak—consistently vulnerable, perpetually patched. Vibe coding could disrupt that cycle entirely.

But there’s a catch. Software produced without proper human oversight might simply propagate existing flaws at machine speed. “The attractions are clear,” Horne acknowledged. “Disrupting the status quo is a huge opportunity, but not without risk.” The tools themselves must be designed from the ground up to avoid introducing new weaknesses.

Can AI actually produce more secure code than humans? The NCSC believes it’s possible, but only with deliberate effort. The agency envisions AI tooling that writes secure-by-design software as its default output. That transformation won’t happen by accident.

Building Guardrails for the AI Coding Revolution

While Horne delivered his keynote, NCSC’s Chief Technology Officer for Architecture published complementary guidance. David C’s blog post presented a pragmatic view: AI-generated code currently poses “intolerable risks” for many organizations, yet shows “glimpses of a new paradigm.” Experienced developers could see their productivity skyrocket—if security keeps pace.

The business benefits are too compelling to ignore. Adoption will surge whether security professionals are ready or not. That’s why the NCSC insists we must engage with these risks immediately, embedding core security principles before vulnerable patterns become entrenched.

Six Commandments for Secure Vibe Coding

The agency’s framework outlines specific safeguards:

Secure by Default: AI models must generate hardened code from the start, not as an afterthought. Every output should meet baseline security standards automatically.

Trust but Verify: Demand provable model provenance. Organizations need assurance that AI-generated code contains no hidden backdoors or malicious components.

AI-Powered Reviews: Turn the technology on itself. Use AI to audit all code—whether human-written or machine-generated—scanning continuously for vulnerabilities.

Deterministic Guardrails: Implement strict, rule-based controls that limit what code can do, even if compromised. These boundaries should be non-negotiable.

Secure Hosting Platforms: Build environments that sandbox and protect against bad code, regardless of its origin. The platform itself becomes a defensive layer.

Automated Security Hygiene: Let AI handle documentation, testing, fuzzing, and threat modeling for every software component. Routine tasks become automated safeguards.
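The "deterministic guardrails" principle is the most concrete of the six, and it translates directly into code: a rule-based gate that inspects AI-generated output before it ever runs. The NCSC guidance does not prescribe an implementation; this is a minimal sketch of the idea, with an assumed policy of rejecting dynamic-execution calls in generated Python.

```python
import ast

# Illustrative policy: forbid dynamic code execution in generated output.
FORBIDDEN_CALLS = {"eval", "exec", "compile", "__import__"}

def violates_guardrail(source: str) -> list[str]:
    """Return the names of any forbidden calls found in the code."""
    tree = ast.parse(source)
    found = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in FORBIDDEN_CALLS:
                found.append(node.func.id)
    return found

print(violates_guardrail("x = eval(input())"))  # ['eval']
print(violates_guardrail("x = 1 + 2"))          # []
```

The point of making such rules deterministic rather than AI-judged is that they cannot be argued, prompted, or hallucinated around: the gate either passes or it does not.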

Starting Now, Not Waiting for Perfection

The most urgent message from the NCSC? Begin implementation immediately. “Don’t wait five years for the vibe future,” David C emphasized. Early guardrails established today will shape how this technology evolves.

Consider legacy systems. Many organizations struggle with outdated, vulnerable applications they can’t easily replace. AI could help harden that code, paying down “technical and security debt” accumulated over decades. Even maintaining simple allow-lists of permitted URLs—a tedious manual task—could become automated and more secure.
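The allow-list example is worth making concrete, because it shows why automating the tedious part still leaves a simple, auditable control. A minimal sketch of a deterministic URL allow-list check follows; the hostnames are hypothetical placeholders, and a real deployment would source the list from managed configuration rather than a constant.

```python
from urllib.parse import urlparse

# Hypothetical allow-list; in practice this would be generated
# and maintained automatically, then enforced deterministically.
ALLOWED_HOSTS = {"api.example.com", "updates.example.com"}

def is_permitted(url: str) -> bool:
    """Allow only exact-match hostnames over HTTPS."""
    parts = urlparse(url)
    return parts.scheme == "https" and parts.hostname in ALLOWED_HOSTS

print(is_permitted("https://api.example.com/v1/data"))   # True
print(is_permitted("http://api.example.com/v1/data"))    # False
print(is_permitted("https://evil.example.net/payload"))  # False
```

Note the exact-match rule: substring or suffix matching is a classic allow-list bypass, which is precisely the kind of subtle flaw automated hygiene tooling should be catching.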

There’s an intriguing possibility on the horizon. AI-generated code might eventually become more restricted and locked down by default than the best on-premises or SaaS products available today. That outcome would require deliberate design choices, but it’s within reach.

Ironically, this approach might address longstanding concerns about cloud migration. Organizations that have resisted moving critical systems for security reasons might find AI-assisted development provides the control they’ve sought. The future of coding isn’t just about writing faster—it’s about building smarter, safer software from the first line to the last.

Greek Spyware Scandal: Intellexa Founder Points Finger at Government

The ‘Greek Watergate’ and a Conviction

Tal Dilian, the founder of spyware company Intellexa, isn’t going quietly. Following a Greek court’s decision to convict him and sentence him to eight years in prison, Dilian has announced plans to appeal. His conviction, alongside those of three other executives, centers on charges of illegally obtaining personal data as part of a massive wiretapping operation that rocked the nation.

This isn’t just another corporate scandal. Dubbed the “Greek Watergate,” the affair saw the phones of senior ministers, opposition leaders, military brass, and journalists infiltrated by Intellexa’s Predator spyware. This powerful tool can crack iPhones and Android devices, silently harvesting call logs, texts, emails, and location data—often with just a single malicious click from the target.

A Government Under Fire and a Claim of Scapegoating

The fallout was immediate and severe. Revelations about the hacking of journalists’ phones forced the resignations of top officials, including the head of Greece’s national intelligence service and a senior aide to Prime Minister Kyriakos Mitsotakis. Yet, despite the political tremors, no government official has faced conviction. Critics have long accused the Mitsotakis administration of orchestrating a cover-up.

Now, Dilian is fueling those accusations. In a statement first reported by Reuters, the convicted spyware magnate declared he would not be a “scapegoat.” This pointed remark stands as the most direct insinuation from within Intellexa that the Greek government itself sanctioned the widespread surveillance.

“I believe a conviction without evidence is not justice,” Dilian told Reuters. “It could be part of a cover-up and even a crime.” He added that he is prepared to hand over evidence to both national and international regulators, a challenge that puts further pressure on Athens.

The Global Reach of Predator and Mounting Pressure

Dilian’s defense hinges on a key industry claim. He told Reuters that advanced surveillance technologies like Predator are almost exclusively sold to sovereign governments. The implication is clear: if the tool was used, a government client was responsible for its lawful—or unlawful—application.

The scandal’s ripples extend far beyond Greece. The United States government imposed sanctions on Dilian in 2024 after Predator was discovered on the phones of American officials and journalists. These sanctions effectively criminalize any business dealings with Dilian and his associates, isolating him on the global stage.

As Dilian prepares his appeal, the central question remains unanswered. Who, ultimately, gave the order? The convicted businessman’s claim of scapegoating ensures the shadow of the “Greek Watergate” will linger over the government for some time to come.

Conntour’s AI Video Search Engine Secures $7M from General Catalyst and Y Combinator

Surveillance Tech at a Crossroads

The surveillance industry faces intense scrutiny. Recent controversies involving ICE accessing Flock’s camera network and Ring developing features for police requests have sparked heated debates about privacy, safety, and oversight. Yet market demand persists, driven by rapid advancements in vision-language AI models.

Companies continue developing tools to help organizations monitor their premises. The ethical dimension, however, has become impossible to ignore.

A Startup With Selective Ethics

Matan Goldner, Conntour’s co-founder and CEO, emphasizes that ethics guide his company’s client selection. For a startup barely two years old, turning away business might seem risky. Goldner argues the company’s existing customer base affords it that luxury.

“Having large customers allows us to stay in control,” Goldner told TechCrunch. “We select who uses it and for what purpose. We apply our judgment to ensure use is both moral and legal.” Current clients include Singapore’s Central Narcotics Bureau and other major government and publicly-listed entities.

This principled stance hasn’t scared off investors. Conntour recently closed a $7 million seed round led by General Catalyst and Y Combinator, with participation from SV Angel and Liquid 2 Ventures. The funding round wrapped up in just 72 hours.

“We scheduled about 90 meetings in eight days,” Goldner recalled. “We started on Monday and were done by Wednesday afternoon.”

How Conntour’s AI Search Engine Works

Conntour’s platform transforms security video monitoring. Instead of relying on preset motion detectors or object definitions, it uses natural language queries. Security personnel can ask questions like “Find instances of someone in sneakers passing a bag in the lobby.”

The system scans live or recorded footage across thousands of camera feeds, returning relevant video clips with timestamps. It functions like a Google search engine specifically for surveillance video.

Beyond search, the platform monitors feeds autonomously based on configured rules, surfacing alerts automatically. It can generate incident reports and answer questions about footage in text, accompanied by the relevant video evidence.

Technical Scalability and Efficiency

Goldner highlights scalability as Conntour’s key differentiator. The system is engineered to handle massive deployments efficiently. A single consumer-grade GPU, like an Nvidia RTX 4090, can process up to 50 camera feeds simultaneously.

“Other AI video search services exist,” Goldner explained, “but they aren’t built for systems with thousands of feeds.” Conntour achieves this by employing multiple AI models and logic systems. Its algorithm intelligently selects the most efficient model for each query, minimizing computational load while delivering accurate results.
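Conntour has not published its routing logic, but the "cheapest capable model" idea it describes can be sketched simply. In this hypothetical version, each model advertises a relative cost and a set of query kinds it can handle, and the router picks the least expensive model that covers the query; the names, costs, and categories are all illustrative assumptions.

```python
# Hypothetical model registry: (name, relative cost, capabilities).
MODELS = [
    ("motion-detector", 1, {"motion"}),
    ("object-tagger", 5, {"motion", "objects"}),
    ("vision-language", 40, {"motion", "objects", "open-ended"}),
]

def route(query_kind: str) -> str:
    """Pick the cheapest model whose capabilities cover the query."""
    for name, cost, capabilities in sorted(MODELS, key=lambda m: m[1]):
        if query_kind in capabilities:
            return name
    raise ValueError(f"no model handles {query_kind!r}")

print(route("motion"))      # motion-detector
print(route("open-ended"))  # vision-language
```

The payoff of this design is that the expensive vision-language model only runs when a query genuinely needs open-ended reasoning, which is what makes fifty feeds per consumer GPU plausible.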

The platform offers flexible deployment: fully on-premises, completely cloud-based, or a hybrid model. It integrates with most existing security systems or can operate as a standalone surveillance platform.

Overcoming Industry Challenges

Video surveillance has a persistent weak link: garbage in, garbage out. A blurry, poorly lit feed from a dirty camera lens yields useless data, no matter how sophisticated the AI analyzing it.

Conntour addresses this by providing a confidence score with every search result. If camera quality is subpar, the system indicates low confidence in its findings, alerting users to potential inaccuracies.

Looking ahead, Goldner identifies a core technical challenge. “We face a contradiction,” he said. “We want full natural language flexibility—let users ask anything, LLM-style. Simultaneously, we need extreme efficiency to process thousands of feeds without insane resource demands. Solving this contradiction is our biggest technical barrier.”

The $7 million in new capital will fuel that effort, pushing the boundaries of what’s possible in AI-powered video security while navigating the complex ethical landscape that defines the industry.
