CyberSecurity

Greek Spyware Scandal: Intellexa Founder Points Finger at Government

The ‘Greek Watergate’ and a Conviction

Tal Dilian, the founder of spyware company Intellexa, isn’t going quietly. Following a Greek court’s decision to convict and sentence him to eight years in prison, Dilian has announced plans to appeal. His conviction, alongside those of three other executives, centers on charges of illegally obtaining personal data as part of a massive wiretapping operation that rocked the nation.

This isn’t just another corporate scandal. Dubbed the “Greek Watergate,” the affair saw the phones of senior ministers, opposition leaders, military brass, and journalists infiltrated by Intellexa’s Predator spyware. This powerful tool can crack iPhones and Android devices, silently harvesting call logs, texts, emails, and location data—often with just a single malicious click from the target.

A Government Under Fire and a Claim of Scapegoating

The fallout was immediate and severe. Revelations about the hacking of journalists’ phones forced the resignations of top officials, including the head of Greece’s national intelligence service and a senior aide to Prime Minister Kyriakos Mitsotakis. Yet, despite the political tremors, no government official has faced conviction. Critics have long accused the Mitsotakis administration of orchestrating a cover-up.

Now, Dilian is fueling those accusations. In a statement first reported by Reuters, the convicted spyware magnate declared he would not be a “scapegoat.” This pointed remark stands as the most direct insinuation from within Intellexa that the Greek government itself sanctioned the widespread surveillance.

“I believe a conviction without evidence is not justice,” Dilian told Reuters. “It could be part of a cover-up and even a crime.” He added that he is prepared to hand over evidence to both national and international regulators, a challenge that puts further pressure on Athens.

The Global Reach of Predator and Mounting Pressure

Dilian’s defense hinges on a key industry claim. He told Reuters that advanced surveillance technologies like Predator are almost exclusively sold to sovereign governments. The implication is clear: if the tool was used, a government client was responsible for its lawful—or unlawful—application.

The scandal’s ripples extend far beyond Greece. The United States government imposed sanctions on Dilian in 2024 after Predator was discovered on the phones of American officials and journalists. These sanctions effectively criminalize any business dealings with Dilian and his associates, isolating him on the global stage.

As Dilian prepares his appeal, the central question remains unanswered. Who, ultimately, gave the order? The convicted businessman’s claim of scapegoating ensures the shadow of the “Greek Watergate” will linger over the government for some time to come.


Vibe Coding Security: UK NCSC Chief Calls for AI Development Safeguards

The head of Britain’s National Cyber Security Centre has issued a clear challenge to the cybersecurity industry. Speaking at the RSA Conference in San Francisco, Richard Horne argued that AI-assisted software development—often called vibe coding—represents both a massive opportunity and a significant risk. His message was straightforward: we must harness this technology’s potential while building robust protections against its dangers.

The Double-Edged Sword of AI-Generated Code

Imagine a world where software vulnerabilities become rare instead of commonplace. That’s the promise Horne sees in well-trained AI coding tools. The current reality of manually produced software is bleak—consistently vulnerable, perpetually patched. Vibe coding could disrupt that cycle entirely.

But there’s a catch. Software produced without proper human oversight might simply propagate existing flaws at machine speed. “The attractions are clear,” Horne acknowledged. “Disrupting the status quo is a huge opportunity, but not without risk.” The tools themselves must be designed from the ground up to avoid introducing new weaknesses.

Can AI actually produce more secure code than humans? The NCSC believes it’s possible, but only with deliberate effort. The agency envisions AI tooling that writes secure-by-design software as its default output. That transformation won’t happen by accident.

Building Guardrails for the AI Coding Revolution

While Horne delivered his keynote, NCSC’s Chief Technology Officer for Architecture published complementary guidance. David C’s blog post presented a pragmatic view: AI-generated code currently poses “intolerable risks” for many organizations, yet shows “glimpses of a new paradigm.” Experienced developers could see their productivity skyrocket—if security keeps pace.

The business benefits are too compelling to ignore. Adoption will surge whether security professionals are ready or not. That’s why the NCSC insists we must engage with these risks immediately, embedding core security principles before vulnerable patterns become entrenched.

Six Commandments for Secure Vibe Coding

The agency’s framework outlines specific safeguards:

Secure by Default: AI models must generate hardened code from the start, not as an afterthought. Every output should meet baseline security standards automatically.

Trust but Verify: Demand provable model provenance. Organizations need assurance that AI-generated code contains no hidden backdoors or malicious components.

AI-Powered Reviews: Turn the technology on itself. Use AI to audit all code—whether human-written or machine-generated—scanning continuously for vulnerabilities.

Deterministic Guardrails: Implement strict, rule-based controls that limit what code can do, even if compromised. These boundaries should be non-negotiable.

Secure Hosting Platforms: Build environments that sandbox and protect against bad code, regardless of its origin. The platform itself becomes a defensive layer.

Automated Security Hygiene: Let AI handle documentation, testing, fuzzing, and threat modeling for every software component. Routine tasks become automated safeguards.
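The "Deterministic Guardrails" safeguard above can be illustrated with a minimal sketch: a rule-based check over AI-generated source code that blocks dangerous constructs outright, rather than asking another model to judge them. The rule set here is invented for illustration and is far smaller than anything a real organization would deploy.

```python
import ast

# Hypothetical, illustrative rule set: constructs that AI-generated code
# may never use, enforced deterministically (no model judgment involved).
FORBIDDEN_CALLS = {"eval", "exec", "compile", "__import__"}
FORBIDDEN_MODULES = {"subprocess", "ctypes"}

def check_guardrails(source: str) -> list[str]:
    """Return a list of rule violations found in the given Python source."""
    violations = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # Flag direct calls to forbidden built-ins, e.g. eval(...)
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in FORBIDDEN_CALLS:
                violations.append(f"forbidden call: {node.func.id}")
        # Flag imports of forbidden modules, e.g. import subprocess
        elif isinstance(node, (ast.Import, ast.ImportFrom)):
            names = ([a.name for a in node.names]
                     if isinstance(node, ast.Import) else [node.module])
            for name in names:
                if name and name.split(".")[0] in FORBIDDEN_MODULES:
                    violations.append(f"forbidden import: {name}")
    return violations

print(check_guardrails("import subprocess\nsubprocess.run(['ls'])"))
```

The point of the design is the "non-negotiable" property: because the check is a fixed rule over the syntax tree, it behaves identically on every run, unlike a probabilistic AI reviewer.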

Starting Now, Not Waiting for Perfection

The most urgent message from the NCSC? Begin implementation immediately. “Don’t wait five years for the vibe future,” David C emphasized. Early guardrails established today will shape how this technology evolves.

Consider legacy systems. Many organizations struggle with outdated, vulnerable applications they can’t easily replace. AI could help harden that code, paying down “technical and security debt” accumulated over decades. Even maintaining simple allow-lists of permitted URLs—a tedious manual task—could become automated and more secure.
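The allow-list example can be made concrete. A minimal sketch of the manual version of that task, with invented hostnames, shows how small and mechanical the check is; in the NCSC's scenario, an AI assistant would maintain and audit the list rather than a human editing it by hand.

```python
from urllib.parse import urlparse

# Hypothetical allow-list of permitted hosts (names invented for illustration).
ALLOWED_HOSTS = {"api.example.com", "updates.example.com"}

def is_permitted(url: str) -> bool:
    """Allow only https URLs whose exact host appears on the list."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_HOSTS

assert is_permitted("https://api.example.com/v1/data")
assert not is_permitted("http://api.example.com/v1/data")   # wrong scheme
assert not is_permitted("https://evil.example.net/steal")   # host not listed
```

Note the exact-host match: substring or suffix matching is a classic allow-list bug, since `api.example.com.evil.net` would otherwise slip through.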

There’s an intriguing possibility on the horizon. AI-generated code might eventually become more restricted and locked down by default than the best on-premises or SaaS products available today. That outcome would require deliberate design choices, but it’s within reach.

Ironically, this approach might address longstanding concerns about cloud migration. Organizations that have resisted moving critical systems for security reasons might find AI-assisted development provides the control they’ve sought. The future of coding isn’t just about writing faster—it’s about building smarter, safer software from the first line to the last.

Conntour’s AI Video Search Engine Secures $7M from General Catalyst and Y Combinator

Surveillance Tech at a Crossroads

The surveillance industry faces intense scrutiny. Recent controversies involving ICE accessing Flock’s camera network and Ring developing features for police requests have sparked heated debates about privacy, safety, and oversight. Yet market demand persists, driven by rapid advancements in vision-language AI models.

Companies continue developing tools to help organizations monitor their premises. The ethical dimension, however, has become impossible to ignore.

A Startup With Selective Ethics

Matan Goldner, Conntour’s co-founder and CEO, emphasizes that ethics guide his company’s client selection. For a startup barely two years old, turning away business might seem risky. Goldner argues the company’s existing customer base affords it that luxury.

“Having large customers allows us to stay in control,” Goldner told TechCrunch. “We select who uses it and for what purpose. We apply our judgment to ensure use is both moral and legal.” Current clients include Singapore’s Central Narcotics Bureau and other major government and publicly-listed entities.

This principled stance hasn’t scared off investors. Conntour recently closed a $7 million seed round led by General Catalyst and Y Combinator, with participation from SV Angel and Liquid 2 Ventures. The funding round wrapped up in just 72 hours.

“We scheduled about 90 meetings in eight days,” Goldner recalled. “We started on Monday and were done by Wednesday afternoon.”

How Conntour’s AI Search Engine Works

Conntour’s platform transforms security video monitoring. Instead of relying on preset motion detectors or object definitions, it uses natural language queries. Security personnel can ask questions like “Find instances of someone in sneakers passing a bag in the lobby.”

The system scans live or recorded footage across thousands of camera feeds, returning relevant video clips with timestamps. It functions like a Google search engine specifically for surveillance video.
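Conntour has not published its internals, but the search step described above can be sketched in miniature: embed the query and each frame description into vectors, then rank frames by similarity. In a real system a vision-language model would embed the video frames directly; the toy bag-of-words embedding and all camera data below are invented for illustration.

```python
from collections import Counter
from math import sqrt

# Toy stand-in for a vision-language embedding: bag-of-words over captions.
def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Invented frame index: (camera, timestamp, caption).
frames = [
    ("lobby-cam-1", "09:14:02", "person in sneakers passing a bag in the lobby"),
    ("lobby-cam-1", "09:20:45", "empty lobby"),
    ("garage-cam-3", "09:15:10", "car entering the garage"),
]
index = [(cam, ts, embed(cap)) for cam, ts, cap in frames]

def search(query: str, top_k: int = 1):
    """Return the top-k (camera, timestamp) pairs most similar to the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda f: cosine(q, f[2]), reverse=True)
    return [(cam, ts) for cam, ts, _ in ranked[:top_k]]

print(search("someone in sneakers passing a bag"))
```

The returned timestamped hits correspond to the video clips the article describes; the hard part at scale is producing good embeddings for thousands of live feeds, not the ranking itself.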

Beyond search, the platform monitors feeds autonomously based on configured rules, surfacing alerts automatically. It can generate incident reports and answer questions about footage in text, accompanied by the relevant video evidence.

Technical Scalability and Efficiency

Goldner highlights scalability as Conntour’s key differentiator. The system is engineered to handle massive deployments efficiently. A single consumer-grade GPU, like an Nvidia RTX 4090, can process up to 50 camera feeds simultaneously.

“Other AI video search services exist,” Goldner explained, “but they aren’t built for systems with thousands of feeds.” Conntour achieves this by employing multiple AI models and logic systems. Its algorithm intelligently selects the most efficient model for each query, minimizing computational load while delivering accurate results.
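The per-query model selection Goldner describes can be sketched as a router. The routing rule, model names, and object list below are invented; the idea is simply that cheap detectors handle simple object queries while a heavier vision-language model is reserved for complex natural-language questions.

```python
# Hypothetical router: names and thresholds are illustrative, not Conntour's.
SIMPLE_OBJECTS = {"person", "car", "bag", "truck", "bicycle"}

def route(query: str) -> str:
    """Pick the cheapest model that can plausibly answer the query."""
    words = set(query.lower().split())
    if words & SIMPLE_OBJECTS and len(words) <= 3:
        return "lightweight-detector"   # e.g. a small object-detection model
    return "vision-language-model"      # full natural-language understanding

assert route("find person") == "lightweight-detector"
assert route("someone in sneakers passing a bag in the lobby") == "vision-language-model"
```

Routing most traffic to the cheap path is what lets a fixed GPU budget stretch across many feeds; only the residual complex queries pay the full inference cost.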

The platform offers flexible deployment: fully on-premises, completely cloud-based, or a hybrid model. It integrates with most existing security systems or can operate as a standalone surveillance platform.

Overcoming Industry Challenges

Video surveillance has a persistent weak link: garbage in, garbage out. A blurry, poorly lit feed from a dirty camera lens yields useless data, no matter how sophisticated the AI analyzing it.

Conntour addresses this by providing a confidence score with every search result. If camera quality is subpar, the system indicates low confidence in its findings, alerting users to potential inaccuracies.
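A minimal sketch of confidence-gated results, in the spirit of the behavior described; the threshold, field names, and data are invented for illustration.

```python
# Hypothetical threshold below which a hit is flagged for manual review.
LOW_CONFIDENCE = 0.5

def flag_low_confidence(results, threshold=LOW_CONFIDENCE):
    """Attach a manual-review flag to each result below the threshold."""
    return [{**r, "needs_review": r["confidence"] < threshold} for r in results]

hits = flag_low_confidence([
    {"camera": "lobby-cam-1", "confidence": 0.91},
    {"camera": "yard-cam-7", "confidence": 0.32},   # e.g. dirty lens, poor light
])
print(hits)
```

Surfacing the score instead of silently dropping weak hits keeps the human in the loop: a low-confidence match from a degraded camera may still be worth a look.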

Looking ahead, Goldner identifies a core technical challenge. “We face a contradiction,” he said. “We want full natural language flexibility—let users ask anything, LLM-style. Simultaneously, we need extreme efficiency to process thousands of feeds without insane resource demands. Solving this contradiction is our biggest technical barrier.”

The $7 million in new capital will fuel that effort, pushing the boundaries of what’s possible in AI-powered video security while navigating the complex ethical landscape that defines the industry.


AiTM Phishing Campaign Targets TikTok for Business Accounts

A Coordinated Attack on Digital Advertisers

Security researchers have spotted a fresh and highly coordinated phishing operation. The target? TikTok for Business accounts. This campaign uses a sophisticated Adversary-in-the-Middle (AiTM) technique, where attackers secretly intercept communication between a user and a legitimate service.

Push Security identified a cluster of malicious pages all registered within a mere nine-second window on March 24. The technical precision suggests an automated, large-scale attack is underway. These pages are cleverly hidden behind Cloudflare’s infrastructure and registered through Nicenic International Group, a registrar notorious for hosting bulk phishing domains.

How the TikTok Phishing Trap Works

The attack begins with a deceptive link, likely delivered via a convincingly crafted email. While the exact delivery method isn’t confirmed, it mirrors a previous campaign that used fake Google Careers pages. Clicking the link sends you on a brief detour through a legitimate Google Cloud Storage site—a trick to build false trust—before landing on the malicious page.

To evade automated security scanners, the site first presents a Cloudflare Turnstile check. Once past this gate, victims see a professional-looking page themed around either TikTok or Google careers. The process seems normal: fill out a basic form, then proceed to login.

That login page is the heart of the scam. It’s not a real TikTok page but a reverse proxy. As you enter your credentials, the AiTM kit silently captures them and forwards them to the actual TikTok server, logging you in seamlessly. You might not notice anything is wrong, but the attackers now have full access to your account.

Why TikTok for Business is a Lucrative Target

At first glance, TikTok seems an unusual focus for cybercriminals. Most phishing kits aim for universal Single Sign-On (SSO) platforms like Google or Microsoft. So why the shift?

TikTok for Business accounts are the digital wallets for company advertising. Marketing teams use them to fund and manage campaigns, often with significant budgets attached. Compromising one of these accounts is like stealing the keys to a company’s promotional treasury.

There’s another, more sinister angle. Many users choose “Log in with Google” for their TikTok accounts. A successful phishing attack here can compromise two accounts at once: the TikTok ad manager and the linked Google account. This double breach can trigger an exploitation chain. Attackers could hijack Google Ad Manager accounts to run malicious advertising (malvertising) or drain funds from both platforms.

The Bigger Threat Landscape

This campaign didn’t emerge from a vacuum. TikTok’s platform has a history of being abused by threat actors. It’s been a distribution channel for infostealer malware, often disguised in “ClickFix” style tutorials with AI-generated videos posing as software activation guides.

The platform is also a known hunting ground for cryptocurrency scammers. By targeting the business and advertising side, attackers are simply following the money upstream. They’re moving from scamming individual users to directly attacking the corporate financial mechanisms on the platform.

The domains used in this attack follow a predictable pattern, like variations of welcome.careers*[.]com. Security experts warn this list will almost certainly grow as the campaign expands. For any team managing social media advertising, vigilance is no longer optional—it’s a critical business defense.
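Defenders can turn a domain pattern like this into a simple screening check. The regex below is an illustrative guess at the `welcome.careers*[.]com` pattern (shown defanged in the article), not a vetted indicator of compromise; any real blocklist should come from the published IOCs.

```python
import re

# Illustrative matcher for hosts resembling the reported phishing pattern.
PATTERN = re.compile(r"^welcome\.careers[\w-]*\.com$")

def is_suspicious(host: str) -> bool:
    """Flag hostnames matching the reported campaign's naming pattern."""
    return bool(PATTERN.match(host.lower()))

assert is_suspicious("welcome.careers-hr.com")      # invented example host
assert not is_suspicious("careers.tiktok.com")      # legitimate-looking host
```

A check like this belongs in proxy or DNS logging as a triage signal, alongside the sturdier defenses against AiTM phishing: phishing-resistant MFA such as passkeys, which a credential-relaying proxy cannot replay.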
