
Google Warns of New Threat Group Targeting BPOs and Helpdesks via Live Chat


A new financially motivated threat cluster, tracked as UNC6783, is actively targeting business process outsourcers (BPOs) and large enterprises, using live chat channels to steal sensitive data for extortion. Google Threat Intelligence Group (GTIG) principal threat analyst Austin Larsen recently detailed the group’s tactics, which involve sophisticated social engineering and multi-factor authentication (MFA) bypass techniques.

According to Larsen, UNC6783 may be linked to the “Raccoon” persona and has already targeted several dozen “high-value corporate entities” across multiple sectors. The group primarily focuses on BPOs but also directly attacks in-house helpdesk and support teams. The end goal is clear: data theft for extortion.

UNC6783 Tactics: Live Chat Phishing and MFA Bypass

This BPO helpdesk threat group relies heavily on social engineering through live chat to direct employees to malicious, spoofed Okta login pages. Larsen noted that these domains often mimic the targeted organization using patterns like [.]zendesk-support<##>[.]com. The phishing kit used by UNC6783 is designed to bypass standard MFA verification by stealing clipboard contents, allowing attackers to enroll their own devices for persistent access.

In addition to this approach, GTIG has observed UNC6783 using fake security software updates to trick users into downloading remote access malware. Following data exfiltration, the group sometimes uses Proton Mail accounts to deliver ransom notes. These methods are reminiscent of other extortion-focused groups like Scattered Lapsus$ Hunters.

Last year, similar campaigns emerged using Zendesk phishing domains to harvest employee credentials. Hackers also submitted fraudulent tickets to helpdesk staff to infect them with remote access trojans (RATs).

Protecting BPOs and Helpdesk Teams from Social Engineering

Given the sophistication of UNC6783, organizations must take proactive steps to defend their helpdesk and BPO operations. Larsen outlined several key recommendations for helpdesk social engineering defense.

Implement Phishing-Resistant MFA

Larsen urges organizations to deploy phishing-resistant MFA, such as FIDO2 hardware security keys like Titan Security Keys, for all users, especially those in high-risk roles like support and helpdesk. This can prevent attackers from bypassing standard MFA through clipboard theft.

Monitor Live Chat for Suspicious Activity

Live chat channels should be actively monitored for interactions that direct users to external links or ask for sensitive information. Employees must be educated on this specific campaign to recognize red flags.

Proactively Block Malicious Domains

Organizations should proactively block any unauthorized domains following the [.]zendesk-support[.]com pattern. Additionally, monitoring for unauthorized binary execution, especially installers or “updates” downloaded during support sessions, is critical.
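GTIG has not published a detection rule for these domains, but the reported naming convention lends itself to simple pattern matching. The sketch below is a minimal, illustrative check in Python; the regular expression and the sample domains are assumptions based on the defanged pattern quoted above, not confirmed indicators of compromise.

```python
import re

# Hypothetical detector for the reported naming pattern, e.g.
# "<company>.zendesk-support<##>.com" (defanged in the article as
# [.]zendesk-support<##>[.]com). The regex and sample domains below are
# illustrative assumptions, not indicators published by GTIG.
SUSPECT_PATTERN = re.compile(r"(^|\.)zendesk-support\d*\.com$", re.IGNORECASE)

def is_suspect(domain: str) -> bool:
    """Return True if a domain matches the spoofed-support naming pattern."""
    return bool(SUSPECT_PATTERN.search(domain.strip().lower()))

if __name__ == "__main__":
    samples = [
        "acme.zendesk-support01.com",   # matches the reported pattern
        "zendesk-support22.com",        # matches
        "acme.zendesk.com",             # legitimate-looking, no match
    ]
    for d in samples:
        print(f"{d}: {'FLAG' if is_suspect(d) else 'ok'}")
```

The same pattern can be fed into a DNS firewall or secure web gateway blocklist; the Python check is only meant to show how narrow the matching logic needs to be.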

Audit MFA Devices Regularly

Regular audits of newly enrolled MFA devices across the organization can help identify unauthorized additions. This simple step can prevent attackers from maintaining persistent access.
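For organizations that use Okta (the identity provider spoofed in this campaign), one way to approximate such an audit is to poll the System Log for factor-enrollment events. The sketch below is a rough illustration: the eventType string, the environment variables, and the seven-day window are assumptions to verify against your own tenant and Okta's API documentation.

```python
import os
from datetime import datetime, timedelta, timezone

import requests

# Sketch of a periodic audit for newly enrolled MFA factors, assuming Okta
# is the identity provider (the article describes spoofed Okta login pages).
# The endpoint and eventType reflect Okta's System Log API as commonly
# documented; verify both against your own tenant before relying on this.
OKTA_ORG = os.environ["OKTA_ORG"]      # e.g. "example.okta.com" (placeholder)
OKTA_TOKEN = os.environ["OKTA_TOKEN"]  # API token with log read access

def recent_mfa_enrollments(days: int = 7) -> list[dict]:
    since = (datetime.now(timezone.utc) - timedelta(days=days)).isoformat()
    resp = requests.get(
        f"https://{OKTA_ORG}/api/v1/logs",
        headers={"Authorization": f"SSWS {OKTA_TOKEN}"},
        params={"filter": 'eventType eq "user.mfa.factor.activate"', "since": since},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for event in recent_mfa_enrollments():
        actor = event.get("actor", {}).get("alternateId", "unknown")
        when = event.get("published", "unknown time")
        print(f"MFA factor enrolled for {actor} at {when}")
```

Reviewing this output weekly, and reconciling it against helpdesk tickets, is one lightweight way to spot the attacker-enrolled devices described above.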

As this live chat phishing campaign evolves, BPOs and enterprises must remain vigilant. For more on securing helpdesk operations, see our guide on helpdesk security best practices. Additionally, explore how to prevent MFA bypass attacks for further insights.

Ultimately, the threat from UNC6783 highlights the growing sophistication of social engineering attacks targeting support channels. Organizations should fold these defenses into their broader cybersecurity strategy: regular training and technical controls are both essential to mitigate the risk of BPO data extortion.


Anthropic co-founder confirms company briefed Trump administration on dangerous Mythos AI model


In a revealing interview at the Semafor World Economy summit, Anthropic co-founder Jack Clark confirmed that the AI company had briefed the Trump administration about its new Mythos model. The model, announced just last week, is considered so dangerous that it will not be released to the public, primarily due to its powerful cybersecurity capabilities.

Why Anthropic engaged with the government despite ongoing legal disputes

This confirmation comes at a time when Anthropic is simultaneously suing the U.S. government. In March, the company filed a lawsuit against Trump’s Department of Defense after the agency labeled Anthropic a supply-chain risk. The dispute stemmed from the Pentagon’s desire for unrestricted access to Anthropic’s AI systems for uses including mass surveillance and fully autonomous weapons—a deal that ultimately went to OpenAI instead.

However, Clark downplayed the significance of this conflict during his interview. He described the supply-chain risk designation as a “narrow contracting dispute” and emphasized that it should not overshadow the company’s commitment to national security. “Our position is the government has to know about this stuff,” Clark stated. “We have to find new ways for the government to partner with a private sector that is making things that are truly revolutionizing the economy.”

Mythos AI model: A cybersecurity powerhouse deemed too risky for public release

The Mythos model represents a significant leap in AI capabilities, particularly in the realm of cybersecurity. Its potential for both defensive and offensive applications made it a subject of intense interest for government agencies. Reports indicate that Trump officials were encouraging major banks—including JPMorgan Chase, Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley—to test the model.

Clark confirmed the briefings directly: “So absolutely, we talked to them about Mythos, and we’ll talk to them about the next models as well.” This transparency, he argued, is essential for balancing innovation with national security concerns.

What makes Mythos different from other AI models

Unlike many AI systems that focus on general-purpose tasks, Mythos was specifically designed for cybersecurity applications. Its capabilities are so advanced that Anthropic decided against a public release, fearing misuse by malicious actors. This decision aligns with the company’s broader philosophy of responsible AI development, even if it means forgoing commercial opportunities.

AI’s impact on employment: Clark offers a nuanced view

Beyond the Mythos model, Clark addressed broader questions about AI’s societal impact, particularly on employment. While Anthropic CEO Dario Amodei has warned that AI could bring unemployment to Depression-era levels, Clark offered a slightly different perspective. He explained that Amodei’s estimates are based on the belief that AI will become much more powerful than people expect, very quickly.

Clark, who leads a team of economists at Anthropic, noted that the company is currently seeing “some potential weakness in early graduate employment” across select industries. However, he emphasized that Anthropic is prepared for major employment shifts should they occur.

Advice for college students in the age of AI

When asked what majors students should pursue or avoid in light of AI’s impact, Clark offered broad but insightful advice. He suggested that the most valuable fields are those that “involve synthesis across a whole variety of subjects and analytical thinking about that.”

“That’s because what AI allows us to do is it allows you to have access to sort of an arbitrary amount of subject matter experts in different domains,” Clark explained. “But the really important thing is knowing the right questions to ask and having intuitions about what would be interesting if you collided different insights from many different disciplines.”

This advice underscores a key theme: as AI becomes more capable, human skills like critical thinking, interdisciplinary synthesis, and curiosity become even more valuable. For more on how AI is reshaping the workforce, check out our guide on navigating the AI job market in 2025.

The balancing act: National security, corporate interests, and public safety

The Anthropic case highlights the delicate balance that AI companies must strike. On one hand, they have a responsibility to ensure their technologies are not misused. On the other, they must engage with governments to address national security concerns. This tension is likely to intensify as AI capabilities continue to advance.

Clark’s confirmation that Anthropic briefed the Trump administration on Mythos—despite ongoing litigation—suggests that the company prioritizes national security over corporate disputes. Whether this approach will serve as a model for other AI companies remains to be seen. For a deeper look at similar cases, read our analysis of how AI companies partner with governments.

As the AI landscape evolves, one thing is clear: the conversation between Silicon Valley and Washington is only just beginning. The Mythos model may be too dangerous for public release, but its existence is already shaping the future of AI governance.


Someone Planted Backdoors in Dozens of WordPress Plugins—Thousands of Sites at Risk


A sophisticated supply chain attack has compromised dozens of WordPress plugins, potentially exposing thousands of websites to malicious code. The incident, first reported by security researcher Austin Ginder, involves backdoors planted by a new corporate owner of the plugin developer Essential Plugin. This WordPress plugin backdoor attack highlights the growing risk of plugin ownership changes going unnoticed by site administrators.

According to Ginder, the backdoor was inserted into the source code of multiple plugins after an anonymous buyer acquired Essential Plugin last year. The malicious code remained dormant for months before activating earlier this month, distributing harmful payloads to any site running the affected plugins. WordPress’s plugin directory shows that over 20,000 active installations are impacted, while Essential Plugin claims more than 400,000 installs and 15,000 customers.

How the WordPress Plugin Backdoor Attack Works

Plugins are essential for extending WordPress functionality, but they also grant deep access to a website’s core files. In this case, the attackers exploited that trust. The backdoor allowed them to inject arbitrary code into websites, potentially stealing data, redirecting traffic, or installing further malware.

What makes this attack particularly dangerous is the lack of transparency. WordPress does not notify users when a plugin changes ownership. As a result, site owners may unknowingly run software controlled by malicious actors. Ginder warns that this is the second plugin hijacking discovered in as many weeks, suggesting a broader trend.

Affected Plugins and Immediate Steps

The compromised plugins have been removed from the WordPress directory, and their status is listed as permanently closed. However, if you have any of these plugins installed, they may still be active on your site. Ginder has published a full list of affected plugins on his blog.

To protect your website, follow these steps immediately:

  • Check your installed plugins against the affected list (a minimal audit sketch follows this list).
  • Delete any compromised plugins completely—not just deactivate them.
  • Scan your site for malware using a reputable security plugin like Wordfence.
  • Change all admin passwords and review user accounts for suspicious activity.
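
On a server with WP-CLI available, the first two steps can be scripted. The sketch below is illustrative only: the AFFECTED_SLUGS placeholder must be filled in from Ginder's published list, and the automatic delete step should be reviewed before running it against a production site.

```python
import json
import subprocess

# Cross-check installed plugins against the published list of affected slugs.
# AFFECTED_SLUGS is a placeholder: populate it from Ginder's blog post.
# Assumes WP-CLI ("wp") is available on the server hosting the site.
AFFECTED_SLUGS = {
    "example-affected-plugin",  # placeholder slug, not a confirmed indicator
}

def installed_plugins() -> list[dict]:
    """Return installed plugins as reported by `wp plugin list --format=json`."""
    out = subprocess.run(
        ["wp", "plugin", "list", "--format=json"],
        check=True, capture_output=True, text=True,
    )
    return json.loads(out.stdout)

def main() -> None:
    hits = [p for p in installed_plugins() if p["name"] in AFFECTED_SLUGS]
    if not hits:
        print("No plugins from the affected list are installed.")
        return
    for plugin in hits:
        print(f"AFFECTED: {plugin['name']} (status: {plugin['status']})")
        # Delete, don't just deactivate, per the guidance above:
        subprocess.run(["wp", "plugin", "delete", plugin["name"]], check=True)

if __name__ == "__main__":
    main()
```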

Security researchers have long warned about the risks of supply chain attacks in open-source ecosystems. When a plugin changes hands, the new owner can alter its code without users’ knowledge, turning a trusted tool into a vector for attack.

Why Plugin Ownership Changes Are a Security Blind Spot

WordPress powers over 40% of all websites, making it a prime target for attackers. Plugin developers often sell their products to third parties, but the platform provides no automated alert system for ownership transfers. This leaves site owners vulnerable to what security experts call “plugin hijacking.”

In this case, the backdoor was added shortly after the sale and remained hidden for months. The delayed activation suggests a planned, patient attack designed to maximize impact. Ginder believes that similar attacks may already be underway on other plugins.

What the Industry Can Learn

This incident underscores the need for better security practices in the WordPress ecosystem. Plugin directories should implement ownership change notifications, and site owners should regularly audit their plugins for unusual behavior. Additionally, using a comprehensive WordPress security checklist can help mitigate risks.

Representatives for Essential Plugin have not responded to requests for comment. Meanwhile, the WordPress community is urging users to remain vigilant and report any suspicious plugin activity.

Final Thoughts on the WordPress Plugin Backdoor Attack

This WordPress plugin backdoor attack serves as a stark reminder that trust in third-party code must be earned and verified. As supply chain attacks become more common, site owners must take proactive steps to secure their installations. Removing compromised plugins, monitoring for anomalies, and staying informed about security advisories are essential practices.

Have you checked your WordPress plugins today? If not, now is the time to act before your site becomes the next victim.


Governance Gaps Emerge as AI Agents Drive 76% Increase in Non-Human Identities


The rapid adoption of AI agents in enterprise workflows is outpacing security efforts, according to a new report from the SANS Institute. The organization’s 2026 State of Identity Threats & Defenses Survey, based on interviews with over 500 security professionals worldwide, reveals that non-human identities (NHIs)—such as service accounts, API keys, and automation bots—have surged by 76% across most organizations. This growth is largely driven by agentic AI, with 74% of companies already deploying AI agents that require credentials. However, the study warns that governance gaps around AI agents are leaving enterprises vulnerable to new security risks.

The Rise of Non-Human Identities and Agentic AI

Non-human identities are quietly multiplying within organizations, often doubling or tripling in number. This explosion is tied to the increasing use of agentic AI systems, which operate autonomously and need access permissions to interact with critical infrastructure. Unlike traditional NHIs that follow fixed logic, agentic AI interprets instructions and can take unpredictable actions. These agents can behave like over-privileged insiders operating at machine speed, a scenario that introduces risks like hallucinations and unauthorized data access.

As a result, the SANS Institute highlights a pressing need for NHI governance frameworks. Without proper controls, these identities can become vectors for breaches. Forrester Research warned last year that an agentic AI deployment will cause a publicly disclosed data breach by the end of 2026, urging organizations to adopt a “minimum viable security” approach.

Credential Hygiene Failures Expose Weaknesses

One of the most alarming findings from the survey is the widespread credential hygiene failures in managing NHIs. A staggering 92% of organizations fail to rotate machine credentials on a 90-day cycle, fearing that this might disrupt service accounts. Most (59%) rotate fewer than half of their NHI credentials quarterly, while 15% don’t even know their rotation rate. Additionally, 5% of respondents are unaware if their organization is running agentic AI at all.

These gaps are compounded by reliance on manual processes. Many organizations still use ticket-based provisioning and periodic access reviews, which simply cannot scale when environments have large volumes of NHIs operating across DevOps, cloud, and SaaS systems. Effective NHI security strategies require automation and centralized oversight.

AI Governance Lags Behind Deployment

The SANS study underscores that most organizations lack a coordinated security-first approach to AI deployment. Richard Greene, a certified instructor at SANS Institute, warns: “We’ve already seen what happens when non-human identities scale without guardrails, and agentic AI is moving even faster.” He notes that while some progress is visible—nearly 40% of organizations now use human-in-the-loop approvals for AI agent actions—the real challenge is staying ahead as these systems shift from pilots to core operations.
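
Neither SANS nor the survey prescribes a particular implementation of human-in-the-loop approval, but the pattern itself is straightforward: hold any high-risk agent action until an operator explicitly signs off. The Python sketch below is a minimal illustration; the risk categories, the ActionRequest structure, and the console prompt are hypothetical stand-ins for whatever approval workflow an organization already runs.

```python
from dataclasses import dataclass

# Minimal human-in-the-loop gate: agent actions classified as high risk are
# held for an operator's explicit approval before executing. The risk labels
# and the console prompt are illustrative; a production system would route
# approvals through a ticketing or chat-ops workflow instead.
HIGH_RISK = {"delete_data", "modify_iam_policy", "send_external_email"}

@dataclass
class ActionRequest:  # hypothetical structure for a proposed agent action
    agent_id: str
    action: str
    target: str

def approved_by_human(request: ActionRequest) -> bool:
    answer = input(f"Approve {request.action} on {request.target} "
                   f"by agent {request.agent_id}? [y/N] ")
    return answer.strip().lower() == "y"

def execute(request: ActionRequest) -> None:
    if request.action in HIGH_RISK and not approved_by_human(request):
        print(f"Blocked: {request.action} on {request.target}")
        return
    print(f"Executing {request.action} on {request.target}")  # placeholder

if __name__ == "__main__":
    execute(ActionRequest("agent-7", "delete_data", "s3://reports/2025"))
```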

To close these governance gaps, the SANS Institute recommends adopting secrets vaults, automated credential rotation, and scoped least-privilege access. The challenge is scaling these measures to keep pace with the continued growth of NHIs. Zero-trust principles for NHIs can help mitigate risks by limiting permissions and enforcing continuous monitoring.

Recommendations for Closing the Governance Gap

Building on these findings, organizations must prioritize several actions to address NHI governance challenges. First, implement automated credential management to eliminate manual rotation failures. Second, enforce least-privilege access for all AI agents, ensuring they only have permissions necessary for their tasks. Third, establish human oversight mechanisms, such as approval workflows for high-risk actions. Finally, conduct regular audits to detect unknown NHIs and assess their behavior.
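
As a concrete illustration of the first recommendation, the sketch below flags one common class of non-human credential, AWS IAM access keys, that has exceeded a 90-day rotation window. Using IAM keys as the example and the 90-day threshold are assumptions drawn from the rotation cycle cited in the survey; the same idea applies to API keys and service-account secrets held in other systems.

```python
from datetime import datetime, timezone

import boto3

# Minimal sketch: detect machine credentials that have exceeded a 90-day
# rotation window. AWS IAM access keys stand in here for one concrete class
# of non-human credential; the threshold mirrors the rotation cycle cited in
# the survey and is otherwise an assumption.
MAX_AGE_DAYS = 90

def stale_access_keys() -> list[tuple[str, str, int]]:
    iam = boto3.client("iam")
    now = datetime.now(timezone.utc)
    stale = []
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            keys = iam.list_access_keys(UserName=user["UserName"])
            for key in keys["AccessKeyMetadata"]:
                age = (now - key["CreateDate"]).days
                if key["Status"] == "Active" and age > MAX_AGE_DAYS:
                    stale.append((user["UserName"], key["AccessKeyId"], age))
    return stale

if __name__ == "__main__":
    for user, key_id, age in stale_access_keys():
        print(f"Rotate {key_id} for {user}: {age} days old")
```

Feeding a report like this into an automated rotation or secrets-vault workflow addresses the manual-process gap the survey describes.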

As agentic AI continues to evolve, the need for robust governance frameworks becomes urgent. Without them, the 76% increase in NHIs could translate into a proportional rise in security incidents. Building a comprehensive AI security framework is no longer optional—it’s a business imperative.
