
CyberSecurity

Adobe releases critical patch for PDF zero-day bug exploited for months by hackers


Adobe has released an urgent security update for its widely used PDF software, Acrobat and Reader, to fix a critical vulnerability that hackers have been actively exploiting for at least four months. The flaw, tracked as CVE-2026-34621, allows attackers to remotely install malware on a victim’s device simply by tricking them into opening a maliciously crafted PDF file on Windows or macOS. It is a classic zero-day: the flaw was being exploited in the wild before Adobe could develop a patch.

According to Adobe’s advisory, the bug affects Acrobat DC, Reader DC, and Acrobat 2024. The company confirmed it is aware of active exploitation, meaning hackers have been leveraging this weakness to break into computers worldwide. While the full scale of the campaign remains unknown, the ubiquity of Adobe’s PDF software makes it a prime target for both cybercriminals and state-sponsored hackers.

How the PDF zero-day vulnerability was discovered

Security researcher Haifei Li, founder of the exploit-detection platform EXPMON, uncovered the CVE-2026-34621 exploit after a malicious PDF was uploaded to his malware scanner. In a detailed blog post, Li revealed that another copy of the same malicious file first appeared on VirusTotal, a popular online malware analysis service, as early as late November 2025. This timeline indicates that attackers had been using the PDF zero-day vulnerability for months before Adobe’s patch.

Li’s analysis showed that opening the poisoned PDF could give the attacker full control over the victim’s system. “This could lead to full control of the victim’s system,” Li wrote, adding that the hacker could then steal a wide range of sensitive data. Unfortunately, it remains unclear who is behind the campaign or what specific targets were chosen, as Li could not retrieve additional exploits from the attacker’s servers.

Why this Adobe security patch matters for users

This Adobe security patch is critical because PDF files are exchanged daily across industries—from legal contracts to academic papers. A single booby-trapped PDF can deliver malware into even well-protected networks if a user unknowingly opens it. The zero-day Adobe faced here underscores the persistent threat to widely deployed software.

Adobe has urged all users of Acrobat DC, Reader DC, and Acrobat 2024 to update their software immediately to the latest versions. The patch is available through the software’s automatic update mechanism or via the Adobe website. For enterprise environments, IT administrators should prioritize this update to mitigate the risk of Acrobat Reader bug exploitation.

Protecting against future PDF exploits

Beyond applying the latest patch, users can adopt safer practices to reduce exposure to similar threats. Always verify the source of PDF files before opening them, especially if they arrive unexpectedly via email or downloads. Consider using built-in security features like Adobe’s Protected View, which opens PDFs in a sandboxed environment to limit potential damage.
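For managed Windows fleets, Protected View can typically be enforced centrally through Adobe’s FeatureLockDown registry policies rather than relying on each user’s settings. The fragment below is an illustrative sketch following Reader DC conventions; verify the exact path and value names against Adobe’s preference reference for your product version.

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Adobe\Acrobat Reader\DC\FeatureLockDown]
; iProtectedView: 0 = off, 1 = files from untrusted locations, 2 = all files
"iProtectedView"=dword:00000002
; bProtectedMode: keep the Protected Mode sandbox enabled
"bProtectedMode"=dword:00000001
```

Setting `iProtectedView` to 2 opens every PDF in the sandbox, not only files from untrusted locations, which is the stricter posture for high-risk environments.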

Security experts also recommend using dedicated PDF readers with enhanced security controls or enabling automatic updates across all software. For organizations, deploying endpoint detection and response (EDR) tools can help identify suspicious behavior linked to malicious PDF malware. As this incident shows, even trusted software can harbor hidden dangers for months before a fix is released.

The PDF zero-day vulnerability patched by Adobe is a stark reminder of the evolving threat landscape. Staying vigilant and updating software promptly remain the best defenses against such stealthy attacks. For more on securing your digital workspace, check out our guide on cybersecurity best practices for remote teams and learn how to secure PDF files against malware.


CyberSecurity

Silent Security Risk: Google API Keys Quietly Grant Gemini Access on Android


A newly uncovered flaw in Google’s API key system is putting Android applications at risk. According to a CloudSEK advisory published on April 8, the issue allows existing API keys to silently access Google’s Gemini AI platform without developer knowledge or user consent. This means that millions of Android users could be exposed to data breaches, unexpected costs, and service disruptions.

The vulnerability revolves around Google’s long-standing API key format, originally designed for public-facing services like Maps and Firebase. When the Gemini API is enabled in a Google Cloud project, existing keys automatically gain access to AI endpoints—no notification, no warning. This quiet shift creates a widespread risk that many developers are unaware of.

How the Google API Keys Gemini Access Flaw Works

CloudSEK’s research analyzed 10,000 Android apps using its BeVigil platform. The team identified 32 active keys across 22 applications, which collectively account for more than 500 million installs. In one confirmed case, researchers accessed user-uploaded audio files from an English-learning app via the Gemini Files API. The data included file metadata, timestamps, and accessible links—clear evidence that private content could be retrieved using exposed keys.

This behavior marks a departure from earlier Google guidance, which stated that such keys were safe to embed in client-side code. Developers who followed those recommendations may now be unknowingly exposing credentials linked to advanced AI systems. As a result, the Android app vulnerability is not just a theoretical risk—it’s a practical threat.
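A defender who discovers an embedded key can test whether it actually reaches Gemini with a read-only probe. The sketch below is illustrative, not from the advisory: the model-listing endpoint path and the status-code interpretation are assumptions based on the public Generative Language REST surface.

```python
# Sketch: check whether an extracted Google API key can reach the
# Gemini (Generative Language) API. The endpoint path and the
# status-code interpretation are assumptions, not advisory details.
import urllib.error
import urllib.request

GEMINI_MODELS_URL = "https://generativelanguage.googleapis.com/v1beta/models"


def probe_url(api_key: str) -> str:
    """Build the read-only model-listing URL used to test the key."""
    return f"{GEMINI_MODELS_URL}?key={api_key}"


def interpret(status: int) -> str:
    """Map an HTTP status from the probe to a rough verdict."""
    if status == 200:
        return "key has Gemini access -- rotate and restrict it"
    if status in (401, 403):
        return "key is rejected or restricted away from Gemini"
    if status == 429:
        return "quota exhausted -- key may already be abused"
    return f"inconclusive (HTTP {status})"


def check_key(api_key: str) -> str:
    """Probe the key and return a verdict string."""
    try:
        with urllib.request.urlopen(probe_url(api_key), timeout=10) as resp:
            return interpret(resp.status)
    except urllib.error.HTTPError as err:
        return interpret(err.code)
```

A 200 response to the listing call is a strong signal the key should be rotated and scoped immediately.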

The Financial and Security Implications of API Key Exposure

The risks linked to this flaw are substantial. Attackers can access private files stored in Gemini, generate unauthorized API usage leading to financial losses, and disrupt services through quota exhaustion. Real-world incidents highlight the potential impact: one developer reported $15,400 in charges within hours of a compromised key being exploited. Another organization faced losses of $128,000, despite implementing security controls.

Furthermore, the mobile ecosystem amplifies the threat. App packages can be easily downloaded and analyzed to extract embedded keys, and many of these keys persist across multiple versions. Even after a developer ships an update, older app packages still in circulation can expose keys that remain valid, increasing long-term exposure.
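To illustrate how easily embedded keys can be harvested from decoded app resources, the scanner below matches the commonly observed Google API key format. The `AIza` prefix and 39-character length are a widely used heuristic, not an official specification.

```python
# Sketch: scan decoded app resources (strings, config files, smali)
# for embedded Google API keys. The "AIza" prefix plus 35 further
# URL-safe characters is a heuristic for the observed key format.
import re

KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")


def find_keys(text: str) -> list[str]:
    """Return the unique candidate Google API keys found in text."""
    return sorted(set(KEY_RE.findall(text)))
```

Any key this surfaces in a shipped package should be assumed compromised and rotated.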

What Developers and Users Should Do Now

CloudSEK’s advisory is clear: this is a structural flaw. “Google merged the concept of public keys with server-side AI secrets,” the researchers wrote. “Enabling Gemini should have triggered a mandatory key restriction or forced the creation of a new, scoped key.”

Therefore, developers must take immediate action. First, audit all Google Cloud projects to identify which keys have Gemini API access. Second, rotate any exposed keys immediately. Third, restrict API access to only the services required. For users, the best defense is to keep apps updated and monitor for unusual activity.
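That audit/rotate/restrict cycle can be sketched with gcloud’s API Keys commands. The project ID, key name, and target service below are placeholders, and the flags should be verified against your gcloud version before use.

```shell
# Sketch of the audit/rotate/restrict cycle for Google API keys.
# PROJECT and KEY_NAME are placeholders; verify flags with
# `gcloud services api-keys --help` for your SDK version.
PROJECT="my-android-app-project"

# 1. Audit: list all API keys in the project.
gcloud services api-keys list --project="$PROJECT"

# 2. Inspect one key's current restrictions (if any).
gcloud services api-keys describe KEY_NAME --project="$PROJECT"

# 3. Restrict: limit the key to only the services the app needs,
#    which implicitly excludes generativelanguage.googleapis.com.
gcloud services api-keys update KEY_NAME --project="$PROJECT" \
    --api-target=service=maps-android-backend.googleapis.com

# 4. Rotate: create a scoped replacement, ship it, then delete the old key.
gcloud services api-keys create --project="$PROJECT" \
    --display-name="app-key-rotated" \
    --api-target=service=maps-android-backend.googleapis.com
gcloud services api-keys delete KEY_NAME --project="$PROJECT"
```

Restricting by `--api-target` means a leaked key is useless against any service outside the allow-list, including Gemini.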

Infosecurity Magazine has reached out to Google for comment on these findings, but has not received a response at the time of publication. In the meantime, the Android app vulnerability remains a pressing concern for the entire mobile ecosystem.

For more on AI security, read our article on Security Researchers Sound the Alarm on Vulnerabilities in AI-Generated Code. Additionally, learn about best practices for securing cloud APIs.

Image credit: Nwz / Shutterstock.com


CyberSecurity

Anthropic co-founder confirms company briefed Trump administration on dangerous Mythos AI model


In a revealing interview at the Semafor World Economy Summit, Anthropic co-founder Jack Clark confirmed that the AI company had briefed the Trump administration about its new Mythos model. The model, announced just last week, is considered so dangerous that it will not be released to the public, primarily due to its powerful cybersecurity capabilities.

Why Anthropic engaged with the government despite ongoing legal disputes

This confirmation comes at a time when Anthropic is simultaneously suing the U.S. government. In March, the company filed a lawsuit against Trump’s Department of Defense after the agency labeled Anthropic a supply-chain risk. The dispute stemmed from the Pentagon’s desire for unrestricted access to Anthropic’s AI systems for uses including mass surveillance and fully autonomous weapons—a deal that ultimately went to OpenAI instead.

However, Clark downplayed the significance of this conflict during his interview. He described the supply-chain risk designation as a “narrow contracting dispute” and emphasized that it should not overshadow the company’s commitment to national security. “Our position is the government has to know about this stuff,” Clark stated. “We have to find new ways for the government to partner with a private sector that is making things that are truly revolutionizing the economy.”

Mythos AI model: A cybersecurity powerhouse deemed too risky for public release

The Mythos model represents a significant leap in AI capabilities, particularly in the realm of cybersecurity. Its potential for both defensive and offensive applications made it a subject of intense interest for government agencies. Reports indicate that Trump officials were encouraging major banks—including JPMorgan Chase, Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley—to test the model.

Clark confirmed the briefings directly: “So absolutely, we talked to them about Mythos, and we’ll talk to them about the next models as well.” This transparency, he argued, is essential for balancing innovation with national security concerns.

What makes Mythos different from other AI models

Unlike many AI systems that focus on general-purpose tasks, Mythos was specifically designed for cybersecurity applications. Its capabilities are so advanced that Anthropic decided against a public release, fearing misuse by malicious actors. This decision aligns with the company’s broader philosophy of responsible AI development, even if it means forgoing commercial opportunities.

AI’s impact on employment: Clark offers a nuanced view

Beyond the Mythos model, Clark addressed broader questions about AI’s societal impact, particularly on employment. While Anthropic CEO Dario Amodei has warned that AI could bring unemployment to Depression-era levels, Clark offered a slightly different perspective. He explained that Amodei’s estimates are based on the belief that AI will become much more powerful than people expect, very quickly.

Clark, who leads a team of economists at Anthropic, noted that the company is currently seeing “some potential weakness in early graduate employment” across select industries. However, he emphasized that Anthropic is prepared for major employment shifts should they occur.

Advice for college students in the age of AI

When asked what majors students should pursue or avoid in light of AI’s impact, Clark offered broad but insightful advice. He suggested that the most valuable fields are those that “involve synthesis across a whole variety of subjects and analytical thinking about that.”

“That’s because what AI allows us to do is it allows you to have access to sort of an arbitrary amount of subject matter experts in different domains,” Clark explained. “But the really important thing is knowing the right questions to ask and having intuitions about what would be interesting if you collided different insights from many different disciplines.”

This advice underscores a key theme: as AI becomes more capable, human skills like critical thinking, interdisciplinary synthesis, and curiosity become even more valuable. For more on how AI is reshaping the workforce, check out our guide on navigating the AI job market in 2025.

The balancing act: National security, corporate interests, and public safety

The Anthropic case highlights the delicate balance that AI companies must strike. On one hand, they have a responsibility to ensure their technologies are not misused. On the other, they must engage with governments to address national security concerns. This tension is likely to intensify as AI capabilities continue to advance.

Clark’s confirmation that Anthropic briefed the Trump administration on Mythos—despite ongoing litigation—suggests that the company prioritizes national security over corporate disputes. Whether this approach will serve as a model for other AI companies remains to be seen. For a deeper look at similar cases, read our analysis of how AI companies partner with governments.

As the AI landscape evolves, one thing is clear: the conversation between Silicon Valley and Washington is only just beginning. The Mythos model may be too dangerous for public release, but its existence is already shaping the future of AI governance.


CyberSecurity

Google Warns of New Threat Group Targeting BPOs and Helpdesks via Live Chat


A new financially motivated threat cluster, tracked as UNC6783, is actively targeting business process outsourcers (BPOs) and large enterprises, using live chat channels to steal sensitive data for extortion. Google Threat Intelligence Group (GTIG) principal threat analyst Austin Larsen recently detailed the group’s tactics, which involve sophisticated social engineering and multi-factor authentication (MFA) bypass techniques.

According to Larsen, UNC6783 may be linked to the “Raccoon” persona and has already targeted several dozen “high-value corporate entities” across multiple sectors. The group primarily focuses on BPOs but also directly attacks in-house helpdesk and support teams. The end goal is clear: data theft for extortion.

UNC6783 Tactics: Live Chat Phishing and MFA Bypass

This BPO helpdesk threat group relies heavily on social engineering through live chat to direct employees to malicious, spoofed Okta login pages. Larsen noted that these domains often mimic the targeted organization using patterns like [.]zendesk-support<##>[.]com. The phishing kit used by UNC6783 is designed to bypass standard MFA verification by stealing clipboard contents, allowing attackers to enroll their own devices for persistent access.

In addition to this approach, GTIG has observed UNC6783 using fake security software updates to trick users into downloading remote access malware. Following data exfiltration, the group sometimes uses Proton Mail accounts to deliver ransom notes. These methods are reminiscent of other extortion-focused groups like Scattered Lapsus$ Hunters.

Last year, similar campaigns emerged using Zendesk phishing domains to harvest employee credentials. Hackers also submitted fraudulent tickets to helpdesk staff to infect them with remote access trojans (RATs).

Protecting BPOs and Helpdesk Teams from Social Engineering

Given the sophistication of UNC6783, organizations must take proactive steps to defend their helpdesk and BPO operations. Larsen outlined several key recommendations for helpdesk social engineering defense.

Implement Phishing-Resistant MFA

Larsen urges organizations to deploy phishing-resistant MFA, such as FIDO2 hardware security keys like Titan Security Keys, for all users, especially those in high-risk roles like support and helpdesk. This can prevent attackers from bypassing standard MFA through clipboard theft.

Monitor Live Chat for Suspicious Activity

Live chat channels should be actively monitored for interactions that direct users to external links or ask for sensitive information. Employees must be educated on this specific campaign to recognize red flags.

Proactively Block Malicious Domains

Organizations should proactively block any unauthorized domains following the [.]zendesk-support[.]com pattern. Additionally, monitoring for unauthorized binary execution, especially installers or “updates” downloaded during support sessions, is critical.
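The reported naming pattern can be turned into a simple blocklist rule. The regex below is one interpretation of the advisory’s `[.]zendesk-support<##>[.]com` pattern (defanging brackets removed, the numeric suffix treated as optional so the plain form is also caught); tune it before production use.

```python
# Sketch: flag hostnames matching the spoofed-support pattern reported
# for UNC6783, read as "<anything>.zendesk-support<digits>.com" with the
# digits optional. The pattern semantics are an assumption from the
# advisory's defanged example.
import re

SPOOF_PATTERN = re.compile(
    r"(^|\.)zendesk-support\d*\.com$",  # e.g. acme.zendesk-support12.com
    re.IGNORECASE,
)


def is_suspicious(hostname: str) -> bool:
    """Return True if the hostname matches the reported spoof pattern."""
    return bool(SPOOF_PATTERN.search(hostname.strip().lower()))
```

A rule like this can feed a proxy blocklist or a DNS-log hunt for past contact with matching domains.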

Audit MFA Devices Regularly

Regular audits of newly enrolled MFA devices across the organization can help identify unauthorized additions. This simple step can prevent attackers from maintaining persistent access.
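A minimal sketch of such an audit, assuming an export of factor-enrollment records in a loosely Okta-like shape (the `id`, `userId`, and `created` field names are assumptions; adapt them to your identity provider’s API):

```python
# Sketch: flag MFA factors enrolled within the audit window from an
# export of factor records. The record shape loosely follows an
# Okta-style factors response and is an assumption; adapt field names
# to your identity provider.
from datetime import datetime, timedelta, timezone


def recent_enrollments(factors, days=7, now=None):
    """Return factor records created within the last `days` days."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=days)
    flagged = []
    for f in factors:
        created = datetime.fromisoformat(f["created"].replace("Z", "+00:00"))
        if created >= cutoff:
            flagged.append(f)
    return flagged
```

Each flagged record can then be cross-checked against recent helpdesk tickets to confirm the enrollment was legitimate.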

As this live chat phishing campaign evolves, BPOs and enterprises must remain vigilant. For more on securing helpdesk operations, see our guide on helpdesk security best practices. Additionally, explore how to prevent MFA bypass attacks for further insights.

Ultimately, the threat from UNC6783 highlights the growing sophistication of social engineering attacks targeting support channels. Organizations should fold these defenses into their broader cybersecurity strategy: regular training and technical controls are both essential to mitigating the risk of BPO data extortion.
