Apple Finally Builds the AI Photo Editor That Google and Samsung Have Had for Years

For years, Google and Samsung have offered AI-powered photo editing tools that Apple users could only envy. Now, according to a report from Bloomberg’s Mark Gurman, Apple is preparing its own AI photo editor for the next major software update. The features, dubbed Extend, Enhance, and Reframe, will arrive as part of a dedicated “Apple Intelligence Tools” section inside the Photos app on iOS 27, iPadOS 27, and macOS 27.

This move signals a major shift for Apple, which has lagged behind competitors in integrating generative AI into its photo editing suite. While Google Photos introduced Magic Editor in 2023 and Samsung’s Galaxy AI followed with similar capabilities, Apple’s only offering so far has been the underwhelming Clean Up tool. But the Cupertino giant is now ready to step up its game.

What Will the New Apple Intelligence Photo Editing Tools Do?

The three new features in the Apple AI photo editor are designed to tackle common editing pain points—expanding images, enhancing quality, and adjusting perspective—all while running entirely on-device, a hallmark of Apple’s privacy-first approach. The company also promises that edits will complete in seconds.

Extend: Expanding Your Photos Seamlessly

The Extend feature uses AI to generate new imagery around the edges of a photo, effectively expanding the frame. For example, you can add surrounding context to a close-up shot or create negative space on either side of the subject. This is similar to Google’s Magic Editor, which lets you reframe images by generating missing content. However, Apple’s implementation relies on on-device machine learning, meaning your photos never leave your device.
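
Apple has not published an API for Extend, so any code here is speculative, but the unglamorous first half of an outpainting pipeline is ordinary image plumbing: composite the original into a larger canvas and track which pixels are new so a generative model knows what to fill. A minimal Core Graphics sketch of that preparation step, with the model call deliberately left out:

```swift
import CoreGraphics

// Composite the original image into a larger, transparent canvas.
// The transparent border is the region a generative model would
// synthesize; that call is omitted, since Apple has published no
// API for Extend.
func expandedCanvas(for image: CGImage, padding: Int) -> CGImage? {
    let width = image.width + 2 * padding
    let height = image.height + 2 * padding
    guard let ctx = CGContext(data: nil,
                              width: width,
                              height: height,
                              bitsPerComponent: 8,
                              bytesPerRow: 0,
                              space: CGColorSpaceCreateDeviceRGB(),
                              bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return nil }

    // The original lands in the center; everything around it stays
    // transparent and is what Extend would invent.
    ctx.draw(image, in: CGRect(x: padding, y: padding,
                               width: image.width, height: image.height))
    return ctx.makeImage()
}
```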

Enhance: One-Tap Quality Boost

Enhance is a one-tap button that instantly adjusts color, lighting, and overall image quality. Instead of fiddling with multiple sliders, users can simply tap to improve a photo’s appearance. This feature is reminiscent of the “Photo Assist” tool in Samsung’s Galaxy AI, which offers similar automatic enhancements. For casual users, this could be a game-changer, making professional-looking edits accessible to everyone.
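
Apple already ships a rougher, non-generative version of this idea in Core Image, whose auto-adjustment API analyzes a photo and returns preconfigured tone, vibrance, and red-eye filters. A minimal sketch of that existing pipeline gives a sense of what a one-tap Enhance would build on:

```swift
import CoreImage

// Core Image's auto-adjustment analyzes the image and returns a set
// of preconfigured CIFilters (tone curve, vibrance, red-eye, and so
// on). Chaining them is the closest thing to one-tap enhancement in
// Apple's current public APIs.
func autoEnhance(_ input: CIImage) -> CIImage {
    var output = input
    for filter in input.autoAdjustmentFilters() {
        filter.setValue(output, forKey: kCIInputImageKey)
        if let result = filter.outputImage {
            output = result
        }
    }
    return output
}
```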

Reframe: For Spatial Photos on Vision Pro

Reframe is designed specifically for spatial photos captured on the Apple Vision Pro. It allows users to shift the perspective of a 3D image after it’s been taken, moving from a front-facing to a side-facing view. This is a niche but powerful feature for those using Apple’s mixed-reality headset, giving them more control over their immersive content.

Is Apple Actually Ready to Release All Three Features?

Not quite yet. According to Gurman, both Extend and Reframe are producing inconsistent results during internal testing. The underlying AI models may need more refinement before they can deliver reliable outputs. If the results don’t improve significantly by Apple’s September launch event, the company might delay these features or scale them back.

This is a familiar pattern for Apple, which often prioritizes polish over speed. However, the pressure is mounting. Google’s Magic Editor has been praised for its accuracy, and Samsung’s Galaxy AI features are now widely available on devices like the Galaxy S24. Apple’s Clean Up tool, which was introduced in iOS 18, has been criticized for being less effective than its rivals. As a result, the success of the Apple AI photo editor hinges on these new features working flawlessly.

In my opinion, Apple genuinely needs Extend, Enhance, and Reframe to work—and work in time for a showcase at WWDC 2026 and a public release in September. The company’s reputation for seamless user experience is at stake. If these features deliver on their promise, they could finally close the gap with Google and Samsung. If not, Apple risks falling further behind in the AI photo editing race.

For now, all eyes are on Apple’s next moves. The company has a history of entering markets late but executing with precision. Whether it can do the same with AI photo editing remains to be seen. But one thing is clear: the battle for the best smartphone photo editor is heating up, and Apple is finally ready to play.

Google Gemini’s Next Leap: Reading Your Emails and Calendar to Act Before You Ask

Imagine an assistant that doesn’t wait for you to say its name. Instead, it scans your inbox, checks your schedule, and offers help before you even realize you need it. That’s exactly what Google Gemini is aiming to deliver with a newly discovered feature called Proactive Assistance. According to a deep dive into the latest Google app beta by 9to5Google, the code reveals a system designed to anticipate your needs—without you lifting a finger.

This marks a significant shift in how we interact with AI. Instead of reactive commands, Gemini will soon offer proactive suggestions based on what it learns from your digital life. But how does this work, and what does it mean for your privacy? Let’s break it down.

How Gemini Proactive Assistance Works

The core idea behind Gemini Proactive Assistance is simple: the AI monitors your apps and triggers helpful actions automatically. During initial setup, you choose which services Gemini can access. Gmail and Google Calendar are the primary examples, but the feature also extends to incoming notifications and on-screen content—only if you grant permission.

For instance, if you have a meeting scheduled, Gemini might proactively send a notification with a practice quiz generated by its AI. Google actually demonstrated this exact scenario at I/O 2025, showing how the assistant noticed a test on your calendar and offered help without being asked. It’s a glimpse of a future where your phone becomes a true personal assistant.
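
Google has not detailed the internals, but the consent-gated flow the teardown describes can be modeled in a few lines. In this sketch (written in Swift purely for illustration, with all type and rule names hypothetical), no rule ever sees an event from a data source the user has not explicitly granted:

```swift
import Foundation

// Hypothetical model of consent-gated proactive triggers. The types
// and rules are illustrative, not Google's implementation.
enum DataSource: Hashable {
    case gmail, calendar, notifications, screenContent
}

struct ProactiveRule {
    let source: DataSource
    let matches: (String) -> Bool
    let suggest: (String) -> String
}

struct ProactiveAssistant {
    var grantedSources: Set<DataSource>   // chosen by the user at setup
    var rules: [ProactiveRule]

    // Events from sources the user never opted into are dropped
    // before any rule can inspect them.
    func handle(event: String, from source: DataSource) -> [String] {
        guard grantedSources.contains(source) else { return [] }
        return rules
            .filter { $0.source == source && $0.matches(event) }
            .map { $0.suggest(event) }
    }
}

// Example: the I/O 2025 demo scenario, where a test on the calendar
// triggers an offer of an AI-generated practice quiz.
let assistant = ProactiveAssistant(
    grantedSources: [.calendar],
    rules: [ProactiveRule(
        source: .calendar,
        matches: { $0.localizedCaseInsensitiveContains("test") },
        suggest: { _ in "Upcoming test detected. Want a practice quiz?" })]
)
print(assistant.handle(event: "Biology test, Friday 9 AM", from: .calendar))
```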

What About the Daily Brief?

Interestingly, the previously named “Your Day” feed has been rebranded as “Daily Brief.” This is likely the first visible component of the broader Proactive Assistance rollout. Think of it as a morning digest that adapts based on your schedule, emails, and priorities—all without manual input.

Building on this, the feature isn’t just about calendar events. It can also read your notifications to identify urgent tasks or important messages. However, all of this happens only with your explicit consent, which brings us to a crucial topic: privacy.

Privacy: Does Gemini Send Your Data to the Cloud?

One of the most reassuring aspects of Gemini Proactive Assistance is its commitment to on-device processing. According to the beta code, everything the assistant accesses is handled in a private, encrypted environment directly on your phone. None of this data feeds into Google’s AI model training or leaves your device.

This means that while Gemini is reading your emails and calendar, it’s doing so locally. The AI learns your patterns without sending sensitive information to the cloud. For users concerned about privacy, this is a significant win. It’s a model that balances convenience with control—a rare combination in the world of AI assistants.

However, it’s worth noting that this feature is still in beta. The information comes from an APK teardown, meaning Google hasn’t officially confirmed the details. So while the potential is exciting, we should temper expectations until an official announcement.

What This Means for the Future of AI Assistants

Proactive Assistance represents a fundamental shift in how AI interacts with users. Instead of waiting for commands, the assistant learns your habits and offers help at the right moment. This could include sending reminders based on email threads, suggesting responses to calendar invites, or even preparing summaries of unread notifications.

As a result, the line between reactive and proactive AI is blurring. Google Gemini is positioning itself as a tool that understands context—not just words. For example, if you receive an email about a flight delay, Gemini might automatically suggest rescheduling your calendar appointments. This level of integration could save time and reduce cognitive load.

On the other hand, this raises questions about dependency. Will users become too reliant on AI to manage their lives? And how will Google handle edge cases where the AI misinterprets data? These are challenges that the company will need to address as the feature rolls out.

How to Prepare for Gemini Proactive Assistance

If you’re eager to try this feature, keep an eye on the Google app beta updates. Once it’s officially released, you’ll likely see a setup wizard that asks for permissions. Start by granting access to apps you trust, like Gmail and Calendar, and review your notification settings to ensure Gemini can read what’s relevant.

Additionally, consider exploring how to enable Gemini on Android to get familiar with the assistant’s current capabilities. For those who value privacy, remember that you can revoke permissions at any time. The key is to strike a balance between convenience and control.

In conclusion, Gemini Proactive Assistance is a bold step toward a more intuitive AI. By reading your emails, calendar, and notifications, it aims to help you before you even ask. While privacy safeguards are encouraging, the feature’s success will depend on how well it understands context—and how much trust users are willing to place in it.

The next iPhone moment might come from an AI company, not Samsung or Apple

Your smartphone is cluttered with dozens of apps. OpenAI wants to change that by replacing them all with a single AI agent that handles tasks seamlessly. According to a report from analyst Ming-Chi Kuo, the company is developing its own smartphone, complete with a custom processor co-designed with MediaTek and Qualcomm. This ambitious project could mark the next iPhone moment in tech history.

Sam Altman, OpenAI’s CEO, has hinted at this shift. In a post on X, he wrote, “feels like a good time to seriously rethink how operating systems and user interfaces are designed.” That statement is hardly a subtle clue about the company’s direction.

Why would OpenAI want to make a phone?

Previous attempts at AI-first devices, such as the Rabbit R1 and Humane AI Pin, failed because they lacked deep integration with existing apps and services. OpenAI aims to avoid those pitfalls by building its own hardware.

Full control over hardware and software

To deliver a truly comprehensive AI agent experience, OpenAI needs complete authority over both the operating system and the device. Depending on Android or iOS means following someone else’s rules.

Access to personal data

Your smartphone knows more about you than any other gadget. It tracks your location, habits, and daily context in real time. That data is invaluable for an AI agent that wants to anticipate your needs before you ask.

Scaling to the biggest device category

Smartphones remain the largest device category worldwide. For OpenAI to scale its technology, this is the platform to target.

How will the AI actually work on this phone?

According to Ming-Chi Kuo, the OpenAI smartphone will use a two-layer system. Lighter tasks, such as understanding your context and managing memory, will run on the device itself. Heavier processing will be offloaded to the cloud.
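
Kuo’s report does not say where the split falls, so the routing below is an assumption, but the hybrid pattern itself is easy to express: classify each task and send it to the cheapest tier that can handle it. The task categories here are invented for illustration:

```swift
// Illustrative sketch of a two-layer execution split. The task
// categories and routing policy are assumptions, not details from
// Kuo's report.
enum AgentTask {
    case contextTracking, memoryUpdate, quickReply   // lightweight
    case longFormGeneration, imageGeneration         // heavyweight
}

enum ExecutionTier {
    case onDevice   // low latency, works offline, data stays local
    case cloud      // larger models, more compute
}

func route(_ task: AgentTask) -> ExecutionTier {
    switch task {
    case .contextTracking, .memoryUpdate, .quickReply:
        return .onDevice
    case .longFormGeneration, .imageGeneration:
        return .cloud
    }
}
```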

This approach resembles Apple’s Private Cloud Compute, with one key difference: OpenAI has a proven AI model, while critics argue that Apple’s own AI efforts are still struggling. On the business side, OpenAI may bundle hardware with subscriptions, similar to how Apple bundles services, and build a developer ecosystem around its AI agents.

For more on how AI is reshaping hardware, check out our analysis of AI device trends.

Who is helping OpenAI build this thing?

Kuo reports that MediaTek and Qualcomm are the processor co-development partners. Luxshare, a Chinese manufacturer, is the exclusive system co-design and manufacturing partner. This partnership is significant.

Luxshare has long tried to challenge Hon Hai (Foxconn) in Apple’s supply chain without much success. This project gives Luxshare an early foothold in what could be the next major smartphone generation—a big deal for the company.

The reported timeline is 2028. That feels distant, but if OpenAI succeeds, the smartphone you use today could look very different by then. As we’ve seen with the evolution of AI smartphones, the industry is ripe for disruption.

In summary, the next iPhone moment may not come from Apple or Samsung. Instead, it could emerge from an AI company rethinking how we interact with technology. The question is: Are we ready for a phone that thinks for itself?

Adobe Firefly AI Assistant Is Now Live in Public Beta — Here’s How It Reinvents Creative Workflows

Adobe has officially rolled out the public beta of its Adobe Firefly AI Assistant, a conversational AI agent designed to sit across the entire Creative Cloud suite and execute complex, multi-step workflows on your behalf. Instead of jumping between tools manually, you simply describe what you need — and the assistant figures out which applications to use and in what order.

This launch marks a significant shift in how creatives interact with Adobe’s ecosystem. The assistant can orchestrate tasks across Photoshop, Lightroom, Premiere Pro, Firefly, and other apps, automating repetitive steps while keeping you in the driver’s seat.

What Can Adobe Firefly AI Assistant Do for You?

The assistant comes loaded with Creative Skills — pre-built workflows designed around common creative tasks. These include batch photo editing, mood board creation, portrait retouching, and generating social media variations optimized for platforms like Instagram, TikTok, Snapchat, and Facebook, all at once.

Building on this, the assistant taps into over 60 pro-grade tools across Adobe’s apps, such as Auto Tone, Generative Fill, Remove Background, and Vectorize. For example, if you’re a graphic designer needing a product mockup, you can upload a logo, a product image, and describe the outcome you want.

The assistant then handles scaling, alignment, lighting, and perspective automatically. However, you stay in control the whole time — you can see every step the assistant takes and jump in to redirect or adjust at any point. Over time, it also learns your preferred tools, workflows, and aesthetic choices to deliver more tailored results.
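
Adobe has not published how its planner works, but the visible behavior (a request decomposed into an ordered, inspectable list of tool invocations) is easy to model. The sketch below hard-codes the product-mockup example above; the planner logic and step details are illustrative only, though the tool names come from Adobe’s own list:

```swift
// Hypothetical model of an orchestrated workflow: each step names the
// app and tool plus a rationale the user can review before it runs.
// A real planner would be model-driven; this hard-codes one example.
struct ToolStep {
    let app: String
    let tool: String
    let rationale: String
}

func plan(for request: String) -> [ToolStep] {
    // Fixed plan for the product-mockup scenario; `request` would
    // normally drive tool selection.
    return [
        ToolStep(app: "Photoshop", tool: "Remove Background",
                 rationale: "Isolate the uploaded product shot"),
        ToolStep(app: "Photoshop", tool: "Generative Fill",
                 rationale: "Match lighting and perspective to the scene"),
        ToolStep(app: "Firefly", tool: "Auto Tone",
                 rationale: "Balance the final composite's color"),
    ]
}

// The user stays in the driver's seat: surface every step before
// executing, so any of them can be redirected or adjusted.
for step in plan(for: "Product mockup from my logo and product photo") {
    print("\(step.app) -> \(step.tool): \(step.rationale)")
}
```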

Is Adobe Firefly AI Assistant Coming to Other Platforms?

Yes, and that is where things get interesting. Adobe is actively working on bringing Firefly AI Assistant’s pro-grade tools to third-party AI platforms. Anthropic’s Claude is already on the list, which means you could eventually access Adobe’s creative toolkit directly from outside the Creative Cloud ecosystem.

In addition, Adobe is adding new AI models to the Firefly app itself, including OpenAI’s GPT Image 2, Google’s Veo 3.1, Runway’s Gen-4.5, and ElevenLabs’ Multilingual v2, among others. This cross-platform approach could redefine how creatives integrate AI into their daily workflows.

Who Can Access the Public Beta?

The public beta is now available for Creative Cloud Pro subscribers and paid Firefly plan holders across Pro, Pro Plus, and Premium tiers. Eligible users will also receive complimentary generative credits that reset daily during the beta period.

As a result, this beta offers a hands-on opportunity to test the assistant’s capabilities before a wider release. For more insights on optimizing your creative workflow, check out our guide on AI-powered workflow tips for designers.

What This Means for Creative Professionals

This launch signals Adobe’s commitment to embedding AI deeply into its tools — not as a gimmick, but as a practical assistant that saves time and reduces friction. The ability to describe a complex task and have the assistant execute it across multiple apps could dramatically speed up repetitive processes.

Nevertheless, the assistant is designed to enhance — not replace — human creativity. You retain full control over every step, and the learning algorithms adapt to your personal style over time. For those curious about the broader implications, explore our article on how generative AI is reshaping creative industries.

In conclusion, the Adobe Firefly AI Assistant public beta is a bold step toward a more conversational, integrated creative experience. Whether you’re a seasoned designer or a content creator, this tool promises to make your workflow smoother — and maybe even more enjoyable.
