Artificial Intelligence

iOS 27 Could Let You Pick Your Own AI Model for Text and Image Tasks — Here’s What That Means

Imagine controlling which artificial intelligence powers your iPhone’s writing tools, image generation, and even Siri. That’s exactly what iOS 27 might deliver, according to a new report from Bloomberg’s Mark Gurman. The upcoming operating system update could let users choose from multiple third-party AI models for core Apple Intelligence features. This shift transforms Apple from a builder of AI into a marketplace for it, putting you in the driver’s seat.

For years, Apple kept its AI tightly controlled. But with iOS 27’s AI model selection, the company is opening the door to competition. You’ll be able to pick which service handles tasks like proofreading text, generating stickers, or answering Siri queries. Think of it like choosing your default search engine or music streaming app — but for artificial intelligence.

What Is the “Extensions” Feature in iOS 27?

According to Gurman, Apple is internally calling this new capability “Extensions.” It will appear in the Settings app, allowing you to assign a specific AI model to each Apple Intelligence tool. These tools include Writing Tools (for summarizing and proofreading), Image Playground (for creating stickers and funny images), and Siri itself.

This means you could use Google Gemini for writing tasks, Anthropic Claude for image generation, and OpenAI ChatGPT for Siri — or mix and match as you like. The report suggests Apple has already tested the system with Google and Anthropic, making Gemini and Claude likely early options. Providers will need to opt in through their App Store apps, similar to how streaming services offer subscriptions.
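The mix-and-match idea described above is essentially a routing table: each Apple Intelligence task maps to a user-chosen provider, with a fallback when nothing is assigned. The sketch below is purely illustrative — the class, task names, and provider identifiers are assumptions for explanation, not Apple's actual "Extensions" API.

```python
# Hypothetical sketch of per-task model routing, loosely modeled on the
# rumored "Extensions" setting. All names here are invented for illustration.

DEFAULT_PROVIDER = "on-device"  # assumed fallback when no model is assigned

class ModelRouter:
    """Maps each Apple Intelligence task to a user-chosen AI provider."""

    def __init__(self):
        self.assignments = {}

    def assign(self, task, provider):
        """Record the user's provider choice for one task."""
        self.assignments[task] = provider

    def provider_for(self, task):
        """Return the assigned provider, falling back to the default."""
        return self.assignments.get(task, DEFAULT_PROVIDER)

router = ModelRouter()
router.assign("writing_tools", "gemini")      # e.g. Google Gemini for text
router.assign("image_playground", "claude")   # e.g. Anthropic Claude for images
router.assign("siri", "chatgpt")              # e.g. OpenAI ChatGPT for Siri

print(router.provider_for("siri"))            # chatgpt
print(router.provider_for("visual_lookup"))   # unassigned task falls back
```

The key design point the report implies is the fallback: tasks you never touch keep working with whatever Apple ships by default, so choosing a model is opt-in per feature rather than all-or-nothing.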

Building on this, Apple may also let you assign different Siri voices depending on which AI model handles the backend. So if you prefer Claude’s tone for Siri, you could set that up easily.

How Does This Change Apple Intelligence?

Until now, OpenAI’s ChatGPT enjoyed exclusive third-party access to Apple Intelligence, putting it in front of more than two billion active devices. However, iOS 27’s AI model selection threatens that monopoly. The report notes that ChatGPT engagement on Apple devices fell short of expectations for both companies. Additionally, tensions may be rising, as OpenAI has reportedly been poaching Apple engineers for its own hardware projects.

For everyday users, the payoff is genuine control. You’ll be able to assign an AI model to a particular task and switch it at will. This flexibility could encourage more experimentation with different AI services, driving competition and potentially improving quality across the board.

Moreover, Apple’s pivot from AI builder to AI marketplace is a calculated hedge. Instead of developing its own large language models from scratch, Apple can monetize access to its ecosystem. This strategy mirrors how the App Store works: Apple takes a cut of revenue while third-party developers provide the content.

Why This Matters for You

Choice is the key benefit here. You’re no longer locked into a single AI provider. If you prefer how Claude handles creative writing or how Gemini processes images, you can set that as your default. This also means better privacy options: some models process data on-device, while others use cloud servers. You’ll be able to pick the one that aligns with your privacy preferences.

In addition, this move could accelerate AI innovation. When users can easily switch models, providers must compete on performance, features, and price. That’s good news for anyone who relies on AI tools for work, creativity, or daily tasks.

However, there’s a catch: not all AI models will be available at launch. Apple will likely approve providers through a review process, similar to App Store apps. Expect a curated selection at first, with more options rolling out over time.

What About Siri?

Siri is arguably the biggest beneficiary of this change. Currently, Siri relies on Apple’s own AI, which has lagged behind competitors like Google Assistant and Amazon Alexa. With iOS 27, Siri could tap into third-party models, potentially making it smarter and more responsive. You might even assign different voices to different AI models, adding a personal touch.

Yet, this raises questions about consistency. If you switch models, will Siri behave differently? Apple will need to ensure a smooth experience, regardless of which AI powers the assistant. The company hasn’t released details on how it will handle these transitions, but early tests suggest the system is designed to be seamless.

When Can You Expect iOS 27?

Apple typically announces major iOS updates at its Worldwide Developers Conference (WWDC) in June, with a public release in September. That cadence points to iOS 27 debuting in the fall of 2026. However, features like AI model selection could surface earlier in beta versions ahead of the final release.

For now, the report remains unconfirmed by Apple. But given Gurman’s track record, this feature is likely real. If it ships, it could reshape how we interact with AI on our devices — giving us the power to choose, rather than having Apple choose for us.

As a result, the era of one-size-fits-all AI on iPhones may be ending. iOS 27’s AI model selection promises a future where your device adapts to your preferences, not the other way around.

OpenAI Goes Hollywood With ‘Critterz,’ a Cannes-Bound Feature Film Built on AI Tools


The debate over AI in Hollywood is about to hit its most prominent stage yet. AGC Studios is bringing Critterz to the upcoming Cannes Film Market, positioning it as the first mainstream commercial animated family film to incorporate AI assistance throughout its production pipeline. This feature-length expansion of a 2023 viral short originally created using OpenAI’s creative tools marks a significant moment for the entertainment industry.

What Is Critterz Actually About?

The story follows a nervous but courageous woodland creature who teams up with a ragtag group of outsiders. Their shared mission is to find her missing brother. Director Nik Kleverov, co-founder of AI production studio Native Foreign, has described the film as a love letter to 1980s adventure films.

Critterz is no fringe experiment or low-budget short. It’s a full-length feature with serious creative talent behind it and an estimated $30 million budget—a figure that would have been far higher without AI tools in the mix. The original short was itself one of the earliest films to use OpenAI’s technology, and this expansion represents a major leap forward for generative AI in filmmaking.

AI May Be Involved, but the Creative Team Is Very Much Human

The screenplay comes from James Lamont and Jon Foster, the duo behind Paddington in Peru and Cartoon Network’s The Amazing World of Gumball. They’re joined by Tom Butterworth, known for Birthday Girl and Ashes to Ashes. Despite the AI-assisted production, the voice cast is expected to be entirely human.

Chad Nelson, a creative strategist at OpenAI, is producing alongside Vertigo Films’ Allan Niblo and James Richardson. AGC’s Stuart Ford has been careful to frame AI as a tool that supports human artists rather than replacing them. The studio sees Critterz as proof that filmmakers can stay creatively in charge while AI handles the visual heavy lifting.

Building on this perspective, the production team emphasizes that AI was used for tasks like background rendering, character design iterations, and visual effects—not for core storytelling or voice acting. This distinction is crucial as the industry grapples with where to draw the line.

Where Does Hollywood Stand on AI in Movies?

Critterz is arriving at a moment when Hollywood is still figuring out where artificial intelligence belongs in the industry. Cannes has banned films where AI serves as the principal authoring tool from its main competition. Meanwhile, the Academy of Motion Picture Arts and Sciences recently updated its rulebook, making it explicit that AI can be used in production but cannot be credited or awarded an Oscar for acting or writing.

Earlier this year, Steven Spielberg made his position equally clear, stating he has never used AI in his films and strongly opposes AI replacing human creativity. However, not everyone is drawing the same line. The upcoming indie film As Deep as the Grave used generative AI to reconstruct the late Val Kilmer’s voice and performance, raising its own set of questions about consent and creative legacy.

These contrasting approaches highlight the complexity of integrating AI into creative workflows.

What Critterz Means for the Future of Filmmaking

Critterz lands right in the middle of this ongoing debate. Whether it ends up being a proof of concept for a smarter way to make films or a cautionary tale, the conversation it starts may matter more than the film itself. The project demonstrates that AI can reduce costs and speed up production without sacrificing artistic vision—but it also raises valid concerns about job displacement and creative integrity.

As a result, industry insiders are watching closely. If Critterz succeeds at Cannes, it could pave the way for more studios to adopt similar hybrid workflows. If it fails, it might reinforce skepticism about AI’s role in storytelling. Either way, the film serves as a litmus test for how far Hollywood is willing to embrace generative AI.

For filmmakers exploring these tools, understanding the ethical and practical boundaries is essential.


OpenAI Could Launch Its First AI Agent Smartphone in 2027: A New Era for Mobile Computing


The race to build the first true AI agent smartphone is heating up, and OpenAI may be leading the charge. According to a recent report from TF International Securities analyst Ming-Chi Kuo, the company is actively developing its debut smartphone, with mass production potentially starting in the first half of 2027. While OpenAI has not officially confirmed the news, supply chain insights suggest the project is accelerating rapidly. This move marks a significant shift for the AI giant, which has primarily focused on software and cloud-based models like GPT-4 until now.

Why OpenAI Is Building an AI Agent Smartphone

So, why would a company known for ChatGPT and DALL-E suddenly dive into hardware? The answer lies in control. By designing both the software and the hardware, OpenAI can deliver a seamless AI agent experience that current smartphones simply cannot match. Today’s devices rely heavily on apps and cloud processing, which introduces latency and limits contextual awareness. An AI-first phone, on the other hand, would prioritize task-based interactions—users would focus on outcomes, not navigating multiple apps.

This approach also allows OpenAI to gather continuous real-time user context, such as location, activity, and usage patterns. This data is critical for AI inference, enabling the device to anticipate needs and act proactively.

Key Specifications: Built Around AI Workloads

The rumored OpenAI smartphone is not your average flagship. Instead of competing on camera megapixels or screen refresh rates, it focuses entirely on on-device AI capabilities. Here are the standout features expected:

MediaTek Dimensity Custom Chipset

According to Kuo, MediaTek is the frontrunner to supply the processor. The chip will likely be a customized version of the future Dimensity 9600, manufactured using TSMC’s N2P process. This next-generation node promises exceptional efficiency and performance—critical for running complex AI models locally.

Dual NPU Architecture

Unlike conventional phones with a single neural processing unit, OpenAI’s device is expected to feature a dual NPU setup. This allows the phone to handle layered AI tasks simultaneously, such as real-time language translation, visual recognition, and contextual computing. The result? Faster, more responsive interactions without relying on the cloud.

Memory and Storage Upgrades

To reduce bottlenecks, the phone will reportedly include LPDDR6 RAM and UFS 5.0 storage. These components are designed to keep up with the high data throughput required by AI workloads. An enhanced image signal processor (ISP) will also improve high dynamic range output, supporting real-world visual perception for AI systems that rely on camera input.

Security Features

Security is a top priority. The device is expected to include pKVM (protected Kernel-based Virtual Machine) and inline hashing, ensuring data integrity and device-level protection. This is especially important for an AI agent that handles sensitive user data.

Partnerships and Production Timeline

Beyond MediaTek, OpenAI is reportedly working with Qualcomm on custom processors and Luxshare as a key manufacturing partner. The approach combines on-device AI for real-time processing with cloud-based AI for more complex tasks. If everything stays on track, production could begin in late 2026, with shipments reaching around 30 million units across 2027 and 2028.

However, timelines remain speculative. Much depends on execution, partnerships, and market readiness. OpenAI’s strengths in consumer reach, data, and AI models position it well to build a new ecosystem. The company may even bundle the hardware with subscription services, driving the next major smartphone upgrade cycle.

What This Means for Users and the Market

If launched, the OpenAI AI agent smartphone could introduce a new category of devices centered around AI-first interactions. For consumers, this means faster responses, improved privacy (since more processing happens on-device), and more seamless integration of AI into daily tasks. Imagine a phone that understands your schedule, predicts your needs, and executes commands without you having to open a single app.

For the industry, it signals intensifying competition. Companies like Apple, Google, and Samsung are also investing heavily in on-device AI, but OpenAI’s focus on AI agent technology gives it a unique edge. The timing may also be strategic: a hardware product could strengthen OpenAI’s long-term positioning, particularly if the company is considering major financial milestones such as a future IPO.

Challenges and What Comes Next

Building a smartphone from scratch is no small feat. OpenAI faces significant hurdles, including supply chain management, software optimization, and consumer adoption. The company must also compete with established players who have years of hardware expertise. Yet, if successful, the payoff could be enormous.

As AI continues to move closer to the device level, OpenAI’s reported plans suggest that the next phase of competition may not just be about better models—but about the hardware that runs them. Whether the 2027 timeline holds remains to be seen, but one thing is clear: the era of the AI agent smartphone is approaching fast.


Online Ads Are Spilling Your Secrets: AI Reconstructs Private Life from What You See


You probably think the ads popping up on your screen are just annoying interruptions. However, a groundbreaking study reveals they are doing far more than promoting products. Artificial intelligence can now analyze the advertisements displayed to you and reconstruct sensitive details about your private life. This includes your political beliefs, education level, employment status, age, gender, and financial standing. The most unsettling part? You do not need to click on anything. Simply viewing the ads is enough for the AI to build a detailed profile.

How AI Decodes Your Ad Stream

Researchers from UNSW Sydney examined over 435,000 Facebook ads shown to 891 participants. They gathered this data through the Australian Ad Observatory, a citizen science project. Then, they fed these ad streams into widely available large language models — the same kind of AI many people use daily as assistants. The results were astonishing.

Building on this, the AI constructed personal profiles from very short browsing sessions. It did not require your browsing history or any information you voluntarily shared. In fact, the process was over 200 times cheaper and 50 times faster than using human analysts to perform the same task.

Why does this work? Ad delivery systems are not random. Platforms like Facebook optimize the ads you see based on inferred profiles built from your behavior. This optimization leaves a digital fingerprint. Now, AI can read that fingerprint with remarkable accuracy.
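The "fingerprint" idea can be illustrated with a toy example: because ad delivery is optimized around the platform's inferred profile, the topical mix of a user's ad stream alone leaks coarse attributes. The categories, thresholds, and rules below are invented for demonstration — the actual study fed raw ad streams to large language models rather than hand-written rules.

```python
# Toy illustration of ad-stream fingerprinting: the *mix* of ads shown
# to a user hints at the profile the platform has inferred about them.
# Topic labels and thresholds are hypothetical, for explanation only.
from collections import Counter

def infer_traits(ad_topics):
    """Guess coarse traits from the topical mix of one user's ad stream."""
    counts = Counter(ad_topics)  # missing topics count as zero
    total = len(ad_topics)
    traits = {}
    # Each rule: if a topic dominates the stream, infer an attribute.
    if counts["retirement_planning"] / total > 0.2:
        traits["age_group"] = "55+"
    if counts["luxury_travel"] / total > 0.2:
        traits["income"] = "high"
    if counts["job_boards"] / total > 0.2:
        traits["employment"] = "job-seeking"
    return traits

# A short, purely synthetic browsing session of ten ad impressions.
stream = (["luxury_travel"] * 3
          + ["retirement_planning"] * 4
          + ["groceries"] * 3)
print(infer_traits(stream))  # {'age_group': '55+', 'income': 'high'}
```

The unsettling part the researchers highlight is that an LLM needs no such hand-written rules: given the same stream as plain text, it infers these attributes on its own, cheaply and at scale.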

Why Current Privacy Protections Fall Short

Even though major platforms restrict advertisers from directly targeting sensitive categories, the study shows these traits still get encoded indirectly into ad delivery patterns. This means your private life remains exposed through the ads you see, even when platforms claim to protect you.

Furthermore, researchers flagged a hidden danger: common browser extensions. Tools like ad blockers or coupon finders could quietly collect this ad data in the background without raising any red flags. This creates a silent surveillance channel that most users never notice.

This means that the very tools you use to improve your browsing experience might be compromising your privacy. The threat is not just theoretical; it is embedded in the architecture of the digital advertising ecosystem.

Practical Steps to Reduce Your Risk

Researchers suggest users can reduce risk by limiting browser extension permissions. You should also adjust your ad personalization settings on platforms like Facebook and Google. However, they also emphasize that individuals cannot solve this problem alone. The vulnerability is built into the ad ecosystem itself. Stronger platform-level safeguards are necessary to address this systemic issue.


The Bigger Picture: A Call for Systemic Change

This research underscores a fundamental shift in how we must think about privacy. In the past, we worried about what we actively shared online. Now, the threat comes from passive exposure. The ads you see are not just selling products; they are revealing your identity.

Therefore, the onus is on tech companies to redesign their ad systems. They must implement stronger safeguards that prevent AI from reconstructing sensitive profiles from ad streams alone. Until then, the private life encoded in your ad stream remains an open book for anyone with the right tools.
