Meta’s AI Now Scans Photos for Bone Structure to Catch Underage Users on Instagram and Facebook

Meta is taking a bold new step in age verification. The company now uses AI-driven visual analysis to scan photos and videos on Instagram and Facebook for physical cues such as height and bone structure. This goes far beyond simply checking what users type in their profiles.

The goal is straightforward: find and remove accounts belonging to children under 13 who may have signed up using a fake birthday. By analyzing visual cues, Meta aims to close a loophole that has long frustrated parents and regulators alike.

How Does the Visual Analysis Actually Work?

First, a key clarification: this is not facial recognition. The AI does not identify who someone is. Instead, it scans for general physical indicators — such as body proportions and skeletal features — to estimate a broad age range.

This visual analysis works alongside Meta's existing text-based detection, which looks for contextual clues like birthday mentions, references to school grades, and information in bios, posts, captions, and comments. Meta also plans to expand the text analysis to Instagram Reels, Instagram Live, and Facebook Groups.
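To make the two-layer design concrete, here is a minimal Python sketch of how a visual age-range estimate might be combined with a text signal. Everything in it (the AgeSignals fields, the function name, the agreement rule) is an assumption for illustration; Meta has not published its actual logic.

```python
from dataclasses import dataclass

@dataclass
class AgeSignals:
    visual_age_range: tuple[int, int]   # e.g. (9, 12), estimated from body proportions
    text_mentions_grade_school: bool    # e.g. a bio or comment referencing 6th grade
    stated_birth_year: int              # the birthday the user typed at sign-up

def likely_underage(signals: AgeSignals, cutoff: int = 13) -> bool:
    """Flag an account only when independent signals agree the user is under the cutoff."""
    _low, high = signals.visual_age_range
    visual_flag = high < cutoff                     # whole estimated range falls below 13
    text_flag = signals.text_mentions_grade_school  # the older, text-based layer
    # Requiring agreement between modalities is one plausible way to limit false positives.
    return visual_flag and text_flag

print(likely_underage(AgeSignals((9, 12), True, 2014)))  # True
```

Note that this toy version deliberately ignores the stated birth year when flagging, since the whole point is to catch accounts created with a fake birthday.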

What Happens When an Account Is Flagged?

If an account is flagged as potentially underage, it gets deactivated immediately. The user then needs to verify their age to get it back. If they cannot, the account is permanently deleted. This visual analysis is currently live in select countries, with a broader rollout planned soon.
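That enforcement flow amounts to a small state machine. The sketch below is illustrative only, assuming the three states the article describes (active, deactivated pending verification, permanently deleted); the names and transition functions are hypothetical.

```python
from enum import Enum, auto

class AccountState(Enum):
    ACTIVE = auto()
    DEACTIVATED = auto()  # flagged as potentially underage, pending verification
    DELETED = auto()      # age verification failed or was never completed

def on_flagged(state: AccountState) -> AccountState:
    # A flag takes effect immediately: the account is deactivated.
    return AccountState.DEACTIVATED if state is AccountState.ACTIVE else state

def on_verification(state: AccountState, verified: bool) -> AccountState:
    # Verifying restores the account; failing to verify ends in permanent deletion.
    if state is not AccountState.DEACTIVATED:
        return state
    return AccountState.ACTIVE if verified else AccountState.DELETED

state = on_flagged(AccountState.ACTIVE)         # -> DEACTIVATED
state = on_verification(state, verified=False)  # -> DELETED
```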

However, privacy advocates have raised concerns. Critics worry that scanning photos for bone structure could lead to false positives or misuse. Meta insists the technology is designed to protect children, not to profile them.

What Else Is Meta Doing for Teen Safety?

Beyond age verification, Meta is expanding its Teen Accounts system. This feature automatically places users the platform suspects are between 13 and 15 into a stricter account experience: private accounts by default, direct messages limited to people they already know, and harmful comments hidden automatically.
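Those restrictions are effectively a default configuration. Here is a hypothetical sketch; only the three restrictions themselves come from Meta's announcement, while the names and the age check are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TeenAccountDefaults:
    private_profile: bool = True                # private account by default
    dms_from_known_contacts_only: bool = True   # DMs limited to existing contacts
    hide_harmful_comments: bool = True          # harmful comments hidden automatically

def apply_teen_defaults(estimated_age: int) -> TeenAccountDefaults | None:
    # Applied when the platform suspects the user is between 13 and 15.
    if 13 <= estimated_age <= 15:
        return TeenAccountDefaults()
    return None

print(apply_teen_defaults(14))  # TeenAccountDefaults(private_profile=True, ...)
```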

This expansion now covers Instagram in Brazil and 27 EU countries. It follows earlier content restrictions modeled on film ratings. Notably, Facebook in the US is getting this feature for the first time, with the UK and EU following in June. Meta has also given parents visibility into their kids’ AI chats as part of the same broader push.

In addition, Meta is rolling out new educational resources for teens. The company hopes these tools will help young users make smarter choices online. For more on how social platforms handle youth safety, you can read about youth safety features on social media.

Legal and Regulatory Pressure Mounts

These moves come as Meta faces mounting legal and regulatory pressure over child safety. The company recently paid a $375 million penalty in New Mexico over privacy violations. Meanwhile, the European Commission is investigating whether Meta’s platforms are doing enough to keep children off them.

This is not just about compliance. It is about rebuilding trust. Parents and lawmakers alike are demanding stronger protections. Meta’s new AI-driven approach is a direct response to that demand.

Yet questions remain. Can AI accurately estimate age from bone structure? Will false positives harm legitimate users? And how will Meta handle privacy concerns in regions with strict data protection laws? These are issues the company must address as it rolls out the technology globally.

For a deeper look at how AI is reshaping online safety, check out AI ethics and privacy in social media. And if you are curious about the technical side, explore how age estimation technology works.

Ultimately, Meta’s use of AI to scan for bone structure marks a significant shift in digital age verification. It is a powerful tool, but one that must be wielded carefully. The balance between safety and privacy has never been more delicate.

Perplexity’s new Premium Health Sources aim to make AI medical advice more trustworthy

When you ask an AI chatbot about a health concern, how do you know the answer is reliable? That question has haunted the industry for years. Now, Perplexity is taking a bold step to address it. The company has launched Premium Health Sources, a feature that pulls directly from peer-reviewed medical journals and clinical databases. This move could reshape how people use AI for health information.

What exactly are Premium Health Sources?

Perplexity’s Premium Health Sources connect the AI to medical content that was previously locked behind paywalls. Instead of scanning random websites, the system now draws from authoritative publications like the New England Journal of Medicine (NEJM) and BMJ Group. These are the same sources that doctors and researchers rely on daily.

But that is just the start. Perplexity plans to add nine more medical journals and databases soon. Upcoming integrations include Micromedex for drug information, VisualDx for clinical images, and EBSCOhost for broader research. Every answer comes with clear citations, so users can click through and verify the original source themselves.

This approach changes the game for AI-generated health content. Instead of vague summaries, users get information that is medically grounded and transparent.
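The underlying pattern (an answer that carries its citations with it) is easy to sketch. The Python below is purely illustrative; the class names, fields, and example source are assumptions, not Perplexity's actual data model.

```python
from dataclasses import dataclass

@dataclass
class Citation:
    source: str  # e.g. "New England Journal of Medicine"
    url: str     # the link a user can click through to verify

@dataclass
class GroundedAnswer:
    text: str
    citations: list[Citation]

    def is_verifiable(self) -> bool:
        # An answer without citations cannot be checked against its sources.
        return len(self.citations) > 0

answer = GroundedAnswer(
    text="This drug class interacts with anticoagulants; consult a clinician.",
    citations=[Citation("NEJM", "https://www.nejm.org/")],
)
print(answer.is_verifiable())  # True
```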

Why health queries matter more than ever

Health-related searches already account for over 10% of all queries on Perplexity. That is a huge number, and it comes with serious responsibility. A wrong answer about medication dosage or symptom interpretation can have real consequences.

Perplexity designed this feature for two distinct audiences. First, it helps everyday people understand diagnoses, treatments, or prescriptions. Second, it supports healthcare professionals and researchers who need fast access to evidence-backed data. This means a nurse looking up a drug interaction or a patient researching a new diagnosis can both trust the information they receive.

Building on this, the planned integrations extend that foundation: Micromedex's detailed drug databases and VisualDx's clinical imagery for visual diagnosis support will make the platform even more valuable for medical professionals.

Accuracy versus trust: the real challenge

Many AI chatbots have faced criticism for spreading misinformation or even reinforcing harmful thoughts. Reports have linked AI advice to serious mental health issues in some cases. The core problem is that general-purpose AI models often prioritize fluency over factual accuracy.

Perplexity aims to close that gap. By grounding responses in verified medical sources, the platform reduces the risk of misleading advice. However, the company is careful to note that Premium Health Sources do not replace professional medical consultation. Instead, they provide a better starting point for informed decisions.

This distinction is crucial. In healthcare, sounding right is not enough. Being right is everything. Perplexity’s new feature does not solve every problem, but it sets a higher standard for AI-generated health content.

For more on how AI is transforming healthcare, check out this analysis of AI in medicine. You can also explore our guide to the top AI chatbots for different use cases.

As AI continues to evolve, tools like Premium Health Sources could become the norm. For now, Perplexity is leading the charge toward a more trustworthy future for digital health advice.

iOS 27 Could Let You Pick Your Own AI Model for Text and Image Tasks — Here’s What That Means

Imagine controlling which artificial intelligence powers your iPhone’s writing tools, image generation, and even Siri. That’s exactly what iOS 27 might deliver, according to a new report from Bloomberg’s Mark Gurman. The upcoming operating system update could let users choose from multiple third-party AI models for core Apple Intelligence features. This shift transforms Apple from a builder of AI into a marketplace for it, putting you in the driver’s seat.

For years, Apple kept its AI tightly controlled. But with iOS 27's AI model selection, the company is opening the door to competition. You'll be able to pick which service handles tasks like proofreading text, generating stickers, or answering Siri queries. Think of it like choosing your default search engine or music streaming app, but for artificial intelligence.

What Is the “Extensions” Feature in iOS 27?

According to Gurman, Apple is internally calling this new capability “Extensions.” It will appear in the Settings app, allowing you to assign a specific AI model to each Apple Intelligence tool. These tools include Writing Tools (for summarizing and proofreading), Image Playground (for creating stickers and funny images), and Siri itself.

This means you could use Anthropic Claude for writing tasks, Google Gemini for image generation, and OpenAI ChatGPT for Siri, or mix and match as you like. The report suggests Apple has already tested the system with Google and Anthropic, making Gemini and Claude likely early options. Providers will need to opt in through their App Store apps, similar to how streaming services offer subscriptions.

Building on this, Apple may also let you assign different Siri voices depending on which AI model handles the backend. So if you prefer Claude’s tone for Siri, you could set that up easily.
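Conceptually, "Extensions" boils down to a per-task mapping from an Apple Intelligence feature to a user-chosen provider. Apple has published no API, so this Python sketch is hypothetical throughout; the task names, provider strings, and fallback behavior are all assumptions.

```python
from enum import Enum

class Task(Enum):
    WRITING_TOOLS = "writing_tools"        # summarizing and proofreading
    IMAGE_PLAYGROUND = "image_playground"  # stickers and playful images
    SIRI = "siri"

# One user's hypothetical per-task choices, as they might appear in Settings.
extensions: dict[Task, str] = {
    Task.WRITING_TOOLS: "Anthropic Claude",
    Task.IMAGE_PLAYGROUND: "Google Gemini",
    Task.SIRI: "OpenAI ChatGPT",
}

def provider_for(task: Task) -> str:
    # Fall back to a built-in default when the user has not picked a provider.
    return extensions.get(task, "Apple default")

print(provider_for(Task.SIRI))  # OpenAI ChatGPT
```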

How Does This Change Apple Intelligence?

Until now, OpenAI's ChatGPT enjoyed exclusive access to Apple Intelligence, reaching over two billion active devices. However, iOS 27's model selection threatens that exclusivity. The report notes that ChatGPT engagement on Apple devices fell short of expectations for both companies. Additionally, tensions may be rising, as OpenAI has reportedly been poaching Apple engineers for its own hardware projects.

For everyday users, the payoff is genuine control. You’ll be able to assign an AI model to a particular task and switch it at will. This flexibility could encourage more experimentation with different AI services, driving competition and potentially improving quality across the board.

Moreover, Apple’s pivot from AI builder to AI marketplace is a calculated hedge. Instead of developing its own large language models from scratch, Apple can monetize access to its ecosystem. This strategy mirrors how the App Store works: Apple takes a cut of revenue while third-party developers provide the content. Learn more about Apple’s AI marketplace strategy.

Why This Matters for You

Choice is the key benefit here. You’re no longer locked into a single AI provider. If you prefer how Claude handles creative writing or how Gemini processes images, you can set that as your default. This also means better privacy options: some models process data on-device, while others use cloud servers. You’ll be able to pick the one that aligns with your privacy preferences.

In addition, this move could accelerate AI innovation. When users can easily switch models, providers must compete on performance, features, and price. That’s good news for anyone who relies on AI tools for work, creativity, or daily tasks.

However, there’s a catch: not all AI models will be available at launch. Apple will likely approve providers through a review process, similar to App Store apps. Expect a curated selection at first, with more options rolling out over time. Check out the full list of iOS 27 features.

What About Siri?

Siri is arguably the biggest beneficiary of this change. Currently, Siri relies on Apple’s own AI, which has lagged behind competitors like Google Assistant and Amazon Alexa. With iOS 27, Siri could tap into third-party models, potentially making it smarter and more responsive. You might even assign different voices to different AI models, adding a personal touch.

Yet this raises questions about consistency. If you switch models, will Siri behave differently? Apple will need to ensure a smooth experience regardless of which AI powers the assistant, and the company hasn't released details on how it will handle these transitions.

When Can You Expect iOS 27?

Apple typically announces major iOS updates at its Worldwide Developers Conference (WWDC) in June, with a public release in September. That timeline suggests iOS 27 will debut in the fall of 2026. However, features like AI model selection could be tested in beta versions before the final release.

For now, the report remains unconfirmed by Apple. But given Gurman’s track record, this feature is likely real. If it ships, it could reshape how we interact with AI on our devices — giving us the power to choose, rather than having Apple choose for us.

As a result, the era of one-size-fits-all AI on iPhones may be ending. iOS 27's model selection promises a future where your device adapts to your preferences, not the other way around. Explore our complete guide to Apple Intelligence.

OpenAI Goes Hollywood With ‘Critterz,’ a Cannes-Bound Feature Film Built on AI Tools

The debate over AI in Hollywood is about to hit its most prominent stage yet. AGC Studios is bringing Critterz to the upcoming Cannes Film Market, positioning it as the first mainstream commercial animated family film to incorporate AI assistance throughout its production pipeline. This feature-length expansion of a 2023 viral short originally created using OpenAI’s creative tools marks a significant moment for the entertainment industry.

What Is Critterz Actually About?

The story follows a nervous but courageous woodland creature who teams up with a ragtag group of outsiders. Their shared mission is to find her missing brother. Director Nik Kleverov, co-founder of AI production studio Native Foreign, has described the film as a love letter to 1980s adventure films.

Critterz is no fringe experiment or low-budget short. It’s a full-length feature with serious creative talent behind it and an estimated $30 million budget—a figure that would have been far higher without AI tools in the mix. The original short was itself one of the earliest films to use OpenAI’s technology, and this expansion represents a major leap forward for generative AI in filmmaking.

AI May Be Involved, but the Creative Team Is Very Much Human

The screenplay comes from James Lamont and Jon Foster, the duo behind Paddington in Peru and Cartoon Network’s The Amazing World of Gumball. They’re joined by Tom Butterworth, known for Birthday Girl and Ashes to Ashes. Despite the AI-assisted production, the voice cast is expected to be entirely human.

Chad Nelson, a creative strategist at OpenAI, is producing alongside Vertigo Films’ Allan Niblo and James Richardson. AGC’s Stuart Ford has been careful to frame AI as a tool that supports human artists rather than replacing them. The studio sees Critterz as proof that filmmakers can stay creatively in charge while AI handles the visual heavy lifting.

Building on this perspective, the production team emphasizes that AI was used for tasks like background rendering, character design iterations, and visual effects—not for core storytelling or voice acting. This distinction is crucial as the industry grapples with where to draw the line.

Where Does Hollywood Stand on AI in Movies?

Critterz is arriving at a moment when Hollywood is still figuring out where artificial intelligence belongs in the industry. Cannes's main competition now excludes films in which AI serves as the principal authoring tool. Meanwhile, the Academy of Motion Picture Arts and Sciences recently updated its rulebook, making it explicit that AI can be used in production but cannot be credited or awarded an Oscar for acting or writing.

Earlier this year, Steven Spielberg made his position equally clear, stating he has never used AI in his films and strongly opposes AI replacing human creativity. However, not everyone is drawing the same line. The upcoming indie film As Deep as the Grave used generative AI to reconstruct the late Val Kilmer’s voice and performance, raising its own set of questions about consent and creative legacy.

These contrasting approaches highlight the complexity of integrating AI into creative workflows. For more on how AI is reshaping other industries, check out our guide on AI tools for productivity.

What Critterz Means for the Future of Filmmaking

Critterz lands right in the middle of this ongoing debate. Whether it ends up being a proof of concept for a smarter way to make films or a cautionary tale, the conversation it starts may matter more than the film itself. The project demonstrates that AI can reduce costs and speed up production without sacrificing artistic vision—but it also raises valid concerns about job displacement and creative integrity.

As a result, industry insiders are watching closely. If Critterz succeeds at Cannes, it could pave the way for more studios to adopt similar hybrid workflows. If it fails, it might reinforce skepticism about AI’s role in storytelling. Either way, the film serves as a litmus test for how far Hollywood is willing to embrace generative AI.

For filmmakers exploring these tools, understanding the ethical and practical boundaries is essential. Learn more about AI ethics in creative industries to stay informed.
