Here’s the thing about Apple and AI: they’re the richest tech company on the planet, and they’re somehow still playing catch-up. In March 2026, that gap is more visible than ever.
The Siri Situation Is Getting Embarrassing
Apple announced its big AI push — Apple Intelligence — back at WWDC 2024. The centerpiece was supposed to be a completely rebuilt Siri, powered by large language models, capable of understanding context, handling multi-step tasks, and actually being useful for the first time in a decade.
That was almost two years ago. The new Siri still isn’t here.
According to Bloomberg, the AI-powered Siri revamp has been delayed again. It was supposed to ship with iOS 26.4 in March 2026. Now it’s been pushed to May at the earliest, with some features potentially not landing until iOS 27 in September. The reasons? The new Siri is too slow, struggles with complex commands, and doesn’t integrate well with Apple’s own AI models.
For a company that spent $30 billion on R&D last year, that’s a rough look.
The Google Gemini Partnership Changes Everything
The biggest Apple AI news this year isn't something Apple built; it's something they're licensing. Apple and Google announced that the next generation of Apple Foundation Models will be based on Google's Gemini architecture and cloud infrastructure.
Let that sink in. Apple, the company that built its brand on vertical integration and controlling every layer of the stack, is outsourcing the brain of its AI assistant to Google.
From an API developer’s perspective, this is fascinating. It means:
Siri’s capabilities will be Gemini-class. That’s a massive upgrade from whatever Apple was cooking internally. Gemini can handle multi-modal inputs, complex reasoning, and long-context conversations. If Apple can actually ship this integration, Siri goes from a joke to a legitimate competitor overnight.
The developer API story gets complicated. Apple Intelligence APIs currently let developers integrate with on-device models for things like text summarization and image understanding. But if the backend is now Gemini, what happens to the API surface? Do developers get access to Gemini-level capabilities through Apple’s SDK? Or does Apple keep the good stuff locked behind Siri?
Privacy claims need an asterisk. Apple's entire AI pitch has been "we process everything on-device." With Gemini in the mix, some processing will inevitably move to Google's cloud. Apple says its "Private Cloud Compute" architecture will keep that data secure, but the optics are different when your data touches Google infrastructure.
What Apple Intelligence Actually Does Today
Strip away the hype and delays, and here’s what Apple Intelligence can actually do right now in early 2026:
Writing tools. Rewrite, proofread, and summarize text across any app. These work well, run on-device, and are probably the most useful Apple Intelligence features shipping today.
Image generation (Image Playground). Create cartoon-style images from text prompts. It’s fun but limited — no photorealistic output, no fine control. More of a party trick than a productivity tool.
Notification summaries. AI-generated summaries of notification stacks. Hit or miss. Sometimes helpful, sometimes hilariously wrong (the BBC News summary incident was peak comedy).
Photo search and cleanup. Natural language photo search (“show me photos from the beach last summer”) and object removal. Both work surprisingly well.
Basic Siri improvements. Better natural language understanding, on-screen awareness, some ChatGPT integration for complex queries. But the big conversational AI upgrade? Still coming.
For developers building on Apple’s platform, the current API surface is limited. You get access to the writing tools framework and some on-device ML capabilities through Core ML, but there’s no general-purpose LLM API comparable to what Google offers with Gemini or what OpenAI provides through their SDK.
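To make that concrete: the writing tools integration is basically an opt-in, not an API you call. In SwiftUI it comes down to a single modifier on a text view (this reflects the modifier as introduced in iOS 18; double-check the exact availability against current docs):

```swift
import SwiftUI

// Minimal sketch: a text editor that opts into the system Writing Tools
// (rewrite, proofread, summarize). The OS supplies the UI and the on-device
// model; the app only declares how much of the feature it wants.
struct NotesEditor: View {
    @State private var draft = ""

    var body: some View {
        TextEditor(text: $draft)
            // .complete enables the full experience; .limited and .disabled
            // are the opt-down options for apps that want less of it.
            .writingToolsBehavior(.complete)
            .padding()
    }
}
```

That one modifier is roughly the whole integration, which is both the appeal and the limitation: you get Apple's feature in your app, but you never get to call the underlying model yourself.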
The Competition Isn’t Waiting
While Apple delays, everyone else ships.
Google has Gemini baked into Android, Chrome, Workspace, and basically everything else. Their on-device Gemini Nano runs on Pixel phones and handles tasks that Apple Intelligence can’t touch yet.
Samsung shipped Galaxy AI features months before Apple Intelligence launched, and their latest Galaxy S26 series has real-time translation, AI photo editing, and a Gemini-powered assistant that actually works.
Microsoft has Copilot everywhere — Windows, Office, Edge, even the keyboard. Love it or hate it, it’s shipping and iterating fast.
Apple’s strategy of “we’ll take our time and get it right” worked when they were setting the pace. When you’re two years behind on a promised feature, patience starts looking like incompetence.
What This Means for Developers
If you’re building apps for Apple’s ecosystem, here’s the practical takeaway:
Don’t bet on Apple Intelligence APIs for critical features yet. The platform is still evolving too fast. What ships in iOS 26.4 might look completely different in iOS 27 once the Gemini integration lands.
Core ML is still solid. For on-device inference — image classification, NLP, custom models — Core ML remains excellent. Apple’s Neural Engine hardware is genuinely best-in-class for on-device ML workloads.
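That recommendation is easy to back up. Here's a minimal sketch of on-device image classification with Core ML and Vision, assuming you've bundled a compiled classifier model with the app (the model URL is a placeholder):

```swift
import CoreML
import Vision

// Minimal sketch: on-device image classification with Core ML + Vision.
// `modelURL` points to a compiled .mlmodelc bundled with the app (placeholder).
func classify(_ image: CGImage, modelURL: URL) throws -> [(label: String, confidence: Float)] {
    let config = MLModelConfiguration()
    config.computeUnits = .all  // let Core ML use CPU, GPU, and the Neural Engine

    let mlModel = try MLModel(contentsOf: modelURL, configuration: config)
    let visionModel = try VNCoreMLModel(for: mlModel)

    let request = VNCoreMLRequest(model: visionModel)
    try VNImageRequestHandler(cgImage: image).perform([request])

    let observations = request.results as? [VNClassificationObservation] ?? []
    return observations.map { (label: $0.identifier, confidence: $0.confidence) }
}
```

Setting computeUnits to .all is what lets Core ML route eligible layers onto the Neural Engine, which is where that best-in-class hardware actually pays off.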
Watch the Gemini integration closely. If Apple exposes Gemini-level capabilities through a developer API, that’s a significant shift for iOS apps. Imagine Siri Shortcuts that can reason about complex multi-step tasks, or app intents that understand nuanced natural language.
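The app-side plumbing for that future already exists in the App Intents framework; what's missing is the model on top. A hedged sketch of the kind of intent an app can expose today (the intent itself is a hypothetical example):

```swift
import AppIntents

// Hypothetical example intent. The App Intents framework is real and shipping;
// the open question is how much reasoning a Gemini-powered Siri layers on top.
struct CreateInvoiceIntent: AppIntent {
    static var title: LocalizedStringResource = "Create Invoice"

    @Parameter(title: "Client Name")
    var clientName: String

    @Parameter(title: "Amount")
    var amount: Double

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // Real app logic would go here (placeholder).
        return .result(dialog: "Created a $\(amount) invoice for \(clientName).")
    }
}
```

Today Siri can run an intent like this when you ask for it almost verbatim; the promise is a Siri that can chain intents like this together from a vague, natural request.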
Cross-platform AI is the safer bet. Until Apple’s AI story stabilizes, building on OpenAI’s API, Google’s Gemini API, or Anthropic’s Claude API gives you more control and fewer platform dependencies.
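One way to keep that control is to hide the provider behind a small protocol, so the vendor becomes a one-file swap. A sketch using OpenAI's chat completions endpoint as the first backend (the endpoint and JSON shape match the public docs as I understand them; the model name and key handling are simplified placeholders):

```swift
import Foundation

// Provider-agnostic seam: the app depends on `ChatModel`, not on a vendor SDK.
protocol ChatModel {
    func complete(prompt: String) async throws -> String
}

// One concrete backend, using OpenAI's chat completions HTTP API.
struct OpenAIChatModel: ChatModel {
    let apiKey: String
    var model = "gpt-4o-mini"  // placeholder; use whatever the provider currently offers

    func complete(prompt: String) async throws -> String {
        var request = URLRequest(url: URL(string: "https://api.openai.com/v1/chat/completions")!)
        request.httpMethod = "POST"
        request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")

        let body: [String: Any] = [
            "model": model,
            "messages": [["role": "user", "content": prompt]],
        ]
        request.httpBody = try JSONSerialization.data(withJSONObject: body)

        let (data, _) = try await URLSession.shared.data(for: request)

        // Extract choices[0].message.content from the response JSON.
        let json = try JSONSerialization.jsonObject(with: data) as? [String: Any]
        let choices = json?["choices"] as? [[String: Any]]
        let message = choices?.first?["message"] as? [String: Any]
        return message?["content"] as? String ?? ""
    }
}
```

Adding Gemini or Claude later is one more ChatModel conformance. And if Apple ever ships a competitive first-party model API, that becomes a conformance too, instead of a rewrite.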
The Honest Assessment
Apple’s AI strategy in 2026 is a weird mix of genuine technical capability and frustrating execution delays. The hardware is there — the M-series and A-series chips are monsters for ML workloads. The on-device privacy story is compelling. The writing tools and photo features are genuinely useful.
But the flagship feature — the AI-powered Siri that was supposed to change everything — keeps slipping. And the decision to partner with Google for the underlying models, while pragmatic, undermines the “we do everything ourselves” narrative that Apple fans love.
My prediction: when the Gemini-powered Siri finally ships (probably September 2026 with iOS 27), it’ll be good. Maybe even great. But by then, Google and Samsung will have moved on to the next thing, and Apple will still be playing catch-up.
The richest company in the world shouldn’t be this far behind on the most important technology shift in a decade. And yet, here we are.
🕒 Originally published: March 12, 2026