AI Haven

Meta Delays Avocado AI Model Launch to May After Benchmark Disappointment

Meta delays its Avocado AI model to May 2026 after internal benchmarks showed it couldn't match frontier models from Google (Gemini 3.0), OpenAI, and Anthropic.

March 14, 2026

Meta has pushed back the release of its next-generation AI model, code-named Avocado, from a planned March 2026 launch to at least May 2026 after internal testing revealed the model failed to match the performance of leading competitors from Google, OpenAI, and Anthropic.

According to reports from The New York Times and Reuters, internal benchmarks placed Avocado's capabilities between those of Google's Gemini 2.5 and Gemini 3 models. While Avocado outperformed Meta's previous models, including Llama 4, it couldn't keep pace with rivals' top-tier frontier models, prompting the delay.

This marks another setback for Meta's AI division, following similar postponements for Llama 4 and the Behemoth model due to benchmark shortfalls. The company has invested heavily in AI infrastructure, planning between $115 billion and $135 billion in capital expenditures for 2026 alone.

Competitive Pressure Mounts

The delay puts Meta's substantial AI investments under increased scrutiny. While competitors like Google and OpenAI continue releasing improved models on aggressive timelines, Meta has struggled to deliver a frontier-level model that matches the market leaders.

As a short-term workaround, Meta is reportedly considering licensing Google's Gemini model while working on improvements to Avocado. The company is also planning successor models under the codenames "Watermelon" and "Mango" to eventually close the competitive gap.

The postponement weighed on investor sentiment, with Meta's shares slipping after the news broke. The delay underscores the intensifying competition in the AI race, in which even massive investments don't guarantee competitive results when rival labs are pulling ahead on model architecture, training data, and inference optimization.

Source: The New York Times / Reuters