Google and Meta update their AI models amid the rise of “AlphaChip”



It’s been a wildly busy week in AI news thanks to OpenAI, including a controversial blog post from CEO Sam Altman, the wide rollout of Advanced Voice Mode, 5GW data center rumors, major staff shake-ups, and dramatic restructuring plans.

But the rest of the AI world doesn’t march to the same beat, doing its own thing and churning out new AI models and research by the minute. Here’s a roundup of some other notable AI news from the past week.

Google Gemini updates

On Tuesday, Google announced updates to its Gemini model lineup, including the release of two new production-ready models that iterate on past releases: Gemini-1.5-Pro-002 and Gemini-1.5-Flash-002. The company reported improvements in overall quality, with notable gains in math, long context handling, and vision tasks. Google claims a 7 percent increase in performance on the MMLU-Pro benchmark and a 20 percent improvement in math-related tasks. But as longtime Ars Technica readers know, AI benchmarks aren't as useful as we would like them to be.

Along with model upgrades, Google introduced substantial price reductions for Gemini 1.5 Pro, cutting input token costs by 64 percent and output token costs by 52 percent for prompts under 128,000 tokens. As AI researcher Simon Willison noted on his blog, “For comparison, GPT-4o is currently $5/[million tokens] input and $15/m output and Claude 3.5 Sonnet is $3/m input and $15/m output. Gemini 1.5 Pro was already the cheapest of the frontier models and now it’s even cheaper.”
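To put those percentage cuts in concrete terms, here is a minimal sketch of the per-million-token arithmetic. The base rates in it are hypothetical placeholders, not Google's published price list; only the 64 percent and 52 percent reduction figures come from the announcement.

```python
# Rough per-million-token cost math for the Gemini 1.5 Pro price cut.
# The base rates below are hypothetical placeholders for illustration;
# only the 64%/52% reduction figures come from Google's announcement.

def discounted_rate(base_rate: float, cut_percent: float) -> float:
    """Return a per-million-token rate after a percentage price cut."""
    return base_rate * (1 - cut_percent / 100)

def prompt_cost(input_mtok: float, output_mtok: float,
                input_rate: float, output_rate: float) -> float:
    """Total dollar cost for a prompt, given token counts in millions."""
    return input_mtok * input_rate + output_mtok * output_rate

# Hypothetical base rates (dollars per million tokens):
old_input, old_output = 3.50, 10.50
new_input = discounted_rate(old_input, 64)    # 64 percent input cut
new_output = discounted_rate(old_output, 52)  # 52 percent output cut

print(f"new input rate:  ${new_input:.2f}/M tokens")
print(f"new output rate: ${new_output:.2f}/M tokens")
```

At those placeholder rates, a prompt with one million input tokens and one million output tokens would cost `prompt_cost(1, 1, new_input, new_output)` dollars after the cut.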

Google also increased rate limits, with Gemini 1.5 Flash now supporting 2,000 requests per minute and Gemini 1.5 Pro handling 1,000 requests per minute. Google reports that the latest models offer twice the output speed and three times lower latency compared to previous versions. Together, these changes may make building applications with Gemini easier and more cost-effective for developers.

Meta launches Llama 3.2

Llama 3.2 promotional graphic

On Wednesday, Meta announced the release of Llama 3.2, a significant update to its open-weights AI model lineup that we have covered extensively in the past. The new release includes vision-capable large language models (LLMs) in 11B and 90B parameter sizes, as well as lightweight text-only models of 1B and 3B parameters designed for edge and mobile devices. Meta claims the vision models are competitive with leading closed-source models on image recognition and visual understanding tasks, while the smaller models reportedly outperform similar-sized competitors on various text-based tasks.

