Advocates to Google CEO: Stop YouTube AI that could harm children

Children consume a lot of AI slop, and child safety advocates are concerned. In a letter sent to Google CEO Sundar Pichai and YouTube CEO Neal Mohan, a coalition of national organizations and child development experts is calling on YouTube to change its policies to curb AI slop, including a complete ban on “Made for Kids” … Read more

‘Last One Laughing UK’ Season 2 review: If you’re not watching, you’re missing out

It’s rare that a show actually makes you laugh out loud, but Last One Laughing UK, despite revolving around people not laughing, manages it with ease. Back on Prime Video for a second season with a new cast, the Jimmy Carr-hosted game show/reality TV challenge sees 10 comedians locked in a room together … Read more

Hugging Face Releases TRL v1.0: Integrated Post-Training Stack for SFT, Reward Model, DPO, and GRPO Workflow

Hugging Face has officially released TRL (Transformer Reinforcement Learning) v1.0, marking the library’s transition from a research-oriented repository to a stable, production-ready framework. For AI practitioners and developers, this release codifies the post-training pipeline, the key sequence of Supervised Fine-Tuning (SFT), Reward Modeling, and Alignment, into a unified, standardized API. In the early … Read more

Liquid AI Releases LFM2.5-350M: A Compact 350M-Parameter Model Trained on 28T Tokens with Trimmed Reinforcement Learning

In the current generative AI landscape, ‘scaling laws’ have generally dictated that more parameters mean more intelligence. Liquid AI, however, is challenging that convention with the release of LFM2.5-350M. The model is effectively a technical case study in intelligence density, pairing extended pre-training (scaled from 10T to 28T tokens) with large-scale reinforcement learning. The importance … Read more