Z.ai Introduces GLM-5V-Turbo: A Multimodal Native Code Model Optimized for OpenClaw and High-Performance Agentic Engineering Workflows Everywhere

In the field of vision-language models (VLMs), bridging the gap between visual perception and logical coding has often involved performance trade-offs. Many models excel at describing an image but struggle to translate that visual information into the robust syntax required for software engineering. Zhipu AI’s (Z.ai) GLM-5V-Turbo is a conceptual code … Read more

How to Build a Production-Ready Gemma 3 1B Order Generation AI Pipeline with Hugging Face Transformers, Conversation Templates, and Colab Inference

In this tutorial, we create and implement a Gemma 3 1B instruct workflow in Colab using Hugging Face Transformers and an HF token, in a practical, reproducible, and easy-to-follow step-by-step way. We start by installing the required libraries, securely authenticating with our Hugging Face token, and loading the tokenizer and model onto an available device with the … Read more
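The steps the tutorial describes (authenticate, load tokenizer and model onto the available device, run chat-template inference) can be sketched roughly as below. The model id `google/gemma-3-1b-it`, the `HF_TOKEN` environment variable, and the helper names are illustrative assumptions, not details taken from the article:

```python
import os


def build_conversation(user_msg: str) -> list:
    """Wrap a user request in the chat-message format that
    tokenizer.apply_chat_template expects."""
    return [{"role": "user", "content": user_msg}]


def generate_order(user_msg: str,
                   model_id: str = "google/gemma-3-1b-it",
                   max_new_tokens: int = 128) -> str:
    """Authenticate with an HF token, load the tokenizer and model onto
    the available device, and generate a reply from a chat template."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    token = os.environ.get("HF_TOKEN")  # injected securely, never hard-coded
    tok = AutoTokenizer.from_pretrained(model_id, token=token)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,
        device_map="auto",  # GPU if Colab provides one, else CPU
        token=token,
    )
    inputs = tok.apply_chat_template(
        build_conversation(user_msg),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)
```

The heavy loading is kept inside `generate_order` so the conversation-formatting helper stays reusable and testable on its own.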

Hugging Face Releases TRL v1.0: Integrated Post-Training Stack for SFT, Reward Modeling, DPO, and GRPO Workflows

Hugging Face has officially released TRL (Transformer Reinforcement Learning) v1.0, marking a significant transition for the library from a research-oriented repository to a stable, production-ready framework. For AI practitioners and developers, this release codifies the post-training pipeline into a unified, standardized API: the key sequence of Supervised Fine-Tuning (SFT), Reward Modeling, and alignment. In the early … Read more
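As a rough sketch of how the post-training stages chain together in TRL, here is the documented SFTTrainer/DPOTrainer pattern; the model id, dataset names, and output directories are placeholders rather than anything specific to this release:

```python
def to_preference_record(prompt: str, chosen: str, rejected: str) -> dict:
    """DPO-style preference pair: the record layout TRL's DPOTrainer
    consumes ('prompt' / 'chosen' / 'rejected' keys)."""
    return {"prompt": prompt, "chosen": chosen, "rejected": rejected}


def run_post_training(model_id: str = "Qwen/Qwen2.5-0.5B-Instruct"):
    """Stage SFT then DPO; ids and datasets are illustrative placeholders."""
    from datasets import load_dataset
    from trl import DPOConfig, DPOTrainer, SFTConfig, SFTTrainer

    # Stage 1: supervised fine-tuning on a conversational dataset.
    sft = SFTTrainer(
        model=model_id,
        train_dataset=load_dataset("trl-lib/Capybara", split="train"),
        args=SFTConfig(output_dir="sft-out"),
    )
    sft.train()

    # Stage 2: preference alignment (DPO) on prompt/chosen/rejected pairs.
    dpo = DPOTrainer(
        model="sft-out",
        train_dataset=load_dataset("trl-lib/ultrafeedback_binarized",
                                   split="train"),
        args=DPOConfig(output_dir="dpo-out"),
    )
    dpo.train()
```

Each trainer owns one stage and writes a checkpoint the next stage loads, which is the "unified pipeline" idea in miniature.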

Liquid AI Releases LFM2.5-350M: A Compact 350M-Parameter Model Trained on 28T Tokens with Trimmed Reinforcement Learning

In the current state of generative AI, ‘scaling laws’ have generally dictated that more parameters equal more intelligence. Liquid AI is challenging this convention with the release of LFM2.5-350M. The model is, in effect, a technical case study in intelligence density, pairing expanded pre-training (from 10T to 28T tokens) with large-scale reinforcement learning. The importance … Read more

How to Build and Evolve a Custom OpenAI Agent with A-Evolve Using Benchmarks, Skills, Memory, and Workspace Transformations

In this lesson, we work directly with the A-Evolve framework in Colab and build a complete agent-evolution pipeline from the ground up. We set up a repository, configure an OpenAI-backed agent, define a custom benchmark, and run the evolution engine to see how A-Evolve actually improves the agent through workspace transformations. In code, we … Read more
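The benchmark-driven loop the lesson describes (mutate the agent, score candidates, keep the winner) can be sketched generically. This is a made-up illustration of the idea, not A-Evolve's actual API; the function names, scoring, and toy hyperparameter are all hypothetical:

```python
import random


def evolve(agent_config: dict, mutate, benchmark,
           generations: int = 5, population: int = 4,
           seed: int = 0) -> dict:
    """Generic evolution loop: mutate the current best agent config,
    score every candidate on the benchmark, keep the best performer."""
    rng = random.Random(seed)
    best, best_score = agent_config, benchmark(agent_config)
    for _ in range(generations):
        candidates = [mutate(best, rng) for _ in range(population)]
        for cand in candidates:
            score = benchmark(cand)
            if score > best_score:  # accept only strict improvements
                best, best_score = cand, score
    return best


# Toy usage: evolve a 'temperature' setting toward an optimum of 0.3.
def mutate(cfg, rng):
    return {**cfg, "temperature": cfg["temperature"] + rng.uniform(-0.1, 0.1)}


def benchmark(cfg):
    return -abs(cfg["temperature"] - 0.3)  # higher is better


best = evolve({"temperature": 0.9}, mutate, benchmark, generations=20)
```

In a real setup the mutation step would rewrite the agent's workspace (prompts, skills, memory) and the benchmark would run the agent against held-out tasks.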

AI Conversations Feel Human

Do you remember your first AI voice conversation? Admittedly, it felt silly to get live responses from a talking bot. And one thing sorely lacking in the interaction was the feeling that a real person was answering your questions. In recent years, we have seen far more advanced AI models in this regard. And … Read more

Alibaba Qwen Team Releases Qwen3.5 Omni: A Native Multimodal Model for Text, Audio, Video, and Real-Time Interaction

The landscape of multimodal large language models (MLLMs) has shifted from experimental ‘wrappers’, where separate visual or audio encoders are bolted onto a text-based core, to native, end-to-end ‘omnimodal’ architectures. The Alibaba Qwen team’s latest release, Qwen3.5-Omni, represents a milestone in this evolution. Designed as a direct competitor to flagship models such as Gemini 3.1 Pro, the Qwen3.5-Omni series … Read more

Microsoft AI Releases Harrier-OSS-v1: A New Family of Multilingual Embedding Models That Beats SOTA on Multilingual MTEB v2

Microsoft announced the release of Harrier-OSS-v1, a family of three multilingual text embedding models designed to provide high-quality semantic representations across a wide range of languages. The release spans three scales: a 270M-parameter model, a 0.6B model, and a 27B model. Harrier-OSS-v1 models have achieved state-of-the-art (SOTA) results on the MTEB Multilingual (Massive Text … Read more
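Embedding models like these are typically consumed by encoding a query and candidate passages, then ranking by cosine similarity. The hub id below is a hypothetical placeholder, since the announcement excerpt gives no checkpoint names:

```python
import math


def cosine_similarity(a, b) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm


def rank_passages(query: str, passages: list,
                  model_name: str = "microsoft/harrier-oss-v1-270m"):
    """Encode query and passages with a SentenceTransformer and rank by
    similarity. The model name is a made-up placeholder."""
    from sentence_transformers import SentenceTransformer
    model = SentenceTransformer(model_name)
    q = model.encode(query)
    scored = [(p, cosine_similarity(q, model.encode(p))) for p in passages]
    return sorted(scored, key=lambda t: t[1], reverse=True)
```

The ranking helper works with any embedding model that returns vectors, which is exactly what benchmarks like MTEB measure across retrieval tasks.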

20+ Solved ML Projects to Boost Your Resume

Projects are a bridge between learning and becoming a professional. While theory builds the basics, employers hire people who solve real problems. A strong, diverse portfolio demonstrates practical skills, breadth of expertise, and problem-solving ability. This guide covers 20+ solved projects across ML domains, from basic regression and prediction to NLP and Computer Vision. … Read more