
Liquid AI Launches LFM2 to Power Fast On-Device AI

Highlights

  • LFM2 streamlines on-device AI, offering speed and efficiency for edge hardware.

  • Benchmarks indicate LFM2 competes with larger models across multilingual and task categories.

  • The open licensing model encourages diverse adoption in research and smaller enterprises.

Samantha Reed
Last updated: 14 July, 2025 - 7:30 pm

Liquid AI has introduced its LFM2 family of small foundation models built for on-device AI deployment across a range of hardware, including smartphones, laptops, vehicles, and other edge devices. The release signals a strategic shift for organizations seeking real-time capabilities, lower latency, and stronger privacy without reliance on cloud infrastructure. As technology leaders balance computing performance with privacy and cost efficiency, the LFM2 models aim to redefine how generative AI workloads are handled locally. The company's decision to release the models under an open license underscores a broader trend toward democratizing AI, inviting academic, research, and certain commercial users to experiment freely and incorporate LFM2 into their own applications.

Contents
  • What distinguishes LFM2 technically?
  • How does LFM2 stack up against comparable models?
  • What are the training methods behind LFM2?

Recent coverage of edge AI models has highlighted rising competition as more companies prioritize efficient, localized processing for intelligent agents. Earlier discussions centered on broadening access through open licenses and smaller, multitask-capable models from various technology organizations. Previous reports, however, often emphasized cloud dependence or larger model requirements, whereas Liquid AI's LFM2 prioritizes on-device performance, resource conservation, and privacy, a distinct focus in this evolving sector.

What distinguishes LFM2 technically?

LFM2 is built on a hybrid architecture that combines multiplicative gates and short convolutions across 16 specialized blocks. Liquid AI reports that this design roughly doubles decode and prefill speeds compared with Qwen3 on CPU hardware and delivers a threefold improvement in training efficiency over the first-generation LFM models. The three sizes, at 0.35B, 0.7B, and 1.2B parameters, give organizations flexibility to match deployments to available device resources and workload demands.

“We’ve optimized LFM2 for lean yet powerful on-device experiences, making AI practical for billions of endpoints,” a Liquid AI spokesperson explained.
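
Liquid AI has not published the block internals, but the combination of multiplicative gating with short convolutions can be sketched in a few lines of PyTorch. The block below is purely illustrative: the kernel size, pre-norm layout, sigmoid gate, and hidden width are assumptions, not details from the LFM2 release.

```python
import torch
import torch.nn as nn


class GatedShortConvBlock(nn.Module):
    """Illustrative gated short-convolution block (not Liquid AI's code).

    Kernel size, pre-norm layout, and the sigmoid gate are assumptions based
    only on the article's description of multiplicative gates and short
    convolutions.
    """

    def __init__(self, dim: int, kernel_size: int = 3):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        # Depthwise "short" convolution along the sequence dimension.
        self.conv = nn.Conv1d(dim, dim, kernel_size,
                              padding=kernel_size - 1, groups=dim)
        # Input-dependent multiplicative gate.
        self.gate_proj = nn.Linear(dim, dim)
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        residual = x
        h = self.norm(x)
        gate = torch.sigmoid(self.gate_proj(h))
        # Convolve, then trim the right side so each position sees only the past.
        h = self.conv(h.transpose(1, 2))[..., : x.shape[1]].transpose(1, 2)
        return residual + self.out_proj(h * gate)


# The article reports 16 specialized blocks; a toy stack with an assumed width of 512:
blocks = nn.Sequential(*[GatedShortConvBlock(512) for _ in range(16)])
out = blocks(torch.randn(2, 128, 512))  # (batch=2, seq_len=128, dim=512)
```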

How does LFM2 stack up against comparable models?

Automated benchmarks and a large language model (LLM)-based evaluation framework assessed LFM2’s performance, showing that its largest variant, LFM2-1.2B, matches or exceeds the accuracy of models like Qwen3-1.7B, despite having fewer parameters. In tests spanning knowledge tasks, instruction following, mathematics, and multilingual competence, LFM2 models also outperformed peers such as Gemma 3 1B IT and Llama 3.2 1B Instruct in similar parameter ranges. These results suggest that efficient model design can bridge the gap between resource limitations and task complexity on edge devices.
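
The article does not specify the evaluation framework, but LLM-as-a-judge setups generally score a candidate model's outputs against a rubric using a separate judge model. The sketch below shows that general pattern only; the `query_judge` function and rubric are hypothetical placeholders, not Liquid AI's framework.

```python
from statistics import mean

# Hypothetical LLM-as-a-judge loop. `query_judge` stands in for a call to
# whatever scoring model is used; it is not part of the reported framework.
RUBRIC = ("Rate the response from 1 to 10 for correctness, instruction "
          "following, and fluency. Reply with a single integer.")


def query_judge(prompt: str) -> str:
    raise NotImplementedError("plug in a judge-model API call here")


def judge_score(prompts: list[str], outputs: list[str]) -> float:
    # Average the judge's integer verdicts over all task/response pairs.
    scores = []
    for task, answer in zip(prompts, outputs):
        verdict = query_judge(f"{RUBRIC}\n\nTask: {task}\nResponse: {answer}")
        scores.append(int(verdict.strip()))
    return mean(scores)
```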

What are the training methods behind LFM2?

Liquid AI trained LFM2 through a multi-stage process designed to maximize efficiency and generalist capability. The company used 10 trillion tokens drawn predominantly from English text, supplemented with multilingual and code data. LFM1-7B served as the “teacher” in a knowledge distillation setup, in which the smaller LFM2 variants learned from the outputs of their larger predecessor. Training also extended the models’ native context length to 32,000 tokens. Post-training combined supervised fine-tuning with customized algorithms such as Direct Preference Optimization, along with refined sample curation and composite checkpoint selection, to make the final models robust for real-world applications and adaptable to specific tasks.
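
The exact distillation recipe is not public, but logit-level knowledge distillation of the kind described typically blends a hard-label cross-entropy term with a soft term that pulls the student's next-token distribution toward the teacher's. A minimal PyTorch sketch follows; the temperature and loss weighting are illustrative assumptions rather than Liquid AI's settings.

```python
import torch
import torch.nn.functional as F


def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      temperature: float = 2.0,
                      alpha: float = 0.5) -> torch.Tensor:
    """Blend hard-label cross-entropy with a soft KL term toward the teacher.

    Shapes: logits are (batch, seq_len, vocab); labels are (batch, seq_len).
    Temperature and alpha are illustrative defaults, not Liquid AI's values.
    """
    vocab = student_logits.size(-1)
    # Soft targets: match the teacher's tempered next-token distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits.view(-1, vocab) / temperature, dim=-1),
        F.softmax(teacher_logits.view(-1, vocab) / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    # Hard targets: standard next-token cross-entropy on the training data.
    hard = F.cross_entropy(student_logits.view(-1, vocab), labels.view(-1))
    return alpha * soft + (1 - alpha) * hard
```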

Market adoption of local AI hinges on balancing model performance, hardware constraints, and data privacy needs. LFM2 targets this niche, and Liquid AI's full-stack offering includes not just the models but an enterprise-grade stack spanning architecture, optimization, and deployment engines. The open license, based on Apache 2.0, encourages academic use and allows smaller commercial users to incorporate LFM2, in line with calls across the AI community for broader inclusion and flexibility.

For organizations assessing their next steps in AI-enabled products, the introduction of LFM2 provides new alternatives. Solid benchmarking data and public access through platforms like Hugging Face position the models for rapid experimentation and integration. As the need for low-latency, privacy-centric AI grows, such on-device solutions may become increasingly critical. It is important to evaluate specific operational goals, device capabilities, and data handling needs before migrating from cloud-based to edge AI models. Technical teams considering deployment should analyze how LFM2’s hybrid architecture, open licensing terms, and training efficiency can be leveraged within their infrastructure.
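
For teams that want to experiment, the checkpoints published on Hugging Face should load through the standard transformers workflow. The repository identifier below is inferred from the naming used in this article and should be verified against the Liquid AI organization page before use.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id assumed from the naming in this article; confirm the exact
# identifier on Hugging Face. Newer architectures may also require a recent
# transformers release.
MODEL_ID = "LiquidAI/LFM2-1.2B"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

prompt = "Summarize the benefits of running language models on-device."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```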
