© 2025 NEWSLINKER - Powered by LK SOFTWARE
AI

MIT Researchers Use Vision to Guide Robots Without Sensors

Highlights

  • MIT’s NJF uses vision instead of sensors for robot control and learning tasks.

  • NJF adapts across robot types using only visual feedback, but retraining is required.

  • Limitations persist around tactile sensing and generalizing control between robots.

Kaan Demirel
Last updated: 29 June, 2025 - 4:09 pm

A soft robotic hand curls its fingers in a laboratory setting—but unlike most robotic systems, this one operates without any embedded sensors or hand-crafted mathematical models. At the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory (MIT CSAIL), researchers have introduced a novel control approach using only visual observation to drive a robot’s actions. This model, termed Neural Jacobian Fields (NJF), eliminates the need for tactile or specialized hardware, enabling robots to learn to interpret and act upon their physical form solely from camera data. Such advancements may accelerate robotics development, opening pathways for non-rigid and irregular machine designs to operate with enhanced autonomy. As automation becomes more prevalent, cost reduction in hardware and programming could help broaden access for smaller businesses or hobbyists.

Contents
  • How Does Neural Jacobian Fields Work Differently?
  • What Capabilities Has MIT’s NJF Demonstrated?
  • How Might NJF Impact Real-World Robotics?

Earlier work in robotics relied heavily on rigid structures and sensor-dense assemblies, as seen in platforms such as the Allegro Hand and industrial robotic arms. Recent vision-based systems have tried to replicate bodily awareness by mapping a robot’s geometry with hand-designed models and sensor fusion, achieving varying degrees of flexibility and autonomy. In contrast, MIT CSAIL’s NJF approach strives for greater simplicity: it learns robot movements from vision alone, without predefined models or infrastructure dependencies. Prior attempts offered partial solutions or required substantial computational resources; NJF instead leverages neural networks that extend neural radiance fields, giving robots a learned form of self-awareness.

How Does Neural Jacobian Fields Work Differently?

NJF separates itself from traditional systems by making the learning and control model entirely dependent on visual input. Rather than relying on sensors, cameras record the robot during random, exploratory movements. This data drives a neural network to understand the robot’s shape and how it physically reacts to different control commands. Once the learning phase concludes, only a single monocular camera is needed for real-time operation—enabling cost-effective deployment and freeing designers to experiment with unconventional materials or morphologies.
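The learning loop described above can be sketched in miniature. The following is an illustrative toy, not MIT’s implementation: where NJF trains a neural network on camera images, this sketch assumes a simplified linear world in which a few actuator commands move tracked image points, and a least-squares fit stands in for the learned model. It shows the same "wiggle, observe, adapt" pattern: explore with random commands, learn the command-to-motion map from visual observation alone, then invert that map for control.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "robot": 3 actuator commands move 2 tracked image points (4 coords).
# J_true is hidden from the learner, just as the robot's physics are hidden from NJF.
J_true = rng.normal(size=(4, 3))

# Exploration phase: issue random commands, record noisy visual observations.
U = rng.normal(size=(200, 3))                          # exploratory commands
X = U @ J_true.T + 0.01 * rng.normal(size=(200, 4))    # observed point motion

# Learning phase: recover the command -> motion map from vision alone.
B, *_ = np.linalg.lstsq(U, X, rcond=None)              # solves U @ B ≈ X
J_hat = B.T

# Control phase: invert the learned map to produce a desired visual motion.
u_goal = np.array([0.3, -0.1, 0.2])                    # hidden ground-truth command
target = J_true @ u_goal                               # desired point motion
u, *_ = np.linalg.lstsq(J_hat, target, rcond=None)

print(np.allclose(u, u_goal, atol=0.05))               # True: command recovered
```

In the real system a single monocular camera supplies the observations at run time, and the linear map above is replaced by a neural field that varies over the robot’s body.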

What Capabilities Has MIT’s NJF Demonstrated?

Testing has spanned a range of platforms, including a pneumatic soft robotic hand, the Allegro hand, a 3D-printed robotic arm, and a rotating platform without sensors. In every case, NJF successfully linked visual information and control signals, allowing each robot to localize and manipulate objects effectively.

“Think about how you learn to control your fingers: You wiggle, you observe, you adapt,” explained Sizhe Lester Li, MIT CSAIL researcher. “That’s what our system does.”

Early simulations have demonstrated that even with noisy or incomplete data, the model infers relationships between commands and bodily movement, allowing for rapid adaptation.
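The robustness claim can also be illustrated with a hedged sketch, again assuming the simplified linear stand-in rather than the paper’s neural model: even with heavy observation noise and roughly a third of the frames dropped, a ridge-regularised fit still recovers the command-to-motion relationship.

```python
import numpy as np

rng = np.random.default_rng(1)
J_true = rng.normal(size=(4, 3))                     # hidden ground truth

U = rng.normal(size=(500, 3))
X = U @ J_true.T + 0.2 * rng.normal(size=(500, 4))   # heavy observation noise

keep = rng.random(500) > 0.3                         # drop ~30% of frames
U_obs, X_obs = U[keep], X[keep]

lam = 1e-2                                           # ridge penalty stabilises the fit
A = U_obs.T @ U_obs + lam * np.eye(3)
J_hat = np.linalg.solve(A, U_obs.T @ X_obs).T

# The learned map still matches the true one despite the corrupted data.
print(np.abs(J_hat - J_true).max() < 0.1)            # True
```

Averaging over many noisy exploratory samples is what makes this work; the neural model in NJF benefits from the same effect at much larger scale.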

How Might NJF Impact Real-World Robotics?

Researchers envision practical applications in fields like agriculture, construction, and logistics, where traditional control methods falter due to unpredictable environments or high equipment costs. By eliminating reliance on GPS, external tracking, or complex onboard sensors, NJF-based robots could navigate or manipulate objects within cluttered or dynamic settings. MIT points to the system’s versatility, suggesting that even casual users might soon train a robot using only video from smartphones, supporting accessible robotics exploration for non-experts.

Nonetheless, limits remain around NJF’s generalizability, as the system must undergo retraining for each new robot, and currently cannot handle tactile or force data crucial for contact-intensive tasks. The MIT team aims to refine the framework for broader adaptability and to better manage occluded or highly variable environments. Collaborative research efforts continue, with current findings jointly authored by Sizhe Lester Li, Vincent Sitzmann, Daniela Rus, and colleagues, highlighting robust interdisciplinary work between computer vision and soft robotics teams. Support for this project comes from the Solomon Buchsbaum Research Fund, MIT, NSF, and additional partners, with results recently published in Nature.

Techniques like NJF signal progress in reducing complexity and cost in robotics control, shifting from rigid programming paradigms to adaptive, observation-based technologies. While scaling NJF for varied robot types or complex, tactile interactions remains a challenge, the method presents a promising alternative to sensor-dependent, hardware-centric solutions. Readers tracking commercial robotics can anticipate advances where simple camera systems replace costly sensor arrays, facilitating the development of more flexible, affordable, and accessible robots. In this evolving landscape, understanding the principles and limitations of systems like NJF can guide engineers and end-users who need adaptable automation for unpredictable, real-world tasks.

