Factories constantly seek solutions that can reliably handle unpredictable production demands and environments. Inbolt, a Paris-based robotics startup, has introduced its latest bin-picking system built to address these industrial challenges. The approach centers on mounting advanced vision technology on the robot arm itself, giving robotic systems enhanced real-time object detection and manipulation. Manufacturers, including Stellantis, are pursuing such innovations to cut operational costs and boost efficiency in dynamic settings. The flexibility of Inbolt’s solution aims to eliminate much of the complexity and rigidity found in traditional bin-picking setups.
Recent coverage of vision-guided robotic picking has highlighted persistent limitations in fixed-camera systems, particularly around calibration and adaptability to random part arrangements. Earlier solutions tended to rely on costly overhead sensors and extensive programming to define grasp strategies, an approach that could cause production downtime when bins or parts shifted unexpectedly. Inbolt’s latest system stands out by integrating the camera onto the moving robot arm, offering greater versatility than earlier models. This lets robots adapt instantly to changes, potentially minimizing the extensive recalibration that previously reported solutions required.
How Does On-Arm Vision Benefit Bin Picking Tasks?
Bringing the 3D camera and AI system directly onto the robot arm allows continuous, adaptive perception of objects, making the equipment less dependent on precise bin positioning or ideal lighting conditions. The company states that this adaptable setup reduces the cost and complexity of deploying robots across varied workflows, and Inbolt partners with RealSense to ensure robust vision capability under challenging lighting. According to Albane Dersy, chief operating officer at Inbolt,
“We designed our solution to adapt in real time, able to see, grasp, and adjust the way a human would.”
This design lets manufacturers use the same robotic setup in multiple scenarios and eliminates the need for several expensive fixed cameras.
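The practical core of on-arm vision is frame chaining: every detection arrives in the camera’s coordinate frame, which moves with the arm, so a fixed hand-eye calibration transform maps it through the arm’s current pose back into the robot’s base frame. The sketch below illustrates that chaining with hypothetical 4×4 homogeneous transforms and made-up offsets; it is a general robotics idiom, not Inbolt’s actual pipeline.

```python
import numpy as np

def make_transform(rot_deg_z, translation):
    """Build a 4x4 homogeneous transform: rotation about z, then translation."""
    theta = np.radians(rot_deg_z)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(theta), -np.sin(theta)],
                 [np.sin(theta),  np.cos(theta)]]
    T[:3, 3] = translation
    return T

# Hand-eye calibration: camera pose relative to the robot flange.
# Fixed, measured once when the camera is mounted on the arm (values invented).
T_flange_camera = make_transform(0, [0.0, 0.05, 0.10])

def part_in_base_frame(T_base_flange, p_camera):
    """Chain transforms base <- flange <- camera, then apply to the detection."""
    p_h = np.append(p_camera, 1.0)                  # homogeneous coordinates
    return (T_base_flange @ T_flange_camera @ p_h)[:3]

# A detection in the camera frame, with the arm at some pose: the grasp
# target comes out in the base frame, wherever the arm happens to be.
T_base_flange = make_transform(0, [0.5, 0.0, 0.5])  # arm pose from the controller
p_camera = np.array([0.0, -0.05, 0.40])             # part seen by the camera
print(np.round(part_in_base_frame(T_base_flange, p_camera), 3))  # [0.5 0.  1. ]
```

Because the calibration transform rides along with the camera, the bin itself never needs to sit at a surveyed position, which is the contrast with fixed overhead-camera cells.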
What Role Does Inbolt’s Proprietary AI Play in Robot Performance?
Inbolt’s AI leverages vision-language-action (VLA) models to offer an array of grasp strategies per pick, targeting a 95% success rate in active production. The robots analyze objects by their pickable surfaces rather than strictly pre-defined grasp points, providing agility when object positions or orientations vary. The software continuously estimates each object’s location and corrects the robot’s motion mid-operation. Running on NVIDIA hardware, the system is said to minimize computational demand while maintaining reliability. Dersy noted,
“With VLAs [vision-language-action models], you’re reinventing the grasp each time. With better data, we have more flexibility.”
The new architecture enables robots to place parts accurately even when handling unpredictably arranged items.
Can Manufacturers Quickly Train and Deploy Inbolt’s Solution?
Robots using Inbolt’s system can be trained on CAD models within minutes, and the technology supports a wide range of robot brands, including ABB, KUKA, FANUC, Yaskawa, and Universal Robots. The platform operates effectively regardless of bin size or movement, offering adaptability for evolving production lines. Inbolt reports more than 20 million cycles run in the first half of 2025 and an estimated six-month return on investment for customers. Its GuideNow product, which previously won industry recognition, provides the platform for flexible deployment and rapid scaling across global manufacturing sites.
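One way to see why CAD-based training can take minutes rather than days is that graspable surface patches can be enumerated offline from the part’s mesh alone, with no per-part teaching on the shop floor. The toy sketch below filters a two-triangle mesh to upward-facing candidate faces for a downward suction gripper; the mesh, the threshold, and the whole heuristic are illustrative assumptions, not Inbolt’s method.

```python
import numpy as np

# Toy "CAD model": two triangles of a block-shaped part,
# each given as 3 vertices x 3 coordinates.
triangles = np.array([
    [[0, 0, 1], [1, 0, 1], [0, 1, 1]],   # top face: normal points up (+z)
    [[0, 0, 0], [1, 0, 0], [1, 0, 1]],   # side face: normal points sideways
], dtype=float)

def face_normals(tris):
    """Unit normals from each triangle's edge cross product."""
    n = np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0])
    return n / np.linalg.norm(n, axis=1, keepdims=True)

def pickable_faces(tris, min_up=0.8):
    """Keep faces whose normals point mostly along +z: a crude proxy for
    surfaces a downward suction gripper could reach."""
    return np.where(face_normals(tris)[:, 2] > min_up)[0]

print(pickable_faces(triangles))   # [0] -- only the top face qualifies
```

Since this analysis depends only on the geometry, it runs once per part model; at pick time the robot only has to match the observed pose against the precomputed candidates.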
The market for robotic bin picking has seen multiple efforts to move past the limitations of rigid automation. Where traditional methods required static calibration and tight environmental control, solutions like Inbolt’s embed intelligence in the robot arm itself. Companies with highly variable manufacturing demands, such as Stellantis, Beko, and Toyota, have begun piloting and adopting Inbolt’s adaptive technology to maximize uptime and meet diverse production goals. The notable shift lies in giving robots the ability to adapt visually and tactically, reflecting the broader trend toward more autonomous, less labor-intensive automation.
As flexible automation becomes a requirement for modern production lines, advances in real-time vision-guided robotics like Inbolt’s will likely draw continued industry attention. Manufacturers seeking operational resilience can consider on-arm, AI-powered vision systems that reduce setup time and hardware expenditure. Practical use cases suggest that rapid integration and agility in unpredictable settings are becoming key advantages in factory automation, so those evaluating similar upgrades should weigh adaptability, rapid training, and hardware scalability alongside compatibility with existing robots, speed of deployment, and return on investment.
