Researchers from the University of Lincoln, Toshiba Europe Cambridge Research Laboratory, the University of Surrey, Arizona State University, and the Korea Advanced Institute of Science and Technology (KAIST) have introduced an alternative computational strategy to prevent robotic hands from dropping objects. The related findings have been published in the journal Nature Machine Intelligence.

Traditional methods for improving a robot's grasp typically tighten the grip to keep objects from slipping. This approach is not always effective, however, and can damage fragile items. Inspired by how humans handle objects, the research team developed a new controller that pairs a conventional robotic controller with a bio-inspired predictive trajectory modulation strategy. The controller predicts when an object is likely to slip and adjusts the robot's movement accordingly, much as humans subtly modify their actions when handling fragile or slippery objects rather than simply squeezing harder.
The bio-inspired trajectory modulation strategy at the heart of the new controller complements traditional grip-force techniques rather than replacing them, enabling more dexterous manipulation. Instead of only tightening the grip, the robot can slow down, change direction, and adapt in real time to the position and orientation of the hand, reducing the risk of breaking fragile objects. The strategy also works in situations where gripping force cannot be increased, enabling smoother, more intelligent interaction with a wide variety of objects.
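The modulation idea above can be illustrated with a minimal sketch: scale down the planned end-effector velocity when predicted slip risk is high, instead of squeezing harder. The scaling law and threshold here are hypothetical placeholders, not the paper's actual controller; the slip-risk value would come from a learned tactile model.

```python
import numpy as np

def modulate_trajectory(planned_velocity, slip_risk, risk_threshold=0.5):
    """Scale a planned end-effector velocity by predicted slip risk.

    slip_risk in [0, 1] is assumed to come from a learned tactile model;
    the linear scaling below is a toy stand-in for illustration only.
    """
    planned_velocity = np.asarray(planned_velocity, dtype=float)
    if slip_risk <= risk_threshold:
        # Low risk: execute the planned motion unchanged.
        return planned_velocity
    # High risk: slow the motion down proportionally rather than
    # increasing grip force.
    scale = max(0.0, 1.0 - (slip_risk - risk_threshold) / (1.0 - risk_threshold))
    return planned_velocity * scale
```

At risk 0.75 with a 0.5 threshold, for example, the planned velocity is halved; at risk 1.0 the motion pauses entirely until the risk estimate drops.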
The research achieved two key breakthroughs. First, a motion-based slip controller, the first of its kind, which complements force-based control and proves particularly important when gripping force cannot be increased. Second, a predictive controller driven by a learned tactile forward model (i.e., a world model), which lets the robot anticipate slippage from its planned actions.
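The second breakthrough can be sketched as a simple predictive loop: candidate actions are scored by a forward model that predicts slip, and the action balancing low predicted slip against goal tracking is chosen. Everything below is a toy illustration; `tactile_forward_model` stands in for the paper's learned network, and the candidate set, cost weights, and quadratic slip penalty are assumptions for this sketch.

```python
import numpy as np

def tactile_forward_model(tactile_state, action):
    """Toy stand-in for the learned tactile forward model: faster motion
    predicts more slip. A real model would be trained on tactile data."""
    return tactile_state + 0.5 * float(np.linalg.norm(action))

def choose_action(tactile_state, nominal_action, n_candidates=10):
    """Evaluate scaled-down variants of the nominal action and pick the
    one whose predicted slip (penalized quadratically) best trades off
    against deviation from the planned motion."""
    nominal_action = np.asarray(nominal_action, dtype=float)
    candidates = [nominal_action * s for s in np.linspace(0.1, 1.0, n_candidates)]

    def cost(action):
        slip = tactile_forward_model(tactile_state, action)
        deviation = np.linalg.norm(action - nominal_action)
        return slip ** 2 + deviation

    return min(candidates, key=cost)
```

With a calm tactile state the full planned action is kept; as the tactile signal indicates impending slip, the same loop automatically selects a slower variant of the motion.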
The newly developed controller was used to plan the motion of a robotic gripper and was tested in dynamic, unstructured environments. The results showed that in certain cases, the controller significantly improved the robot's grasping stability, outperforming conventional controllers that only adjust gripping force.
The research findings are expected to advance robotic systems, enabling them to safely handle various physical interactions — and even social interactions — using world models. Potential applications include real-world scenarios such as home environments, manufacturing sites, and healthcare facilities.
Currently, the research team is working to make the predictive controller faster and more efficient for deployment in more demanding real-time environments, including exploring different architectures and algorithmic techniques to reduce computational overhead. In the next phase, the researchers plan to extend the system to more advanced and complex manipulation tasks, such as handling deformable objects or items requiring two-handed operation. They also intend to combine the method with computer vision algorithms so that robot trajectories can be planned from both tactile and visual information. Improving the verifiability and explainability of the learned models is another important direction, with the aim of developing transparent, safe predictive controllers suitable for real-world deployment.