Large Language Models for Orchestrating Bimanual Robots

arXiv:2404.02018, doi:10.48550/arXiv.2404.02018, Apr 2024
Although there has been rapid progress in endowing robots with the ability to solve complex manipulation tasks, generating control policies for bimanual robots to solve tasks involving both hands remains challenging because of the difficulty of effective temporal and spatial coordination. With emergent abilities such as step-by-step reasoning and in-context learning, Large Language Models (LLMs) have been applied to control a variety of robotic tasks. However, the nature of language communication via a single sequence of discrete symbols makes LLM-based coordination in continuous space particularly challenging for bimanual tasks. To tackle this challenge with an LLM for the first time, we present LAnguage-model-based Bimanual ORchestration (LABOR), an agent that uses an LLM to analyze task configurations and devise coordination control policies for long-horizon bimanual tasks. The LABOR agent is evaluated on several simulated everyday tasks with the NICOL humanoid robot. The reported success rates indicate that its overall coordination efficiency is close to optimal, while an analysis of failure causes, classified into spatial coordination, temporal coordination, and skill selection, shows that these vary across tasks. The project website can be found at https://labor-agent.github.io/
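To make the orchestration idea concrete, the following is a minimal sketch of an LLM-driven bimanual control loop. All names here (`choose_skills`, the skill strings, the observation format) are hypothetical illustrations; the paper's actual LABOR prompts, skill library, and coordination logic are not reproduced.

```python
def choose_skills(observation: str) -> dict:
    """Stand-in for an LLM query that maps a textual scene description
    to one skill per arm. A real agent would prompt an LLM here;
    this toy rule only illustrates the interface."""
    if "bottle" in observation:
        # Hypothetical coordinated assignment: one arm holds, the other acts.
        return {"left": "grasp(bottle)", "right": "unscrew(cap)"}
    return {"left": "idle", "right": "idle"}


def orchestrate(observations):
    """Step through observations, picking a coordinated skill pair per step.
    Temporal coordination: both arms are assigned a skill at every control
    step; spatial coordination is delegated to the chosen skills."""
    plan = []
    for obs in observations:
        skills = choose_skills(obs)
        plan.append((skills["left"], skills["right"]))
    return plan


plan = orchestrate(["bottle on table", "empty table"])
print(plan)
```

The key design point sketched here is that the language model outputs a discrete skill label per arm at each step, while continuous-space execution is left to lower-level skill controllers.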


@Article{CZWLLW24,
  author = {Chu, Kun and Zhao, Xufeng and Weber, Cornelius and Li, Mengdi and Lu, Wenhao and Wermter, Stefan},
  title = {Large Language Models for Orchestrating Bimanual Robots},
  journal = {arXiv preprint arXiv:2404.02018},
  year = {2024},
  month = {Apr},
  doi = {10.48550/arXiv.2404.02018},
}