Empowering Universal Robot Programming with Fine-Tuned Large Language Models
DOI: https://doi.org/10.4108/airo.8983

Keywords: Large Language Models, LLMs, Fine-tuning, Synthesis dataset, URScript

Abstract
LLMs are transforming AI but face challenges in robotics due to domain-specific requirements. This paper explores LLM-generated URScript code for Universal Robots (UR), improving the accessibility of industrial automation. A fine-tuning dataset of 20,000 synthetic samples, generated from 514 validated human-created examples, enhances model performance. Using the Unsloth framework, we fine-tune the model and evaluate it in real-world scenarios. Results demonstrate LLMs' potential to simplify UR robot programming, highlighting their value in industrial automation. A video demo is available at the following link, and the codebase will be added soon: https://github.com/t1end4t/llm-robotics
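The abstract describes a fine-tuning dataset that pairs natural-language commands with URScript programs, expanded from human-created examples into synthetic samples. As a hedged sketch only, assuming an Alpaca-style instruction/input/output schema (the paper's actual field names are not given here, and the URScript snippet is illustrative), one such sample might look like:

```python
import json

# Hypothetical training sample: a natural-language command paired with
# URScript code. The schema and the URScript body are assumptions for
# illustration, not the paper's actual dataset format.
sample = {
    "instruction": "Move the UR robot to a home joint position "
                   "and switch on digital output 0.",
    "input": "",
    "output": (
        "def task():\n"
        "  movej([0.0, -1.57, 1.57, -1.57, -1.57, 0.0], a=1.2, v=0.25)\n"
        "  set_digital_out(0, True)\n"
        "end"
    ),
}

# Fine-tuning corpora are commonly stored as one JSON object per line
# (JSONL); serialize and parse back to confirm the sample round-trips.
line = json.dumps(sample)
record = json.loads(line)
print(record["instruction"])
```

Scaling a small set of validated examples into tens of thousands of such records is what makes instruction fine-tuning on a narrow domain language like URScript practical.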
License
Copyright (c) 2025 Tien Dat Le, Minhhuy Le

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
This is an open access article distributed under the terms of the CC BY-NC-SA 4.0 license, which permits copying, redistributing, remixing, transforming, and building upon the material in any medium so long as the original work is properly cited.