ORLM: A Customizable Framework in Training Large Models for Automated Optimization Modeling

Publication Type:
Article; Early Access
Authors:
Huang, Chenyu; Tang, Zhengyang; Hu, Shixi; Jiang, Ruoqing; Zheng, Xin; Ge, Dongdong; Wang, Benyou; Wang, Zizhuo
Affiliations:
Shanghai University of Finance & Economics; The Chinese University of Hong Kong, Shenzhen; The Chinese University of Hong Kong, Shenzhen; Shenzhen Research Institute of Big Data; Columbia University; Tsinghua University; Duke University; Shanghai Jiao Tong University; The Chinese University of Hong Kong, Shenzhen
Journal:
OPERATIONS RESEARCH
ISSN:
0030-364X
DOI:
10.1287/opre.2024.1233
Publication Date:
2025
Keywords:
Abstract:
Optimization modeling plays a critical role in the application of Operations Research (OR) tools to real-world problems, yet it poses challenges and requires extensive expertise from OR experts. With the advent of large language models (LLMs), new opportunities have emerged to streamline and automate such tasks. However, current research predominantly relies on closed-source LLMs, such as GPT-4, along with extensive prompt engineering techniques. This reliance stems from the scarcity of high-quality training data sets for optimization modeling, resulting in elevated costs, prolonged processing times, and privacy concerns. To address these challenges, our work is the first to propose a viable path for training open-source LLMs that are capable of optimization modeling and developing solver code, ultimately leading to a superior ability to automate optimization modeling and solving. In particular, we design OR-INSTRUCT, a semiautomated data synthesis framework for optimization modeling that enables customizable enhancements for specific scenarios or model types. This work also introduces IndustryOR, the first industrial benchmark for evaluating LLMs in solving practical OR problems. We train several 7B-scale open-source LLMs using the synthesized data (dubbed ORLMs), which exhibit significantly enhanced optimization modeling capabilities, achieving competitive performance across the NL4Opt, MAMO, and IndustryOR benchmarks. Additionally, our experiments highlight the potential of scaling laws and reinforcement learning to further enhance the performance of ORLMs. The paper also discusses the workflows and human-machine interaction paradigms of ORLMs in practical industrial applications.
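For readers unfamiliar with what "automated optimization modeling and solving" produces in practice, the sketch below illustrates the kind of end-to-end artifact such a pipeline targets: a natural-language OR problem translated into a mathematical model and executable solver code. This is an illustrative example only; the problem statement, variable names, and the use of SciPy's linprog are assumptions made for demonstration and are not taken from the paper, which trains LLMs to generate code for its own choice of solver.

```python
# Illustrative sketch (not from the paper): the type of output an automated
# optimization-modeling pipeline is expected to produce.
#
# Hypothetical natural-language problem:
#   A factory makes products A and B with profits of 50 and 20 per unit.
#   Each unit of A needs 2 machine-hours, each unit of B needs 1, and only
#   100 machine-hours are available. At most 40 units of A can be sold.
#   Maximize total profit.
#
# Corresponding linear program:
#   max  50*xA + 20*xB
#   s.t. 2*xA + 1*xB <= 100
#        xA <= 40
#        xA, xB >= 0
from scipy.optimize import linprog

# linprog minimizes, so negate the objective coefficients to maximize profit.
c = [-50, -20]
A_ub = [[2, 1],   # machine-hour constraint
        [1, 0]]   # sales cap on product A
b_ub = [100, 40]
bounds = [(0, None), (0, None)]  # nonnegative production quantities

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("optimal profit:", -res.fun)        # expected: 2400.0
print("production plan (A, B):", res.x)   # expected: [40.0, 20.0]
```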