ORLM: A Customizable Framework in Training Large Models for Automated Optimization Modeling (2405.17743v5)
Abstract: Optimization modeling plays a critical role in applying Operations Research (OR) tools to real-world problems, yet it poses challenges and requires extensive expertise from OR experts. With the advent of LLMs, new opportunities have emerged to streamline and automate such tasks. However, current research predominantly relies on closed-source LLMs such as GPT-4, along with extensive prompt engineering. This reliance stems from the scarcity of high-quality training datasets for optimization modeling, and it results in elevated costs, prolonged processing times, and privacy concerns. To address these challenges, our work is the first to propose a viable path for training open-source LLMs that are capable of optimization modeling and of developing solver code, ultimately yielding a superior ability to automate optimization modeling and solving. In particular, we design OR-Instruct, a semi-automated data synthesis framework for optimization modeling that enables customizable enhancements for specific scenarios or model types. This work also introduces IndustryOR, the first industrial benchmark for evaluating LLMs on practical OR problems. We train several 7B-scale open-source LLMs on the synthesized data (dubbed ORLMs, https://github.com/Cardinal-Operations/ORLM); they exhibit significantly enhanced optimization modeling capabilities, achieving competitive performance across the NL4OPT, MAMO, and IndustryOR benchmarks. Additionally, our experiments highlight the potential of scaling laws and reinforcement learning to further enhance the performance of ORLMs. The paper also discusses the workflows and human-machine interaction paradigms of ORLMs in practical industrial applications.
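To make the "solver code" output concrete, below is a minimal sketch of the kind of program an ORLM is trained to emit for a toy problem: a two-variable linear program modeled and solved with COPT's Python interface (coptpy), the solver documented in the COPT user guide cited below. The problem data (profit coefficients 3 and 2, resource limits 10 and 8) and all names are illustrative assumptions, not examples taken from the paper.

```python
# Minimal sketch of ORLM-style solver code (illustrative data, not from the paper):
# maximize 3x + 2y  subject to  2x + y <= 10,  x + 3y <= 8,  x, y >= 0,
# modeled with COPT's Python API (coptpy).
import coptpy as cp
from coptpy import COPT

env = cp.Envr()                      # create a COPT environment
model = env.createModel("toy_lp")    # create an empty model

# Decision variables: continuous and non-negative
x = model.addVar(lb=0.0, name="x")
y = model.addVar(lb=0.0, name="y")

# Constraints: two illustrative resource limits
model.addConstr(2 * x + y <= 10, name="resource_1")
model.addConstr(x + 3 * y <= 8, name="resource_2")

# Objective: maximize total profit
model.setObjective(3 * x + 2 * y, sense=COPT.MAXIMIZE)

model.solve()

if model.status == COPT.OPTIMAL:
    print(f"objective = {model.objval:.4f}")
    print(f"x = {x.x:.4f}, y = {y.x:.4f}")
```

A practical consequence of emitting executable code like this, rather than only a mathematical formulation, is that the output can be checked automatically by running the solver, which is what enables execution-based evaluation on benchmarks such as NL4OPT, MAMO, and IndustryOR.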
- OptiMUS: Scalable optimization modeling with (MI)LP solvers and large language models. arXiv preprint arXiv:2402.10172, 2024.
- AI@Meta. Llama 3 model card. 2024. URL https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md.
- A survey on deep learning tools dealing with data scarcity: definitions, challenges, solutions, tips, and applications. Journal of Big Data, 10(1):46, 2023.
- Mathematical modelling. Gulf Professional Publishing, 1995.
- Cardinal Optimizer (COPT) user guide. https://guide.coap.online/copt/en-doc, 2022.
- Linear programming word problems formulation using EnsembleCRF NER labeler and T5 text generator with data augmentations. arXiv preprint arXiv:2212.14657, 2022.
- Mamo: A mathematical modeling benchmark with solvers. arXiv preprint arXiv:2405.13144, 2024.
- Mistral 7B. arXiv preprint arXiv:2310.06825, 2023. URL https://arxiv.org/abs/2310.06825.
- Tagged input and decode all-at-once strategy. https://github.com/MLPgroup/nl4opt-generation, 2022.
- BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461, 2019.
- Large language models for supply chain optimization. arXiv preprint arXiv:2307.03875, 2023a.
- Synthesizing mixed-integer linear programming models from natural language descriptions. arXiv preprint arXiv:2311.15271, 2023b.
- RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
- A novel approach for auto-formulation of optimization problems. arXiv preprint arXiv:2302.04643, 2023.
- Synthesis of mathematical programs from natural language specifications. arXiv preprint arXiv:2304.03287, 2023.
- Augmenting operations research with auto-formulation of optimization models from problem descriptions. In Yunyao Li and Angeliki Lazaridou, editors, Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: EMNLP 2022 - Industry Track, pages 29–62, Abu Dhabi, UAE, 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.emnlp-industry.4. URL https://doi.org/10.18653/v1/2022.emnlp-industry.4.
- NL4Opt competition: Formulating optimization problems based on their natural language descriptions. In NeurIPS 2022 Competition Track, pages 189–203. PMLR, 2023.
- DeepSeekMath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024.
- Reflexion: An autonomous agent with dynamic memory and self-reflection. arXiv preprint arXiv:2303.11366, 2023.
- Ajay Singh. An overview of the optimization modelling applications. Journal of Hydrology, 466:167–182, 2012.
- Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca, 2023.
- Self-instruct: Aligning language models with self-generated instructions. arXiv preprint arXiv:2212.10560, 2022.
- How far can camels go? Exploring the state of instruction tuning on open resources. arXiv preprint arXiv:2306.04751, 2023.
- Approaches to sensitivity analysis in linear programming. Annals of Operations Research, 27(1):3–38, 1990.
- Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.
- Chain-of-Experts: When LLMs meet complex operations research problems. In The Twelfth International Conference on Learning Representations, 2023.