Evolutionary Multi-Objective Optimization of Large Language Model Prompts for Balancing Sentiments (2401.09862v1)
Abstract: The advent of LLMs such as ChatGPT has attracted considerable attention across domains due to their remarkable performance and versatility. As the use of these models continues to grow, effective prompt engineering has come to the fore: prompt optimization is a crucial challenge, since it directly affects model performance and the extraction of relevant information. Recently, evolutionary algorithms (EAs) have shown promise in addressing this issue, paving the way for novel optimization strategies. In this work, we propose an evolutionary multi-objective (EMO) approach tailored for prompt optimization, called EMO-Prompts, using sentiment analysis as a case study. Our results demonstrate that EMO-Prompts effectively generates prompts capable of guiding the LLM to produce texts embodying two conflicting emotions simultaneously.
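To make the approach concrete, below is a minimal sketch of the kind of loop the abstract implies: a population of prompts is varied by an LLM and selected by Pareto dominance on two conflicting sentiment objectives. This is an illustrative assumption, not the paper's exact algorithm; `llm_mutate`, `llm_generate`, and `sentiment_scores` are hypothetical stubs standing in for real LLM and classifier calls, and the selection shown is plain Pareto filtering rather than a full EMO method such as NSGA-II or SMS-EMOA.

```python
import random

# --- Hypothetical stubs (not from the paper); replace with real calls. ---
def llm_mutate(prompt: str) -> str:
    """Ask an LLM to rephrase a prompt; stubbed here with a trivial tweak."""
    return prompt + " Use vivid emotional language."

def llm_generate(prompt: str) -> str:
    """Generate a text for the prompt; stubbed for illustration."""
    return "generated text for: " + prompt

def sentiment_scores(text: str) -> tuple:
    """Score two conflicting emotions (e.g., joy vs. sadness) in [0, 1].
    In practice this could be a classifier; stubbed with random values."""
    return (random.random(), random.random())

def dominates(a, b):
    """Pareto dominance (maximization): a is no worse in every objective
    and strictly better in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(scored):
    """Keep prompts whose objective vectors no other candidate dominates."""
    return [(p, s) for p, s in scored
            if not any(dominates(t, s) for _, t in scored if t != s)]

def emo_prompts(seed_prompts, generations=10, pop_size=8):
    population = list(seed_prompts)
    for _ in range(generations):
        # Variation: LLM-driven mutation of randomly chosen parents.
        offspring = [llm_mutate(random.choice(population)) for _ in range(pop_size)]
        # Evaluation: two conflicting sentiment objectives per candidate prompt.
        scored = [(p, sentiment_scores(llm_generate(p))) for p in population + offspring]
        # Multi-objective selection: keep the non-dominated prompts.
        population = [p for p, _ in pareto_front(scored)][:pop_size]
    return population

print(emo_prompts(["Write a short story that mixes joy and sadness."]))
```

In a real setup, `pop_size` non-dominated prompts survive each generation and the final population approximates a Pareto front of trade-offs between the two emotions, so a user can pick the balance they want.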