Evolutionary Multi-Objective Optimization of Large Language Model Prompts for Balancing Sentiments (2401.09862v1)

Published 18 Jan 2024 in cs.NE, cs.AI, cs.CL, and cs.LG

Abstract: The advent of LLMs such as ChatGPT has attracted considerable attention in various domains due to their remarkable performance and versatility. As the use of these models continues to grow, the importance of effective prompt engineering has come to the fore. Prompt optimization emerges as a crucial challenge, as it has a direct impact on model performance and the extraction of relevant information. Recently, evolutionary algorithms (EAs) have shown promise in addressing this issue, paving the way for novel optimization strategies. In this work, we propose an evolutionary multi-objective (EMO) approach specifically tailored for prompt optimization, called EMO-Prompts, using sentiment analysis as a case study and experimental target. Our results demonstrate that EMO-Prompts effectively generates prompts capable of guiding the LLM to produce texts embodying two conflicting emotions simultaneously.
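The abstract gives only a high-level description, so the following is a minimal sketch of what an evolutionary multi-objective prompt-optimization loop of this kind might look like. It is an assumption-laden illustration, not the paper's code: query_llm and sentiment_score are hypothetical stand-ins for an LLM client and a sentiment classifier, the two emotions are placeholder objectives, and the simple Pareto-front survivor selection stands in for whichever EMO selection scheme (e.g., NSGA-II- or SMS-EMOA-style) the authors actually use.

```python
# Minimal sketch of an evolutionary multi-objective prompt-optimization loop in the
# spirit of EMO-Prompts. This is NOT the authors' implementation: every function name,
# prompt template, and parameter below is an illustrative assumption.
import random

def query_llm(prompt: str) -> str:
    # Stand-in for a real LLM call (e.g., a chat-completion API); replace as needed.
    return f"generated text for: {prompt}"

def sentiment_score(text: str, emotion: str) -> float:
    # Stand-in for a sentiment classifier scoring how strongly `text` expresses
    # `emotion`; in practice this would be a trained model such as a DistilBERT head.
    return random.random()

def evaluate(prompt: str, emotions=("joy", "sadness")) -> tuple[float, float]:
    # Two conflicting objectives: intensity of each target emotion in the generated text.
    text = query_llm(prompt)
    return sentiment_score(text, emotions[0]), sentiment_score(text, emotions[1])

def mutate(prompt: str) -> str:
    # LLM-driven variation operator: ask the model itself to rewrite the prompt.
    return query_llm(f"Rewrite the following prompt while preserving its intent:\n{prompt}")

def dominates(a: tuple[float, float], b: tuple[float, float]) -> bool:
    # Pareto dominance when maximizing both objectives.
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(scored):
    # Keep only non-dominated (prompt, objectives) pairs.
    return [(p, f) for p, f in scored
            if not any(dominates(g, f) for _, g in scored)]

def emo_prompts(seed_prompts, generations=10, pop_size=20):
    population = list(seed_prompts)
    for _ in range(generations):
        offspring = [mutate(random.choice(population)) for _ in range(pop_size)]
        scored = [(p, evaluate(p)) for p in population + offspring]
        front = pareto_front(scored)
        # Survivor selection: keep the Pareto front, pad with random individuals if needed.
        survivors = [p for p, _ in front][:pop_size]
        while len(survivors) < pop_size:
            survivors.append(random.choice(population + offspring))
        population = survivors
    return pareto_front([(p, evaluate(p)) for p in population])

if __name__ == "__main__":
    seeds = ["Write a short story about a reunion.", "Describe a rainy afternoon."]
    for prompt, objectives in emo_prompts(seeds, generations=3, pop_size=6):
        print(objectives, prompt)
```

In this sketch the LLM doubles as the variation operator, a choice common in recent prompt-evolution work, and because the two sentiment objectives pull in opposite directions the result is a Pareto front of prompts trading off the two emotions rather than a single best prompt.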
