
The Importance of Directional Feedback for LLM-based Optimizers (2405.16434v2)

Published 26 May 2024 in cs.AI, cs.CL, and cs.NE

Abstract: We study the potential of using LLMs as interactive optimizers for solving maximization problems in a text space using natural language and numerical feedback. Inspired by the classical optimization literature, we classify natural language feedback into directional and non-directional, where the former is a generalization of first-order feedback to the natural language space. We find that LLMs are especially capable of optimization when they are provided with directional feedback. Based on this insight, we design a new LLM-based optimizer that synthesizes directional feedback from the historical optimization trace to achieve reliable improvement over iterations. Empirically, we show that our LLM-based optimizer is more stable and efficient in solving optimization problems, from maximizing mathematical functions to optimizing prompts for writing poems, compared with existing techniques.
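
To make the setup concrete, here is a minimal sketch of an optimization loop that synthesizes directional feedback from the historical (candidate, score) trace. This is not the paper's implementation: the names `query_llm`, `objective`, and `synthesize_directional_feedback`, as well as the prompt wording, are illustrative assumptions.

```python
# Minimal sketch of an LLM-driven optimization loop that turns the raw
# (candidate, score) history into a directional hint, in the spirit of the
# abstract above. `query_llm` and `objective` are user-supplied placeholders,
# not an API from the paper.
from typing import Callable, List, Tuple


def synthesize_directional_feedback(history: List[Tuple[str, float]]) -> str:
    """Summarize the trace as a directional hint: a textual analogue of
    first-order information (which way the last change moved the score)."""
    if len(history) < 2:
        return "No history yet; propose any reasonable starting change."
    (prev, prev_score), (curr, curr_score) = history[-2], history[-1]
    if curr_score > prev_score:
        hint = "the last edit improved the score; push further in that direction"
    else:
        hint = "the last edit lowered the score; move away from that kind of change"
    return (
        f"Previous candidate (score {prev_score:.3f}):\n{prev}\n"
        f"Latest candidate (score {curr_score:.3f}):\n{curr}\n"
        f"Directional hint: {hint}."
    )


def optimize(
    query_llm: Callable[[str], str],    # placeholder: prompt -> proposed text
    objective: Callable[[str], float],  # numerical feedback on a candidate
    initial: str,
    iterations: int = 10,
) -> str:
    """Maximize `objective` over text candidates using synthesized feedback."""
    history: List[Tuple[str, float]] = [(initial, objective(initial))]
    for _ in range(iterations):
        prompt = (
            "You are optimizing a piece of text to maximize a score.\n"
            f"{synthesize_directional_feedback(history)}\n"
            "Propose a single improved candidate and output only that text."
        )
        candidate = query_llm(prompt)
        history.append((candidate, objective(candidate)))
    # Return the best candidate observed over the whole trace.
    return max(history, key=lambda item: item[1])[0]
```

A fuller instantiation would presumably have an LLM read the entire trace and write richer natural-language directions rather than the simple improved/worsened comparison above, but the loop structure is the same.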
