Solving the QAP by Two-Stage Graph Pointer Networks and Reinforcement Learning (2404.00539v1)

Published 31 Mar 2024 in cs.LG

Abstract: The Quadratic Assignment Problem (QAP) is a practical combinatorial optimization problem that has been studied for many years. Since it is NP-hard, solving large QAP instances is challenging. Although heuristics can find semi-optimal solutions, their execution time increases significantly with problem size. Recently, solving combinatorial optimization problems with deep learning has attracted attention as a faster alternative to heuristics. Even with deep learning, however, solving large QAP instances remains challenging. In this paper, we propose a deep reinforcement learning model, the two-stage graph pointer network (GPN), for solving the QAP. The two-stage GPN builds on the GPN, which was originally proposed for the Euclidean Traveling Salesman Problem (TSP). We first extend the GPN to the general TSP, and then add new algorithms to that model for solving the QAP. Our experimental results show that the two-stage GPN provides semi-optimal solutions for benchmark instances from TSPLIB and QAPLIB.
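For reference, the sketch below shows the standard QAP objective that a solver such as the proposed two-stage GPN aims to minimize over permutations (assignments of facilities to locations). This is not code from the paper; the function name qap_cost and the random toy instance are illustrative assumptions, and the flow/distance matrix convention follows the usual QAPLIB formulation.

    import numpy as np

    def qap_cost(perm, flow, dist):
        # Standard QAP objective: sum over i, j of flow[i, j] * dist[perm[i], perm[j]],
        # where perm[i] is the location assigned to facility i. This is the quantity
        # a QAP solver (heuristic or learned) seeks to minimize.
        perm = np.asarray(perm)
        return float(np.sum(flow * dist[np.ix_(perm, perm)]))

    # Toy random instance for illustration only (not a QAPLIB benchmark).
    rng = np.random.default_rng(0)
    n = 5
    flow = rng.integers(0, 10, size=(n, n))
    dist = rng.integers(0, 10, size=(n, n))
    print(qap_cost(np.arange(n), flow, dist))         # identity assignment
    print(qap_cost(rng.permutation(n), flow, dist))   # a random permutation

A pointer-network-style solver would construct the permutation one element at a time and could use this cost (typically negated) as the reinforcement-learning reward; the exact reward design and decoding procedure of the two-stage GPN are specified in the paper itself.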




