Puzzle Solving without Search or Human Knowledge: An Unnatural Language Approach

Published 7 Sep 2021 in cs.LG, cs.AI, and cs.CL | arXiv:2109.02797v1

Abstract: The application of Generative Pre-trained Transformer (GPT-2) to learn text-archived game notation provides a model environment for exploring sparse reward gameplay. The transformer architecture proves amenable to training on solved text archives describing mazes, Rubik's Cube, and Sudoku solvers. The method benefits from fine-tuning the transformer architecture to visualize plausible strategies derived outside any guidance from human heuristics or domain expertise. The large search space ($>10^{19}$) for the games provides a puzzle environment in which the solution has few intermediate rewards and a final move that solves the challenge.
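The abstract does not spell out how solved puzzles are presented to the model. One plausible sketch of the core idea, serializing each solved example from the archive as a plain-text record for causal-LM (GPT-2-style) fine-tuning, is below. The record format, delimiter tokens, and function names are illustrative assumptions, not the paper's actual scheme.

```python
# Sketch: convert one solved Rubik's Cube archive entry into a single
# text record suitable for fine-tuning a causal language model such as
# GPT-2. The "SCRAMBLE:/SOLVE:" layout is an assumed format.

def make_record(scramble_moves, solution_moves):
    """Serialize a (scramble, solution) pair as one training line."""
    scramble = " ".join(scramble_moves)
    solution = " ".join(solution_moves)
    return f"SCRAMBLE: {scramble} SOLVE: {solution} <|endoftext|>"

def make_prompt(scramble_moves):
    """At inference time, the model continues this prompt with moves."""
    return f"SCRAMBLE: {' '.join(scramble_moves)} SOLVE:"

if __name__ == "__main__":
    # A trivial 4-move scramble whose inverse is the solution.
    record = make_record(["R", "U", "R'", "U'"], ["U", "R", "U'", "R'"])
    print(record)
    print(make_prompt(["R", "U", "R'", "U'"]))
```

With a corpus of such lines, fine-tuning reduces to ordinary next-token prediction: the model sees the scramble as context and learns to emit a solving move sequence, with the only meaningful "reward" being whether the final state is solved.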

Citations (5)

Authors (2)
