Puzzle Solving without Search or Human Knowledge: An Unnatural Language Approach

(2109.02797)
Published Sep 7, 2021 in cs.LG, cs.AI, and cs.CL

Abstract

The application of Generative Pre-trained Transformer (GPT-2) to learn text-archived game notation provides a model environment for exploring sparse reward gameplay. The transformer architecture proves amenable to training on solved text archives describing mazes, Rubik's Cube, and Sudoku solvers. The method benefits from fine-tuning the transformer architecture to visualize plausible strategies derived outside any guidance from human heuristics or domain expertise. The large search space ($>10^{19}$) for the games provides a puzzle environment in which the solution has few intermediate rewards and a final move that solves the challenge.
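The paper itself does not include code here, but the approach the abstract describes, fine-tuning GPT-2 on text archives of solved puzzles and then sampling candidate solutions, can be sketched as below. This is a minimal illustration only: the Hugging Face `transformers` library, the file name `solved_games.txt`, the hyperparameters, and the `SCRAMBLE:` prompt format are all assumptions for the sketch, not the authors' actual setup.

```python
# Minimal sketch (assumed setup, not the paper's code): fine-tune GPT-2 on a
# plain-text archive of solved game notations, one solution sequence per line.
from transformers import (
    GPT2LMHeadModel,
    GPT2TokenizerFast,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2")

# `solved_games.txt` is a hypothetical file: each line holds one solved game,
# e.g. a Rubik's Cube scramble followed by its move-sequence solution.
dataset = load_dataset("text", data_files={"train": "solved_games.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# Standard causal language-modeling objective: predict the next token of the
# solution sequence, with no search and no hand-coded domain heuristics.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="gpt2-puzzles",
        num_train_epochs=3,              # illustrative, not the paper's setting
        per_device_train_batch_size=8,   # illustrative, not the paper's setting
    ),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()

# Sample a candidate solution by prompting with a puzzle's start state.
# The "SCRAMBLE: ... |" separator is an assumed notation convention.
prompt = tokenizer("SCRAMBLE: R U R' U' |", return_tensors="pt")
out = model.generate(**prompt, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(out[0]))
```

The sparse-reward character of the setting shows up in this framing: the model only ever sees complete solved sequences, so there is no intermediate reward signal during generation, and a sampled sequence either reaches the solved final state or it does not.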

