The Magnetic Tower of Hanoi (1003.0225v2)

Published 28 Feb 2010 in math.CO and cs.DM

Abstract: In this work I study a modified Tower of Hanoi puzzle, which I term Magnetic Tower of Hanoi (MToH). The original Tower of Hanoi puzzle, invented by the French mathematician Edouard Lucas in 1883, spans "base 2". That is - the number of moves of disk number k is 2^(k-1), and the total number of moves required to solve the puzzle with N disks is 2^N - 1. In the MToH puzzle, each disk has two distinct-color sides, and disks must be flipped and placed so that no sides of the same color meet. I show here that the MToH puzzle spans "base 3" - the number of moves required to solve an N+1 disk puzzle is essentially three times larger than the number of moves required to solve an N disk puzzle. The MToH comes in 3 flavors which differ in the rules for placing a disk on a free post and therefore differ in the possible evolutions of the Tower states towards a puzzle solution. I analyze here algorithms for minimizing the number of steps required to solve the MToH puzzle in its different versions. Thus, while the colorful Magnetic Tower of Hanoi puzzle is rather challenging, its inherent freedom nurtures mathematics with remarkable elegance.
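
As a quick illustration of the "base 2" count quoted in the abstract (not the paper's MToH algorithm), the following minimal Python sketch enumerates the moves of the classic puzzle; the function name hanoi_moves and the post labels A/B/C are illustrative choices, not taken from the paper. The move list it returns has length 2^N - 1, and disk k alone appears 2^(k-1) times.

def hanoi_moves(n, source="A", target="C", spare="B", moves=None):
    """Classic Tower of Hanoi: move n disks from source to target.

    Returns the list of moves; its length is 2**n - 1, the "base 2"
    count quoted in the abstract (disk k alone moves 2**(k-1) times).
    """
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi_moves(n - 1, source, spare, target, moves)   # clear the n-1 smaller disks
    moves.append((n, source, target))                  # move the largest remaining disk
    hanoi_moves(n - 1, spare, target, source, moves)   # restack the smaller disks on top
    return moves

if __name__ == "__main__":
    counts = [len(hanoi_moves(n)) for n in range(1, 8)]
    print(counts)  # 1, 3, 7, 15, 31, 63, 127 = 2**n - 1

By contrast, the abstract states that the magnetic variant spans "base 3": the move count for N+1 disks is essentially three times that for N disks, so the analogous count grows roughly as 3^N rather than 2^N.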

Citations (1)

Authors (1)
