A Q-learning approach to the continuous control problem of robot inverted pendulum balancing (2312.02649v1)
Abstract: This study evaluates the application of a discrete-action-space reinforcement learning method (Q-learning) to the continuous control problem of robot inverted pendulum balancing. To speed up the learning process and to overcome the technical difficulties of learning directly on the real robotic system, the learning phase is performed in a simulation environment. A mathematical model of the system dynamics is implemented, derived by curve fitting on data acquired from the real system. The proposed approach proved feasible: a real-world robot learned to balance an inverted pendulum using the policy trained in simulation. This study also demonstrates the importance of an accurate representation of the physical world in simulation for an efficient real-world implementation of reinforcement learning algorithms, even when a discrete-action-space algorithm is used to control a continuous action.
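The core idea of the abstract, applying tabular Q-learning with a discretized action set to a continuous balancing task trained in simulation, can be sketched as follows. This is a minimal illustration, not the paper's method: the pendulum dynamics here are a simple linearized stand-in (the paper instead fits its model to data from the real robot by curve fitting), and the state bins, action set, and reward are all assumptions chosen for the example.

```python
import numpy as np

# Hedged stand-in dynamics: a cart-driven inverted pendulum, Euler-integrated.
# The paper derives its model from measurements on the real system.
G, L, DT = 9.81, 0.5, 0.02  # gravity (m/s^2), pendulum length (m), time step (s)

def step(theta, omega, accel):
    """Advance pendulum angle/velocity one step given cart acceleration."""
    omega += (G / L * np.sin(theta) - accel / L * np.cos(theta)) * DT
    theta += omega * DT
    return theta, omega

# Discretize the continuous state (angle, angular velocity) and the action.
N_TH, N_OM = 21, 21
ACTIONS = np.linspace(-10.0, 10.0, 7)  # 7 discrete cart accelerations (m/s^2)

def encode(theta, omega):
    """Map a continuous state to a single table index."""
    i = int(np.clip((theta + 0.3) / 0.6 * (N_TH - 1), 0, N_TH - 1))
    j = int(np.clip((omega + 2.0) / 4.0 * (N_OM - 1), 0, N_OM - 1))
    return i * N_OM + j

def train(episodes=500, alpha=0.1, gamma=0.99, eps=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration in simulation."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((N_TH * N_OM, len(ACTIONS)))
    for _ in range(episodes):
        theta, omega = rng.uniform(-0.05, 0.05), 0.0  # start near upright
        s = encode(theta, omega)
        for _ in range(200):
            if rng.random() < eps:
                a = int(rng.integers(len(ACTIONS)))
            else:
                a = int(np.argmax(Q[s]))
            theta, omega = step(theta, omega, ACTIONS[a])
            done = abs(theta) > 0.3            # pendulum has fallen too far
            r = -1.0 if done else 1.0          # reward staying upright
            s2 = encode(theta, omega)
            target = r if done else r + gamma * np.max(Q[s2])
            Q[s, a] += alpha * (target - Q[s, a])  # Q-learning update
            s = s2
            if done:
                break
    return Q
```

After training in simulation, the greedy policy `np.argmax(Q[encode(theta, omega)])` would be the candidate for transfer to the real robot, which is where the paper's emphasis on an accurate simulated model becomes critical.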