
Python cracks the Flappy Bird game


Jun 01, 2021



This article is reproduced from the personal Zhihu column of Charles (Bai Lu).


Introduction

Yesterday, while browsing popular open-source deep learning projects on GitHub, I came across something interesting:

Using deep reinforcement learning (Deep Q-Learning) to crack the Flappy Bird game.


Related documents

Baidu web download link: https://pan.baidu.com/s/19LgDHq0V3IpE1K5sfuug2g

Password: tqus


References

The content is mainly referenced from the GitHub open source project:

Using Deep Q-Network to Learn How To Play Flappy Bird

link:

https://github.com/yenchenlin/DeepLearningFlappyBird


Introduction to the principle

This project applies the Deep Q-Learning algorithm from deep reinforcement learning and shows that it can be extended to play the Flappy Bird game. That is, the network is trained with a variant of Q-learning whose input is raw pixels and whose output is a value function estimating the expected return of each action.
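To make the Q-learning idea behind the project concrete, here is a minimal tabular sketch of the update rule. The toy state/action sizes and learning parameters below are illustrative; the project itself approximates Q with a convolutional network over raw pixels rather than a table.

```python
import numpy as np

n_states, n_actions = 5, 2  # hypothetical toy sizes, not the project's
alpha, gamma = 0.1, 0.99    # learning rate and discount factor

Q = np.zeros((n_states, n_actions))

def q_update(s, a, r, s_next):
    """One Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])

# One transition: in state 0, action 1 earned reward 1.0, landing in state 2.
q_update(0, 1, 1.0, 2)
print(Q[0, 1])  # 0.1
```

Deep Q-Learning replaces the table with a network, so the same update becomes a gradient step on the squared difference between Q(s, a) and the TD target.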

PS:

If you are interested in deep reinforcement learning, a post called Demystifying Deep Reinforcement Learning, which the original author highly recommends, is also available in the WeChat official-account documents.

Network architecture:

[Figure: network architecture of the Deep Q-Network]

Before entering the network, each frame is preprocessed as follows:

(1) Convert the image to grayscale;

(2) Resize the image to 80×80;

(3) Stack every 4 consecutive frames into an 80×80×4 input array.
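The three preprocessing steps above can be sketched as follows. This is a NumPy-only approximation (grayscale by channel mean, nearest-neighbour resize); the project itself uses OpenCV's cvtColor/resize and additionally thresholds the frame to a binary image. The 512×288 frame size is an assumption about the game window.

```python
import numpy as np

def preprocess(frame):
    """Grayscale a frame and resize it to 80x80 (nearest-neighbour sketch)."""
    gray = frame.mean(axis=2)            # (H, W) grayscale via channel mean
    h, w = gray.shape
    rows = np.arange(80) * h // 80       # nearest-neighbour row indices
    cols = np.arange(80) * w // 80       # nearest-neighbour column indices
    return gray[rows][:, cols]           # (80, 80)

# Stack the 4 most recent preprocessed frames into the 80x80x4 network input.
frames = [preprocess(np.zeros((512, 288, 3))) for _ in range(4)]
state = np.stack(frames, axis=2)
print(state.shape)  # (80, 80, 4)
```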

The network's final output is a 2×1 vector that determines the bird's next action (i.e., whether to press the screen and flap).

Test environment

Computer system: Win10

Python version: 3.5.4

Python-related third-party libraries:

TensorFlow_GPU version: 1.4.0

Pygame version: 1.9.3

OpenCV-Python version: 3.3.0

For configuration details, please refer to the relevant documentation online!
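A hedged sketch of installing the listed dependencies with pip. The version numbers come from the article above, but the exact pip package names and release strings are assumptions; adjust them to your platform.

```shell
# Versions per the article; package names/releases are assumptions.
pip install tensorflow-gpu==1.4.0
pip install pygame==1.9.3
pip install "opencv-python==3.3.0.*"
```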


Run the demo

Open a command-line window, change into the DeepLearningFlappyBird folder, then enter py -3.5 deep_q_network.py and press Enter to run:

[Figure: launching the demo from the command line]

The results are as follows:

[Figure: the trained agent playing Flappy Bird]

More references

(1) Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level Control through Deep Reinforcement Learning. Nature, 518:529-533, 2015.

(2) Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with Deep Reinforcement Learning. NIPS Deep Learning Workshop.

(3) Kevin Chen. Deep Reinforcement Learning for Flappy Bird. Report | YouTube result.

link:

https://youtu.be/9WKBzTUsPKc

(4)https://github.com/sourabhv/FlapPyBird

(5) https://github.com/asrivat1/DeepLearningVideoGames