This project is an image classification application built with TensorFlow and Keras. It uses a Convolutional Neural Network (CNN) to classify images of hand gestures from the game Rock-Paper-Scissors.
The dataset for this project was obtained from Google APIs, in this and this. You can also get it from Kaggle. Rock Paper Scissors is a dataset of 2,892 images of diverse hands, varying in race, age, and gender, posed as Rock, Paper, or Scissors and labelled as such. All of the images were generated using CGI techniques, as an experiment in determining whether a CGI-based dataset can be used to classify real images. There are 2,520 images in the training set and 372 images in the test set.
Note that every image is posed against a white background. Each image is 300×300 pixels in 24-bit color.
- Import libraries
- Download and extract the dataset
- Store the training and validation sets in variables
- Pre-process the data using image augmentation
- Prepare the training data
- Build the model architecture with a CNN
- Create callbacks
- Evaluate the model
- Plot accuracy and loss
- Predict an image
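The download-and-extract step can be sketched with the standard library alone. The dataset ships as a .zip archive; the snippet below demonstrates the extraction logic on a tiny in-memory archive (the file names inside it are stand-ins, not the real dataset contents), so it runs without a network connection.

```python
import io
import os
import tempfile
import zipfile

def extract_archive(zip_bytes: bytes, dest: str) -> list:
    """Extract a downloaded .zip archive into dest and return its member names."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        zf.extractall(dest)
        return zf.namelist()

# Build a tiny stand-in archive that mimics the class-folder layout of the dataset.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("rps/rock/rock01.png", b"fake image bytes")
    zf.writestr("rps/paper/paper01.png", b"fake image bytes")

dest = tempfile.mkdtemp()
names = extract_archive(buf.getvalue(), dest)
print(sorted(names))
print(os.path.exists(os.path.join(dest, "rps", "rock", "rock01.png")))  # True
```

For the real dataset you would first fetch the archive bytes (e.g. with `urllib.request`) from the download links above, then pass them to the same extraction helper.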
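The data-storage, augmentation, and training-data steps are typically handled together with Keras's `ImageDataGenerator`. The sketch below creates a small stand-in dataset of random images on disk (the real code would point at the extracted dataset folder instead); the specific augmentation values and the 150×150 target size are illustrative assumptions, not the author's original settings.

```python
import os
import tempfile

import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Stand-in dataset: random 300x300 RGB images in one folder per class,
# mirroring the directory layout that flow_from_directory expects.
root = tempfile.mkdtemp()
for label in ("rock", "paper", "scissors"):
    os.makedirs(os.path.join(root, label))
    for i in range(5):
        pixels = tf.cast(tf.random.uniform((300, 300, 3), maxval=256, dtype=tf.int32), tf.uint8)
        tf.io.write_file(os.path.join(root, label, f"{label}{i}.png"), tf.io.encode_png(pixels))

# One generator rescales and augments; validation_split reserves a fraction
# of each class folder for the validation subset.
datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=20,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode="nearest",
    validation_split=0.4,
)
train_gen = datagen.flow_from_directory(
    root, target_size=(150, 150), batch_size=4,
    class_mode="categorical", subset="training")
val_gen = datagen.flow_from_directory(
    root, target_size=(150, 150), batch_size=4,
    class_mode="categorical", subset="validation")

images, labels = next(train_gen)
print(images.shape, labels.shape)  # batches of 150x150 RGB images with one-hot labels
```

Because augmentation is applied on the fly, each epoch sees slightly different rotations, shears, and flips of the same underlying images, which helps the CNN generalize.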
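The CNN architecture step might look like the following sequential stack of convolution and pooling layers; the layer counts and filter sizes here are a plausible sketch, not the author's exact configuration. The final Dense layer has three softmax units, one per class.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Illustrative CNN: stacked Conv2D + MaxPooling2D blocks extract features,
# then Flatten + Dense layers classify into the three gesture classes.
model = models.Sequential([
    layers.Input(shape=(150, 150, 3)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D(2, 2),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D(2, 2),
    layers.Conv2D(128, (3, 3), activation="relu"),
    layers.MaxPooling2D(2, 2),
    layers.Flatten(),
    layers.Dropout(0.5),              # regularization against overfitting
    layers.Dense(512, activation="relu"),
    layers.Dense(3, activation="softmax"),  # rock, paper, scissors
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

`categorical_crossentropy` matches the one-hot labels produced by `class_mode="categorical"` in the data generators.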
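One common way to implement the callbacks step is a custom callback that stops training once accuracy crosses a threshold; the 97% cutoff below is an assumed value chosen to match the reported results, not necessarily the author's.

```python
import tensorflow as tf

class StopAtAccuracy(tf.keras.callbacks.Callback):
    """Stop training when training accuracy reaches the given target."""

    def __init__(self, target=0.97):
        super().__init__()
        self.target = target

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        if logs.get("accuracy", 0.0) >= self.target:
            print(f"\nReached {self.target:.0%} accuracy, stopping training.")
            self.model.stop_training = True

# The callback is passed to fit(), e.g.:
#   model.fit(train_gen, validation_data=val_gen, epochs=25,
#             callbacks=[StopAtAccuracy(0.97)])
callback = StopAtAccuracy(0.97)
```

Keras built-ins such as `EarlyStopping` or `ModelCheckpoint` are drop-in alternatives if you prefer monitoring validation loss instead.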
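The evaluation and plotting steps read the metrics recorded by `model.fit()`. The `history` dict below is a stand-in with the same shape as `model.fit(...).history` (the values are placeholders, not the project's real curves), so the plotting code runs on its own.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the script also runs without a display
import matplotlib.pyplot as plt

# Stand-in for model.fit(...).history
history = {
    "accuracy":     [0.60, 0.85, 0.95, 0.98],
    "val_accuracy": [0.55, 0.80, 0.93, 0.98],
    "loss":         [1.00, 0.50, 0.20, 0.08],
    "val_loss":     [1.10, 0.60, 0.25, 0.09],
}

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(history["accuracy"], label="train")
ax1.plot(history["val_accuracy"], label="validation")
ax1.set_title("Accuracy")
ax1.set_xlabel("epoch")
ax1.legend()
ax2.plot(history["loss"], label="train")
ax2.plot(history["val_loss"], label="validation")
ax2.set_title("Loss")
ax2.set_xlabel("epoch")
ax2.legend()
fig.savefig("training_curves.png")
```

Plotting training and validation curves side by side makes overfitting easy to spot: a widening gap between the two accuracy lines means the model is memorizing the training set.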
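Finally, the prediction step rescales a single image to the model's input shape and maps the argmax of the softmax output back to a class name. The model below is a tiny untrained stand-in that only illustrates the call sequence; in the real project you would use the trained CNN and load the image from disk with `tf.keras.utils.load_img`.

```python
import numpy as np
import tensorflow as tf

# Keras's flow_from_directory assigns class indices in alphabetical folder order.
class_names = ["paper", "rock", "scissors"]

# Untrained stand-in model with a 3-way softmax head.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(150, 150, 3)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(3, activation="softmax"),
])

img = np.random.randint(0, 256, size=(150, 150, 3)).astype("float32")
batch = np.expand_dims(img / 255.0, axis=0)  # rescale and add a batch axis
probs = model.predict(batch)
print(class_names[int(np.argmax(probs[0]))])
```

The `img / 255.0` rescaling must match the `rescale=1.0 / 255` used by the training generator, otherwise the prediction inputs are on a different scale than the model was trained on.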
We achieved 97.66% accuracy on the training set and 97.85% accuracy on the validation set.