TransferLearning

This project is part of the 'TensorFlow in Practice' specialization, where I've tried to improve performance using transfer learning.

When we have a relatively small dataset, a super-effective technique is transfer learning, where we build on a pre-trained model.

Such a model has been trained on an extremely large dataset, and we can transfer its weights, which were learned over hundreds of hours of training on multiple high-powered GPUs.

Many such models are open source, such as VGG-19 and Inception-v3. They were trained on millions of images with enormous computing power, which would be very expensive to replicate from scratch.

We use the Inception-v3 model in this project.

Transfer learning has become immensely popular because it considerably reduces training time and requires far less data to reach good performance.

Get the data (3000 total images)
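
A minimal sketch of this step, assuming the 3,000 images arrive as a zip archive that unpacks into per-class train/ and validation/ subdirectories (the archive name and directory layout are hypothetical; the README does not say where the data comes from):

```python
import os
import zipfile

# Hypothetical archive of the 3,000 images; the actual source isn't specified.
local_zip = 'images.zip'

with zipfile.ZipFile(local_zip, 'r') as zip_ref:
    zip_ref.extractall('data')

# Assumed layout: data/train/<class>/... and data/validation/<class>/...
train_dir = os.path.join('data', 'train')
validation_dir = os.path.join('data', 'validation')
```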


Import the Inception-v3 model


We use all the layers in the model except for the last fully connected layer, as it is specific to the 1,000-class ImageNet competition.
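
A sketch of these two steps in tf.keras; include_top=False is what drops the final fully connected layer, and the 150x150 input size is an assumption (the README does not state it):

```python
from tensorflow.keras.applications.inception_v3 import InceptionV3

# weights='imagenet' loads the pre-trained ImageNet weights;
# include_top=False drops the final, ImageNet-specific fully connected layer.
pre_trained_model = InceptionV3(
    input_shape=(150, 150, 3),  # assumed input size
    include_top=False,
    weights='imagenet',
)
```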


Make all the layers non-trainable. (Some layers could be unfrozen and fine-tuned to increase performance, but keep in mind that this may lead to overfitting.)
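
Freezing the base is a short loop over its layers:

```python
# Freeze the whole base so the pre-trained weights stay fixed during training.
for layer in pre_trained_model.layers:
    layer.trainable = False
```

To fine-tune instead, selected layers would simply be left trainable.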


We use binary_crossentropy as the loss function since we have two target classes (it's a binary classification problem). Our optimizer is RMSprop with a learning rate of 0.0001. (We can experiment with this; Adam and Adagrad would also work well.)
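
A sketch of the head and the compile step. The loss and optimizer match what is described above; the classifier head itself (Flatten, a 1024-unit ReLU layer, dropout, and a single sigmoid unit) is a typical choice and an assumption, not something this README specifies:

```python
from tensorflow.keras import layers, Model
from tensorflow.keras.optimizers import RMSprop

# Assumed classification head on top of the frozen base.
x = layers.Flatten()(pre_trained_model.output)
x = layers.Dense(1024, activation='relu')(x)
x = layers.Dropout(0.2)(x)
output = layers.Dense(1, activation='sigmoid')(x)  # single sigmoid unit for 2 classes

model = Model(pre_trained_model.input, output)

model.compile(
    optimizer=RMSprop(learning_rate=0.0001),
    loss='binary_crossentropy',
    metrics=['accuracy'],
)
```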


After rescaling the images and applying image augmentation, we flow them in batches of 20 using train_datagen and test_datagen. Details can be found in my previous post.
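
A sketch of the generators, reusing the directories from the data step; the rescale factor and batch size of 20 come from the text above, while the specific augmentation parameters are assumptions based on common practice:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Rescaling plus a few common augmentations (the exact settings are assumed).
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
)
test_datagen = ImageDataGenerator(rescale=1.0 / 255)  # no augmentation for validation

train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(150, 150),
    batch_size=20,
    class_mode='binary',
)
validation_generator = test_datagen.flow_from_directory(
    validation_dir,
    target_size=(150, 150),
    batch_size=20,
    class_mode='binary',
)
```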


We have 2,000 training images and 1,000 for validation.

Let’s Train
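
A sketch of the training call. With 2,000 training images and 1,000 validation images at a batch size of 20, that works out to 100 steps per epoch and 50 validation steps; the epoch count is an assumption:

```python
history = model.fit(
    train_generator,
    steps_per_epoch=100,   # 2,000 images / batch size 20
    epochs=20,             # assumed epoch count
    validation_data=validation_generator,
    validation_steps=50,   # 1,000 images / batch size 20
    verbose=2,
)
```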


After about 35 minutes, we were able to achieve an accuracy of 94%. A callback can easily be implemented to stop training once we reach a certain accuracy. This is the kind of result we were hoping for from transfer learning: building upon a pre-trained model in a custom application and reaching strong performance after training on just 2,000 images. Another approach to this problem would be to not use all the layers of Inception-v3; a sketch of that variant follows the result below.
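
The accuracy callback mentioned above could look like this sketch, which stops training once training accuracy crosses a chosen threshold (94% here, matching the reported result):

```python
import tensorflow as tf

class StopAtAccuracy(tf.keras.callbacks.Callback):
    """Stop training once training accuracy reaches a target value."""

    def on_epoch_end(self, epoch, logs=None):
        if logs and logs.get('accuracy', 0.0) >= 0.94:
            print('\nReached 94% accuracy, stopping training.')
            self.model.stop_training = True
```

It would be passed to training via model.fit(..., callbacks=[StopAtAccuracy()]).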


With that truncated approach, I achieved an accuracy of 96%.
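
One way to realize that truncated variant is to attach the head to an intermediate layer's output instead of the final one. 'mixed7' is a common cut point for Inception-v3, though the exact layer behind the 96% figure is an assumption:

```python
from tensorflow.keras import layers, Model
from tensorflow.keras.optimizers import RMSprop

# Cut the network at 'mixed7' (a 7x7x768 feature map for 150x150 inputs)
# and attach the same kind of head as before.
last_layer = pre_trained_model.get_layer('mixed7')

x = layers.Flatten()(last_layer.output)
x = layers.Dense(1024, activation='relu')(x)
output = layers.Dense(1, activation='sigmoid')(x)

truncated_model = Model(pre_trained_model.input, output)
truncated_model.compile(
    optimizer=RMSprop(learning_rate=0.0001),
    loss='binary_crossentropy',
    metrics=['accuracy'],
)
```

Earlier Inception layers capture more generic features, so cutting before the most ImageNet-specialized blocks can generalize better on a small custom dataset.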

