Transfer Learning Analysis


Transfer learning is a machine learning technique that reuses a convolutional neural network (CNN) already trained on one dataset and adapts, or transfers, it to a different dataset.

In general, there are two strategies for transfer learning: finetuning, which starts from the network pretrained on the base dataset and trains all of its layers on the target dataset; and freezing, which keeps all but the last layer frozen (their weights are not updated) and trains only the last layer. For this project, the base dataset is ImageNet, which contains 1.2 million images across 1,000 categories of animals and objects.
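The two strategies can be sketched in PyTorch. This is a minimal illustration, not the project's actual code: `SmallCNN` is a hypothetical stand-in for a pretrained backbone, and in practice you would load ImageNet weights (e.g. from torchvision) instead.

```python
import torch.nn as nn

# Hypothetical tiny CNN standing in for an ImageNet-pretrained backbone.
class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(8, num_classes)  # the "last layer"

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def prepare_for_transfer(model, strategy):
    """Freezing: only the last layer trains. Finetuning: all layers train."""
    if strategy == "freezing":
        for p in model.parameters():
            p.requires_grad = False          # freeze everything...
        for p in model.classifier.parameters():
            p.requires_grad = True           # ...except the last layer
    elif strategy == "finetuning":
        for p in model.parameters():
            p.requires_grad = True           # all weights are updated
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    return model

model = prepare_for_transfer(SmallCNN(), "freezing")
trainable = [name for name, p in model.named_parameters() if p.requires_grad]
```

With `"freezing"`, only `classifier.weight` and `classifier.bias` remain trainable, which is why that strategy trains faster: gradients are computed and applied for far fewer parameters.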

We experimented with four datasets, which were converted to grayscale to understand how the CNNs behave in this color space. We built a pipeline to programmatically finetune and retrain the networks on any dataset.
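The grayscale conversion step of such a pipeline could look like the following sketch. This is an assumption about the preprocessing, not the project's actual code; it uses the standard ITU-R BT.601 luma weights, and replicates the single channel three times because ImageNet-pretrained CNNs expect 3-channel input.

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an HxWx3 RGB array to HxW grayscale (ITU-R BT.601 luma weights)."""
    weights = np.array([0.299, 0.587, 0.114])
    return rgb @ weights

# Toy 2x2 pure-red image as a stand-in for a dataset sample.
rgb = np.zeros((2, 2, 3))
rgb[..., 0] = 1.0

gray = to_grayscale(rgb)
# Replicate the channel so the image still fits an RGB-pretrained network.
gray3 = np.repeat(gray[..., None], 3, axis=-1)
```

In a real pipeline this transform would be applied per-sample in the data loader before normalization.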

Average training time: x2.5 faster (freezing over finetuning)
Accuracy (grayscale): +56% (finetuning over freezing)
Accuracy (similar domain): +2% (finetuning over freezing)
Accuracy (dissimilar domain): +39% (finetuning over freezing)
