What is Transfer Learning?
Transfer learning, used in machine learning, is the reuse of a pre-trained model on a new problem. In transfer learning, a machine exploits the knowledge gained from a previous task to improve generalization on another. For example, after training a classifier to predict whether an image contains a backpack, you could reuse the knowledge it gained during training to recognize other objects. In short, transfer learning means storing knowledge gained while solving one problem and applying it to a different, but related, problem.
Unfortunately, when built from scratch, deep learning models require access to vast amounts of data and compute resources, a luxury that many can't afford. Moreover, training deep learning models takes a long time, which makes them unsuitable for use cases with tight time budgets.
Fortunately, transfer learning, the practice of transferring the knowledge gained by one trained AI model to another, can help solve these problems. Transfer learning is a technique that has risen to prominence in the AI and machine learning community over the past several decades. Prominent computer scientist Andrew Ng said in 2016 that transfer learning would be one of the major drivers of machine learning's commercial success.
"After supervised learning - Transfer Learning will be the next driver of ML commercial success." - Andrew NG, one of the world's foremost data scientists
Traditional vs Transfer Learning
As opposed to traditional machine learning, where a model is trained on a specific task and dataset, transfer learning leverages features and weights (among other variables) from previously trained models to train new models. Features are pieces of information extracted from a dataset to simplify a model's learning process, like the edges, shapes, and corners of signature boxes and typefaces in documents. Weights, on the other hand, determine how much a given piece of input data influences the model's output.
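To make this concrete, here is a minimal sketch, assuming PyTorch and torchvision, of reusing a pre-trained model's learned weights as a fixed feature extractor. The ResNet-18 backbone and the random input batch are illustrative stand-ins, not a prescribed setup:

```python
# A minimal sketch of reusing pre-trained features and weights, assuming
# PyTorch and torchvision. The ImageNet-trained ResNet-18 backbone already
# encodes generic features (edges, shapes, textures), so it can serve as a
# fixed feature extractor for new data.
import torch
import torchvision.models as models

# Load a model whose weights were learned on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Drop the ImageNet classifier head, keeping everything up to the pooled
# feature vector.
extractor = torch.nn.Sequential(*list(model.children())[:-1])

with torch.no_grad():
    batch = torch.randn(1, 3, 224, 224)     # stand-in for a real image batch
    features = extractor(batch).flatten(1)  # shape: (1, 512)
print(features.shape)  # torch.Size([1, 512])
```

These 512-dimensional feature vectors can then be fed to a lightweight classifier trained on the new task, sparing you from relearning low-level visual features from scratch.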
Models are trained in two stages in transfer learning. First, there's pretraining, where the model is trained on a benchmark dataset representing a range of categories. Next is fine-tuning, where the model is further trained on a target task of interest. The pretraining step helps the model learn general features that can be reused on the target task, boosting its accuracy.
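Below is a minimal sketch of this two-stage recipe under the same PyTorch/torchvision assumptions. Loading ImageNet weights stands in for the pretraining stage, and `target_loader` is a hypothetical DataLoader over the target task's labeled examples:

```python
# A minimal sketch of the fine-tuning stage, assuming PyTorch/torchvision.
# Loading ImageNet weights stands in for pretraining; the loop then adapts
# the model to a hypothetical 5-class target task.
import torch.nn as nn
import torch.optim as optim
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 5)  # new head for the target task

criterion = nn.CrossEntropyLoss()
# A small learning rate keeps the general pretrained features largely intact.
optimizer = optim.Adam(model.parameters(), lr=1e-4)

model.train()
for images, labels in target_loader:  # hypothetical target-task batches
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

A common variant is to freeze the pretrained layers and update only the new head, which trains faster and is less prone to overfitting on small target datasets.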
Types of Transfer Learning
There are several kinds of transfer learning, each with its own upsides: inductive, unsupervised, and transductive transfer learning. With inductive transfer learning, the source and target domains are the same, yet the source and target tasks are different. Unsupervised transfer learning involves different tasks in similar, but not identical, source and target domains, without labeled data. As for transductive transfer learning, the source and target tasks are similar, but the domains are different, and labeled data is available only in the source domain.