As humans, it comes naturally to us to adapt previously acquired knowledge to new tasks. That is how we recognize new objects, learn new languages, and develop new manual skills.
Computers, on the other hand, struggle to achieve such transfer of knowledge. For this reason, pattern recognition and other tasks related to artificial intelligence cannot be easily or effectively automated, and a great deal of effort goes into building problem-specific strategies.
Transfer Learning refers to the ideas and methods that tackle this issue, allowing problem-specific knowledge to be transferred to help solve similar problems.
In Computer Vision in particular, a common workflow (depicted above) consists of image acquisition, segmentation of objects of interest, feature extraction from the parts of the image corresponding to such objects, and the training of a machine learning model for prediction.
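The four-step workflow can be sketched as follows. This is a minimal illustration with a synthetic image, simple thresholding as segmentation, and two hand-picked features (mean intensity and area); the function names and thresholds are illustrative assumptions, not the actual pipeline used in this work.

```python
import numpy as np

def acquire_image(rng):
    """Simulate acquisition: a noisy image containing one bright square object."""
    img = rng.normal(0.1, 0.05, size=(32, 32))
    img[10:20, 10:20] += 0.8  # object of interest
    return img

def segment(img, threshold=0.5):
    """Segment objects of interest by simple intensity thresholding."""
    return img > threshold

def extract_features(img, mask):
    """Extract features from the segmented object: mean intensity and area."""
    return np.array([img[mask].mean(), mask.sum()], dtype=float)

rng = np.random.default_rng(0)
img = acquire_image(rng)
mask = segment(img)
features = extract_features(img, mask)
# `features` would then feed a machine learning model (the final step).
```

In a real application, segmentation and feature extraction are far more elaborate, but the structure of the pipeline is the same.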
For many applications, such as biomedical imaging, ensuring homogeneity over different acquisitions (first step above) is difficult. Differences in experimental setup and parameters propagate to the next steps of the process. As a result, features extracted from objects acquired under different experimental conditions cannot be guaranteed to come from a common distribution. Under these conditions, Machine Learning models cannot be safely reused for new occurrences of the general problem (third step above).
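The failure mode can be illustrated with a toy example: a change in acquisition gain and offset shifts the feature distribution, and a classifier fit on the original (source) features misclassifies the shifted (target) instances. The gain and offset values below are arbitrary assumptions chosen to make the shift visible.

```python
import numpy as np

rng = np.random.default_rng(1)

# Source domain: features of two classes under one acquisition setting.
src_a = rng.normal([1.0, 1.0], 0.1, size=(100, 2))
src_b = rng.normal([2.0, 2.0], 0.1, size=(100, 2))

# Target domain: class-a objects after a change in gain and offset.
gain, offset = 1.5, 0.8
tgt_a = gain * rng.normal([1.0, 1.0], 0.1, size=(100, 2)) + offset

# A nearest-mean classifier fit on the source domain...
mu_a, mu_b = src_a.mean(axis=0), src_b.mean(axis=0)

def predict(x):
    return "a" if np.linalg.norm(x - mu_a) < np.linalg.norm(x - mu_b) else "b"

# ...fails on the target: shifted class-a features now lie closer to mu_b.
acc = float(np.mean([predict(x) == "a" for x in tgt_a]))
```

The model itself is unchanged; only the feature distribution moved, yet accuracy collapses.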
Transfer Learning approaches
We address the knowledge transfer problem by adapting the extracted features, guided by structural information obtained from the images.
We incorporate knowledge of the domains by establishing pairwise relationships between similar instances in both domains, and finding transformations that explain the observed differences in feature space.
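As a minimal sketch of this idea, given pairwise correspondences between instances in the two domains, one can fit an affine transformation that maps source features onto their target counterparts by least squares. The specific affine model and the synthetic paired data below are illustrative assumptions, not the exact method described here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Paired instances: features of corresponding objects in source and target domains.
src = rng.normal(0.0, 1.0, size=(50, 2))
true_A = np.array([[1.4, 0.2], [0.0, 0.9]])   # unknown distortion between domains
true_b = np.array([0.5, -0.3])
tgt = src @ true_A.T + true_b + rng.normal(0.0, 0.01, size=(50, 2))

# Fit an affine map src -> tgt by least squares on the paired instances.
X = np.hstack([src, np.ones((len(src), 1))])   # append bias column
W, *_ = np.linalg.lstsq(X, tgt, rcond=None)    # (3, 2) matrix: linear part + bias

# Adapt source features into the target feature space.
adapted = X @ W
residual = float(np.abs(adapted - tgt).max())
```

Once such a transformation is estimated, a model trained on one domain can be applied to features mapped from the other.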
We apply our methods to synaptic junction detection in electron microscopy images of the brain. Our domain adaptation method achieves performance close to what could be obtained from annotated target data.