First published: 2018/10/04. Abstract: A central goal of unsupervised learning is to acquire representations from
unlabeled data or experience that can be used for more effective learning of
downstream tasks from modest amounts of labeled data. Many prior unsupervised
learning works aim to do so by developing proxy objectives based on
reconstruction, disentanglement, prediction, and other metrics. Instead, we
develop an unsupervised learning method that explicitly optimizes for the
ability to learn a variety of tasks from small amounts of data. To do so, we
construct tasks from unlabeled data in an automatic way and run meta-learning
over the constructed tasks. Surprisingly, we find that, when integrated with
meta-learning, relatively simple mechanisms for task design, such as clustering
unsupervised representations, lead to good performance on a variety of
downstream tasks. Our experiments across four image datasets indicate that our
unsupervised meta-learning approach acquires a learning algorithm without any
labeled data that is applicable to a wide range of downstream classification
tasks, improving upon the representation learned by four prior unsupervised learning methods.
What is stopping us from applying meta-learning to new tasks? Where do the tasks come from? Designing task distributions by hand is laborious, so we should learn tasks automatically!
Unsupervised Learning via Meta-Learning: The idea is to use a distance metric in an out-of-the-box unsupervised embedding space (produced by BiGAN/ALI or DeepCluster) to construct tasks without labels. Clustering the embedded points (e.g. with randomized k-means) defines pseudo-classes; you can then sample few-shot tasks of 2 or 3 such classes and meta-train a model on them.
Where does the extra information come from? The metric space used for k-means asserts specific distances between points, implicitly grouping similar examples into the same pseudo-class. The intuition for why this works is that meta-learning over such tasks yields a model initialization that is useful for downstream tasks.
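The task-construction step above can be sketched in a few lines. This is a minimal illustration, not the paper's exact pipeline: the random array stands in for BiGAN/ALI/DeepCluster embeddings, the k-means routine is a bare-bones stand-in, and the function names (`kmeans`, `sample_task`) and parameters are hypothetical choices for the sketch.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    # Minimal k-means over embeddings (illustrative stand-in for the
    # clustering step; the paper uses scaled/randomized k-means runs).
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # squared Euclidean distance of every point to every center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels

def sample_task(X, labels, n_way=2, k_shot=5, seed=0):
    # Treat n_way clusters as pseudo-classes and draw k_shot examples
    # from each, yielding one few-shot classification task.
    rng = np.random.default_rng(seed)
    valid = [c for c in np.unique(labels) if (labels == c).sum() >= k_shot]
    classes = rng.choice(valid, size=n_way, replace=False)
    xs, ys = [], []
    for y, c in enumerate(classes):
        idx = rng.choice(np.flatnonzero(labels == c), size=k_shot, replace=False)
        xs.append(X[idx])
        ys.append(np.full(k_shot, y))
    return np.concatenate(xs), np.concatenate(ys)

# Random "embeddings" standing in for an unsupervised embedding space.
X = np.random.default_rng(1).normal(size=(200, 16))
labels = kmeans(X, k=10)
support_x, support_y = sample_task(X, labels, n_way=2, k_shot=5)
print(support_x.shape, support_y.shape)  # (10, 16) (10,)
```

Tasks sampled this way would then be fed to a meta-learner (e.g. MAML or a prototypical network) exactly as if the pseudo-class labels were real ones.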
This summary was written with the help of Chelsea Finn.