Does a neural network know when it doesn’t know?

If you are keeping up with data science and machine learning, you probably know that in recent years, deep neural networks have revolutionized artificial intelligence and computer vision. Ever since AlexNet won the ImageNet challenge in 2012 by a large margin, deep neural nets have conquered many previously unsolvable tasks. Neural networks are mind-blowing. They can learn how to turn winter landscapes into summer ones, put zebra stripes on a horse, learn semantic representations of words with essentially no supervision, generate photorealistic images from sketches, and pull off many more amazing feats. The technology has advanced so far that basically anyone with a notebook can build neural network architectures capable of previously unattainable things. Many open source deep learning frameworks, such as TensorFlow and PyTorch, are available, bringing this amazing technology within arm’s reach.

Every morning, dozens of new preprints appear on arXiv, beating the state of the art through constant innovation. Besides improving performance, however, there is an equally significant area of deep learning research concerned with understanding how neural networks perceive the world and how a model can generalize beyond what it knows. In real-life applications, the latter is a crucial question. Consider the following situation. You are building an application which classifies cars by manufacturer and model. But what if a new model is released with a completely novel design? The neural network has never seen such an instance during training. What is the right answer in this case?

Recognizing the unknown does not come by default for neural networks. This problem is called open set recognition, and there are several methods to tackle it. In this post, I will give a brief overview of them.

Confidently wrong

Before diving into the details, let’s consider a toy example. Suppose we have a very simple binary classification problem, for which we train a classifier. The decision boundary is learned, the model generalizes well to our test data, so we are happy.

After the model is deployed, we begin to see something like this.

Are we still happy? Well, probably not so much. Looking at the cluster of points on the right, we, the human experts, might suspect it is a new class. Maybe it is not a class we are interested in, or maybe it is important. But how does the neural network perceive it? Since the new instances fall relatively far from the decision boundary, they are classified as orange with high confidence. In this case, however, our classifier is confidently wrong.

To detect these situations, additional methods need to be employed. Essentially, the network has only limited tools for recognizing the new class: a feature transform, a Softmax function, and a promise that there are only two classes. How can we use these to solve our problem? A naive approach is to threshold the Softmax probabilities: if the probability of the predicted class is below a given threshold, say 0.5, we reject the item as unknown. As we saw above, this might not work in many circumstances, because a neural network can make false predictions with high confidence.
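As a baseline, such a rejection rule takes only a few lines. Below is a minimal sketch in PyTorch; the classifier `model` and the input batch `x` are placeholders for whatever network and data you have.

```python
import torch
import torch.nn.functional as F

def predict_with_rejection(model, x, threshold=0.5):
    """Naive open set baseline: reject when the top Softmax probability is low."""
    model.eval()
    with torch.no_grad():
        logits = model(x)                       # shape: (batch, num_classes)
        probs = F.softmax(logits, dim=1)
        confidence, predicted = probs.max(dim=1)
    predicted[confidence < threshold] = -1      # -1 marks "unknown"
    return predicted, confidence
```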

Looking at the context: the OpenMax method

The prediction probabilities provided by our network may be wrong, but looking at the distribution of the points, it seems quite likely that there is a new class. So let’s take a look at the activation vectors! For instance, this is how the distribution of the activation scores in the above model would look.

Blue shows the data we have seen during training, red shows the strange new instances encountered after deployment. From the distribution of the activation vector values, we can tell whether a new data point is novel in terms of knowledge. This is the basic principle of the OpenMax method, developed by Abhijit Bendale and Terrance E. Boult in their CVPR 2016 paper Towards Open Set Deep Networks. In their experiments, they took an ImageNet-pretrained model, fed it with real, fooling and open set images (that is, images from classes unseen during training), then examined the activation vector patterns. Their fundamental discovery was that these patterns can be used to detect what the neural network doesn’t know.

Activation patterns of the Softmax layer. Source: Abhijit Bendale and Terrance E. Boult, Towards Open Set Deep Networks

On the left, you can see heatmaps of the activation vectors of the trained model for different input images. Their method for finding the outliers is the following.

1) For each class, fit a Weibull distribution to the activation scores using the correctly classified training examples for said class. 
2) Add a new “unknown” class to the activation vectors. 
3) Transform the activation scores using the parameters of the Weibull distributions. These are called OpenMax scores. 
4) For a new image, reject as unknown or accept as known based on the OpenMax scores.

The OpenMax algorithm. Source: Abhijit Bendale and Terrance E. Boult, Towards Open Set Deep Networks
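To make the recipe concrete, here is a rough sketch of the idea. It is not the authors’ implementation: the original OpenMax relies on the libMR library, works with per-class tail fits and recalibrates only the top classes, while the simplified version below uses `scipy.stats.weibull_min`, revises every class, and assumes the labels are the integer indices of the activation vector. All function and variable names are mine.

```python
import numpy as np
from scipy.stats import weibull_min

def fit_class_weibulls(activations, labels, predictions, tail_size=20):
    """Step 1: per class, fit a Weibull to the largest distances between correctly
    classified activation vectors and the class mean activation vector (MAV)."""
    mavs, weibulls = {}, {}
    for c in np.unique(labels):
        correct = activations[(labels == c) & (predictions == c)]
        mav = correct.mean(axis=0)
        dists = np.linalg.norm(correct - mav, axis=1)
        tail = np.sort(dists)[-tail_size:]                # largest distances only
        mavs[c], weibulls[c] = mav, weibull_min.fit(tail, floc=0)
    return mavs, weibulls

def openmax_scores(v, mavs, weibulls):
    """Steps 2-3: dampen each class activation by its Weibull CDF value and
    collect the removed mass in an extra "unknown" entry."""
    revised, unknown = [], 0.0
    for c in sorted(mavs):
        shape, loc, scale = weibulls[c]
        w = weibull_min.cdf(np.linalg.norm(v - mavs[c]), shape, loc, scale)
        revised.append(v[c] * (1.0 - w))
        unknown += v[c] * w
    logits = np.array(revised + [unknown])
    scores = np.exp(logits - logits.max())                # softmax over revised scores
    return scores / scores.sum()

# Step 4: reject if the last ("unknown") entry wins, or if the top score is too low.
```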

With this method, they were able to improve open set detection accuracy compared to the naive method based on thresholding Softmax probabilities.

OpenMax performance. Source: Abhijit Bendale and Terrance E. Boult, Towards Open Set Deep Networks

So, OpenMax is essentially an alternative final layer for a neural network, replacing the good old Softmax. However, this layer is not trainable! Thus, it won’t make your neural network smarter in terms of open set recognition; it just uses the network’s predictions in a more clever way. This seems like a missed opportunity. Is there a way to train a network to recognize the unknown?

Looking at the learned representation: open set recognition in the embedding space

Let’s dive into how exactly a neural network sees the data! For simplicity, let’s take a look at LeNet-5, the classical architecture for character recognition, presented in the game-changing paper Gradient-Based Learning Applied to Document Recognition by Yann LeCun and his coworkers.

The LeNet architecture. Source: Yann LeCun et al., Gradient-Based Learning Applied to Document Recognition

As you can see, the input to the network is the raw data itself. It can be thought of as a vector in a really high dimensional space (one dimension per pixel of the 32 × 32 input in our case). If you could see all the vectors of our training dataset in this high dimensional space, the subsets corresponding to each class would probably look pretty wild. Based on this representation, there is no easy way to distinguish between classes. This is exactly what a neural network sees initially. However, with each layer, it successively transforms the data into more and more tangible representations, ultimately outputting a very simple one: a low dimensional simplex (the generalization of a triangle to higher dimensions), with each vertex corresponding to a class.

For our purposes, the most interesting representation is the one before the last. The last representation, the output, is slightly artificial, because it is constrained to a space with as many dimensions as there are classes. As we saw previously, there is “no room” for the unknown in that representation. The output of the penultimate layer is very different, however! It is often called the embedding, and the embedding vectors represent a high level feature representation of the data. In this embedding space, every class should ideally form a cluster of its own, separated from the other classes. Where would you put unknowns in this space? Well, if each class is indeed represented by a cluster, an open set example should lie far away from every known cluster.
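To make this concrete, here is a tiny, purely illustrative PyTorch model (not LeNet-5) that exposes its penultimate representation; the two-dimensional embedding size is chosen only so the clusters could be plotted.

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """A toy classifier whose 'embed' part plays the role of the penultimate layer."""
    def __init__(self, in_dim=32 * 32, embed_dim=2, num_classes=10):
        super().__init__()
        self.embed = nn.Sequential(nn.Flatten(), nn.Linear(in_dim, embed_dim), nn.ReLU())
        self.head = nn.Linear(embed_dim, num_classes)   # one output dimension per class

    def forward(self, x):
        z = self.embed(x)        # the embedding: a high level feature representation
        return self.head(z), z   # return both the logits and the embedding

model = TinyNet()
x = torch.randn(8, 1, 32, 32)    # a dummy batch of 32 x 32 "images"
logits, z = model(x)             # z lives in the embedding space discussed above
```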

Following this line of thought, Hassen and Chan developed a method in their paper Learning a Neural-network-based Representation for Open Set Recognition which trains the network to achieve exactly this. They introduce a new loss that pushes the clusters further apart from each other while squeezing each cluster tighter:

The ii-loss. Source: Mehadi Hassen and Philip K. Chan, Learning a Neural-network-based Representation for Open Set Recognition
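Written from the description above (squeeze each class around its mean, push the closest pair of class means apart), a minimal PyTorch version of an ii-loss-style objective could look like the sketch below. It is my paraphrase of the idea, not the authors’ reference implementation, and it assumes every class appears in the batch.

```python
import torch

def ii_loss(embeddings, labels, num_classes):
    """Intra-class spread minus inter-class separation, computed on a batch."""
    means = torch.stack([embeddings[labels == c].mean(dim=0)
                         for c in range(num_classes)])
    # Intra-class spread: mean squared distance of each embedding to its class mean.
    intra_spread = ((embeddings - means[labels]) ** 2).sum(dim=1).mean()
    # Inter-class separation: smallest squared distance between two class means.
    pair_dists = torch.cdist(means, means) ** 2
    pair_dists = pair_dists + torch.eye(num_classes, device=embeddings.device) * 1e12
    inter_sep = pair_dists.min()
    # Minimizing this squeezes the clusters and pushes the closest pair apart.
    return intra_spread - inter_sep
```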

The ii-loss can be attached to the embedding layer, giving rise to the following setups.

ii-loss setups. Source: Mehadi Hassen and Philip K. Chan, Learning a Neural-network-based Representation for Open Set Recognition

Open set recognition in the embedding space is simple in principle: just classify the points falling outside the clusters as unknown (a short sketch of such a rule follows below). This method was an improvement over OpenMax; however, it still misses a very important point: it doesn’t explicitly teach the network to recognize the unknown. Could we obtain better open set recognition performance by doing that?
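Here is that rejection rule sketched out; class_means holds the per-class mean embeddings computed on the training data, and threshold is a free parameter, picked for example as an outlier percentile of the training distances.

```python
import torch

def reject_by_distance(z, class_means, threshold):
    """Assign each embedding to its nearest class mean, or to -1 ("unknown") if too far."""
    dists = torch.cdist(z, class_means)      # shape: (batch, num_classes)
    min_dist, predicted = dists.min(dim=1)
    predicted[min_dist > threshold] = -1
    return predicted
```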

Teaching a neural network to know what it doesn’t know (for real)

We saw in the very first example that uncertainty is not a good way to recognize open set examples: a network can classify an unknown example as a member of a known class with very high confidence. In that example, however, we hadn’t shown the network a single unknown instance during training. Following open set terminology and paraphrasing Donald Rumsfeld, former United States Secretary of Defense: there are known unknowns and unknown unknowns. The former consist of instances available during training, albeit not of interest to us. What if we use these to train the network on unknowns, showing it the kind of examples where it should be uncertain?

This was the idea of Dhamija et al. in their recent paper Reducing Network Agnostophobia, where they introduce two new loss functions to disentangle unknown examples from the known ones: the Entropic Open Set loss and the Objectosphere loss. How do they work?

The authors’ first observation was that when they visualized the feature representations of MNIST digits and Devanagari handwritten characters for a network trained exclusively on MNIST, the unknown examples tended to cluster somewhat around the origin.

Softmax activations of MNIST digits and Devanagari handwritten characters for a network trained exclusively on MNIST. Colored dots are MNIST classes, black dots are unknowns. Below are histograms of Softmax activation values for both groups. Source: Akshay Raj Dhamija et al., Reducing Network Agnostophobia

To use this to our advantage, they introduced a new loss, called the Entropic Open Set loss, which drives the Softmax scores of unknown instances toward the uniform distribution.

Entropic Open Set loss. On the top: the loss when the instance is known. On the bottom: the loss when the instance is unknown. Source: Akshay Raj Dhamija et al., Reducing Network Agnostophobia

Let’s pause for a minute to study this. For samples with a known class (the top part of the definition), this is just the good old cross-entropy loss, which is minimized when the Softmax scores assign probability 1 to the true class. At the bottom part of the definition, however, we see something interesting: the average negative log-probability over all classes, which is minimized when the Softmax distribution is uniform! In effect, this loss tells the network that “this instance is unknown, you should be uncertain now”.
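In code, the two cases translate directly. The sketch below is my reading of the definition, with unknown samples marked by the label -1; that convention is mine, not the paper’s.

```python
import torch
import torch.nn.functional as F

def entropic_open_set_loss(logits, labels):
    """Cross-entropy for known samples; average negative log-probability over all
    classes (which pushes the Softmax toward uniform) for unknown samples."""
    log_probs = F.log_softmax(logits, dim=1)
    known = labels >= 0                      # convention: label -1 means "unknown"
    loss = torch.zeros(logits.size(0), device=logits.device)
    if known.any():
        loss[known] = F.nll_loss(log_probs[known], labels[known], reduction="none")
    if (~known).any():
        loss[~known] = -log_probs[~known].mean(dim=1)
    return loss.mean()
```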

So, let’s suppose we have replaced the usual cross-entropy loss with the Entropic Open Set loss. What happens to our feature representation? It turns out that this loss has a particular side effect: it drives down the feature norm of unknown samples!

Feature magnitudes for networks trained with and without the Entropic Open Set loss. Source: Akshay Raj Dhamija et al., Reducing Network Agnostophobia

Can we enhance this effect? The Entropic Open Set loss doesn’t directly influence feature magnitude: close-to-optimal scores for unknown samples can still come from features with a large magnitude, as long as the activations are similar across classes. To penalize the magnitude directly, the authors introduced a new term into the loss function, and called the combined loss the Objectosphere loss.

Objectosphere loss. Source: Akshay Raj Dhamija et al., Reducing Network Agnostophobia

As we can see, the second term forces samples from known classes to have a large feature magnitude (top row), while forcing unknown samples to have a small magnitude (bottom row). Together, these terms achieve quite an improvement on open set recognition tasks.
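Putting the two terms together, a hedged sketch of the full objective might look like this. It reuses the entropic_open_set_loss function from the previous snippet and the -1 label convention; the margin xi and the weight lam are hyperparameters with purely illustrative defaults, not the paper’s values.

```python
import torch

def objectosphere_loss(logits, embeddings, labels, xi=10.0, lam=0.01):
    """Entropic Open Set loss plus a feature-magnitude term: known samples are pushed
    to a norm of at least xi, unknown samples are pulled toward the origin."""
    entropic = entropic_open_set_loss(logits, labels)   # defined in the previous sketch
    norms = embeddings.norm(dim=1)
    known = labels >= 0
    magnitude = torch.where(known,
                            torch.clamp(xi - norms, min=0.0) ** 2,  # knowns: norm >= xi
                            norms ** 2)                             # unknowns: norm -> 0
    return entropic + lam * magnitude.mean()
```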

Correct Classification Rates at different False Positive Ratios. Source: Akshay Raj Dhamija et al., Reducing Network Agnostophobia

The state of open set recognition

As we have seen, open set recognition has made a lot of progress in the last few years, yet we are still far from a robust and universal solution. Despite its importance in real-life applications, it is a somewhat neglected topic. For any given problem, it is practically impossible to assemble a training set that covers all possible cases and includes all relevant knowledge; in many cases, not even domain experts know what that would require. Think about cell biologists, who use microscopic images to analyze cellular phenotypes and perhaps discover new ones. If a phenotype is truly unknown, how can you reason about it in advance?

In my personal opinion, the truly interesting questions about deep learning begin where we look at how a neural network can generalize beyond the known. In humans, this is the very essence of intelligence. Think about it: as you constantly accumulate knowledge and grow personally, what is the single sentence that drives you forward the most?

“I don’t know.”

References

[1] A. Bendale and T. E. Boult, Towards Open Set Deep Networks (2016), IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1563–1572
[2] M. Hassen and P. K. Chan, Learning a Neural-network-based Representation for Open Set Recognition (2018), arXiv preprint
[3] A. R. Dhamija et al., Reducing Network Agnostophobia (2018), arXiv preprint
