Improving the Post-hoc Calibration of Modern Neural Networks with Probe Scaling

29 Sep 2021 · Amr Khalifa, Ibrahim Alabdulmohsin

We present "probe scaling": a post-hoc recipe for calibrating the predictions of modern neural networks. Our recipe is inspired by several lines of work demonstrating that early layers of a neural network learn general rules whereas later layers specialize. We show how such observations can be exploited post hoc to calibrate the predictions of trained neural networks by injecting linear probes into the network's intermediate representations. Like temperature scaling, probe scaling neither retrains the architecture nor requires significantly more parameters. Unlike temperature scaling, however, it utilizes the network's intermediate layers. We demonstrate that probe scaling improves on temperature scaling across benchmark datasets on all five metrics: expected calibration error (ECE), negative log-likelihood, Brier score, classification accuracy, and area under the ROC curve.
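The abstract does not specify how the probe outputs are combined with the backbone's final logits. The PyTorch-style sketch below illustrates one plausible reading: a linear probe fitted on a frozen intermediate representation, mixed with the backbone logits via a learned convex combination and rescaled by a learned temperature. The `ProbeScaling` class, the mixing weight `alpha`, and the combination scheme are illustrative assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn as nn


class ProbeScaling(nn.Module):
    """Post-hoc calibration sketch (assumed combination scheme).

    A linear probe reads a frozen intermediate representation; its
    logits are mixed with the backbone's logits and divided by a
    learned temperature, as in temperature scaling. Only the probe,
    the mixing weight, and the temperature are trained.
    """

    def __init__(self, feature_dim: int, num_classes: int):
        super().__init__()
        self.probe = nn.Linear(feature_dim, num_classes)  # linear probe on intermediate features
        self.temperature = nn.Parameter(torch.ones(1))    # scalar temperature, as in temperature scaling
        self.alpha = nn.Parameter(torch.zeros(1))         # pre-sigmoid mixing weight (assumption)

    def forward(self, backbone_logits: torch.Tensor,
                intermediate_features: torch.Tensor) -> torch.Tensor:
        # Flatten spatial dimensions if the features come from a conv layer.
        probe_logits = self.probe(intermediate_features.flatten(1))
        w = torch.sigmoid(self.alpha)  # convex combination weight in (0, 1)
        mixed = w * probe_logits + (1.0 - w) * backbone_logits
        return mixed / self.temperature
```

As with temperature scaling, these few extra parameters would be fit on a held-out validation set (e.g., by minimizing the negative log-likelihood) while the backbone network remains frozen, which is what makes the recipe post hoc.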
