Gradient Checkpointing is a method for reducing the memory footprint of training deep neural networks, at the cost of a small increase in computation time: instead of storing every intermediate activation for the backward pass, only a subset is kept, and the rest are recomputed when needed.
Source: Training Deep Nets with Sublinear Memory Cost
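The trade-off can be sketched on a toy chain of layers: the forward pass stores only every k-th activation (a checkpoint), and the backward pass recomputes each segment from its nearest checkpoint. The layer function, the segment size `k`, and all names below are illustrative, not from the source; a real implementation would use a framework utility such as PyTorch's `torch.utils.checkpoint`.

```python
def layer(x):
    """Toy layer: f(x) = 2*x, so df/dx = 2 (illustrative, not from the paper)."""
    return 2.0 * x

def d_layer(x):
    """Derivative of the toy layer."""
    return 2.0

def naive_grad(x0, n_layers):
    """Standard backprop: store every activation."""
    acts = [x0]
    for _ in range(n_layers):
        acts.append(layer(acts[-1]))
    grad = 1.0  # dL/dy with L = y
    for i in range(n_layers - 1, -1, -1):
        grad *= d_layer(acts[i])
    return grad, len(acts)  # gradient, peak number of stored activations

def checkpointed_grad(x0, n_layers, k):
    """Checkpointed backprop: keep only every k-th activation,
    recompute each segment from its checkpoint during the backward pass."""
    checkpoints = {0: x0}
    x = x0
    for i in range(1, n_layers + 1):
        x = layer(x)
        if i % k == 0:
            checkpoints[i] = x
    grad = 1.0
    peak = 0
    # Walk the segments in reverse order.
    for start in range((n_layers - 1) // k * k, -1, -k):
        end = min(start + k, n_layers)
        # Recompute the activations inside this segment (extra compute).
        acts = [checkpoints[start]]
        for _ in range(end - start):
            acts.append(layer(acts[-1]))
        peak = max(peak, len(checkpoints) + len(acts))
        for i in range(end - 1, start - 1, -1):
            grad *= d_layer(acts[i - start])
    return grad, peak

g_naive, peak_naive = naive_grad(1.0, 8)
g_ckpt, peak_ckpt = checkpointed_grad(1.0, 8, 3)
```

Both variants produce the identical gradient; the checkpointed version stores fewer activations at peak but re-runs each segment's forward pass once, which is the memory-for-compute trade made explicit in the paper's title.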
Task | Papers | Share |
---|---|---|
Image Classification | 2 | 14.29% |
Computational Efficiency | 1 | 7.14% |
Music Generation | 1 | 7.14% |
Image Captioning | 1 | 7.14% |
Machine Translation | 1 | 7.14% |
MRI Reconstruction | 1 | 7.14% |
Classification | 1 | 7.14% |
Zero-Shot Transfer Image Classification | 1 | 7.14% |
Language Modelling | 1 | 7.14% |