1 code implementation • 29 May 2024 • Soochan Lee, Hyeonseong Jeon, Jaehyeon Son, Gunhee Kim
On the other hand, in the more classical statistical machine learning literature, many models have sequential Bayesian update rules that yield exactly the same learning outcome as batch training, i.e., they are completely immune to catastrophic forgetting.
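As a minimal sketch of this property (not the paper's own model), a conjugate Gaussian mean estimate with known observation variance can be updated one data point at a time, and the resulting posterior is identical to the one obtained from a single batch update over the whole dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=100)

# Conjugate Gaussian model: unknown mean, known observation variance 1.
# The posterior is kept in precision form: mean ~ N(mu, 1/tau).
def update(mu, tau, xs):
    """Posterior parameters after observing xs (observation precision = 1)."""
    tau_new = tau + len(xs)
    mu_new = (tau * mu + np.sum(xs)) / tau_new
    return mu_new, tau_new

# Batch: one update on the entire dataset at once.
mu_b, tau_b = update(0.0, 1.0, data)

# Sequential: one update per data point, each starting from the last posterior.
mu_s, tau_s = 0.0, 1.0
for x in data:
    mu_s, tau_s = update(mu_s, tau_s, np.array([x]))

# The two posteriors coincide exactly: no catastrophic forgetting.
assert np.isclose(mu_s, mu_b) and np.isclose(tau_s, tau_b)
```

The equivalence holds because the Gaussian posterior is a sufficient-statistic summary of everything seen so far, so the order and grouping of updates do not matter.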
no code implementations • 9 Nov 2023 • Jaehyeon Son, Soochan Lee, Gunhee Kim
Over the past decade, deep neural networks have achieved remarkable success with training schemes based on mini-batch stochastic gradient descent over large datasets.
1 code implementation • NeurIPS 2023 • Soochan Lee, Jaehyeon Son, Gunhee Kim
That is, we propose to formulate continual learning as a sequence modeling problem, allowing advanced sequence models to be utilized for continual learning.
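To illustrate the general idea (a hypothetical sketch of the data formatting, not the paper's actual interface), a continual-learning stream can be serialized into a single sequence of interleaved inputs and labels, so that a sequence model predicts each label from the prefix before it, i.e., learns in-context:

```python
import numpy as np

def stream_to_sequence(xs, ys):
    """Interleave a continual-learning stream as [x1, y1, x2, y2, ...].
    A sequence model trained on such sequences performs learning
    in-context: it predicts y_t from the prefix ending at x_t.
    (Illustrative formatting only; the tokenization is an assumption.)"""
    seq = []
    for x, y in zip(xs, ys):
        seq.append(("x", x))
        seq.append(("y", y))
    return seq

# Toy stream of two (input, label) pairs.
xs = [np.array([0.1, 0.2]), np.array([0.3, 0.4])]
ys = [0, 1]
seq = stream_to_sequence(xs, ys)
assert len(seq) == 4 and seq[1] == ("y", 0) and seq[3] == ("y", 1)
```

Under this framing, "training" on the stream reduces to a single forward pass of the sequence model, which is what lets advanced sequence architectures be reused for continual learning.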