no code implementations • 24 May 2024 • Nolan Dey, Shane Bergsma, Joel Hestness
Further, by reparameterizing the HPs, S$\mu$Par enables the same HP values to remain optimal as both the sparsity level and the model width vary.
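The idea of such a reparameterization can be sketched as follows. This is an illustrative toy, not the paper's exact rule: the function name, the base width, and the specific 1/width and 1/density scaling factors are assumptions chosen to show how a user-tuned HP can stay fixed while the effective value adapts to width and sparsity.

```python
BASE_WIDTH = 256  # hypothetical proxy-model width used during HP tuning

def effective_lr(tuned_lr, width, density):
    """Scale a tuned learning rate for a wider, sparser model.

    Assumes a muP-style 1/width scaling for hidden layers, plus a
    hypothetical 1/density correction so that sparse weights receive
    updates of comparable size. The user keeps tuning `tuned_lr`; the
    optimizer consumes the scaled value.
    """
    return tuned_lr * (BASE_WIDTH / width) / density

# The same tuned value works unchanged at a larger, sparser scale;
# only the internally applied effective rate changes.
lr_proxy = effective_lr(0.01, width=256, density=1.0)    # 0.01
lr_target = effective_lr(0.01, width=1024, density=0.25)
```

Here `lr_target` works out to `0.01 * (256/1024) / 0.25 = 0.01` as well: the width and density corrections happen to cancel in this example, but in general each factor moves the effective rate independently.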
no code implementations • NeurIPS 2023 • Shane Bergsma, Timothy Zeyl, Lei Guo
We find SutraNets to significantly improve forecasting accuracy over competitive alternatives on six real-world datasets, including when we vary the number of sub-series and scale up the depth and width of the underlying sequence models.
1 code implementation • 22 Dec 2023 • Shane Bergsma, Timothy Zeyl, Javad Rahimipour Anaraki, Lei Guo
We present coarse-to-fine autoregressive networks (C2FAR), a method for modeling the probability distribution of univariate, numeric random variables.
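The coarse-to-fine discretization underlying this kind of model can be sketched as below. This is a minimal illustration, not the paper's implementation: the bin counts, the `[0, 1)` support, and the function names are assumptions. The point is that a continuous value is located by a coarse bin and then a fine bin within it, turning one continuous variable into a short sequence of small categorical choices that an autoregressive network can predict one after another.

```python
N_COARSE, N_FINE = 8, 8  # hypothetical resolution: 64 cells over [0, 1)

def encode(x):
    """Map x in [0, 1) to (coarse_bin, fine_bin) indices."""
    coarse = int(x * N_COARSE)
    within = x * N_COARSE - coarse   # relative position inside the coarse bin
    fine = int(within * N_FINE)
    return coarse, fine

def decode(coarse, fine):
    """Reconstruct x as the midpoint of the (coarse, fine) cell."""
    cell_width = 1.0 / (N_COARSE * N_FINE)
    return (coarse * N_FINE + fine + 0.5) * cell_width

c, f = encode(0.37)
x_hat = decode(c, f)
# Reconstruction error is bounded by half a fine-cell width.
assert abs(x_hat - 0.37) <= 0.5 / (N_COARSE * N_FINE)
```

Predicting the coarse index first and conditioning the fine prediction on it keeps each categorical output small while still resolving the value to fine granularity.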