no code implementations • 7 Mar 2024 • Alex Havrilla, Yuqing Du, Sharath Chandra Raparthy, Christoforos Nalmpantis, Jane Dwivedi-Yu, Maksym Zhuravinskyi, Eric Hambro, Sainbayar Sukhbaatar, Roberta Raileanu
Surprisingly, we find the sample complexity of Expert Iteration is similar to that of PPO, requiring at most on the order of $10^6$ samples to converge from a pretrained checkpoint.
no code implementations • 6 Feb 2024 • Alex Havrilla, Maia Iyer
We then evaluate the test performance of pretrained models, both prompted and fine-tuned, on noised datasets with varying levels of contamination and noise intensity.
1 code implementation • 30 Nov 2023 • Alex Havrilla, Kevin Rojas, Wenjing Liao, Molei Tao
Diffusion generative models have achieved remarkable success in generating images with a fixed resolution.
no code implementations • 17 Mar 2023 • Hao Liu, Alex Havrilla, Rongjie Lai, Wenjing Liao
Our paper establishes statistical guarantees on the generalization error of chart autoencoders, and we demonstrate their denoising capabilities by considering $n$ noisy training samples, along with their noise-free counterparts, on a $d$-dimensional manifold.
no code implementations • 25 Feb 2023 • Biraj Dahal, Alex Havrilla, Minshuo Chen, Tuo Zhao, Wenjing Liao
Many existing experiments have demonstrated that generative networks can generate high-dimensional complex data from a low-dimensional easy-to-sample distribution.
no code implementations • 18 Feb 2021 • Alex Havrilla, Piotr Nayar, Tomasz Tkocz
We prove Khinchin-type inequalities with sharp constants for type L random variables and all even moments.
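For context, the classical Khinchin inequality that results of this type generalize can be sketched as follows; this is the standard Rademacher form only (the sharp constants for the broader class of type L random variables are the paper's contribution and are not reproduced here):

```latex
% Classical Khinchin inequality for Rademacher signs \varepsilon_i
% (illustrative form only; the paper treats type L random variables)
\mathbb{E}\left|\sum_{i=1}^{n} a_i \varepsilon_i\right|^{p}
  \le C_p \left(\sum_{i=1}^{n} a_i^2\right)^{p/2},
\qquad p = 2m,\ m \in \mathbb{N}.
```

For even moments $p = 2m$, the sharp constant in the Rademacher case is attained in the Gaussian limit, $C_{2m} = \mathbb{E}\, g^{2m} = (2m-1)!!$ for a standard Gaussian $g$; the paper establishes analogous sharp constants for type L variables.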
Probability