no code implementations • 21 Feb 2024 • Brian Wheatman, Meghana Madhyastha, Randal Burns
Artificial intelligence workloads, especially transformer models, exhibit emergent sparsity in which computations perform selective sparse access to dense data.
no code implementations • 10 Nov 2020 • Meghana Madhyastha, Kunal Lillaney, James Browne, Joshua Vogelstein, Randal Burns
We present methods to serialize and deserialize tree ensembles that optimize inference latency when models are not already loaded into memory.
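The paper's own serialization format is not shown here, but the idea of laying a tree out so inference can begin before the whole model is deserialized can be sketched as follows. This is a hypothetical illustration, assuming a flat parallel-array encoding of a binary decision tree saved as memory-mappable `.npy` files (all names and the toy tree are invented for the example, not taken from the paper):

```python
# Hypothetical sketch (not the paper's actual format): store a binary
# decision tree as flat parallel arrays so queries can descend the tree
# via memory-mapped reads, without fully deserializing the model first.
import numpy as np
import os
import tempfile

# Toy tree: node 0 splits on feature 0 at threshold 0.5;
# nodes 1 and 2 are leaves holding predictions (-1 marks a leaf).
arrays = {
    "feature":   np.array([0, -1, -1], dtype=np.int32),
    "threshold": np.array([0.5, 0.0, 0.0], dtype=np.float64),
    "left":      np.array([1, -1, -1], dtype=np.int32),
    "right":     np.array([2, -1, -1], dtype=np.int32),
    "value":     np.array([0.0, 1.0, 2.0], dtype=np.float64),
}

# Serialize: one .npy file per array, a layout np.load can memory-map.
d = tempfile.mkdtemp()
for name, arr in arrays.items():
    np.save(os.path.join(d, name + ".npy"), arr)

# Deserialize lazily: mmap_mode="r" pages in only the nodes a query
# actually visits, so inference can start before the full model is read.
tree = {name: np.load(os.path.join(d, name + ".npy"), mmap_mode="r")
        for name in arrays}

def predict(x):
    i = 0
    while tree["feature"][i] != -1:  # descend until reaching a leaf
        if x[tree["feature"][i]] <= tree["threshold"][i]:
            i = int(tree["left"][i])
        else:
            i = int(tree["right"][i])
    return float(tree["value"][i])

print(predict([0.3]), predict([0.9]))
```

The design choice being illustrated is that a flat, offset-addressable layout lets the OS page cache do the "deserialization", which is where the latency savings for cold-start inference would come from.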
no code implementations • 5 Jul 2019 • Meghana Madhyastha, Percy Li, James Browne, Veronika Strnadova-Neeley, Carey E. Priebe, Randal Burns, Joshua T. Vogelstein
Empirical results on simulated and real data demonstrate that URerF is robust to high-dimensional noise, whereas other methods, such as Isomap, UMAP, and FLANN, quickly deteriorate in such settings.
1 code implementation • 30 Nov 2018 • Sudeshna Roy, Meghana Madhyastha, Sheril Lawrence, Vaibhav Rajan
PREREQ can learn unknown concept prerequisites from course prerequisites and labeled concept prerequisite data.