no code implementations • 15 Feb 2024 • Vijayalakshmi Saravanan, Perry Siehien, Shinjae Yoo, Hubertus van Dam, Thomas Flynn, Christopher Kelly, Khaled Z. Ibrahim
Detecting abrupt changes in real-time data streams from scientific simulations is a challenging task that demands accurate and efficient algorithms.
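As a generic illustration of online change detection in a stream (a standard CUSUM detector, not necessarily the method used in this paper; all names and thresholds here are illustrative):

```python
import numpy as np

def cusum_detect(stream, target_mean=0.0, drift=0.5, threshold=8.0):
    """One-sided CUSUM: flag the first index where the cumulative
    positive deviation from target_mean exceeds the threshold."""
    s = 0.0
    for i, x in enumerate(stream):
        # accumulate deviations above the expected mean, minus a drift allowance
        s = max(0.0, s + (x - target_mean - drift))
        if s > threshold:
            return i  # abrupt change detected at this sample
    return None  # no change found

# Synthetic stream: 200 in-control samples, then a mean shift to 4.0.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 200), rng.normal(4, 1, 50)])
change_idx = cusum_detect(data)
```

The drift term suppresses false alarms from ordinary noise, so the statistic only grows persistently after the mean shift at index 200.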
no code implementations • 25 Oct 2023 • Lingda Li, Thomas Flynn, Adolfy Hoisie
This paper proposes PerfVec, a novel deep learning-based performance modeling framework that learns high-dimensional, independent/orthogonal program and microarchitecture representations.
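A minimal sketch of the idea of independent program and microarchitecture representations (purely illustrative; the embedding sizes, scoring function, and names below are assumptions, not PerfVec's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(1)
n_progs, n_archs, dim = 4, 3, 8

# One learned vector per program and one per microarchitecture,
# trained independently of each other (hypothetical stand-ins here).
prog_emb = rng.normal(size=(n_progs, dim))
arch_emb = rng.normal(size=(n_archs, dim))

def predict_perf(prog_id, arch_id):
    # Independence: the program vector never depends on which
    # microarchitecture it is paired with, and vice versa.
    return float(prog_emb[prog_id] @ arch_emb[arch_id])

# Any program can be scored on any architecture without retraining either side.
scores = [[predict_perf(p, a) for a in range(n_archs)] for p in range(n_progs)]
```

The payoff of such a factorization is composability: new programs or new microarchitectures only require learning one new vector, not a new joint model.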
no code implementations • 24 Jun 2021 • Patrick R. Johnstone, Jonathan Eckstein, Thomas Flynn, Shinjae Yoo
We present a new, stochastic variant of the projective splitting (PS) family of algorithms for monotone inclusion problems.
1 code implementation • 12 May 2021 • Lingda Li, Santosh Pandey, Thomas Flynn, Hang Liu, Noel Wheeler, Adolfy Hoisie
While discrete-event simulators are essential tools for architecture research, design, and development, their practicality is limited by an extremely long time-to-solution for realistic applications under investigation.
no code implementations • 20 Feb 2020 • Thomas Flynn, Kwang Min Yu, Abid Malik, Nicholas D'Imperio, Shinjae Yoo
This work examines the convergence of stochastic gradient-based optimization algorithms that use early stopping based on a validation function.
no code implementations • 25 Sep 2019 • Thomas Flynn, Kwang Min Yu, Abid Malik, Shinjae Yoo, Nicholas D'Imperio
This work examines the convergence of stochastic gradient algorithms that use early stopping based on a validation function, wherein optimization ends when the magnitude of a validation function gradient drops below a threshold.
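A minimal sketch of this stopping rule (illustrative, not the paper's exact setup: the step is noiseless for clarity, and the toy objectives and threshold are assumptions):

```python
import numpy as np

def sgd_with_validation_stopping(grad_train, grad_val, w0, lr=0.1,
                                 tau=1e-3, max_iters=10_000):
    """Gradient descent on the training objective, halting once the
    validation function's gradient norm drops below the threshold tau."""
    w = np.asarray(w0, dtype=float)
    for _ in range(max_iters):
        if np.linalg.norm(grad_val(w)) < tau:  # early-stopping test
            break
        w = w - lr * grad_train(w)             # training step
    return w

# Toy quadratics sharing a minimizer at w = 1.0 but with different scales.
grad_train = lambda w: 2.0 * (w - 1.0)
grad_val = lambda w: 0.5 * (w - 1.0)
w_final = sgd_with_validation_stopping(grad_train, grad_val, w0=[0.0])
```

The threshold on the validation gradient, rather than a fixed iteration budget, decides when optimization ends.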
no code implementations • 13 Jun 2019 • Kwangmin Yu, Thomas Flynn, Shinjae Yoo, Nicholas D'Imperio
The efficiency of the algorithm is tested by training a deep network on the ImageNet classification task.
no code implementations • 1 Aug 2017 • Thomas Flynn
The decision of what layer to update is done in a greedy fashion, based on a rigorous lower bound on the improvement of the objective function for each choice of layer.
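One standard way such a greedy choice can be grounded (an illustrative Gauss-Southwell-style sketch, not necessarily the paper's exact bound): if the objective is L-smooth in each layer's parameters, a gradient step of size 1/L on layer k decreases the objective by at least ||g_k||^2 / (2L), so picking the layer with the largest gradient norm maximizes this guaranteed improvement.

```python
import numpy as np

def greedy_layer_step(layers, grads, L=10.0):
    """Update only the layer whose guaranteed improvement lower bound
    ||g||^2 / (2L) is largest (assumes L-smoothness per layer)."""
    gains = [np.sum(g * g) / (2 * L) for g in grads]  # per-layer lower bounds
    k = int(np.argmax(gains))                         # greedy layer choice
    layers[k] = layers[k] - (1.0 / L) * grads[k]      # update that layer only
    return k

# Hypothetical two-layer example: the second layer has the larger gradient.
layers = [np.ones(3), np.ones(3)]
grads = [np.array([0.1, 0.1, 0.1]), np.array([2.0, 0.0, 0.0])]
chosen = greedy_layer_step(layers, grads)
```

Only the chosen layer's parameters move; the rest are left untouched on that iteration.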