no code implementations • 20 Feb 2024 • Kewei Cheng, Nesreen K. Ahmed, Theodore Willke, Yizhou Sun
Our experiments show that this framework significantly enhances the reasoning capabilities of LLMs, enabling them to excel in a broader spectrum of natural language scenarios.
no code implementations • 3 Feb 2024 • Le Chen, Nesreen K. Ahmed, Akash Dutta, Arijit Bhattacharjee, Sixing Yu, Quazi Ishtiaque Mahmud, Waqwoya Abebe, Hung Phan, Aishwarya Sarkar, Branden Butler, Niranjan Hasabnis, Gal Oren, Vy A. Vo, Juan Pablo Munoz, Theodore L. Willke, Tim Mattson, Ali Jannesari
Recently, language models (LMs), especially large language models (LLMs), have revolutionized the field of deep learning.
no code implementations • 9 Dec 2023 • Shukai Duan, Nikos Kanakaris, Xiongye Xiao, Heng Ping, Chenyu Zhou, Nesreen K. Ahmed, Guixiang Ma, Mihai Capota, Theodore L. Willke, Shahin Nazarian, Paul Bogdan
We compare our framework with existing state-of-the-art models and show that it is more efficient with respect to speed and computational usage, owing to the reduced number of training steps and its applicability to models with fewer parameters.
no code implementations • 29 Nov 2023 • Puja Trivedi, Ryan Rossi, David Arbour, Tong Yu, Franck Dernoncourt, Sungchul Kim, Nedim Lipka, Namyong Park, Nesreen K. Ahmed, Danai Koutra
Most real-world networks are noisy and incomplete samples from an unknown target distribution.
no code implementations • 11 Nov 2023 • Le Chen, Arijit Bhattacharjee, Nesreen K. Ahmed, Niranjan Hasabnis, Gal Oren, Bin Lei, Ali Jannesari
The evaluation of CompCodeVet on two open-source code datasets shows that CompCodeVet improves the quality of training datasets for LLMs.
no code implementations • 6 Oct 2023 • Quazi Ishtiaque Mahmud, Ali TehraniJamsaz, Hung D Phan, Nesreen K. Ahmed, Ali Jannesari
Parallelizing sequentially written programs is a challenging task.
1 code implementation • 2 Sep 2023 • Isabel O. Gallegos, Ryan A. Rossi, Joe Barrow, Md Mehrab Tanjim, Sungchul Kim, Franck Dernoncourt, Tong Yu, Ruiyi Zhang, Nesreen K. Ahmed
Rapid advancements of large language models (LLMs) have enabled the processing, understanding, and generation of human-like text, with increasing integration into systems that touch our social sphere.
no code implementations • 8 Jul 2023 • April Chen, Ryan A. Rossi, Namyong Park, Puja Trivedi, Yu Wang, Tong Yu, Sungchul Kim, Franck Dernoncourt, Nesreen K. Ahmed
In this article, we examine and categorize fairness techniques for improving the fairness of GNNs.
1 code implementation • NeurIPS 2023 • Ali TehraniJamsaz, Quazi Ishtiaque Mahmud, Le Chen, Nesreen K. Ahmed, Ali Jannesari
The remarkable growth and significant success of machine learning have expanded its applications into programming languages and program analysis.
no code implementations • 9 May 2023 • Le Chen, Quazi Ishtiaque Mahmud, Hung Phan, Nesreen K. Ahmed, Ali Jannesari
However, applying machine learning techniques to parallelism detection presents several challenges, such as the lack of an adequate dataset for training, an effective code representation with rich information, and a suitable machine learning model to learn the latent features of code for diverse analyses.
1 code implementation • 7 Mar 2023 • Kewei Cheng, Nesreen K. Ahmed, Yizhou Sun
NCRL detects the best compositional structure of a rule body, and breaks it into small compositions in order to infer the rule head.
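The compositional idea described above can be illustrated with plain binary relations: composing two body relations (as sets of entity pairs) yields candidate facts for the rule head. A minimal sketch of relation composition, not NCRL itself; the relation names and toy facts are hypothetical.

```python
def compose(r1, r2):
    """Compose two binary relations given as sets of (head, tail) pairs:
    (x, z) is in the composition iff some y links x -> y in r1 and y -> z in r2."""
    return {(x, z) for (x, y1) in r1 for (y2, z) in r2 if y1 == y2}

# Hypothetical toy rule: born_in(x, city) AND city_of(city, country)
# composes into a candidate head relation nationality(x, country).
born_in = {("alice", "paris"), ("bob", "kyoto")}
city_of = {("paris", "france"), ("kyoto", "japan")}

nationality = compose(born_in, city_of)
# -> {("alice", "france"), ("bob", "japan")}
```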
no code implementations • 22 Dec 2022 • April Chen, Ryan Rossi, Nedim Lipka, Jane Hoffswell, Gromit Chan, Shunan Guo, Eunyee Koh, Sungchul Kim, Nesreen K. Ahmed
Learning fair graph representations for downstream applications is becoming increasingly important, but existing work has mostly focused on improving fairness at the global level, by modifying either the graph structure or the objective function, without taking into account the local neighborhood of a node.
no code implementations • 22 Sep 2022 • Guixiang Ma, Vy A. Vo, Theodore Willke, Nesreen K. Ahmed
We provide a comprehensive review of the existing literature on memory-augmented GNNs.
no code implementations • 25 Apr 2022 • Yao Xiao, Guixiang Ma, Nesreen K. Ahmed, Mihai Capota, Theodore Willke, Shahin Nazarian, Paul Bogdan
To enable heterogeneous computing systems with autonomous programming and optimization capabilities, we propose a unified, end-to-end, programmable graph representation learning (PGL) framework. PGL mines the complexity of high-level programs down to the universal intermediate representation, extracts specific computational patterns, and predicts which code segments would run best on a specific core in a heterogeneous hardware platform.
no code implementations • 22 Jan 2022 • Ancy Sarah Tom, Nesreen K. Ahmed, George Karypis
To account for the structure in the node representations, Mazi generates node representations at each level of the hierarchy, and utilizes them to influence the node representations of the original graph.
no code implementations • 14 Apr 2021 • Vasimuddin Md, Sanchit Misra, Guixiang Ma, Ramanarayan Mohanty, Evangelos Georganas, Alexander Heinecke, Dhiraj Kalamkar, Nesreen K. Ahmed, Sasikanth Avancha
Full-batch training on Graph Neural Networks (GNN) to learn the structure of large graphs is a critical problem that needs to scale to hundreds of compute nodes to be feasible.
no code implementations • 12 Feb 2021 • Xin Qian, Ryan A. Rossi, Fan Du, Sungchul Kim, Eunyee Koh, Sana Malik, Tak Yeon Lee, Nesreen K. Ahmed
Visualization recommendation work has focused solely on scoring visualizations based on the underlying dataset and not the actual user and their past visualization feedback.
no code implementations • 23 Oct 2020 • Ryan A. Rossi, Nesreen K. Ahmed, Aldo Carranza, David Arbour, Anup Rao, Sungchul Kim, Eunyee Koh
Notably, since typed graphlets are more general than colored graphlets (and untyped graphlets), the counts of various typed graphlets can be combined to obtain the counts of the much simpler notion of colored graphlets.
no code implementations • 9 Oct 2020 • Guixiang Ma, Yao Xiao, Theodore L. Willke, Nesreen K. Ahmed, Shahin Nazarian, Paul Bogdan
High-level applications, such as machine learning, are evolving from simple multilayer-perceptron models for image recognition to much deeper and more complex neural networks for self-driving vehicle control systems. The rapid increase in the consumption of memory and computational resources by these models demands the use of multi-core parallel systems to scale the execution of the complex emerging applications that depend on them.
1 code implementation • 28 Sep 2020 • Jiong Zhu, Ryan A. Rossi, Anup Rao, Tung Mai, Nedim Lipka, Nesreen K. Ahmed, Danai Koutra
Graph Neural Networks (GNNs) have proven to be useful for many different practical applications.
1 code implementation • 2020 • Nesreen K. Ahmed, Jennifer L. Neville, Ramana Rao
Network sampling is integral to the analysis of social, information, and biological networks.
no code implementations • 25 Dec 2019 • Guixiang Ma, Nesreen K. Ahmed, Theodore L. Willke, Philip S. Yu
In many domains where data are represented as graphs, learning a similarity metric among graphs is considered a key problem, which can further facilitate various learning tasks, such as classification, clustering, and similarity search.
no code implementations • 18 Oct 2019 • Nesreen K. Ahmed, Nick Duffield, Ryan A. Rossi
In addition, we propose a temporally decaying sampling algorithm with unbiased estimators for studying networks that evolve in continuous time, where the strength of links is a function of time and the motif patterns are temporally weighted.
1 code implementation • 20 Sep 2019 • Ameer Haj-Ali, Nesreen K. Ahmed, Ted Willke, Sophia Shao, Krste Asanovic, Ion Stoica
However, these models are unable to capture the data dependency, the computation graph, or the organization of instructions.
Distributed, Parallel, and Cluster Computing • Performance • Programming Languages
no code implementations • 22 Aug 2019 • Ryan A. Rossi, Di Jin, Sungchul Kim, Nesreen K. Ahmed, Danai Koutra, John Boaz Lee
Unfortunately, recent work has sometimes confused the notion of structural roles and communities (based on proximity) leading to misleading or incorrect claims about the capabilities of network embedding methods.
no code implementations • 4 Aug 2019 • Ameer Haj-Ali, Nesreen K. Ahmed, Ted Willke, Joseph Gonzalez, Krste Asanovic, Ion Stoica
We propose a set of essential metrics to guide future works in evaluating the efficacy of using deep reinforcement learning in system optimization.
no code implementations • NeurIPS 2020 • Nesreen K. Ahmed, Nick Duffield
We propose a novel adaptive, single-pass sampling framework and unbiased estimators for higher-order network analysis of large streaming networks.
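Single-pass sampling of a streaming network builds on a classic primitive: keeping a uniform sample of the edges seen so far without a second pass. A minimal reservoir-sampling sketch of that building block, not the paper's adaptive framework or its estimators.

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Keep a uniform sample of k items from a stream in one pass.
    After n items have arrived, each item is retained with probability k/n."""
    rng = random.Random(seed)
    sample = []
    for n, item in enumerate(stream, start=1):
        if n <= k:
            sample.append(item)  # fill the reservoir first
        else:
            j = rng.randrange(n)  # replace a slot with probability k/n
            if j < k:
                sample[j] = item
    return sample

# Hypothetical edge stream: sample 5 edges from 1000 in a single pass.
edges = [(i, i + 1) for i in range(1000)]
print(reservoir_sample(edges, 5))
```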
no code implementations • 12 Jun 2019 • Ryan A. Rossi, Anup Rao, Sungchul Kim, Eunyee Koh, Nesreen K. Ahmed, Gang Wu
In this work, we investigate higher-order network motifs and develop techniques based on the notion of closing higher-order motifs that move beyond closing simple triangles.
no code implementations • 12 Apr 2019 • John Boaz Lee, Giang Nguyen, Ryan A. Rossi, Nesreen K. Ahmed, Eunyee Koh, Sungchul Kim
In this work, we propose using the notion of temporal walks for learning dynamic embeddings from temporal networks.
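A temporal walk differs from an ordinary random walk in that successive edges must respect time order. A minimal sketch of that constraint, with a hypothetical adjacency format mapping each node to `(neighbor, timestamp)` edges; this is an illustration of the idea, not the paper's embedding method.

```python
import random

def temporal_walk(adj, start, length, seed=42):
    """Sample a walk whose edge timestamps are non-decreasing.
    adj maps a node to a list of (neighbor, timestamp) edges."""
    rng = random.Random(seed)
    walk, node, t = [start], start, float("-inf")
    for _ in range(length):
        # only edges that respect time order are valid continuations
        candidates = [(v, ts) for v, ts in adj.get(node, []) if ts >= t]
        if not candidates:
            break
        node, t = rng.choice(candidates)
        walk.append(node)
    return walk

# Hypothetical temporal graph.
adj = {
    "a": [("b", 1), ("c", 3)],
    "b": [("c", 2), ("a", 5)],
    "c": [("a", 4)],
}
print(temporal_walk(adj, "a", 3))
```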
no code implementations • 28 Jan 2019 • Ryan A. Rossi, Nesreen K. Ahmed, Aldo Carranza, David Arbour, Anup Rao, Sungchul Kim, Eunyee Koh
To address this problem, we propose a fast, parallel, and space-efficient framework for counting typed graphlets in large networks.
no code implementations • 2 Nov 2018 • Guixiang Ma, Nesreen K. Ahmed, Ted Willke, Dipanjan Sengupta, Michael W. Cole, Nicholas B. Turk-Browne, Philip S. Yu
We propose an end-to-end similarity learning framework called Higher-order Siamese GCN for multi-subject fMRI data analysis.
1 code implementation • 20 Jul 2018 • John Boaz Lee, Ryan A. Rossi, Sungchul Kim, Nesreen K. Ahmed, Eunyee Koh
However, in the real world, graphs can be both large, with many complex patterns, and noisy, which can pose a problem for effective graph mining.
no code implementations • 7 May 2018 • James P. Canning, Emma E. Ingram, Sammantha Nowak-Wolff, Adriana M. Ortiz, Nesreen K. Ahmed, Ryan A. Rossi, Karl R. B. Schmitt, Sucheta Soundarajan
Even though the current version of this paper is withdrawn, there was no disagreement between authors on the novel work in this paper.
2 code implementations • IJCAI 2018 • Nesreen K. Ahmed, Ryan Rossi, John Boaz Lee, Theodore L. Willke, Rong Zhou, Xiangnan Kong, Hoda Eldardiry
Random walks are at the heart of many existing network embedding methods.
no code implementations • 28 Jan 2018 • Ryan A. Rossi, Nesreen K. Ahmed, Eunyee Koh, Sungchul Kim, Anup Rao, Yasin Abbasi Yadkori
This paper describes a general framework for learning Higher-Order Network Embeddings (HONE) from graph data based on network motifs.
no code implementations • 27 Oct 2017 • Ryan A. Rossi, Nesreen K. Ahmed, Hoda Eldardiry, Rong Zhou
Multi-label classification is an important learning problem with many applications.
no code implementations • 25 Oct 2017 • Nesreen K. Ahmed, Ryan A. Rossi, Rong Zhou, John Boaz Lee, Xiangnan Kong, Theodore L. Willke, Hoda Eldardiry
To make these methods more generally applicable, we propose a framework for inductive network representation learning based on the notion of attributed random walk that is not tied to node identity and is instead based on learning a function $\Phi : \mathbf{x} \rightarrow w$ that maps a node attribute vector $\mathbf{x}$ to a type $w$.
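The key move above is replacing node identities in a walk with attribute-derived types. A minimal sketch, with a simple thresholding function standing in for the learned $\Phi$; the attribute vectors and thresholds are hypothetical.

```python
def attribute_type(x, thresholds=(0.5,)):
    """A hypothetical stand-in for the learned function Phi: bin each
    attribute against thresholds and use the tuple of bins as the type w."""
    return tuple(sum(v > t for t in thresholds) for v in x)

# Hypothetical node attribute vectors.
attrs = {0: [0.9, 0.1], 1: [0.2, 0.8], 2: [0.95, 0.05]}

# A walk over node ids becomes a walk over attribute types: nodes 0 and 2
# (with near-identical attributes) map to the same type, so the learned
# representation is no longer tied to node identity.
node_walk = [0, 1, 2]
type_walk = [attribute_type(attrs[v]) for v in node_walk]
# -> [(1, 0), (0, 1), (1, 0)]
```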
no code implementations • 14 Sep 2017 • Nesreen K. Ahmed, Ryan A. Rossi, Rong Zhou, John Boaz Lee, Xiangnan Kong, Theodore L. Willke, Hoda Eldardiry
Random walks are at the heart of many existing deep learning algorithms for graph data.
no code implementations • 13 Sep 2017 • James P. Canning, Emma E. Ingram, Sammantha Nowak-Wolff, Adriana M. Ortiz, Nesreen K. Ahmed, Ryan A. Rossi, Karl R. B. Schmitt, Sucheta Soundarajan
To the best of our knowledge, this paper presents the first large-scale study that tests whether network categories (e.g., social networks vs. web graphs) are distinguishable from one another (using both categories of real-world networks and synthetic graphs).
no code implementations • 28 Apr 2017 • Ryan A. Rossi, Rong Zhou, Nesreen K. Ahmed
This paper presents a general graph representation learning framework called DeepGL for learning deep node and edge representations from large (attributed) graphs.
no code implementations • 6 Jan 2017 • Ryan A. Rossi, Rong Zhou, Nesreen K. Ahmed
In this work, we propose an unbiased graphlet estimation framework that is (a) fast, with significant speedups compared to the state of the art, (b) parallel, with near-linear speedups, (c) accurate, with <1% relative error, (d) scalable and space-efficient for massive networks with billions of edges, and (e) flexible for a variety of real-world settings, as well as estimating macro- and micro-level graphlet statistics (e.g., counts) of both connected and disconnected graphlets.
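The flavor of unbiased graphlet estimation can be shown with the simplest graphlet, the triangle: sample an edge uniformly, count common neighbors, and rescale. This is a textbook edge-sampling estimator, not the paper's framework; the toy graph is hypothetical.

```python
import random

def estimate_triangles(edges, adj, samples, seed=7):
    """Unbiased triangle-count estimate via uniform edge sampling.
    For a sampled edge (u, v), the common-neighbor count c(u, v) has
    expectation 3T/m over a uniform edge, so m * c / 3 is unbiased for T."""
    rng = random.Random(seed)
    m = len(edges)
    total = 0
    for _ in range(samples):
        u, v = rng.choice(edges)
        total += len(adj[u] & adj[v])
    return m * total / (3 * samples)

# Toy graph: K4 (complete graph on 4 nodes) has exactly 4 triangles,
# and every edge has exactly 2 common neighbors, so the estimate is exact.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
adj = {u: set() for u in range(4)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

print(estimate_triangles(edges, adj, samples=2000))  # -> 4.0
```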
no code implementations • 4 Oct 2016 • Nesreen K. Ahmed, Ryan A. Rossi, Theodore L. Willke, Rong Zhou
The experimental results demonstrate the utility of edge roles for network analysis tasks on a variety of graphs from various problem domains.
no code implementations • 2 Aug 2016 • Ryan A. Rossi, Rong Zhou, Nesreen K. Ahmed
Despite the importance of relational learning, most existing methods are hard to adapt to different settings, due to issues with efficiency, scalability, accuracy, and flexibility for handling a wide variety of classification problems, data, constraints, and tasks.
no code implementations • 13 Jun 2015 • Nesreen K. Ahmed, Jennifer Neville, Ryan A. Rossi, Nick Duffield, Theodore L. Willke
From social science to biology, numerous applications often rely on graphlets for intuitive and meaningful characterization of networks at both the global macro-level as well as the local micro-level.
no code implementations • 2 Feb 2015 • Nesreen K. Ahmed, Ryan A. Rossi
This paper proposes a web-based visual graph analytics platform for interactive graph mining, visualization, and real-time exploration of networks.
no code implementations • 14 Mar 2014 • Nesreen K. Ahmed, Christopher Cole, Jennifer Neville
We use the two representations as inputs to a mixture model to learn the latent state transitions that correspond to important changes in the Email graph structure over time.