Search Results for author: Robert Birke

Found 25 papers, 9 papers with code

SFDDM: Single-fold Distillation for Diffusion models

no code implementations • 23 May 2024 • Chi Hong, Jiyue Huang, Robert Birke, Dick Epema, Stefanie Roos, Lydia Y. Chen

While diffusion models effectively generate remarkable synthetic images, a key limitation is their inference inefficiency, which requires numerous sampling steps.
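The inefficiency comes from running the denoising network once per sampling step. Below is a minimal, assumed sketch of that iterative loop; the toy `denoiser` and step count are placeholders for illustration only and do not reflect SFDDM's distillation procedure.

```python
# Why diffusion inference is slow: one network pass per denoising step.
# The toy denoiser and update rule are illustrative, not SFDDM's method.
import torch

def denoiser(x, t):
    # Placeholder for a trained noise-prediction network eps_theta(x_t, t).
    return torch.zeros_like(x)

def sample(shape=(1, 3, 32, 32), num_steps=1000):
    x = torch.randn(shape)                 # start from pure Gaussian noise
    for t in reversed(range(num_steps)):   # one forward pass per step
        eps = denoiser(x, t)
        x = x - eps / num_steps            # toy update; a real DDPM uses the
                                           # learned posterior mean plus noise
    return x

# Distillation aims to cut num_steps (e.g., 1000 -> a handful) while keeping
# sample quality, which is the inefficiency this paper targets.
```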

DALLMi: Domain Adaption for LLM-based Multi-label Classifier

1 code implementation • 3 May 2024 • Miruna Beţianu, Abele Mălan, Marco Aldinucci, Robert Birke, Lydia Chen

In this paper, we design DALLMi, Domain Adaptation Large Language Model interpolator, a first-of-its-kind semi-supervised domain adaptation method for text data models based on LLMs, specifically BERT.

Domain Adaptation • Language Modelling • +3
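For orientation, the sketch below shows only the baseline setup the abstract refers to, a BERT-based multi-label classifier built with Hugging Face Transformers; it does not reproduce DALLMi's semi-supervised domain-adaptation interpolation, and the label count and example text are assumptions.

```python
# Baseline only: a BERT multi-label classifier via Hugging Face Transformers.
# DALLMi's interpolation method is not shown; num_labels is an assumption.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=5,                                # illustrative label count
    problem_type="multi_label_classification",   # sigmoid outputs + BCE loss
)

inputs = tokenizer("an example document", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.sigmoid(logits)                    # independent per-label scores
labels = (probs > 0.5).int()                     # multi-label prediction
```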

TabuLa: Harnessing Language Models for Tabular Data Synthesis

3 code implementations • 19 Oct 2023 • Zilong Zhao, Robert Birke, Lydia Chen

Results show that Tabula reduces training time per epoch by 46.2% on average compared to the current state-of-the-art LLM-based algorithm, while consistently achieving even higher synthetic data utility.

Language Modelling

BatMan-CLR: Making Few-shots Meta-Learners Resilient Against Label Noise

no code implementations • 12 Sep 2023 • Jeroen M. Galjaard, Robert Birke, Juan Perez, Lydia Y. Chen

We show that the accuracy of Reptile, iMAML, and foMAML drops by up to 42% on the Omniglot and CifarFS datasets when meta-training is affected by label noise.

Meta-Learning

Model-Agnostic Federated Learning

1 code implementation • 8 Mar 2023 • Gianluca Mittone, Walter Riviera, Iacopo Colonnelli, Robert Birke, Marco Aldinucci

MAFL marries a model-agnostic FL algorithm, AdaBoost.F, with an open industry-grade FL framework: Intel OpenFL.

Federated Learning
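To give a flavour of what "model-agnostic" federated learning means here, the sketch below shows a generic federated ensembling round in which clients fit weak learners locally and the server only aggregates the fitted models. It is emphatically not the AdaBoost.F algorithm nor the Intel OpenFL integration used by MAFL; all names and the voting rule are assumptions.

```python
# Generic model-agnostic federated ensembling: clients train weak learners
# locally, the server combines them by majority vote. NOT AdaBoost.F / OpenFL.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def client_fit(X, y):
    # Any local learner works -- here a depth-1 tree (a stump).
    return DecisionTreeClassifier(max_depth=1).fit(X, y)

def federated_round(client_datasets):
    # The server never sees raw data, only the fitted weak learners.
    return [client_fit(X, y) for X, y in client_datasets]

def ensemble_predict(learners, X):
    votes = np.stack([m.predict(X) for m in learners])  # (n_learners, n_samples)
    return np.apply_along_axis(
        lambda v: np.bincount(v.astype(int)).argmax(), 0, votes
    )

# Toy example with synthetic data split across two clients.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 4)), rng.integers(0, 2, 50)) for _ in range(2)]
learners = federated_round(clients)
preds = ensemble_predict(learners, rng.normal(size=(5, 4)))
```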

Permutation-Invariant Tabular Data Synthesis

no code implementations • 17 Nov 2022 • Yujin Zhu, Zilong Zhao, Robert Birke, Lydia Y. Chen

We show that changing the input column order worsens the statistical difference between real and synthetic data by up to 38.67%, due to the encoding of tabular data and the network architectures.

FCT-GAN: Enhancing Table Synthesis via Fourier Transform

no code implementations • 12 Oct 2022 • Zilong Zhao, Robert Birke, Lydia Y. Chen

Mainstream state-of-the-art tabular data synthesizers draw methodologies from Generative Adversarial Networks (GANs), which are composed of a generator and a discriminator.

Generative Adversarial Network
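The snippet recalls the standard GAN setup of a generator and a discriminator. Below is a minimal generator/discriminator pair as a sketch of that setup; layer sizes are assumed, and this is not FCT-GAN's Fourier-transform-based table synthesizer.

```python
# Minimal generator/discriminator skeleton, as the snippet describes.
# Layer sizes are illustrative; this is not FCT-GAN's architecture.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 8          # assumed sizes for a small tabular row

generator = nn.Sequential(
    nn.Linear(latent_dim, 64), nn.ReLU(),
    nn.Linear(64, data_dim),          # outputs a synthetic row
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 64), nn.ReLU(),
    nn.Linear(64, 1),                 # real-vs-fake logit
)

z = torch.randn(32, latent_dim)       # batch of noise vectors
fake_rows = generator(z)
scores = discriminator(fake_rows)     # discriminator judges the fake batch
```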

CTAB-GAN+: Enhancing Tabular Data Synthesis

2 code implementations • 1 Apr 2022 • Zilong Zhao, Aditya Kunar, Robert Birke, Lydia Y. Chen

We extensively evaluate CTAB-GAN+ on data similarity and analysis utility against state-of-the-art tabular GANs.

Privacy Preserving

Fed-TGAN: Federated Learning Framework for Synthesizing Tabular Data

1 code implementation • 18 Aug 2021 • Zilong Zhao, Robert Birke, Aditya Kunar, Lydia Y. Chen

While learning GANs to synthesize images on FL systems has just been demonstrated, it is unknown whether GANs for tabular data can be learned from decentralized data sources.

Federated Learning • Privacy Preserving

DTGAN: Differential Private Training for Tabular GANs

no code implementations • 6 Jul 2021 • Aditya Kunar, Robert Birke, Zilong Zhao, Lydia Chen

Additionally, we rigorously evaluate the theoretical privacy guarantees offered by DP empirically against membership and attribute inference attacks.

Attribute
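One common way to probe privacy leakage empirically, as the snippet alludes to, is a loss-threshold membership inference attack. The sketch below illustrates that generic check under toy, assumed data; DTGAN's actual evaluation against membership and attribute inference attacks is more involved.

```python
# Loss-threshold membership inference: guess "member" when per-example loss
# is low. Data and threshold are toy assumptions, not DTGAN's evaluation.
import numpy as np

def loss_threshold_mia(member_losses, nonmember_losses, threshold):
    guesses_m = member_losses < threshold
    guesses_n = nonmember_losses < threshold
    tpr = guesses_m.mean()             # members correctly flagged
    fpr = guesses_n.mean()             # non-members wrongly flagged
    return tpr - fpr                   # advantage near 0 suggests little leakage

rng = np.random.default_rng(0)
member_losses = rng.normal(0.4, 0.1, 1000)     # toy: members fit slightly better
nonmember_losses = rng.normal(0.6, 0.1, 1000)
print(loss_threshold_mia(member_losses, nonmember_losses, threshold=0.5))
```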

Enhancing Robustness of On-line Learning Models on Highly Noisy Data

1 code implementation • 19 Mar 2021 • Zilong Zhao, Robert Birke, Rui Han, Bogdan Robu, Sara Bouchenak, Sonia Ben Mokhtar, Lydia Y. Chen

Classification algorithms have been widely adopted to detect anomalies for various systems, e.g., IoT, cloud and face recognition, under the common assumption that the data source is clean, i.e., features and labels are correctly set.

Anomaly Detection • Face Recognition

CTAB-GAN: Effective Table Data Synthesizing

1 code implementation • 16 Feb 2021 • Zilong Zhao, Aditya Kunar, Hiek Van der Scheer, Robert Birke, Lydia Y. Chen

In this paper, we develop CTAB-GAN, a novel conditional table GAN architecture that can effectively model diverse data types, including a mix of continuous and categorical variables.
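The mixed-type modelling problem mentioned here starts with encoding continuous and categorical columns into a single vector a GAN can consume. The sketch below uses plain min-max scaling and one-hot encoding as an assumed baseline (scikit-learn >= 1.2); it is not CTAB-GAN's conditional vector or mode-specific normalization, and the column names are made up.

```python
# Mixed-type encoding baseline: scale continuous columns, one-hot encode
# categorical ones, concatenate. Standard preprocessing, not CTAB-GAN's
# encoder; assumes scikit-learn >= 1.2 for sparse_output.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder

df = pd.DataFrame({
    "age": [23, 45, 31],                 # continuous
    "income": [30_000, 72_000, 54_000],  # continuous
    "job": ["nurse", "clerk", "nurse"],  # categorical
})

encoder = ColumnTransformer([
    ("num", MinMaxScaler(), ["age", "income"]),
    ("cat", OneHotEncoder(sparse_output=False), ["job"]),
])
X = encoder.fit_transform(df)            # dense matrix a GAN generator must mimic
print(X.shape)                           # (3, 2 continuous + 2 one-hot columns)
```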

Robust Learning via Golden Symmetric Loss of (un)Trusted Labels

no code implementations • 1 Jan 2021 • Amirmasoud Ghiassi, Robert Birke, Lydia Y. Chen

In this paper, we propose to construct a golden symmetric loss (GSL) based on the estimated confusion matrix, so as to avoid overfitting to noisy labels and to learn effectively from hard classes.
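To illustrate the general idea of building a loss around an estimated confusion matrix, the sketch below applies the well-known forward loss correction, mixing the model's clean-class probabilities through a noise transition matrix before the cross-entropy. It is not the exact GSL construction; the matrix values and batch are assumptions.

```python
# Loss correction with an estimated confusion matrix T, where
# T[i, j] = P(noisy label j | true label i). Generic forward correction,
# not the golden symmetric loss itself.
import torch
import torch.nn.functional as F

def forward_corrected_ce(logits, noisy_labels, T):
    # Push clean probabilities through T so the model is not trained to
    # fit the label noise directly.
    clean_probs = F.softmax(logits, dim=1)      # (batch, classes)
    noisy_probs = clean_probs @ T               # predicted noisy-label distribution
    return F.nll_loss(torch.log(noisy_probs + 1e-12), noisy_labels)

num_classes = 3
T = torch.full((num_classes, num_classes), 0.1)
T.fill_diagonal_(0.8)                           # assumed 20% symmetric noise
logits = torch.randn(4, num_classes, requires_grad=True)
noisy_labels = torch.tensor([0, 2, 1, 0])
loss = forward_corrected_ce(logits, noisy_labels, T)
loss.backward()
```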

End-to-End Learning from Noisy Crowd to Supervised Machine Learning Models

no code implementations • 13 Nov 2020 • Taraneh Younesian, Chi Hong, Amirmasoud Ghiassi, Robert Birke, Lydia Y. Chen

Furthermore, relabeling only 10% of the data via the expert results in over 90% classification accuracy with SVM.

BIG-bench Machine Learning

TrustNet: Learning from Trusted Data Against (A)symmetric Label Noise

no code implementations • 13 Jul 2020 • Amirmasoud Ghiassi, Taraneh Younesian, Robert Birke, Lydia Y. Chen

Based on these insights, we design TrustNet, which first adversarially learns the pattern of noise corruption, be it symmetric or asymmetric, from a small set of trusted data.
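The estimation step the snippet describes can be pictured as counting how often each trusted (clean) label co-occurs with each noisy label on the trusted subset. The sketch below shows only that counting-based confusion-matrix estimate under toy data; it is not TrustNet's adversarial learning procedure.

```python
# Estimating the noise corruption pattern from a small trusted set:
# count trusted-label vs noisy-label co-occurrences, then row-normalize.
# Illustrative only, not TrustNet itself.
import numpy as np

def estimate_confusion(true_labels, noisy_labels, num_classes):
    C = np.zeros((num_classes, num_classes))
    for t, n in zip(true_labels, noisy_labels):
        C[t, n] += 1
    # C[i, j] approximates P(noisy = j | true = i).
    return C / C.sum(axis=1, keepdims=True)

# Toy trusted subset with asymmetric noise flipping some 0s to 1s.
true_labels  = np.array([0, 0, 0, 0, 1, 1, 2, 2])
noisy_labels = np.array([0, 0, 1, 1, 1, 1, 2, 2])
print(estimate_confusion(true_labels, noisy_labels, num_classes=3))
```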

ExpertNet: Adversarial Learning and Recovery Against Noisy Labels

no code implementations • 10 Jul 2020 • Amirmasoud Ghiassi, Robert Birke, Rui Han, Lydia Y. Chen

Today's available datasets in the wild, e.g., from social media and open platforms, present tremendous opportunities and challenges for deep learning, as there is a significant portion of tagged images, but often with noisy, i.e., erroneous, labels.

Robust classification

QActor: On-line Active Learning for Noisy Labeled Stream Data

no code implementations • 28 Jan 2020 • Taraneh Younesian, Zilong Zhao, Amirmasoud Ghiassi, Robert Birke, Lydia Y. Chen

A central feature of QActor is to dynamically adjust the query limit according to the learning loss for each data batch.

Active Learning
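The snippet's idea of tying the query budget to the per-batch learning loss can be sketched as a simple scaling rule: harder (higher-loss) batches earn more oracle queries. The rule, caps, and numbers below are assumptions for illustration, not QActor's actual policy.

```python
# Loss-driven query budget for active learning on a stream: higher-loss
# batches get more oracle queries. Scaling rule and caps are assumptions.
def query_limit(batch_loss, base_budget=8, max_budget=32, loss_scale=1.0):
    # Scale the budget with the observed batch loss, clipped to [base, max].
    budget = int(base_budget * (1.0 + batch_loss / loss_scale))
    return max(base_budget, min(budget, max_budget))

# Example: an easy batch vs. a hard batch on the stream.
print(query_limit(batch_loss=0.2))   # close to the base budget
print(query_limit(batch_loss=3.5))   # capped at the maximum budget
```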

RAD: On-line Anomaly Detection for Highly Unreliable Data

no code implementations • 11 Nov 2019 • Zilong Zhao, Robert Birke, Rui Han, Bogdan Robu, Sara Bouchenak, Sonia Ben Mokhtar, Lydia Y. Chen

Classification algorithms have been widely adopted to detect anomalies for various systems, e.g., IoT, cloud and face recognition, under the common assumption that the data source is clean, i.e., features and labels are correctly set.

Anomaly Detection • Face Recognition

WiSE-ALE: Wide Sample Estimator for Approximate Latent Embedding

no code implementations • 16 Feb 2019 • Shuyu Lin, Ronald Clark, Robert Birke, Niki Trigoni, Stephen Roberts

Variational Auto-encoders (VAEs) have been very successful as methods for forming compressed latent representations of complex, often high-dimensional, data.
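The compressed latent representation the snippet refers to is produced by a VAE encoder that outputs a Gaussian posterior and samples from it with the reparameterization trick. Below is a minimal sketch of that standard setup; layer sizes are assumed, and this is not the WiSE-ALE estimator.

```python
# Minimal VAE-style encoder: map an input to (mu, log_var) and sample a
# latent via the reparameterization trick. Standard VAE setup, not WiSE-ALE.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, in_dim=784, latent_dim=8):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)
        self.log_var = nn.Linear(128, latent_dim)

    def forward(self, x):
        h = self.body(x)
        mu, log_var = self.mu(h), self.log_var(h)
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)  # reparameterize
        return z, mu, log_var

z, mu, log_var = Encoder()(torch.randn(4, 784))   # 4 inputs -> 4 latent codes
```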

Online Label Aggregation: A Variational Bayesian Approach

no code implementations • 19 Jul 2018 • Chi Hong, Amirmasoud Ghiassi, Yichi Zhou, Robert Birke, Lydia Y. Chen

Our evaluation results on various online scenarios show that BiLA can effectively infer the true labels, with an error rate reduction of at least 10 and 1.5 percentage points for synthetic and real-world datasets, respectively.

Bayesian Inference • Stochastic Optimization
