Data Compression
94 papers with code • 0 benchmarks • 0 datasets
Most implemented papers
XGBoost: A Scalable Tree Boosting System
In this paper, we describe a scalable end-to-end tree boosting system called XGBoost, which is used widely by data scientists to achieve state-of-the-art results on many machine learning challenges.
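As a quick illustration of the system described here, the following is a minimal sketch of fitting a gradient-boosted tree model with the xgboost Python package; the synthetic data and hyperparameter values are illustrative, not taken from the paper.

```python
# Minimal sketch: training a gradient-boosted tree ensemble with xgboost.
# Data and hyperparameters are illustrative only.
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))                        # 1000 samples, 10 features
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=1000)   # noisy target

model = xgb.XGBRegressor(
    n_estimators=200,   # number of boosting rounds
    max_depth=4,        # depth of each tree
    learning_rate=0.1,  # shrinkage applied to each tree's contribution
)
model.fit(X, y)
print(model.predict(X[:5]))
```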
DNABERT-2: Efficient Foundation Model and Benchmark For Multi-Species Genome
Decoding the linguistic intricacies of the genome is a crucial problem in biology, and pre-trained foundational models such as DNABERT and Nucleotide Transformer have made significant strides in this area.
Efficient Manifold and Subspace Approximations with Spherelets
There is a rich literature on approximating the unknown manifold, and on exploiting such approximations in clustering, data compression, and prediction.
Transformer-based Transform Coding
Neural data compression based on nonlinear transform coding has made great progress over the last few years, mainly due to improvements in prior models, quantization methods and nonlinear transforms.
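For context, the generic transform-coding pipeline the paper builds on is: analysis transform, quantization, entropy coding, synthesis transform. The sketch below uses a fixed orthogonal linear transform as a stand-in for the paper's learned transformer-based transforms, and skips the entropy model, purely to keep the example self-contained.

```python
# Sketch of transform coding: analysis transform -> quantization -> synthesis.
# A fixed random orthogonal matrix stands in for the learned nonlinear
# transforms of a real neural codec.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=8)

A, _ = np.linalg.qr(rng.normal(size=(8, 8)))  # analysis transform (stand-in)
latent = A @ x

delta = 0.5
symbols = np.round(latent / delta)            # quantized integers to entropy-code

x_hat = A.T @ (symbols * delta)               # synthesis: invert the transform
print("distortion (MSE):", np.mean((x - x_hat) ** 2))
```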
Norm-Explicit Quantization: Improving Vector Quantization for Maximum Inner Product Search
In this paper, we present a new angle to analyze the quantization error, which decomposes the quantization error into norm error and direction error.
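The decomposition itself is easy to state: split a vector into its norm and its unit direction, and quantize the two parts separately. The sketch below illustrates that split; the coarse rounding used for each part is a stand-in for the paper's actual quantizers.

```python
# Sketch of the norm/direction decomposition: x = ||x|| * (x / ||x||).
# Rounding is a stand-in for the paper's norm and direction quantizers.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=16)

norm = np.linalg.norm(x)
direction = x / norm                           # unit vector

norm_q = np.round(norm / 0.25) * 0.25          # quantize the norm explicitly
direction_q = np.round(direction * 4) / 4      # quantize the direction
direction_q /= np.linalg.norm(direction_q)     # re-normalize to the unit sphere

x_hat = norm_q * direction_q
print("norm error:", abs(norm - norm_q))
print("direction error:", np.linalg.norm(direction - direction_q))
```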
ReduNet: A White-box Deep Network from the Principle of Maximizing Rate Reduction
This work attempts to provide a plausible theoretical framework that aims to interpret modern deep (convolutional) networks from the principles of data compression and discriminative representation.
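The rate-reduction framework rests on a lossy coding-rate estimate for a set of features; assuming the standard form R(Z, eps) = 1/2 logdet(I + d/(m eps^2) Z Z^T) for d-dimensional features Z with m samples as columns, a minimal sketch is:

```python
# Sketch of the lossy coding-rate estimate behind rate reduction.
# Feature dimensions and eps are illustrative.
import numpy as np

def coding_rate(Z, eps=0.5):
    d, m = Z.shape
    # slogdet returns (sign, log|det|); take the log-determinant.
    return 0.5 * np.linalg.slogdet(np.eye(d) + (d / (m * eps**2)) * Z @ Z.T)[1]

rng = np.random.default_rng(0)
Z = rng.normal(size=(8, 100))   # 8-dim features, 100 samples as columns
print("coding rate:", coding_rate(Z))
```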
Supervised Compression for Resource-Constrained Edge Computing Systems
There has been much interest in deploying deep learning algorithms on low-powered devices, including smartphones, drones, and medical sensors.
Towards Empirical Sandwich Bounds on the Rate-Distortion Function
By contrast, this paper makes the first attempt at an algorithm for sandwiching the R-D function of a general (not necessarily discrete) source, requiring only i.i.d. data samples.
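For reference, the rate-distortion function being sandwiched is the standard information-theoretic quantity:

```latex
% Rate-distortion function of a source X under distortion measure d:
% the minimum achievable rate at expected distortion at most D.
R(D) = \min_{p(\hat{x} \mid x) \,:\, \mathbb{E}[d(X, \hat{X})] \le D} I(X; \hat{X})
```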
BottleFit: Learning Compressed Representations in Deep Neural Networks for Effective and Efficient Split Computing
We show that BottleFit decreases power consumption and latency by up to 49% and 89%, respectively, with respect to (w.r.t.) local computing.
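A minimal sketch of the split-computing idea this paper (and the supervised-compression work above) builds on: an early bottleneck layer produces a compressed representation on the device, and the remaining layers run on the server. The PyTorch model and layer sizes below are illustrative, not BottleFit's architecture.

```python
# Sketch of split computing with a learned bottleneck: a small head runs on
# the device and transmits a narrow tensor; the tail runs on the server.
# Layer sizes are illustrative only.
import torch
import torch.nn as nn

device_head = nn.Sequential(   # runs on the mobile device
    nn.Linear(512, 64),        # bottleneck: only 64 values are transmitted
    nn.ReLU(),
)
server_tail = nn.Sequential(   # runs on the edge server
    nn.Linear(64, 256),
    nn.ReLU(),
    nn.Linear(256, 10),        # e.g. a 10-class classifier head
)

x = torch.randn(1, 512)        # stand-in input features
compressed = device_head(x)    # tensor sent over the network (1 x 64)
logits = server_tail(compressed)
print(compressed.shape, logits.shape)
```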
An Introduction to Neural Data Compression
Neural compression is the application of neural networks and other machine learning methods to data compression.
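The link between the two fields is that a probabilistic model directly yields a code length: under an ideal entropy coder, a symbol with model probability p costs about -log2 p bits, so total code length equals the model's negative log-likelihood. A minimal sketch of that accounting, using a simple histogram model as a stand-in for a learned neural model:

```python
# Sketch of the modeling/compression link: an ideal entropy coder spends
# about -log2 p(x) bits per symbol under model p. The empirical histogram
# here is a stand-in for a learned probability model.
import numpy as np

data = np.array([0, 0, 1, 0, 2, 0, 1, 0])   # toy symbol stream
counts = np.bincount(data)
probs = counts / counts.sum()                # empirical model p(x)

bits = -np.log2(probs[data]).sum()           # ideal total code length
print(f"{bits:.2f} bits for {data.size} symbols")
```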