no code implementations • 26 Feb 2024 • Maximilian Böther, Abraham Sebastian, Pranjal Awasthi, Ana Klimovic, Srikumar Ramalingam
In this paper, we relax the requirement of having a central machine for the target subset by proposing a novel distributed bounding algorithm with provable approximation guarantees.
no code implementations • 7 Feb 2024 • Hanna Mazzawi, Pranjal Awasthi, Xavi Gonzalvo, Srikumar Ramalingam
Building upon this framework, we present a novel, architecture agnostic algorithm called "majority kernels", which seamlessly integrates with predominant architectures, including Transformer models.
no code implementations • 17 Dec 2023 • Srikumar Ramalingam, Pranjal Awasthi, Sanjiv Kumar
The success of deep learning hinges on enormous data and large models, which require labor-intensive annotations and heavy computation costs.
3 code implementations • 30 Nov 2023 • Sadeep Jayasumana, Srikumar Ramalingam, Andreas Veit, Daniel Glasner, Ayan Chakrabarti, Sanjiv Kumar
It is an unbiased estimator that does not make any assumptions on the probability distribution of the embeddings and is sample efficient.
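A minimal sketch of what such an unbiased estimator looks like: the standard unbiased squared-MMD estimator with a Gaussian RBF kernel over two embedding samples. The bandwidth value and array shapes here are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def mmd2_unbiased(x, y, sigma=10.0):
    """Unbiased estimator of squared MMD between two embedding samples.

    x: (m, d) array, y: (n, d) array. Uses a Gaussian RBF kernel;
    excluding the diagonal self-similarity terms is what makes the
    estimate unbiased, and no distributional assumption is needed.
    """
    def rbf(a, b):
        d2 = (np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :]
              - 2.0 * a @ b.T)
        return np.exp(-d2 / (2.0 * sigma**2))

    m, n = len(x), len(y)
    kxx, kyy, kxy = rbf(x, x), rbf(y, y), rbf(x, y)
    term_x = (kxx.sum() - np.trace(kxx)) / (m * (m - 1))
    term_y = (kyy.sum() - np.trace(kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * kxy.mean()
```

Samples drawn from the same distribution give an estimate near zero; a shifted distribution gives a clearly positive value.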
no code implementations • 14 Aug 2023 • Sadeep Jayasumana, Daniel Glasner, Srikumar Ramalingam, Andreas Veit, Ayan Chakrabarti, Sanjiv Kumar
Modern text-to-image generation models produce high-quality images that are both photorealistic and faithful to the text prompts.
no code implementations • 28 Jan 2023 • Gui Citovsky, Giulia Desalvo, Sanjiv Kumar, Srikumar Ramalingam, Afshin Rostamizadeh, Yunjuan Wang
In such a setting, an algorithm can sample examples one at a time but, in order to limit overhead costs, is only able to update its state (i.e., further train model weights) once a large enough batch of examples is selected.
no code implementations • 28 Oct 2022 • Arslan Chaudhry, Aditya Krishna Menon, Andreas Veit, Sadeep Jayasumana, Srikumar Ramalingam, Sanjiv Kumar
Towards this, we study two questions: (1) how does the Mixup loss that enforces linearity in the \emph{last} network layer propagate the linearity to the \emph{earlier} layers?
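For reference, the Mixup loss in question trains on convex combinations of example pairs and their labels. A minimal NumPy sketch of the batch construction (the Beta parameter and single shared lambda per batch follow the common recipe; they are illustrative, not this paper's contribution):

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Create a mixup batch: convex combinations of examples and labels.

    x: (b, d) inputs, y: (b, k) one-hot labels. A single lambda drawn
    from Beta(alpha, alpha) is applied to the whole batch, paired with
    a random permutation of that batch.
    """
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(x))
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y + (1.0 - lam) * y[perm]
    return x_mix, y_mix, lam
```

Fitting the network to such batches is what enforces linearity at the output layer; the question above is how far that linearity propagates backwards.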
1 code implementation • 9 Mar 2022 • Xin Yu, Thiago Serra, Srikumar Ramalingam, Shandian Zhe
We propose a tractable heuristic for solving the combinatorial extension of OBS, in which we select weights for simultaneous removal, as well as a systematic update of the remaining weights.
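As background, a single Optimal Brain Surgeon (OBS) step removes the weight whose deletion least increases a local quadratic model of the loss, then updates the survivors to compensate; the combinatorial extension above selects several weights jointly. A sketch of the classical single-weight step (the Hessian here is assumed given; real implementations approximate it):

```python
import numpy as np

def obs_prune_one(w, H):
    """One Optimal Brain Surgeon step on weights w with Hessian H.

    Picks the weight q minimizing the saliency w_q^2 / (2 [H^-1]_qq),
    zeroes it, and applies the compensating update to the rest.
    """
    H_inv = np.linalg.inv(H)
    saliency = w**2 / (2.0 * np.diag(H_inv))
    q = int(np.argmin(saliency))
    # Compensating update for the surviving weights.
    delta = -(w[q] / H_inv[q, q]) * H_inv[:, q]
    w_new = w + delta
    w_new[q] = 0.0
    return w_new, q
```

With an identity Hessian this reduces to magnitude pruning: the smallest-magnitude weight is zeroed and the others are untouched.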
no code implementations • 29 Sep 2021 • Sadeep Jayasumana, Srikumar Ramalingam, Sanjiv Kumar
We investigate the possibility of using the embeddings produced by a lightweight network more effectively with a nonlinear classification layer.
1 code implementation • 29 Sep 2021 • Kieran A Murphy, Varun Jampani, Srikumar Ramalingam, Ameesh Makadia
We propose a novel algorithm that relies on a weak form of supervision where the data is partitioned into sets according to certain \textit{inactive} factors of variation.
no code implementations • 29 Sep 2021 • Srikumar Ramalingam, Daniel Glasner, Kaushal Patel, Raviteja Vemulapalli, Sadeep Jayasumana, Sanjiv Kumar
Deep learning has yielded extraordinary results in vision and natural language processing, but this achievement comes at a cost.
2 code implementations • 10 Jun 2021 • Kieran Murphy, Carlos Esteves, Varun Jampani, Srikumar Ramalingam, Ameesh Makadia
Single image pose estimation is a fundamental problem in many vision and robotics tasks, and existing deep learning approaches suffer from incompletely modeling and handling: (i) uncertainty about the predictions, and (ii) symmetric objects with multiple (sometimes infinitely many) correct poses.
no code implementations • 19 May 2021 • Seungyeon Kim, Daniel Glasner, Srikumar Ramalingam, Cho-Jui Hsieh, Kishore Papineni, Sanjiv Kumar
It is generally believed that robust training of extremely large networks is critical to their success in real-world applications.
no code implementations • 26 Apr 2021 • Srikumar Ramalingam, Daniel Glasner, Kaushal Patel, Raviteja Vemulapalli, Sadeep Jayasumana, Sanjiv Kumar
Deep learning has yielded extraordinary results in vision and natural language processing, but this achievement comes at a cost.
1 code implementation • CVPR 2022 • Kieran A. Murphy, Varun Jampani, Srikumar Ramalingam, Ameesh Makadia
We propose a novel algorithm that utilizes a weak form of supervision where the data is partitioned into sets according to certain inactive (common) factors of variation which are invariant across elements of each set.
1 code implementation • NeurIPS 2021 • Thiago Serra, Xin Yu, Abhinav Kumar, Srikumar Ramalingam
We can compress a rectifier network while exactly preserving its underlying functionality with respect to a given input domain if some of its neurons are stable.
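A neuron is stable on an input domain when its pre-activation sign never changes there. One common way to detect this is interval arithmetic over a box domain, sketched below for a single layer (this is a standard bound-propagation device, not necessarily the paper's exact procedure):

```python
import numpy as np

def stable_relus(W, b, lo, hi):
    """Classify each ReLU of a layer as stably active / stably
    inactive / unstable over the input box [lo, hi].

    Splitting W into positive and negative parts pairs each bound
    with the matching corner of the box.
    """
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    upper = Wp @ hi + Wn @ lo + b
    lower = Wp @ lo + Wn @ hi + b
    status = np.where(lower >= 0, "active",
             np.where(upper <= 0, "inactive", "unstable"))
    return status, lower, upper
```

Stably inactive units can be deleted outright, and stably active units act as identities that can be folded into the next layer, which is what makes the compression exact.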
no code implementations • 1 Jan 2021 • Kieran A Murphy, Varun Jampani, Srikumar Ramalingam, Ameesh Makadia
In this work, we operate in the setting where limited information is known about the data in the form of groupings, or set membership, and the task is to learn representations which isolate the factors of variation that are common across the groupings.
no code implementations • 8 Dec 2020 • Sadeep Jayasumana, Srikumar Ramalingam, Sanjiv Kumar
We propose a kernelized classification layer for deep networks.
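To illustrate the general idea only (the paper's exact parameterization may differ): a kernelized classification layer scores an embedding by its kernel similarity to per-class prototype vectors rather than by a linear inner product with class weight vectors. A minimal RBF version:

```python
import numpy as np

def kernel_logits(emb, prototypes, gamma=1.0):
    """Classification logits from an RBF-kernel layer.

    emb: (b, d) embeddings; prototypes: (k, d) learnable per-class
    vectors. Logit for class j is -gamma * ||emb - prototype_j||^2,
    i.e. the log of a Gaussian kernel up to normalization.
    """
    d2 = (np.sum(emb**2, 1)[:, None] + np.sum(prototypes**2, 1)[None, :]
          - 2.0 * emb @ prototypes.T)
    return -gamma * d2
```

An embedding that coincides with a class prototype gets that class as its argmax, giving the nonlinear decision boundaries a plain linear layer lacks.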
no code implementations • 4 Oct 2020 • Siddhant Ranade, Xin Yu, Shantnu Kakkar, Pedro Miraldo, Srikumar Ramalingam
We propose a novel technique to register sparse 3D scans in the absence of texture.
no code implementations • 1 Jan 2020 • Thiago Serra, Abhinav Kumar, Srikumar Ramalingam
Deep neural networks have been successful in many predictive modeling tasks, such as image and language recognition, where large neural networks are often used to obtain good accuracy.
no code implementations • 13 Jun 2019 • Siddhant Ranade, Xin Yu, Shantnu Kakkar, Pedro Miraldo, Srikumar Ramalingam
In contrast to correspondence-based methods, we take a different viewpoint and formulate the sparse 3D registration problem based on the constraints from the intersection of line segments from adjacent scans.
no code implementations • 27 May 2019 • Abhinav Kumar, Thiago Serra, Srikumar Ramalingam
On the practical side, we show that certain rectified linear units (ReLUs) can be safely removed from a network if they are always active or inactive for any valid input.
no code implementations • CVPR 2019 • Pedro Miraldo, Surojit Saha, Srikumar Ramalingam
3D scan registration is a classical, yet highly useful, problem in the context of 3D sensors such as Kinect and Velodyne.
1 code implementation • CVPR 2020 • G. Dias Pais, Srikumar Ramalingam, Venu Madhav Govindu, Jacinto C. Nascimento, Rama Chellappa, Pedro Miraldo
Given a set of 3D point correspondences, we build a deep neural network to address the following two challenges: (i) classification of the point correspondences into inliers/outliers, and (ii) regression of the motion parameters that align the scans into a common reference frame.
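The regression branch has a classical counterpart: given correspondences with inlier weights, the optimal rigid alignment has a closed form (weighted Procrustes/Kabsch). This sketch is that standard baseline for the alignment step, not the network described above:

```python
import numpy as np

def weighted_kabsch(p, q, w):
    """Rigid transform (R, t) minimizing sum_i w_i ||R p_i + t - q_i||^2.

    p, q: (n, 3) corresponding points; w: (n,) nonnegative weights
    (e.g. predicted inlier scores). Classical weighted Procrustes.
    """
    w = w / w.sum()
    mp = (w[:, None] * p).sum(0)           # weighted centroids
    mq = (w[:, None] * q).sum(0)
    H = (p - mp).T @ (w[:, None] * (q - mq))
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mq - R @ mp
    return R, t
```

With exact correspondences it recovers the ground-truth motion; in the learned pipeline, the classifier's inlier scores play the role of the weights.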
no code implementations • 8 Oct 2018 • Siddhant Ranade, Srikumar Ramalingam
We treat the line segments in the image as part of a graph, similar to the straws-and-connectors game, where the goal is to back-project the line segments into 3D space while ensuring that some of these 3D line segments connect with each other (i.e., truly intersect in 3D space) to form the 3D structure.
no code implementations • ICLR 2019 • Thiago Serra, Srikumar Ramalingam
Our first contribution is a method to sample the activation patterns defined by ReLUs using universal hash functions.
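An activation pattern is the vector of on/off states of all ReLUs for a given input; each pattern corresponds to one linear region of the network. As a naive Monte Carlo stand-in for the hash-based sampler (purely illustrative, and far weaker than universal hashing for rare patterns):

```python
import numpy as np

def sample_activation_patterns(weights, biases, lo, hi,
                               n_samples=2000, seed=0):
    """Collect the ReLU activation patterns a network realizes on the
    box [lo, hi], estimated by uniform random sampling.

    weights/biases: per-layer parameter lists. Each pattern is a tuple
    of 0/1 flags over all hidden units.
    """
    rng = np.random.default_rng(seed)
    patterns = set()
    for _ in range(n_samples):
        x = rng.uniform(lo, hi)
        flags = []
        for W, b in zip(weights, biases):
            z = W @ x + b
            flags.extend((z > 0).astype(int).tolist())
            x = np.maximum(z, 0.0)          # ReLU forward pass
        patterns.add(tuple(flags))
    return patterns
```

A one-layer network with two axis-aligned units on a symmetric box realizes all four sign patterns, i.e. four linear regions.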
3 code implementations • ECCV 2018 • Zhiding Yu, Weiyang Liu, Yang Zou, Chen Feng, Srikumar Ramalingam, B. V. K. Vijaya Kumar, Jan Kautz
Edge detection is among the most fundamental vision problems for its role in perceptual grouping and its wide applications.
1 code implementation • ECCV 2018 • Pedro Miraldo, Tiago Dias, Srikumar Ramalingam
We show that the solution for the case of two points and one line can be formulated as a fourth degree equation.
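A quartic in one unknown has at most four real roots, each yielding a candidate pose to be verified. A minimal solver backend typically solves it in closed form or via the companion matrix; `numpy.roots` uses the latter. The coefficients below are illustrative, not derived from the paper's constraint:

```python
import numpy as np

def quartic_real_roots(c4, c3, c2, c1, c0):
    """Real roots of c4 x^4 + c3 x^3 + c2 x^2 + c1 x + c0, sorted.

    Each real root corresponds to one candidate solution of the
    minimal pose problem.
    """
    roots = np.roots([c4, c3, c2, c1, c0])
    real = roots[np.abs(np.imag(roots)) < 1e-9]
    return np.sort(np.real(real))
```

For example, x^4 - 5x^2 + 4 = (x^2 - 1)(x^2 - 4) gives the four candidates -2, -1, 1, 2.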
1 code implementation • 6 Jul 2018 • Xin Yu, Sagar Chaturvedi, Chen Feng, Yuichi Taguchi, Teng-Yok Lee, Clinton Fernandes, Srikumar Ramalingam
In this paper, we propose VLASE, a framework to use semantic edge features from images to achieve on-road localization.
no code implementations • 17 Jun 2018 • Thiago Serra, Christian Tjandraatmadja, Srikumar Ramalingam
The holy grail of deep learning is to come up with an automatic method to design optimal architectures for different applications.
no code implementations • CVPR 2018 • Pedro Miraldo, Francisco Eiras, Srikumar Ramalingam
Vanishing points and vanishing lines are classical geometrical concepts in perspective cameras with a lineage dating back three centuries.
1 code implementation • CVPR 2018 • Xin Yu, Zhiding Yu, Srikumar Ramalingam
A family of super deep networks, referred to as residual networks or ResNet, achieved record-beating performance in various visual tasks such as image recognition, object detection, and semantic segmentation.
no code implementations • 30 Mar 2018 • Varun Manjunatha, Srikumar Ramalingam, Tim K. Marks, Larry Davis
To accomplish this, we use a submodular set function to model the accuracy achievable on a new task when the features have been learned on a given subset of classes of the source dataset.
no code implementations • 6 Nov 2017 • Thiago Serra, Christian Tjandraatmadja, Srikumar Ramalingam
We investigate the complexity of deep neural networks (DNN) that represent piecewise linear (PWL) functions.
no code implementations • 28 Jun 2017 • Srikumar Ramalingam, Arvind U. Raghunathan, Daniel Nikovski
We show that this objective function is submodular.
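Submodularity matters because the plain greedy algorithm then carries a (1 - 1/e) approximation guarantee for monotone objectives under a cardinality constraint. A generic sketch (the coverage function in the example is illustrative, not the paper's objective):

```python
def greedy_submodular(f, ground, k):
    """Greedy maximization of a monotone submodular set function f
    over elements of `ground`, subject to |S| <= k.

    At each step, add the element with the largest marginal gain;
    stop early if no element improves the objective.
    """
    S = set()
    for _ in range(k):
        best = max((e for e in ground if e not in S),
                   key=lambda e: f(S | {e}) - f(S))
        if f(S | {best}) - f(S) <= 0:
            break
        S.add(best)
    return S
```

With a coverage objective f(S) = |union of the sets chosen in S|, greedy first takes the largest set, then the one covering the most new elements.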
11 code implementations • CVPR 2017 • Zhiding Yu, Chen Feng, Ming-Yu Liu, Srikumar Ramalingam
To this end, we propose a novel end-to-end deep semantic edge learning architecture based on ResNet and a new skip-layer architecture where category-wise edge activations at the top convolution layer share and are fused with the same set of bottom layer features.
Ranked #1 on Edge Detection on Cityscapes test
no code implementations • 3 Nov 2015 • Sudeep Pillai, Srikumar Ramalingam, John J. Leonard
Traditional stereo algorithms have focused their efforts on reconstruction quality and have largely avoided prioritizing for run time performance.
no code implementations • 15 Jun 2015 • Ming-Yu Liu, Shuoxin Lin, Srikumar Ramalingam, Oncel Tuzel
We propose a layered street view model to encode both depth and semantic information on street view images for autonomous driving.
no code implementations • CVPR 2015 • Srikumar Ramalingam, Michel Antunes, Dan Snow, Gim Hee Lee, Sudeep Pillai
We propose a simple and useful idea based on cross-ratio constraint for wide-baseline matching and 3D reconstruction.
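The cross-ratio of four collinear points is invariant under any projective transformation of the line, which is what makes it usable as a wide-baseline matching constraint. A minimal sketch in 1D coordinates:

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross-ratio (a, b; c, d) of four collinear points given by
    1D coordinates: ((a-c)(b-d)) / ((a-d)(b-c))."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

def projective_map(x, M):
    """Apply a 1D homography x -> (m00 x + m01) / (m10 x + m11)."""
    return (M[0][0] * x + M[0][1]) / (M[1][0] * x + M[1][1])
```

Mapping all four points through the same homography leaves the cross-ratio unchanged, so matched quadruples across two views must agree on it.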
no code implementations • CVPR 2013 • Srikumar Ramalingam, Jaishanker K. Pillai, Arpit Jain, Yuichi Taguchi
In this paper, we consider the problem of detecting junctions and using them for recovering the spatial layout of an indoor scene.
no code implementations • CVPR 2013 • Amit Agrawal, Srikumar Ramalingam
We describe such setups as multi-axial imaging systems, since a single sphere results in an axial system.
no code implementations • 11 Sep 2011 • Srikumar Ramalingam, Chris Russell, Lubor Ladicky, Philip H. S. Torr
The best known running time for general submodular function minimization is $O(n^3 \log^2 n \cdot E + n^4 \log^{O(1)} n)$, where $E$ is the time required to evaluate the function and $n$ is the number of variables \cite{Lee2015}.