Inductive Bias

563 papers with code • 0 benchmarks • 0 datasets

Inductive bias is the set of assumptions a learning algorithm encodes about the target function before seeing any data — for example, the locality and translation equivariance built into convolutions, or the permutation invariance of graph networks. The papers below study, design, or exploit such biases.

Most implemented papers

Prototypical Networks for Few-shot Learning

jakesnell/prototypical-networks NeurIPS 2017

We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class.
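The core of the method is simple to state: embed the support examples, average each class's embeddings into a prototype, and label each query by its nearest prototype. A minimal NumPy sketch (the embedding network is omitted; inputs are assumed already embedded):

```python
import numpy as np

def prototypes(support, labels, n_classes):
    # class prototype = the mean embedding of that class's support examples
    return np.stack([support[labels == c].mean(axis=0) for c in range(n_classes)])

def classify(queries, protos):
    # label each query by its nearest prototype in Euclidean distance
    d2 = ((queries[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)

# toy 2-way, 3-shot episode with well-separated clusters
support = np.array([[0.0, 0.1], [0.1, 0.0], [-0.1, 0.0],   # class 0
                    [5.0, 5.1], [5.1, 5.0], [4.9, 5.0]])   # class 1
labels = np.array([0, 0, 0, 1, 1, 1])
protos = prototypes(support, labels, 2)
preds = classify(np.array([[0.2, 0.2], [4.8, 4.8]]), protos)
```

Averaging into a single prototype per class is itself the inductive bias: it assumes each class forms one cluster around a mean in embedding space.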

Relational inductive biases, deep learning, and graph networks

deepmind/graph_nets 4 Jun 2018

As a companion to this paper, we have released an open-source software library for building graph networks, with demonstrations of how to use them in practice.
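The relational inductive bias the paper argues for can be shown with a generic message-passing step (a sketch only — not the graph_nets API, which is TensorFlow/Sonnet-based): a node's update depends only on its own state and messages from its neighbours, so the same parameters apply to graphs of any size or topology.

```python
import numpy as np

def message_passing_step(h, edges, W_msg, W_upd):
    """One round of neural message passing over a directed graph.
    h: (n_nodes, d) node features; edges: iterable of (src, dst) pairs."""
    msgs = np.zeros_like(h)
    for src, dst in edges:
        msgs[dst] += h[src] @ W_msg      # accumulate messages along edges
    return np.tanh(h @ W_upd + msgs)     # shared per-node update function

rng = np.random.default_rng(0)
h = rng.standard_normal((4, 8))
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # a 4-cycle
W_msg = 0.1 * rng.standard_normal((8, 8))
W_upd = 0.1 * rng.standard_normal((8, 8))
h_next = message_passing_step(h, edges, W_msg, W_upd)
```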

How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers

rwightman/pytorch-image-models 18 Jun 2021

Vision Transformers (ViT) have been shown to attain highly competitive performance for a wide range of vision applications, such as image classification, object detection and semantic image segmentation.

Deep Image Prior

DmitryUlyanov/deep-image-prior CVPR 2018

In this paper, we show that, on the contrary, the structure of a generator network is sufficient to capture a great deal of low-level image statistics prior to any learning.
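A toy 1-D illustration of the idea, with a fixed Gaussian blur standing in for the convolutional generator (an assumption for brevity, not the paper's architecture): gradient descent on a reconstruction loss fits the smooth, low-frequency structure of a noisy target long before it fits the noise, so stopping early denoises — the structure of the "generator" alone acts as the prior.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
t = np.linspace(0, 4 * np.pi, n)
clean = np.sin(t)
noisy = clean + 0.3 * rng.standard_normal(n)

# fixed smoothing operator: a stand-in for the conv generator's structure
x = np.arange(-10, 11)
kernel = np.exp(-0.5 * (x / 3.0) ** 2)
kernel /= kernel.sum()

def g(theta):
    return np.convolve(theta, kernel, mode="same")

theta = np.zeros(n)
for _ in range(50):                # early stopping: only a few steps
    resid = g(theta) - noisy
    theta -= 1.0 * g(resid)        # grad of 0.5*||g(theta) - noisy||^2 (kernel is symmetric)

out = g(theta)
mse_out = np.mean((out - clean) ** 2)
mse_noisy = np.mean((noisy - clean) ** 2)
```

Because the blur attenuates high frequencies, gradient descent fits each frequency at a rate set by the operator's response there; after 50 steps the sinusoid is reproduced while most of the noise is not, so `mse_out` comes out well below `mse_noisy`.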

CoAtNet: Marrying Convolution and Attention for All Data Sizes

rwightman/pytorch-image-models NeurIPS 2021

Transformers have attracted increasing interest in computer vision, but they still fall behind state-of-the-art convolutional networks.

Video Swin Transformer

SwinTransformer/Video-Swin-Transformer CVPR 2022

The vision community is witnessing a modeling shift from CNNs to Transformers, where pure Transformer architectures have attained top accuracy on the major video recognition benchmarks.

Taming Transformers for High-Resolution Image Synthesis

CompVis/taming-transformers CVPR 2021

We demonstrate how combining the effectiveness of the inductive bias of CNNs with the expressivity of transformers enables them to model and thereby synthesize high-resolution images.
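The bridge between the two is a learned discrete codebook: a CNN encoder compresses the image into a grid of feature vectors, each vector is snapped to its nearest codebook entry, and the transformer then models the sequence of code indices. The quantization step, as a generic NumPy sketch (not the CompVis implementation):

```python
import numpy as np

def quantize(z, codebook):
    """Snap each feature vector in z (m, d) to its nearest entry of the
    codebook (K, d); returns the quantized vectors and their indices."""
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d2.argmin(axis=1)
    return codebook[idx], idx

codebook = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, 2.0]])
z = np.array([[0.1, -0.1], [0.9, 1.2]])
zq, idx = quantize(z, codebook)
```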

Inductive Relation Prediction by Subgraph Reasoning

kkteru/grail ICML 2020

The dominant paradigm for relation prediction in knowledge graphs involves learning and operating on latent representations (i.e., embeddings) of entities and relations.

ConViT: Improving Vision Transformers with Soft Convolutional Inductive Biases

facebookresearch/convit 19 Mar 2021

We initialise the GPSA layers to mimic the locality of convolutional layers, then give each attention head the freedom to escape locality by adjusting a gating parameter regulating the attention paid to position versus content information.
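For a single head, the gating described above amounts to a convex combination of a content-based attention map and a fixed positional one, controlled by a learned scalar λ. A NumPy sketch (shapes and the positional-score parametrization are simplifying assumptions):

```python
import numpy as np

def softmax(s):
    e = np.exp(s - s.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def gpsa_head(x, Wq, Wk, Wv, pos_scores, lam):
    """Gated positional self-attention, one head.
    pos_scores: (n, n) fixed logits, peaked on nearby patches so the
    positional map mimics a convolution's locality.
    lam: learned gate; sigmoid(lam) -> 1 gives purely local attention,
    sigmoid(lam) -> 0 gives ordinary content-based attention."""
    n, d = x.shape
    content = softmax((x @ Wq) @ (x @ Wk).T / np.sqrt(d))
    positional = softmax(pos_scores)
    gate = 1.0 / (1.0 + np.exp(-lam))                    # sigmoid(lam)
    attn = (1.0 - gate) * content + gate * positional    # the gated mix
    return attn @ (x @ Wv)

rng = np.random.default_rng(0)
n, d = 5, 4
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
xs = rng.standard_normal((n, d))
# positional logits favouring nearby tokens (1-D layout for simplicity)
pos = -np.abs(np.arange(n)[:, None] - np.arange(n)[None, :]).astype(float)
out_local = gpsa_head(xs, Wq, Wk, Wv, pos, lam=10.0)    # gate ~ 1: conv-like
out_content = gpsa_head(xs, Wq, Wk, Wv, pos, lam=-10.0)  # gate ~ 0: standard
```

Initializing λ large (gate near 1) recovers the convolutional locality described in the abstract; training can then lower each head's gate independently to escape it.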

Universal Transformers

tensorflow/tensor2tensor ICLR 2019

Feed-forward and convolutional architectures have recently been shown to achieve superior results on some sequence modeling tasks such as machine translation, with the added advantage that they concurrently process all inputs in the sequence, leading to easy parallelization and faster training times.
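Universal Transformers reintroduce recurrence, but along depth rather than along the sequence: one block with shared weights is applied repeatedly to every position (the full model adds self-attention and adaptive halting). The weight-sharing recurrence, stripped to a minimal sketch with a residual MLP standing in for the full block:

```python
import numpy as np

def transition(h, W1, b1, W2, b2):
    # one shared block: a residual two-layer MLP stands in for the paper's
    # self-attention + transition step
    return h + np.maximum(h @ W1 + b1, 0.0) @ W2 + b2

def universal_transformer(h, params, n_steps):
    # recurrence in depth: the SAME parameters are reused at every step,
    # unlike a standard Transformer's separately parameterized layers
    for _ in range(n_steps):
        h = transition(h, *params)
    return h

rng = np.random.default_rng(0)
d, d_ff = 8, 16
params = (0.1 * rng.standard_normal((d, d_ff)), np.zeros(d_ff),
          0.1 * rng.standard_normal((d_ff, d)), np.zeros(d))
h0 = rng.standard_normal((3, d))
h6 = universal_transformer(h0, params, n_steps=6)
```

Tying the weights across depth is the added inductive bias: it makes the depth dimension a learned iterative computation, which the paper argues helps on algorithmic and generalization tasks where fixed-depth stacks struggle.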