no code implementations • 19 Nov 2023 • Abdalgader Abubaker, Takanori Maehara, Madhav Nimishakavi, Vassilis Plachouras
SPHH consists of two self-supervised pretraining tasks that aim to simultaneously learn both local and global representations of the entities in the hypergraph, using informative representations derived from the hypergraph structure.
no code implementations • NeurIPS 2021 • Takanori Maehara, Hoang NT
Theoretical analyses for graph learning methods often assume a complete observation of the input graph.
no code implementations • 24 Feb 2021 • Kenshin Abe, Takanori Maehara, Issei Sato
We study the problem of modeling a binary operation that satisfies some algebraic requirements.
no code implementations • 1 Jan 2021 • Hoang NT, Takanori Maehara, Tsuyoshi Murata
We study Graph Convolutional Networks (GCN) from the graph signal processing viewpoint by addressing a difference between learning graph filters with fully-connected weights versus trainable polynomial coefficients.
1 code implementation • 22 Nov 2020 • Hoang NT, Takanori Maehara, Tsuyoshi Murata
We study Graph Convolutional Networks (GCN) from the graph signal processing viewpoint by addressing a difference between learning graph filters with fully connected weights versus trainable polynomial coefficients.
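To make the contrast concrete, here is a minimal sketch (not the paper's code; the graph, coefficients, and function names are illustrative assumptions) of a graph filter parameterized by trainable polynomial coefficients of the normalized Laplacian, as opposed to a dense fully-connected weight matrix:

```python
# Hedged sketch: a graph filter whose response is a learned polynomial
# sum_k c_k * L^k of the symmetric normalized Laplacian L, applied to a signal x.
import numpy as np

def normalized_laplacian(A):
    """Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def polynomial_filter(A, x, coeffs):
    """Apply sum_k coeffs[k] * L^k to x; coeffs play the role of the trainable scalars."""
    L = normalized_laplacian(A)
    out = np.zeros_like(x)
    power = x.copy()
    for c in coeffs:
        out += c * power
        power = L @ power
    return out

# Toy graph: a 4-cycle. The constant signal lies in the nullspace of L,
# so any polynomial with coeffs[0] = 1 leaves it unchanged.
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
x = np.ones(4)
y = polynomial_filter(A, x, coeffs=[1.0, -0.5])
```

The polynomial parameterization has only `len(coeffs)` trainable scalars and commutes with the Laplacian's eigenbasis, whereas a fully-connected weight matrix has no such spectral structure.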
1 code implementation • ICML 2020 • Hoang NT, Takanori Maehara
In this paper, we study the graph classification problem from the graph homomorphism perspective.
no code implementations • 29 Feb 2020 • Akihiro Yabe, Takanori Maehara
Data-driven decision-making is performed by solving a parameterized optimization problem, and the optimal decision is given by an optimal solution for unknown true parameters.
1 code implementation • 28 Feb 2020 • Yoichi Sasaki, Kosuke Akimoto, Takanori Maehara
Neural networks trained on large amounts of text data have been successfully applied to a variety of tasks.
no code implementations • 9 Oct 2019 • Takanori Maehara, Hoang NT
We present a simple proof for the universality of invariant and equivariant tensorized graph neural networks.
no code implementations • 25 Sep 2019 • Hoang NT, Takanori Maehara
In this work, we develop quantitative results on the learnability of a two-layer Graph Convolutional Network (GCN).
no code implementations • 4 Sep 2019 • Akihiro Yabe, Takanori Maehara
Selecting appropriate regularization coefficients is critical to the performance of regularized empirical risk minimization.
1 code implementation • NeurIPS 2019 • Satoshi Hara, Atsushi Nitanda, Takanori Maehara
Data cleansing is a typical approach used to improve the accuracy of machine learning models, which, however, requires extensive domain knowledge to identify the influential instances that affect the models.
2 code implementations • 23 May 2019 • Hoang NT, Takanori Maehara
However, we find that the feature vectors of benchmark datasets are already quite informative for the classification task, and the graph structure only provides a means to denoise the data.
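The "graph as denoiser" reading can be sketched as follows (an assumed setup, not the paper's code): smooth the node features with a few steps of the augmented normalized adjacency, then hand the smoothed features to any plain classifier.

```python
# Hedged sketch: low-pass filtering of node features with
# S = D~^{-1/2} (A + I) D~^{-1/2}, the augmented normalized adjacency.
import numpy as np

def low_pass_features(A, X, k=2):
    """Propagate features k times; each step mixes a node's features
    with its neighbors', acting as a low-pass (denoising) filter."""
    A_tilde = A + np.eye(len(A))
    d = A_tilde.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(d)
    S = d_inv_sqrt[:, None] * A_tilde * d_inv_sqrt[None, :]
    for _ in range(k):
        X = S @ X
    return X

# Toy example: two connected nodes; one smoothing step averages their features.
A = np.array([[0.0, 1.0], [1.0, 0.0]])
X = np.array([[1.0], [3.0]])
X_smooth = low_pass_features(A, X, k=1)
```

The smoothed features could then be fed to, e.g., logistic regression, with no graph-aware layers in the classifier itself.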
2 code implementations • 24 Jan 2019 • Kazuto Fukuchi, Satoshi Hara, Takanori Maehara
The focus of this study is to raise an awareness of the risk of malicious decision-makers who fake fairness by abusing the auditing tools and thereby deceiving the social communities.
no code implementations • AKBC 2019 • Mohammed Alsuhaibani, Takanori Maehara, Danushka Bollegala
To learn the word embeddings, the proposed method considers not only the hypernym relations that exist between words in a taxonomy, but also their contextual information in a large text corpus.
1 code implementation • 14 Oct 2018 • Satoshi Hara, Takanori Maehara
To this end, we formulate the problem as finding a small number of solutions such that the convex hull of these solutions approximates the set of nearly optimal solutions.
no code implementations • 11 Oct 2018 • Junpei Komiyama, Takanori Maehara
Statistical hypothesis testing serves as statistical evidence for scientific innovation.
no code implementations • 27 Sep 2018 • Satoshi Hara, Koichi Ikeno, Tasuku Soma, Takanori Maehara
In this study, we formalize the feature attribution problem as a feature selection problem.
1 code implementation • 19 Jun 2018 • Satoshi Hara, Kouichi Ikeno, Tasuku Soma, Takanori Maehara
In an adversarial example, one seeks the smallest data perturbation that changes the model's output.
no code implementations • 14 Apr 2018 • Danushka Bollegala, Vincent Atanasov, Takanori Maehara, Ken-ichi Kawarabayashi
We propose \emph{ClassiNet} -- a network of classifiers trained for predicting missing features in a given instance, to overcome the feature sparseness problem.
no code implementations • ICML 2018 • Tatsunori Taniai, Takanori Maehara
We present a novel convolutional neural network architecture for photometric stereo (Woodham, 1980), a problem of recovering 3D object surface normals from multiple images observed under varying illuminations.
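For context, the classical Lambertian baseline from Woodham (1980), which the paper's CNN generalizes beyond, reduces to a per-pixel least-squares problem; a minimal sketch (function and variable names are illustrative, not from the paper):

```python
# Hedged sketch of classical Lambertian photometric stereo: with known light
# directions L (m x 3) and observed intensities I (m x p for p pixels),
# solve L @ (albedo * normal) = I per pixel in the least-squares sense.
import numpy as np

def lambertian_normals(light_dirs, intensities):
    """Returns unit normals (p, 3) and albedos (p,) for p pixels."""
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)  # g: (3, p)
    albedo = np.linalg.norm(g, axis=0)
    normals = (g / np.maximum(albedo, 1e-12)).T
    return normals, albedo

# Toy example: one pixel with normal (0, 0, 1) and albedo 1, observed under
# three orthogonal lights; only the z-direction light produces intensity.
lights = np.eye(3)
I = np.array([[0.0], [0.0], [1.0]])
n, rho = lambertian_normals(lights, I)
```

The CNN approach targets precisely what this closed form cannot handle: non-Lambertian reflectances, where intensity is no longer linear in the normal.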
no code implementations • NeurIPS 2017 • Masaaki Imaizumi, Takanori Maehara, Kohei Hayashi
Tensor train (TT) decomposition provides a space-efficient representation for higher-order tensors.
no code implementations • 1 Aug 2017 • Masaaki Imaizumi, Takanori Maehara, Kohei Hayashi
Tensor train (TT) decomposition provides a space-efficient representation for higher-order tensors.
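To illustrate why TT is space-efficient, here is a hedged sketch of TT-SVD, the standard sequential-SVD construction of a tensor train (not the paper's method; names and tolerances are illustrative). Each mode is stored as a small core of shape (r_{k-1}, n_k, r_k), so memory is O(d n r^2) rather than O(n^d):

```python
# Hedged sketch: TT-SVD builds the cores by repeatedly unfolding and
# truncating an SVD; contracting the cores back recovers the tensor.
import numpy as np

def tt_svd(T, eps=1e-10):
    """Decompose T into TT cores G_k of shape (r_{k-1}, n_k, r_k)."""
    dims = T.shape
    d = len(dims)
    cores, r = [], 1
    M = T.reshape(r * dims[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rank = max(1, int((s > eps * s[0]).sum()))  # truncate tiny singular values
        cores.append(U[:, :rank].reshape(r, dims[k], rank))
        M = s[:rank, None] * Vt[:rank]
        r = rank
        if k < d - 2:
            M = M.reshape(r * dims[k + 1], -1)
    cores.append(M.reshape(r, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the cores back into a full tensor."""
    out = cores[0]
    for G in cores[1:]:
        out = np.tensordot(out, G, axes=([-1], [0]))
    return out.reshape([G.shape[1] for G in cores])

np.random.seed(0)
T = np.random.rand(3, 4, 5)
cores = tt_svd(T)
```

With no truncation the reconstruction is exact up to floating-point error; lowering the TT ranks trades accuracy for the compact representation.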
1 code implementation • 18 Nov 2016 • Satoshi Hara, Takanori Maehara
We propose a method for finding alternate features missing in the Lasso optimal solution.
1 code implementation • 19 Nov 2015 • Danushka Bollegala, Alsuhaibani Mohammed, Takanori Maehara, Ken-ichi Kawarabayashi
For this purpose, we propose a joint word representation learning method that simultaneously predicts the co-occurrences of two words in a sentence subject to the relational constraints given by the semantic lexicon.
no code implementations • IJCNLP 2015 • Danushka Bollegala, Takanori Maehara, Ken-ichi Kawarabayashi
Given a pair of \emph{source}-\emph{target} domains, we propose an unsupervised method for learning domain-specific word representations that accurately capture the domain-specific aspects of word semantics.
no code implementations • 1 May 2015 • Danushka Bollegala, Takanori Maehara, Ken-ichi Kawarabayashi
We propose an unsupervised method for learning vector representations for words such that the learnt representations are sensitive to the semantic relations that exist between two words.
no code implementations • 7 Dec 2014 • Danushka Bollegala, Takanori Maehara, Yuichi Yoshida, Ken-ichi Kawarabayashi
To evaluate the accuracy of the word representations learnt using the proposed method, we use the learnt word representations to solve semantic word analogy problems.
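The standard analogy evaluation answers "a is to b as c is to ?" by a nearest-neighbor search around the offset vector b − a + c; a toy sketch with hypothetical embedding values chosen so the offset structure holds exactly:

```python
# Hedged sketch of word-analogy evaluation by cosine similarity.
import numpy as np

def solve_analogy(emb, a, b, c):
    """Return the word (other than a, b, c) closest to emb[b] - emb[a] + emb[c]."""
    query = emb[b] - emb[a] + emb[c]
    best, best_sim = None, -np.inf
    for w, v in emb.items():
        if w in (a, b, c):
            continue
        sim = v @ query / (np.linalg.norm(v) * np.linalg.norm(query))
        if sim > best_sim:
            best, best_sim = w, sim
    return best

# Hypothetical 2-d embeddings where gender and royalty are orthogonal offsets.
emb = {
    "man":   np.array([1.0, 0.0]),
    "woman": np.array([1.0, 1.0]),
    "king":  np.array([3.0, 0.0]),
    "queen": np.array([3.0, 1.0]),
}
answer = solve_analogy(emb, "man", "woman", "king")
```

Benchmark suites score the fraction of analogy questions for which the nearest neighbor is the expected word.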