no code implementations • 15 Dec 2020 • Ruo Ando, Yoshiyasu Takefuji
By doing this, we can distinguish the complexity of 8-puzzles by the number of clauses generated with paramodulation.
no code implementations • 2 Nov 2020 • Ruo Ando, Yoshiyasu Takefuji
Generally, the negation-limited inverter problem is known as a puzzle of constructing an inverter using AND gates, OR gates, and a few inverters.
no code implementations • 21 Aug 2020 • Ruo Ando, Yoshiyasu Takefuji
In the proposed method, we impose a constraint on the recursion algorithm for the depth-first search of the binary tree representation of an LSTM to which batch normalization is applied.
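The constrained recursion described above can be illustrated with a minimal sketch, assuming a plain binary tree and a depth limit; the `Node` class and `max_depth` value are invented for illustration and are not from the paper.

```python
# Hypothetical sketch: depth-limited recursive depth-first search over a
# binary tree, pruning recursion once the depth constraint is exceeded.

class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def dfs_limited(node, max_depth, depth=0):
    """Visit nodes depth-first, but stop recursing beyond max_depth."""
    if node is None or depth > max_depth:
        return []
    visited = [node.value]
    visited += dfs_limited(node.left, max_depth, depth + 1)
    visited += dfs_limited(node.right, max_depth, depth + 1)
    return visited

tree = Node(1, Node(2, Node(4)), Node(3))
print(dfs_limited(tree, max_depth=1))  # → [1, 2, 3]: node 4 is pruned
```

In the paper's setting the tree would be the binary-tree representation of the LSTM rather than plain integers, but the pruning structure is the same.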
no code implementations • EMNLP 2020 • Ikuya Yamada, Akari Asai, Jin Sakuma, Hiroyuki Shindo, Hideaki Takeda, Yoshiyasu Takefuji, Yuji Matsumoto
The embeddings of entities in a large knowledge base (e.g., Wikipedia) are highly beneficial for solving various natural language tasks that involve real world knowledge.
2 code implementations • COLING 2018 • Ikuya Yamada, Hiroyuki Shindo, Yoshiyasu Takefuji
In this paper, we describe TextEnt, a neural network model that learns distributed representations of entities and documents directly from a knowledge base (KB).
Ranked #1 on Entity Typing on Freebase FIGER
no code implementations • 23 Mar 2018 • Ikuya Yamada, Ryuji Tamaki, Hiroyuki Shindo, Yoshiyasu Takefuji
In this chapter, we describe our question answering system, which was the winning system at the Human-Computer Question Answering (HCQA) Competition at the Thirty-first Annual Conference on Neural Information Processing Systems (NIPS).
1 code implementation • TACL 2017 • Ikuya Yamada, Hiroyuki Shindo, Hideaki Takeda, Yoshiyasu Takefuji
Given a text in the KB, we train our proposed model to predict entities that are relevant to the text.
Ranked #2 on Entity Disambiguation on TAC2010
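The entity-prediction idea above can be sketched as ranking candidate entities by the similarity between a text vector and entity embeddings; all names and vectors below are invented for illustration, not taken from the model.

```python
# Illustrative sketch: score candidate entities for a text by the dot
# product of the text vector with each entity embedding, then rank them.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Toy entity embeddings (assumed, not real trained vectors).
entity_vecs = {
    "Tokyo": [0.9, 0.1, 0.0],
    "Kyoto": [0.7, 0.2, 0.1],
    "Apple_Inc.": [0.0, 0.1, 0.9],
}
text_vec = [0.8, 0.2, 0.05]  # toy representation of the input text

ranked = sorted(entity_vecs, key=lambda e: dot(text_vec, entity_vecs[e]),
                reverse=True)
print(ranked[0])  # → Tokyo (highest dot product with the text vector)
```

The actual model learns both sets of vectors jointly from the KB; this sketch only shows the scoring step.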
1 code implementation • CONLL 2016 • Ikuya Yamada, Hiroyuki Shindo, Hideaki Takeda, Yoshiyasu Takefuji
The KB graph model learns the relatedness of entities using the link structure of the KB, whereas the anchor context model aims to align vectors such that similar words and entities occur close to one another in the vector space by leveraging KB anchors and their context words.
Ranked #4 on Entity Disambiguation on TAC2010
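The anchor context model described above relies on treating KB anchors as entity tokens so that a skip-gram model trained on the result places words and entities in one vector space. A hedged sketch of that preprocessing step, assuming a wiki-style `[[Entity|surface]]` anchor format and an `ENTITY/` token convention (both illustrative, not the authors' exact pipeline):

```python
# Sketch: replace [[Entity|surface]]-style KB anchors with entity tokens,
# producing sentences where words and entities co-occur as plain tokens.
import re

def anchors_to_tokens(text):
    """Replace [[Entity|surface]] anchors with ENTITY/... tokens."""
    return re.sub(
        r"\[\[([^|\]]+)\|[^\]]+\]\]",
        lambda m: "ENTITY/" + m.group(1).replace(" ", "_"),
        text,
    )

sentence = "She moved to [[Tokyo|the city]] last [[Spring (season)|spring]]."
print(anchors_to_tokens(sentence))
# → She moved to ENTITY/Tokyo last ENTITY/Spring_(season).
```

Feeding such sentences to an ordinary skip-gram trainer makes entity tokens predict their context words, which is the alignment effect the abstract describes.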