no code implementations • 21 Mar 2021 • Prashank Kadam, Ruiyang Xu, Karl Lieberherr
This technique is applicable to any MCTS-based algorithm to reduce the number of updates to the tree.
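The abstract snippet above refers to the tree-update (backpropagation) phase of MCTS. As a minimal sketch, assuming a vanilla MCTS with hypothetical names, this is the per-simulation update loop that such a technique would prune:

```python
import math

class Node:
    """A single MCTS tree node (hypothetical minimal structure)."""
    def __init__(self, parent=None):
        self.parent = parent
        self.children = {}
        self.visits = 0
        self.value = 0.0

def backpropagate(node, reward):
    # Standard backpropagation: every ancestor of the simulated leaf is
    # updated once per simulation. A reduced-update variant would skip
    # some of these writes.
    while node is not None:
        node.visits += 1
        node.value += reward
        node = node.parent

def ucb1(parent, child, c=1.4):
    # Selection score; it depends on the visit counts maintained above.
    if child.visits == 0:
        return float("inf")
    exploit = child.value / child.visits
    explore = c * math.sqrt(math.log(parent.visits) / child.visits)
    return exploit + explore
```

This is only an illustration of where in standard MCTS the updates occur, not the paper's specific reduction scheme.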
no code implementations • 17 Jan 2021 • Ruiyang Xu, Karl Lieberherr
After training, an off-the-shelf QSAT solver is used to evaluate the performance of the algorithm.
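QSAT asks whether a fully quantified Boolean formula is true. As a hedged sketch (the paper uses an off-the-shelf solver; this brute-force evaluator and its names are illustrative only), a prenex-CNF QBF instance can be evaluated recursively over the quantifier prefix:

```python
def eval_qbf(prefix, clauses, assignment=None):
    """Brute-force truth evaluation of a prenex-CNF QBF.

    prefix:  list of (quantifier, variable) pairs, quantifier in
             {'exists', 'forall'}, variables as positive ints.
    clauses: CNF as a list of clauses; each literal is a signed int
             (DIMACS-style: -v means "not v").
    """
    assignment = dict(assignment or {})
    if not prefix:
        # All variables bound: check every clause has a satisfied literal.
        return all(
            any(assignment[abs(lit)] == (lit > 0) for lit in clause)
            for clause in clauses
        )
    quantifier, var = prefix[0]
    branches = (
        eval_qbf(prefix[1:], clauses, {**assignment, var: val})
        for val in (False, True)
    )
    # 'exists' needs one witness, 'forall' needs both values to work.
    return any(branches) if quantifier == "exists" else all(branches)
```

For example, the formula ∃x ∀y (x ∨ y) ∧ (x ∨ ¬y) is true (take x = true), while ∀x ∃y (x) ∧ (y) is false (x = false falsifies it). Real QSAT solvers use far more sophisticated search than this exponential recursion.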
no code implementations • 11 Jan 2021 • Ruiyang Xu, Prashank Kadam, Karl Lieberherr
We propose a general framework, Persephone, to map the first-order logic (FOL) description of a combinatorial problem to a semantic game so that it can be solved through a neural MCTS-based reinforcement learning algorithm.
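The semantic game behind such a mapping is the classical Hintikka evaluation game: a Proponent moves at existentials and disjunctions, an Opponent at universals and conjunctions, and the Proponent has a winning strategy exactly when the formula is true. A minimal sketch over a finite structure, with hypothetical formula encoding (not Persephone's actual representation):

```python
def proponent_wins(formula, domain, env):
    """Does the Proponent win the semantic game for `formula`?

    Formulas are tuples: ('atom', pred, [vars]), ('exists', v, body),
    ('forall', v, body), ('or', f1, f2, ...), ('and', f1, f2, ...).
    """
    op = formula[0]
    if op == "atom":
        _, pred, variables = formula
        return pred(*[env[v] for v in variables])
    if op == "exists":
        # Proponent picks a witness from the domain.
        _, var, body = formula
        return any(proponent_wins(body, domain, {**env, var: d}) for d in domain)
    if op == "forall":
        # Opponent picks a challenge element.
        _, var, body = formula
        return all(proponent_wins(body, domain, {**env, var: d}) for d in domain)
    if op == "or":
        return any(proponent_wins(f, domain, env) for f in formula[1:])
    if op == "and":
        return all(proponent_wins(f, domain, env) for f in formula[1:])
    raise ValueError(f"unknown operator: {op}")
```

For instance, over the domain {0, 1, 2}, ∀x ∃y (y > x) is false (no element exceeds 2), while ∃y ∀x (y ≥ x) is true (take y = 2). A neural MCTS player would learn move selection for one side of this game rather than enumerating the domain exhaustively.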
no code implementations • 8 Mar 2019 • Ruiyang Xu, Karl Lieberherr
Recent progress in reinforcement learning (RL) using self-game-play has shown remarkable performance on several board games (e.g., Chess and Go) as well as video games (e.g., Atari games and Dota2).