Search Results for author: Karl Lieberherr

Found 4 papers, 0 papers with code

Dual Monte Carlo Tree Search

no code implementations · 21 Mar 2021 · Prashank Kadam, Ruiyang Xu, Karl Lieberherr

This technique is applicable to any MCTS-based algorithm to reduce the number of updates to the tree.
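
To make the phrase "updates to the tree" concrete, here is a minimal sketch of the standard MCTS backpropagation step that such work aims to reduce. This is not the paper's Dual MCTS algorithm; the `Node` class and `backpropagate` function are illustrative assumptions only.

```python
# Minimal sketch of the standard MCTS backpropagation ("update") step.
# NOT the Dual MCTS algorithm from the paper above; it only illustrates
# the per-simulation tree updates that such methods try to cut down.
from __future__ import annotations
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Node:
    parent: Optional["Node"] = None
    children: List["Node"] = field(default_factory=list)
    visits: int = 0          # N(s): how often this node was visited
    value_sum: float = 0.0   # W(s): accumulated simulation outcomes

    def value(self) -> float:
        # Q(s) = W(s) / N(s), the mean outcome observed through this node
        return self.value_sum / self.visits if self.visits else 0.0


def backpropagate(leaf: Node, outcome: float) -> None:
    """Walk from the simulated leaf back to the root, updating every node
    on the path. Each simulation costs O(depth) tree updates."""
    node: Optional[Node] = leaf
    while node is not None:
        node.visits += 1
        node.value_sum += outcome
        outcome = -outcome  # flip sign for the opponent's perspective
        node = node.parent
```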

Solving QSAT problems with neural MCTS

no code implementations · 17 Jan 2021 · Ruiyang Xu, Karl Lieberherr

After training, an off-the-shelf QSAT solver is used to evaluate the performance of the algorithm.

Board Games · Graph Neural Network

First-Order Problem Solving through Neural MCTS based Reinforcement Learning

no code implementations · 11 Jan 2021 · Ruiyang Xu, Prashank Kadam, Karl Lieberherr

We propose a general framework, Persephone, to map the first-order logic (FOL) description of a combinatorial problem to a semantic game so that it can be solved through a neural MCTS-based reinforcement learning algorithm.

Reinforcement Learning (RL)
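
To illustrate the "semantic game" reading that the entry above refers to, here is a minimal, hypothetical sketch of a semantic game for a small quantified Boolean formula: the Opponent assigns universally quantified variables, the Proponent assigns existentially quantified ones, and the Proponent wins iff the matrix evaluates to true. This is not the Persephone framework or its interface; all names and the example formula are assumptions for illustration.

```python
# Hypothetical sketch of a semantic game over the Boolean domain {0, 1}.
# Formula: forall x, exists y, (x != y). NOT the paper's actual code.
from typing import Callable, Dict, List, Tuple

prefix: List[Tuple[str, str]] = [("forall", "x"), ("exists", "y")]
matrix: Callable[[Dict[str, int]], bool] = lambda a: a["x"] != a["y"]


def play(opponent_move, proponent_move) -> bool:
    """Run one play of the semantic game; return True iff the Proponent
    (verifier) wins, i.e. the matrix evaluates to True."""
    assignment: Dict[str, int] = {}
    for quantifier, var in prefix:
        if quantifier == "forall":
            assignment[var] = opponent_move(var, assignment)   # falsifier's choice
        else:
            assignment[var] = proponent_move(var, assignment)  # verifier's choice
    return matrix(assignment)


# The Proponent answers with the negation of x: a winning strategy,
# so the formula is true no matter what the Opponent plays.
print(play(lambda v, a: 0, lambda v, a: 1 - a["x"]))  # True
print(play(lambda v, a: 1, lambda v, a: 1 - a["x"]))  # True
```

A learned agent would replace these hand-written strategies, which is where a neural MCTS-based RL algorithm comes in.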

Learning Self-Game-Play Agents for Combinatorial Optimization Problems

no code implementations · 8 Mar 2019 · Ruiyang Xu, Karl Lieberherr

Recent progress in reinforcement learning (RL) using self-game-play has shown remarkable performance on several board games (e.g., Chess and Go) as well as video games (e.g., Atari games and Dota 2).

Atari Games · Board Games · +2
