1 code implementation • 4 Dec 2022 • Ruggiero Seccia, Corrado Coppola, Giampaolo Liuzzi, Laura Palagi
In this work, we consider minimizing the average of a very large number of smooth and possibly non-convex functions, and we focus on two widely used minibatch frameworks to tackle this optimization problem: Incremental Gradient (IG) and Random Reshuffling (RR).
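The two frameworks differ only in the order in which the component gradients are visited in each epoch, which a minimal sketch can make concrete. This is a generic illustration of IG versus RR on a finite-sum objective, not the algorithm studied in the paper; all function names are ours.

```python
import numpy as np

def sgd_epochs(grads, x0, lr, n_epochs, scheme="RR", rng=None):
    """One pass per epoch over n component gradients.

    grads:  list of per-component gradient functions g_i(x)
    scheme: "IG" visits the components in the same fixed order every epoch;
            "RR" draws a fresh random permutation of the components per epoch.
    """
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    n = len(grads)
    for _ in range(n_epochs):
        # The only difference between the two minibatch frameworks:
        order = np.arange(n) if scheme == "IG" else rng.permutation(n)
        for i in order:
            x = x - lr * grads[i](x)
    return x
```

For example, minimizing the average of the quadratics f_i(x) = (x - a_i)^2 / 2 with either scheme drives x toward the mean of the a_i.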
no code implementations • 30 Jul 2022 • Simone Foa, Corrado Coppola, Giorgio Grani, Laura Palagi
Comparisons between the proposed algorithm and the state-of-the-art solver OR-TOOLS show that the latter still outperforms the reinforcement learning algorithm.
no code implementations • 30 Jul 2022 • Giorgio Grani, Corrado Coppola, Valerio Agasucci
The main idea is to replace the rounding phase of the Feasibility Pump with a suitable adaptation of the Shifting heuristic and of other rounding heuristics.