1 code implementation • 27 Mar 2024 • Philip Kenneweg, Tristan Kenneweg, Barbara Hammer
Recent studies have shown that line search methods significantly improve the performance of traditional stochastic gradient descent techniques, eliminating the need for a hand-tuned learning rate schedule.
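The core idea can be sketched with a classic Armijo backtracking line search: at each step, the learning rate is found by shrinking a trial step until a sufficient-decrease condition holds on the mini-batch loss, so no schedule is needed. This is an illustrative sketch under simplifying assumptions (deterministic toy loss, function names invented here), not the exact method of the papers above.

```python
import numpy as np

def armijo_sgd_step(params, loss_fn, grad, init_lr=1.0, c=0.1, beta=0.5, max_backtracks=20):
    """One SGD step whose learning rate is chosen by Armijo backtracking
    line search instead of a fixed schedule (illustrative sketch)."""
    lr = init_lr
    f0 = loss_fn(params)
    g_norm_sq = float(np.dot(grad, grad))
    for _ in range(max_backtracks):
        candidate = params - lr * grad
        # Armijo (sufficient decrease) condition on the current loss
        if loss_fn(candidate) <= f0 - c * lr * g_norm_sq:
            return candidate, lr
        lr *= beta  # shrink the step and try again
    return params - lr * grad, lr

# Toy convex loss f(w) = ||w - 3||^2 with exact gradient 2(w - 3)
loss = lambda w: float(np.sum((w - 3.0) ** 2))
w = np.zeros(2)
for _ in range(50):
    g = 2.0 * (w - 3.0)
    w, lr = armijo_sgd_step(w, loss, g)
```

In a stochastic setting the condition is checked on the same mini-batch used for the gradient, which is what makes the approach cheap enough for deep learning.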
1 code implementation • 27 Mar 2024 • Philip Kenneweg, Leonardo Galli, Tristan Kenneweg, Barbara Hammer
Recent works have shown that line search methods greatly increase the performance of traditional stochastic gradient descent methods across a variety of datasets and architectures [1], [2].
1 code implementation • 26 Feb 2024 • Tristan Kenneweg, Philip Kenneweg, Barbara Hammer
We use a dataset created this way for the development and evaluation of a boolean agent RAG setup: a system in which an LLM can decide whether or not to query a vector database, thus saving tokens on questions that can be answered from internal knowledge.
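The decision step described above can be sketched as follows: the LLM is first asked a yes/no question about whether retrieval is needed, and the vector database is only queried when the answer is negative. The `llm` callable and `ToyVectorDB` class here are hypothetical stand-ins for a real LLM API and embedding store, not the paper's implementation.

```python
class ToyVectorDB:
    """Stand-in for an embedding-based vector store (keyword overlap only)."""
    def __init__(self, docs):
        self.docs = docs

    def search(self, query, top_k=3):
        q = set(query.lower().split())
        scored = sorted(self.docs, key=lambda d: -len(q & set(d.lower().split())))
        return scored[:top_k]

def boolean_agent_rag(question, llm, vector_db):
    """Boolean agent RAG sketch: the LLM decides whether retrieval is needed
    before any vector-database query is made, saving tokens otherwise."""
    decision_prompt = (
        "Can you answer the following question from internal knowledge alone? "
        "Reply with exactly YES or NO.\n\nQuestion: " + question
    )
    needs_retrieval = llm(decision_prompt).strip().upper() == "NO"
    if needs_retrieval:
        context = "\n".join(vector_db.search(question, top_k=3))
        prompt = f"Context:\n{context}\n\nQuestion: {question}"
    else:
        prompt = question  # skip retrieval entirely
    return llm(prompt)

def toy_llm(prompt):
    # Stub standing in for a real LLM call, so the sketch is runnable.
    if "Reply with exactly YES or NO" in prompt:
        return "NO" if "Atlantis" in prompt else "YES"
    return "answer based on: " + prompt[:40]

db = ToyVectorDB(["The capital of Atlantis is Poseidonia", "An unrelated document"])
with_retrieval = boolean_agent_rag("What is the capital of Atlantis?", toy_llm, db)
without_retrieval = boolean_agent_rag("What is 2+2?", toy_llm, db)
```

In `with_retrieval` the final prompt contains retrieved context; in `without_retrieval` the question is passed through directly, which is where the token savings come from.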