no code implementations • 7 May 2024 • Hassan Shakil, Zeydy Ortiz, Grant C. Forbes
In this research, we use the DistilBERT model to generate extractive summaries and the T5 model to generate abstractive summaries.
no code implementations • 12 Feb 2024 • Grant C. Forbes, Nitish Gupta, Leonardo Villalobos-Arias, Colin M. Potts, Arnav Jhala, David L. Roberts
Recently there has been a proliferation of intrinsic motivation (IM) reward-shaping methods to learn in complex and sparse-reward environments.
no code implementations • 16 Oct 2023 • Grant C. Forbes, Parth Katlana, Zeydy Ortiz
Due to this need, a wide array of metrics estimating consistency with the text being summarized have been proposed.