1 code implementation • 1 Feb 2024 • Yingji Zhang, Danilo S. Carvalho, Marco Valentino, Ian Pratt-Hartmann, André Freitas
Achieving precise semantic control over the latent spaces of Variational AutoEncoders (VAEs) holds significant value for downstream tasks in NLP as the underlying generative mechanisms could be better localised, explained and improved upon.
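The latent spaces discussed above rest on two standard VAE building blocks: the reparameterization trick for sampling latent codes, and the KL term that regularises the posterior toward a standard Gaussian. A minimal numpy sketch of both, with a toy "encoder" output invented for illustration (not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps (the reparameterization trick used by VAEs)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    """KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior, per sample."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

# Hypothetical encoder outputs for two sentences in a 4-dimensional latent space.
mu = np.array([[0.0, 0.0, 0.0, 0.0],
               [1.0, -1.0, 0.5, 0.0]])
log_var = np.zeros_like(mu)

z = reparameterize(mu, log_var)   # sampled latent codes, shape (2, 4)
kl = kl_divergence(mu, log_var)   # 0 for the first sentence (posterior == prior)
```

Semantic control in such models amounts to making individual dimensions (or subspaces) of `z` correspond to identifiable linguistic factors, so that moving along them changes one property of the generated sentence at a time.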
no code implementations • 20 Dec 2023 • Yingji Zhang, Danilo S. Carvalho, Ian Pratt-Hartmann, André Freitas
Deep generative neural networks, such as Variational AutoEncoders (VAEs), offer an opportunity to better understand and control language models from the perspective of sentence-level latent spaces.
1 code implementation • 14 Nov 2023 • Yingji Zhang, Marco Valentino, Danilo S. Carvalho, Ian Pratt-Hartmann, André Freitas
The injection of syntactic information into Variational AutoEncoders (VAEs) has been shown to yield overall improvements in performance and generalisation.
no code implementations • 7 Aug 2023 • Yingji Zhang, Danilo S. Carvalho, Ian Pratt-Hartmann, André Freitas
They employ the T5 model to generate the tree directly, which can explain how the answer is inferred.
1 code implementation • 12 May 2023 • Marco Valentino, Danilo S. Carvalho, André Freitas
Natural language definitions possess a recursive, self-explanatory semantic structure that can support representation learning methods able to preserve explicit conceptual relations and constraints in the latent space.
no code implementations • 2 May 2023 • Yingji Zhang, Danilo S. Carvalho, André Freitas
Disentangled latent spaces usually have better semantic separability and geometrical properties, which leads to better interpretability and more controllable data generation.
no code implementations • 9 Feb 2023 • Mauricio Jacobo-Romero, Danilo S. Carvalho, André Freitas
In this work, we examined Business Process (BP) production as a signal; this novel approach models a BP workflow as a linear time-invariant (LTI) system.
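The defining property of an LTI system is that its output is the convolution of the input signal with the system's impulse response. A minimal numpy sketch of that view of a workflow, with an invented impulse response and workload (illustrative values only, not the paper's estimates):

```python
import numpy as np

# Hypothetical impulse response of a BP stage: a single incoming case takes
# three time steps to clear, with most of the work done in the first step.
h = np.array([0.6, 0.3, 0.1])

# Incoming workload per time step (the input signal).
x = np.array([5.0, 0.0, 2.0, 0.0])

# For an LTI system, the output signal is the convolution of input and
# impulse response: y[n] = sum_k x[k] * h[n - k].
y = np.convolve(x, h)
```

Because convolution is linear and time-invariant, the total output equals the total input times the response's gain (`x.sum() * h.sum()`), which is what makes frequency-domain and transfer-function analysis of the workflow possible.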
no code implementations • 12 Oct 2022 • Yingji Zhang, Danilo S. Carvalho, Ian Pratt-Hartmann, André Freitas
Disentangling the encodings of neural models is a fundamental aspect for improving interpretability, semantic control, and understanding downstream task performance in Natural Language Processing.
no code implementations • 10 Oct 2022 • Danilo S. Carvalho, Edoardo Manino, Julia Rozanova, Lucas Cordeiro, André Freitas
At the same time, the need for interpretability has elicited questions on their intrinsic properties and capabilities.
no code implementations • 3 Oct 2022 • Mauricio Jacobo-Romero, Danilo S. Carvalho, André Freitas
This paper proposes a novel productivity estimation model to evaluate the effects of adopting Artificial Intelligence (AI) components in a production chain.
no code implementations • 22 Sep 2022 • Danilo S. Carvalho, Giangiacomo Mercatali, Yingji Zhang, André Freitas
Disentangling the encodings of neural models is a fundamental aspect for improving interpretability, semantic control and downstream task performance in Natural Language Processing.
no code implementations • 4 Jun 2017 • Danilo S. Carvalho, Duc-Vu Tran, Van-Khanh Tran, Le-Nguyen Minh
In this work, a two-stage method for Legal Information Retrieval is proposed, combining lexical statistics and distributional sentence representations in the context of the Competition on Legal Information Extraction/Entailment (COLIEE).
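A toy sketch of such a two-stage pipeline: simple term overlap stands in for the lexical statistics (e.g. a TF-IDF/BM25-style scorer), and a deterministic hashed bag-of-words vector stands in for the learned distributional sentence representation. Documents, query, and both scorers are invented stand-ins, not the paper's actual components:

```python
import math
from collections import Counter

docs = {
    "a1": "trade agreement between contracting parties shall be notified",
    "a2": "the seller must deliver conforming goods to the buyer",
    "a3": "goods sold must conform to the contract of sale",
}
query = "must goods conform to the sale contract"

def lexical_score(query, doc):
    # Stage 1: term-overlap count, standing in for lexical statistics.
    q, d = Counter(query.split()), Counter(doc.split())
    return sum(min(q[t], d[t]) for t in q)

def embed(text, dim=16):
    # Stage 2 stand-in: deterministic hashed bag-of-words vector in place of
    # a learned sentence representation.
    v = [0.0] * dim
    for t in text.split():
        v[sum(ord(c) for c in t) % dim] += 1.0
    return v

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    return num / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Stage 1: shortlist the top-2 documents by lexical overlap.
shortlist = sorted(docs, key=lambda k: lexical_score(query, docs[k]), reverse=True)[:2]
# Stage 2: re-rank the shortlist by distributional similarity.
best = max(shortlist, key=lambda k: cosine(embed(query), embed(docs[k])))
```

The two-stage design lets the cheap lexical pass prune the candidate pool so that the more expensive semantic comparison only runs over a small shortlist.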
no code implementations • 3 Sep 2016 • Danilo S. Carvalho, Minh-Tien Nguyen, Tran Xuan Chien, Minh Le Nguyen
In the context of the Competition on Legal Information Extraction/Entailment (COLIEE), we propose a method comprising the necessary steps for finding documents relevant to a legal question and deciding on textual entailment evidence to provide a correct answer.