no code implementations • INLG (ACL) 2021 • Ryo Nagata, Masato Hagiwara, Kazuaki Hanawa, Masato Mita, Artem Chernodub, Olena Nahorna
In this paper, we propose a generation challenge called Feedback comment generation for language learners.
no code implementations • 29 Apr 2024 • Aman Saini, Artem Chernodub, Vipul Raheja, Vivek Kulkarni
We introduce Spivavtor, a dataset and instruction-tuned models for text editing focused on the Ukrainian language.
1 code implementation • 23 Apr 2024 • Kostiantyn Omelianchuk, Andrii Liubonko, Oleksandr Skurzhanskyi, Artem Chernodub, Oleksandr Korniienko, Igor Samokhin
In this paper, we carry out experimental research on Grammatical Error Correction (GEC), delving into the nuances of single-model systems and comparing the efficiency of ensembling and ranking methods. We also explore the application of large language models to GEC as single-model systems, as parts of ensembles, and as ranking methods.
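The ensembling idea can be illustrated with a toy per-position majority vote over token-aligned system outputs; the `majority_vote` helper and the token-aligned input format are hypothetical simplifications for illustration, not the paper's actual combination method:

```python
from collections import Counter

def majority_vote(hypotheses):
    """Combine token-aligned GEC hypotheses by per-position majority vote.

    `hypotheses` is a list of equal-length token lists, one per system.
    Positions with no majority fall back to the first system's token.
    Toy sketch only - real GEC ensembling operates on span-level edits.
    """
    assert len({len(h) for h in hypotheses}) == 1, "hypotheses must be token-aligned"
    merged = []
    for position in range(len(hypotheses[0])):
        votes = Counter(h[position] for h in hypotheses)
        token, count = votes.most_common(1)[0]
        merged.append(token if count > 1 else hypotheses[0][position])
    return merged
```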
no code implementations • 8 Jun 2023 • Oleksandr Yermilov, Vipul Raheja, Artem Chernodub
Our work provides crucial insights into the gaps between original and anonymized data (focusing on the pseudonymization technique) and the resulting model quality, and it fosters future research into higher-quality anonymization techniques that better balance the trade-off between data protection and utility preservation.
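As a rough illustration of pseudonymization, here is a minimal dictionary-based sketch; real systems detect entities with NER models, and the `pseudonymize` helper and its placeholder scheme are assumptions made for illustration only:

```python
import re

def pseudonymize(text, names):
    """Replace each known name with a stable placeholder (PERSON_1, ...).

    A toy, dictionary-based sketch of pseudonymization: the same name
    always maps to the same placeholder, preserving coreference while
    hiding the identity. Real pipelines find `names` via NER.
    """
    mapping = {}
    for name in names:
        mapping.setdefault(name, f"PERSON_{len(mapping) + 1}")
    for name, placeholder in mapping.items():
        # \b keeps substitutions on whole-word boundaries only.
        text = re.sub(rf"\b{re.escape(name)}\b", placeholder, text)
    return text, mapping
```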
1 code implementation • ACL 2022 • Maksym Tarnavskyi, Artem Chernodub, Kostiantyn Omelianchuk
Our best ensemble achieves a new SOTA result with an $F_{0.5}$ score of 76.05 on BEA-2019 (test), even without pre-training on synthetic datasets.
Ranked #6 on Grammatical Error Correction on BEA-2019 (test)
3 code implementations • WS 2020 • Kostiantyn Omelianchuk, Vitaliy Atrasevych, Artem Chernodub, Oleksandr Skurzhanskyi
In this paper, we present a simple and efficient GEC sequence tagger using a Transformer encoder.
Ranked #7 on Grammatical Error Correction on BEA-2019 (test)
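The tag-and-apply idea behind such a GEC sequence tagger can be sketched with a tiny edit-tag inventory; KEEP/DELETE/REPLACE_x/APPEND_x is a hypothetical simplification of the real tag set, and `apply_tags` is an illustrative helper, not the released code:

```python
def apply_tags(tokens, tags):
    """Apply one edit tag per source token to produce a corrected sentence.

    Simplified tag inventory:
      KEEP       - copy the token unchanged
      DELETE     - drop the token
      REPLACE_x  - substitute the token with x
      APPEND_x   - keep the token, then insert x after it
    """
    corrected = []
    for token, tag in zip(tokens, tags):
        if tag == "KEEP":
            corrected.append(token)
        elif tag == "DELETE":
            continue
        elif tag.startswith("REPLACE_"):
            corrected.append(tag[len("REPLACE_"):])
        elif tag.startswith("APPEND_"):
            corrected.append(token)
            corrected.append(tag[len("APPEND_"):])
    return corrected
```

In the real system a Transformer encoder predicts one tag per token, and tagging/applying can be iterated until the sentence stops changing.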
1 code implementation • WS 2019 • Hanna Pylieva, Artem Chernodub, Natalia Grabar, Thierry Hamon
We introduce novel RNN-derived embeddings, FrnnMUTE (French RNN Medical Understandability Text Embeddings), which reach up to an 87.0 F1 score in identifying difficult words.
1 code implementation • ACL 2019 • Artem Chernodub, Oleksiy Oliynyk, Philipp Heidenreich, Alex Bondarenko, Matthias Hagen, Chris Biemann, Alexander Panchenko
We present TARGER, an open source neural argument mining framework for tagging arguments in free input texts and for keyword-based retrieval of arguments from an argument-tagged web-scale corpus.
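Tagged arguments of this kind are commonly encoded as BIO label sequences; the sketch below turns such tags back into labeled spans. The `extract_spans` helper and the CLAIM/PREMISE labels are illustrative assumptions, not TARGER's API:

```python
def extract_spans(tokens, bio_tags):
    """Collect labeled spans (e.g. claims and premises) from BIO tags.

    B-X starts a span of type X, I-X continues it, O is outside.
    Returns a list of (label, span_text) pairs.
    """
    spans, current, label = [], [], None
    for token, tag in zip(tokens, bio_tags):
        if tag.startswith("B-"):
            if current:
                spans.append((label, " ".join(current)))
            current, label = [token], tag[2:]
        elif tag.startswith("I-") and current and tag[2:] == label:
            current.append(token)
        else:  # O, or an I- tag that does not continue the open span
            if current:
                spans.append((label, " ".join(current)))
            current, label = [], None
    if current:
        spans.append((label, " ".join(current)))
    return spans
```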
no code implementations • 24 Jun 2016 • Artem Chernodub, Dimitri Nowicki
In this paper we propose a novel universal technique that keeps the gradient norm within a suitable range.
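The general idea of keeping a gradient's norm in range can be sketched by rescaling: clip the L2 norm into an interval $[lo, hi]$ while preserving direction. The bounds and the plain-Python vector representation are assumptions for illustration; the paper's actual technique is more involved:

```python
import math

def rescale_gradient(grad, lo=0.1, hi=1.0):
    """Rescale a gradient so its L2 norm lies in [lo, hi].

    Direction is preserved; only the magnitude is clipped into range.
    Toy sketch - not the paper's universal technique itself.
    """
    norm = math.sqrt(sum(g * g for g in grad))
    if norm == 0.0:
        return list(grad)  # zero gradient: nothing to rescale
    target = min(max(norm, lo), hi)
    return [g * (target / norm) for g in grad]
```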
no code implementations • 12 May 2016 • Artem Chernodub
This paper is dedicated to the long-term, or multi-step-ahead, time series prediction problem.
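One standard approach to multi-step-ahead prediction is iterated (recursive) forecasting, where each one-step prediction is fed back into the model's input window. A sketch, with `one_step_model` as a hypothetical callable standing in for any trained one-step predictor:

```python
def recursive_forecast(history, one_step_model, horizon, order=2):
    """Iterated multi-step prediction.

    `one_step_model` maps the last `order` values to the next value;
    each forecast is appended to the window and fed back in, so errors
    can accumulate over the horizon - the central difficulty of
    long-term prediction.
    """
    window = list(history[-order:])
    predictions = []
    for _ in range(horizon):
        next_value = one_step_model(window)
        predictions.append(next_value)
        window = window[1:] + [next_value]  # slide the input window
    return predictions
```

The alternative "direct" strategy trains a separate model per horizon step instead of feeding predictions back.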
no code implementations • 8 Apr 2016 • Artem Chernodub, Dimitri Nowicki
We propose a novel activation function that implements piece-wise orthogonal non-linear mappings based on permutations.
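One way to realize a piece-wise orthogonal mapping based on permutations is to pair up units and output (max, min) for each pair: sorting a pair is a permutation, so the map preserves the vector norm. A minimal sketch along those lines (not necessarily the paper's exact formulation):

```python
def oplu(x):
    """Permutation-based activation: each pair (a, b) -> (max, min).

    Applying a permutation to the inputs is an orthogonal, hence
    norm-preserving, operation; which permutation fires depends on the
    sign of (a - b), making the map piece-wise orthogonal.
    """
    assert len(x) % 2 == 0, "expects an even number of units"
    out = []
    for a, b in zip(x[::2], x[1::2]):
        out.extend((max(a, b), min(a, b)))
    return out
```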
no code implementations • 17 Nov 2015 • Artem Chernodub, Dmitry Dziuba
Methods for applying neural networks to the control of dynamic plants are considered.