1 code implementation • 16 Apr 2024 • Oren Kraus, Kian Kenyon-Dean, Saber Saberian, Maryam Fallah, Peter McLean, Jess Leung, Vasudev Sharma, Ayla Khan, Jia Balakrishnan, Safiye Celik, Dominique Beaini, Maciej Sypetkowski, Chi Vicky Cheng, Kristen Morse, Maureen Makes, Ben Mabey, Berton Earnshaw
Featurizing microscopy images for use in biological research remains a significant challenge, especially for large-scale experiments spanning millions of images.
1 code implementation • 27 Sep 2023 • Oren Kraus, Kian Kenyon-Dean, Saber Saberian, Maryam Fallah, Peter McLean, Jess Leung, Vasudev Sharma, Ayla Khan, Jia Balakrishnan, Safiye Celik, Maciej Sypetkowski, Chi Vicky Cheng, Kristen Morse, Maureen Makes, Ben Mabey, Berton Earnshaw
Inferring biological relationships from cellular phenotypes in high-content microscopy screens presents both significant opportunities and challenges in biological research.
no code implementations • EMNLP 2020 • Kian Kenyon-Dean, Edward Newell, Jackie Chi Kit Cheung
Word embeddings are reliable feature representations of words used to obtain high quality results for various NLP applications.
1 code implementation • COLING 2020 • Jingyi He, KC Tsiolis, Kian Kenyon-Dean, Jackie Chi Kit Cheung
Word embeddings are trained to predict word co-occurrence statistics, which leads them to possess different lexical properties (syntactic, semantic, etc.).
no code implementations • 29 Nov 2019 • Edward Newell, Kian Kenyon-Dean, Jackie Chi Kit Cheung
Uncontextualized word embeddings are reliable feature representations of words used to obtain high quality results for various NLP applications.
no code implementations • 6 Nov 2019 • Kian Kenyon-Dean
We derive that both of these algorithms attempt to produce embedding inner products that approximate pointwise mutual information (PMI) statistics in the corpus.
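The PMI statistic referenced above is standard: for words $i$ and $j$, $\mathrm{PMI}(i,j) = \log\big(p(i,j)/(p(i)\,p(j))\big)$, estimated from corpus co-occurrence counts. A minimal sketch (toy counts, not the paper's derivation) of computing a PMI matrix that embedding inner products would approximate:

```python
import numpy as np

# Hypothetical toy co-occurrence counts N[i, j] for a 3-word vocabulary.
N = np.array([[10.0, 2.0, 0.5],
              [2.0, 8.0, 1.0],
              [0.5, 1.0, 6.0]])

total = N.sum()
p_ij = N / total                 # joint probability estimates
p_i = N.sum(axis=1) / total      # marginal probability estimates
pmi = np.log(p_ij / np.outer(p_i, p_i))

# The claim in the abstract: for learned word/context embedding
# matrices W and C, the inner products W @ C.T approximate a matrix
# of PMI statistics like `pmi` (possibly up to a constant shift).
```

The sketch assumes a symmetric count matrix, so the resulting PMI matrix is symmetric as well.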
1 code implementation • 18 Dec 2018 • Kian Kenyon-Dean, Andre Cianflone, Lucas Page-Caccia, Guillaume Rabusseau, Jackie Chi Kit Cheung, Doina Precup
The standard loss function used to train neural network classifiers, categorical cross-entropy (CCE), seeks to maximize accuracy on the training data; building useful representations is not a necessary byproduct of this objective.
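As a point of reference for the claim above, a minimal numerically stable sketch of the CCE loss (written from the standard definition, not taken from the paper's code): it only rewards probability mass on the correct class, placing no direct constraint on the geometry of the learned representations.

```python
import numpy as np

def cce(logits, labels):
    """Categorical cross-entropy over a batch of logits."""
    # Numerically stable log-softmax.
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    # Negative log-probability of the true class, averaged over the batch.
    return -log_probs[np.arange(len(labels)), labels].mean()

# Toy batch: two examples, three classes.
logits = np.array([[2.0, 0.5, -1.0],
                   [0.1, 3.0, 0.2]])
labels = np.array([0, 1])
loss = cce(logits, labels)
```

Here both examples already assign most probability to the correct class, so the loss is small but positive; CCE drives it toward zero regardless of how the hidden features are arranged.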
no code implementations • NAACL 2018 • Kian Kenyon-Dean, Eisha Ahmed, Scott Fujimoto, Jeremy Georges-Filteau, Christopher Glasz, Barleen Kaur, Auguste Lalande, Shruti Bhanderi, Robert Belfer, Nirmal Kanagasabai, Roman Sarrazin-Gendron, Rohit Verma, Derek Ruths
In datasets constructed for the purpose of Twitter sentiment analysis (TSA), these controversial examples can compose over 30% of the originally annotated data.
1 code implementation • SEMEVAL 2018 • Kian Kenyon-Dean, Jackie Chi Kit Cheung, Doina Precup
This work provides insight and motivating results for a new general approach to solving coreference and clustering problems with representation learning.