Search Results for author: Mina Bishay

Found 6 papers, 0 papers with code

Automatic Detection of Sentimentality from Facial Expressions

no code implementations11 Sep 2022 Mina Bishay, Jay Turcot, Graham Page, Mohammad Mavadati

To the best of our knowledge, this is the first work to address the problem of sentimentality detection.

Emotion Recognition

AFFDEX 2.0: A Real-Time Facial Expression Analysis Toolkit

no code implementations24 Feb 2022 Mina Bishay, Kenneth Preston, Matthew Strafuss, Graham Page, Jay Turcot, Mohammad Mavadati

In this paper, we introduce AFFDEX 2.0, a toolkit for analyzing facial expressions in the wild. It is intended for users aiming to (a) estimate the 3D head pose, (b) detect facial Action Units (AUs), (c) recognize basic emotions and two new emotional states (sentimentality and confusion), and (d) detect high-level expressive metrics such as blinks and attention.
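
The abstract lists the toolkit's four metric groups but not its programming interface. As a rough illustration only, here is a minimal, self-contained Python sketch of what a per-frame result covering those four groups could look like; every name (`FrameMetrics`, the field names, the example values) is invented for illustration and is not the actual AFFDEX 2.0 API.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class FrameMetrics:
    """Hypothetical per-frame output covering the four metric groups
    named in the abstract; all field names are invented for illustration."""
    head_pose: Tuple[float, float, float]                               # (pitch, yaw, roll) in degrees
    action_units: Dict[str, float] = field(default_factory=dict)        # e.g. {"AU12": 0.87}
    emotions: Dict[str, float] = field(default_factory=dict)            # basic emotions + sentimentality, confusion
    expressive_metrics: Dict[str, float] = field(default_factory=dict)  # e.g. blink, attention

# Example: what one analyzed frame might carry (toy values).
frame = FrameMetrics(
    head_pose=(2.1, -5.4, 0.3),
    action_units={"AU01": 0.10, "AU12": 0.87},
    emotions={"joy": 0.92, "sentimentality": 0.05, "confusion": 0.02},
    expressive_metrics={"blink": 0.0, "attention": 0.96},
)
print(frame.emotions["joy"])
```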

Choose Settings Carefully: Comparing Action Unit Detection at Different Settings Using a Large-Scale Dataset

no code implementations16 Nov 2021 Mina Bishay, Ahmed Ghoneim, Mohamed Ashraf, Mohammad Mavadati

In this paper, we investigate the impact of some commonly used settings for (a) preprocessing face images and (b) classification and training on Action Unit (AU) detection performance and complexity.

Action Unit Detection, Classification

Which CNNs and Training Settings to Choose for Action Unit Detection? A Study Based on a Large-Scale Dataset

no code implementations16 Nov 2021 Mina Bishay, Ahmed Ghoneim, Mohamed Ashraf, Mohammad Mavadati

In this paper, we explore the influence of some frequently used Convolutional Neural Networks (CNNs), training settings, and training-set structures on Action Unit (AU) detection.

Action Unit Detection

TARN: Temporal Attentive Relation Network for Few-Shot and Zero-Shot Action Recognition

no code implementations21 Jul 2019 Mina Bishay, Georgios Zoumpourlis, Ioannis Patras

At the heart of our network is a meta-learning approach that learns to compare representations of variable temporal length: either two videos of different lengths (in the case of few-shot action recognition) or a video and a semantic representation such as a word vector (in the case of zero-shot action recognition).

Few-Shot Action Recognition +5
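
Based only on the abstract's description, here is a minimal PyTorch sketch of an attentive relation module that compares a variable-length sequence of segment embeddings against either another video's segments (few-shot) or a single embedded class vector (zero-shot). The module name, the scaled dot-product attention, and the relation MLP are assumptions made for illustration, not the paper's exact TARN architecture.

```python
import torch
import torch.nn as nn

class AttentiveRelation(nn.Module):
    """Minimal sketch (not the authors' code): align a variable-length
    query sequence against a reference representation with attention,
    then score the aligned pair with a small relation MLP."""
    def __init__(self, dim: int):
        super().__init__()
        self.relation = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, query: torch.Tensor, reference: torch.Tensor) -> torch.Tensor:
        # query: (Tq, dim) segment embeddings of one video
        # reference: (Tr, dim) segments of another video (few-shot),
        #            or (1, dim) for an embedded class word vector (zero-shot)
        attn = torch.softmax(query @ reference.T / query.shape[-1] ** 0.5, dim=0)  # (Tq, Tr)
        aligned = attn.T @ query                       # (Tr, dim): query summarized per reference step
        pair = torch.cat([aligned, reference], dim=-1) # (Tr, 2*dim)
        return self.relation(pair).mean()              # scalar relation score

# Toy usage: videos of different lengths, shared embedding dim 64.
torch.manual_seed(0)
rel = AttentiveRelation(64)
score_few_shot = rel(torch.randn(12, 64), torch.randn(7, 64))   # two videos
score_zero_shot = rel(torch.randn(12, 64), torch.randn(1, 64))  # video vs. word vector
print(score_few_shot.item(), score_zero_shot.item())
```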

SchiNet: Automatic Estimation of Symptoms of Schizophrenia from Facial Behaviour Analysis

no code implementations7 Aug 2018 Mina Bishay, Petar Palasek, Stefan Priebe, Ioannis Patras

Patients with schizophrenia often display impairments in the expression of emotion and speech, and these are observed in their facial behaviour.
