1 code implementation • 23 Aug 2023 • James K Ruffle, Robert J Gray, Samia Mohinta, Guilherme Pombo, Chaitanya Kaul, Harpreet Hyare, Geraint Rees, Parashkev Nachev
It remains unknown whether the difficulty arises from the absence of individuating biological patterns within the brain, or from limited power to access them with the models and compute at our disposal.
no code implementations • 14 Aug 2023 • Amy PK Nelson, Joe Mole, Guilherme Pombo, Robert J Gray, James K Ruffle, Edgar Chan, Geraint E Rees, Lisa Cipolotti, Parashkev Nachev
The quantification of cognitive powers rests on identifying a behavioural task that depends on them.
1 code implementation • 27 May 2023 • Guilherme Pombo, Robert Gray, Amy P. K. Nelson, Chris Foulon, John Ashburner, Parashkev Nachev
Here we initiate the application of deep generative neural network architectures to the task of lesion-deficit inference, formulating it as the estimation of an expressive hierarchical model of the joint lesion and deficit distributions conditioned on a latent neural substrate.
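The forward problem that such a model inverts can be illustrated with a toy simulation. This is a hypothetical numpy sketch, not the authors' architecture: a fixed latent substrate (a set of voxels unknown to the analyst) generates deficits whenever a lesion overlaps it, and a naive mass-univariate association map is computed for comparison with what a generative inference procedure would need to recover.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_patients = 64, 500

# Hypothetical latent neural substrate: voxels whose damage causes the
# deficit. This is what lesion-deficit inference tries to recover.
substrate = np.zeros(n_voxels, dtype=bool)
substrate[20:24] = True

# Generative model of the joint lesion-deficit distribution: sample a
# contiguous lesion per patient; the deficit depends on lesion-substrate
# overlap, with a 10% label-noise rate.
lesions = np.zeros((n_patients, n_voxels), dtype=bool)
deficits = np.zeros(n_patients, dtype=bool)
for i in range(n_patients):
    start = rng.integers(0, n_voxels - 8)
    lesions[i, start:start + 8] = True
    overlap = bool((lesions[i] & substrate).any())
    deficits[i] = overlap if rng.random() > 0.1 else not overlap

# Naive mass-univariate estimate: voxels whose lesion frequency differs
# between deficit and no-deficit groups.
assoc = lesions[deficits].mean(0) - lesions[~deficits].mean(0)
estimate = assoc > 0.5 * assoc.max()
```

Because lesions are spatially structured, the univariate map smears the substrate across correlated voxels; modelling the joint distribution conditioned on a latent substrate is precisely what is meant to correct this.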
no code implementations • 25 Jan 2023 • Dominic Giles, Robert Gray, Chris Foulon, Guilherme Pombo, Tianbo Xu, H. Rolf Jäger, Jorge Cardoso, Sebastien Ourselin, Geraint Rees, Ashwani Jha, Parashkev Nachev
The gold standard in the treatment of ischaemic stroke is set by evidence from randomized controlled trials.
no code implementations • 15 Jan 2023 • James K Ruffle, Samia Mohinta, Guilherme Pombo, Robert Gray, Valeriya Kopanitsa, Faith Lee, Sebastian Brandner, Harpreet Hyare, Parashkev Nachev
Tumour heterogeneity is increasingly recognized as a major obstacle to therapeutic success across neuro-oncology.
no code implementations • 29 Nov 2021 • Guilherme Pombo, Robert Gray, Jorge Cardoso, Sebastien Ourselin, Geraint Rees, John Ashburner, Parashkev Nachev
The model is intended to synthesise counterfactual training-data augmentations for downstream discriminative modelling tasks whose fidelity is limited by data imbalance, distributional instability, confounding, or underspecification, and whose performance is inequitable across distinct subpopulations.
1 code implementation • 26 Jul 2019 • Guilherme Pombo, Robert Gray, Tom Varsavsky, John Ashburner, Parashkev Nachev
Second, we show that reformulating this model to approximate a deep Gaussian process yields a measure of uncertainty that improves semi-supervised learning, in particular classification performance in settings where the proportion of labelled data is low.
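The deep Gaussian process approximation referred to here is commonly realised via Monte Carlo dropout (Gal & Ghahramani): keeping dropout active at test time and averaging stochastic forward passes yields a predictive mean and an uncertainty estimate. A minimal numpy sketch, with toy fixed weights standing in for a trained network (all names and values here are illustrative, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy fixed weights for a one-hidden-layer network; in practice these
# would be learned with dropout during training.
W1 = rng.normal(size=(1, 64))
W2 = rng.normal(size=(64, 1)) / 8.0

def forward(x, drop_p=0.5):
    """One stochastic pass: dropout is kept ON at test time."""
    h = np.maximum(x @ W1, 0.0)          # ReLU hidden layer
    mask = rng.random(h.shape) > drop_p  # Bernoulli dropout mask
    h = h * mask / (1.0 - drop_p)        # inverted-dropout scaling
    return h @ W2

# Monte Carlo estimate over T stochastic passes: the sample mean is the
# prediction, the sample std a proxy for predictive uncertainty.
x = np.array([[0.5]])
samples = np.stack([forward(x) for _ in range(200)])
mean, std = samples.mean(0), samples.std(0)
```

In a semi-supervised setting, such an uncertainty score can gate which unlabelled examples receive pseudo-labels, which is one route by which it improves classification when labelled data are scarce.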