no code implementations • 26 Feb 2024 • Segev Wasserkrug, Leonard Boussioux, Dick den Hertog, Farzaneh Mirzazadeh, Ilker Birbil, Jannis Kurtz, Donato Maragno
Substantially simplifying the creation of optimization models for real-world business problems has long been a major goal in broadening the application of mathematical optimization to important business and societal decisions.
1 code implementation • 26 May 2023 • Tianchun Wang, Farzaneh Mirzazadeh, Xiang Zhang, Jie Chen
Graph convolutional networks (GCNs) are \emph{discriminative models} that directly model the class posterior $p(y|\mathbf{x})$ for semi-supervised classification of graph data.
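As a minimal sketch of what "directly modeling the class posterior" means here, the following two-layer GCN (in the style of Kipf and Welling) outputs a probability distribution over classes for each node. The graph, feature sizes, and weights are illustrative assumptions, not the paper's code.

```python
import numpy as np

def normalize_adjacency(A):
    """Symmetrically normalize A with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def gcn_forward(A, X, W1, W2):
    """Two-layer GCN: row i of the output is p(y | x_i, graph)."""
    S = normalize_adjacency(A)
    H = np.maximum(S @ X @ W1, 0.0)   # ReLU hidden layer
    return softmax(S @ H @ W2)        # per-node class posterior

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # 3-node path graph
X = rng.normal(size=(3, 4))           # node features (assumed dimensions)
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 2))
P = gcn_forward(A, X, W1, W2)         # each row sums to 1: a valid posterior
```

The discriminative view is visible in the last line: the network maps features and graph structure straight to p(y|x), with no generative model of the inputs.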
1 code implementation • NeurIPS 2019 • Pierre Monteiller, Sebastian Claici, Edward Chien, Farzaneh Mirzazadeh, Justin Solomon, Mikhail Yurochkin
Label switching is a phenomenon arising in mixture model posterior inference that prevents one from meaningfully assessing posterior statistics using standard Monte Carlo procedures.
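A toy example (an assumed illustration, not the paper's method) shows why label switching breaks naive posterior summaries: two posterior samples of a two-component mixture that differ only by a permutation of component labels average to a value that describes neither component.

```python
import numpy as np

# Two MCMC samples of the component means of a 2-component mixture.
# They represent the SAME mixture; only the component labels are swapped.
sample1 = np.array([-5.0, 5.0])   # means of (component 0, component 1)
sample2 = np.array([5.0, -5.0])   # labels permuted

# Naive Monte Carlo average of per-component means:
naive_posterior_mean = (sample1 + sample2) / 2
# Both entries collapse to 0, far from either true component mean,
# so standard Monte Carlo summaries of per-component quantities
# are meaningless without first resolving the label permutation.
```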
1 code implementation • NeurIPS 2019 • Mikhail Yurochkin, Sebastian Claici, Edward Chien, Farzaneh Mirzazadeh, Justin Solomon
The ability to measure similarity between documents enables intelligent summarization and analysis of large corpora.
no code implementations • 1 Jun 2019 • Akash Srivastava, Kristjan Greenewald, Farzaneh Mirzazadeh
Well-definedness of f-divergences, however, requires the data and model distributions to overlap completely at every step of training.
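The overlap requirement is easy to see with the KL divergence, a canonical f-divergence: it is infinite as soon as the model assigns zero mass anywhere the data has mass. A small illustrative check (assumed example, not from the paper):

```python
import math

def kl(p, q):
    """KL divergence between two discrete distributions given as lists."""
    total = 0.0
    for pi, qi in zip(p, q):
        if pi > 0:
            if qi == 0:
                return math.inf   # support mismatch: divergence blows up
            total += pi * math.log(pi / qi)
    return total

data  = [0.5, 0.5, 0.0]
model = [0.0, 0.5, 0.5]   # misses the first atom of the data distribution
kl(data, model)           # infinite: no useful training signal from it
```

Whenever the supports are disjoint somewhere, the divergence is infinite regardless of how close the distributions are elsewhere, which is exactly the failure mode that motivates alternatives to f-divergence training objectives.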
2 code implementations • 8 May 2019 • Charlie Frogner, Farzaneh Mirzazadeh, Justin Solomon
Euclidean embeddings of data are fundamentally limited in their ability to capture latent semantic structures, which need not conform to Euclidean spatial assumptions.
no code implementations • ICLR 2019 • Charlie Frogner, Farzaneh Mirzazadeh, Justin Solomon
Despite their prevalence, Euclidean embeddings of data are fundamentally limited in their ability to capture latent semantic structures, which need not conform to Euclidean spatial assumptions.
no code implementations • NeurIPS 2015 • Farzaneh Mirzazadeh, Siamak Ravanbakhsh, Nan Ding, Dale Schuurmans
A key bottleneck in structured output prediction is the need for inference during training and testing, usually requiring some form of dynamic programming.
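The dynamic-programming inference referred to here is typified by the Viterbi algorithm for sequence labeling: finding the argmax label sequence under per-position and transition scores. A short sketch with made-up scores (the score values and shapes are assumptions for illustration):

```python
import numpy as np

def viterbi(unary, pairwise):
    """Return the label sequence maximizing the sum of unary and pairwise scores.

    unary:    (T, K) score of each of K labels at each of T positions.
    pairwise: (K, K) score of transitioning from label i to label j.
    """
    T, K = unary.shape
    score = unary[0].copy()              # best score ending in each label
    back = np.zeros((T, K), dtype=int)   # backpointers for path recovery
    for t in range(1, T):
        cand = score[:, None] + pairwise     # (prev label, current label)
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + unary[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

unary = np.array([[2.0, 0.0], [0.0, 1.0], [3.0, 0.0]])
pairwise = np.array([[1.0, -1.0], [-1.0, 1.0]])  # rewards keeping the same label
viterbi(unary, pairwise)  # -> [0, 0, 0]
```

Each training update and each test-time prediction must run such a dynamic program over the output structure, which is the bottleneck the sentence above describes.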