Search Results for author: Shlomi Fruchter

Found 5 papers, 2 papers with code

ObjectDrop: Bootstrapping Counterfactuals for Photorealistic Object Removal and Insertion

no code implementations · 27 Mar 2024 · Daniel Winter, Matan Cohen, Shlomi Fruchter, Yael Pritch, Alex Rav-Acha, Yedid Hoshen

To tackle this challenge, we propose bootstrap supervision; leveraging our object removal model trained on a small counterfactual dataset, we synthetically expand this dataset considerably.

Counterfactual · Object
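The abstract above describes bootstrap supervision: an object-removal model trained on a small counterfactual dataset is used to synthesize many more training pairs. The sketch below is a minimal toy illustration of that loop, not the paper's method; the pixel-list "images", the averaged-offset "model", and all function names are illustrative assumptions.

```python
from typing import Callable, List, Tuple

Image = List[int]  # toy stand-in for an image: a flat list of pixel values


def train_removal_model(pairs: List[Tuple[Image, Image]]) -> Callable[[Image], Image]:
    """Toy 'object removal' model: learns the average per-pixel offset
    between with-object and without-object images in the small dataset.
    (Illustrative only; the paper trains a diffusion model.)"""
    n = len(pairs)
    size = len(pairs[0][0])
    delta = [sum(w[i] - wo[i] for w, wo in pairs) / n for i in range(size)]

    def remove(img: Image) -> Image:
        return [round(p - d) for p, d in zip(img, delta)]

    return remove


def bootstrap_expand(remove: Callable[[Image], Image],
                     unlabeled_with_object: List[Image]) -> List[Tuple[Image, Image]]:
    """Bootstrap supervision: run the removal model on unlabeled images to
    synthesize new counterfactual (with-object, without-object) pairs."""
    return [(img, remove(img)) for img in unlabeled_with_object]


# Small 'real' counterfactual dataset: the object adds +5 to every pixel.
small = [([15, 25, 35], [10, 20, 30]), ([6, 7, 8], [1, 2, 3])]
remove = train_removal_model(small)
expanded = bootstrap_expand(remove, [[105, 205, 305], [55, 65, 75]])
```

The expanded pairs would then serve as extra supervision for training a larger model, which is the gist of the bootstrapping step the abstract sketches.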

Style Aligned Image Generation via Shared Attention

1 code implementation · 4 Dec 2023 · Amir Hertz, Andrey Voynov, Shlomi Fruchter, Daniel Cohen-Or

Large-scale Text-to-Image (T2I) models have rapidly gained prominence across creative fields, generating visually compelling outputs from textual prompts.

Image Generation

AnyLens: A Generative Diffusion Model with Any Rendering Lens

no code implementations · 29 Nov 2023 · Andrey Voynov, Amir Hertz, Moab Arar, Shlomi Fruchter, Daniel Cohen-Or

State-of-the-art diffusion models can generate highly realistic images based on various conditioning like text, segmentation, and depth.

Text Segmentation

The Chosen One: Consistent Characters in Text-to-Image Diffusion Models

1 code implementation · 16 Nov 2023 · Omri Avrahami, Amir Hertz, Yael Vinker, Moab Arar, Shlomi Fruchter, Ohad Fried, Daniel Cohen-Or, Dani Lischinski

Our quantitative analysis demonstrates that our method strikes a better balance between prompt alignment and identity consistency compared to the baseline methods, and these findings are reinforced by a user study.

Consistent Character Generation · Story Visualization
