1/10 🧵 Ever struggled with crafting prompts for generative AI models like Stable Diffusion? Here are some tips on how to consistently generate high-quality images. Let's dive in! #stablediffusion #ai #controlnet #buildinpublic
2/10 The most important variable in generating quality images is the prompt. Using the right words can significantly improve your results, while a single bad word can lead to an ugly image. #stablediffusion #ai #controlnet #buildinpublic
3/10 Unlike humans or language models like ChatGPT, Stable Diffusion doesn't understand language in the same way. To master prompting, we need to learn the language that Stable Diffusion understands. #stablediffusion #ai #controlnet #buildinpublic
4/10 Stable Diffusion learns to associate certain words with certain types of images. So, the key to a good prompt is to include words that the model associates with the images it was trained on. #stablediffusion #ai #controlnet #buildinpublic
5/10 I generally write prompts as a comma-separated list of words. Sometimes incomplete sentences make sense too, when specific elements in the image need a certain relationship to each other. #stablediffusion #ai #controlnet #buildinpublic
6/10 To make the output look like a photo, add style words to the prompt. These are words that push the output towards a certain style, for example "photo", "RAW", "DSLR". #stablediffusion #ai #controlnet #buildinpublic
7/10 Improve the quality of the output by adding quality words. These are words that Stable Diffusion associates with high-quality images and can improve the output quality even further. Example: "4k, 8k, UHD, professional"... #stablediffusion #ai #controlnet #buildinpublic
8/10 You can add a negative prompt to push the model away from certain elements or styles. This can be useful in getting more consistency and reducing the likelihood of low-quality images. Example: "bad, ugly, jpeg artifacts"... #stablediffusion #ai #controlnet #buildinpublic
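The prompt-building approach from this thread can be sketched in a few lines of Python. The helper function and word lists below are illustrative examples, not part of any Stable Diffusion API:

```python
# Assemble a comma-separated prompt from subject, style, and quality words,
# plus a negative prompt, as described in the thread.

def build_prompt(subject_words, style_words, quality_words):
    """Join word groups into one comma-separated prompt string."""
    return ", ".join(subject_words + style_words + quality_words)

subject = ["portrait of a woman", "city street at night"]
style = ["photo", "RAW", "DSLR"]                # push output toward photorealism
quality = ["4k", "8k", "UHD", "professional"]   # words associated with high quality
negative = ", ".join(["bad", "ugly", "jpeg artifacts"])  # things to steer away from

prompt = build_prompt(subject, style, quality)
print(prompt)
# portrait of a woman, city street at night, photo, RAW, DSLR, 4k, 8k, UHD, professional
print(negative)
# bad, ugly, jpeg artifacts
```

With a library like Hugging Face diffusers, these two strings would typically be passed as the `prompt` and `negative_prompt` arguments of a pipeline call.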
9/10 If the image quality isn't as expected, try different words that might yield a nicer output. Spend time testing out different prompts to improve the quality. #stablediffusion #ai #controlnet #buildinpublic
10/10 The key takeaway is that Stable Diffusion doesn't understand language the same way we humans do. Instead of telling it what we want, we have to build up a prompt with words that the model associates with what we want. #stablediffusion #ai #controlnet #buildinpublic
Exciting updates to #stablediffusion with Core ML!
- 6-bit weight compression that yields just under 1 GB
- Up to 30% improved Neural Engine performance
- New benchmarks on iPhone, iPad and Macs
- Multilingual system text encoder support
- ControlNet
github.com/apple/ml-stabl… 🧵
I see several apps offering to create an avatar at rather high prices. I'm developing a solution that lets you train your own model at a more attractive cost. Demo with @daedalium.
I built an iOS Shortcut that lets you train an image-generation model directly on your iPhone. Small demo with an image generated on @daedalium
This weekend, I've been building a small tool to help me design my YouTube thumbnails from crappy, low-quality screenshots of a YouTube video with the wrong aspect ratio.
Built some tooling for @photogenicai: generating the same photo in 3 different models for rapid evaluation. It took a while to build the infrastructure, but it's a delight to see 30 manual steps automated. 10 min => 30 sec #buildinpublic #stablediffusion #aiphotography #aiart #AI
hello #buildinpublic founders, if you are having a hard time hosting Stable Diffusion or any open-source model in your infra (AWS, GCP, Azure account), hit me up. I can save you time and money. #stablediffusion #ChatGPT #GenerativeAI
Just figured out how to create more intricate #StableDiffusion prompts, and my experiments produced some pretty amazing visuals - check out these awesome pics! Crafting prompts is like using magic spells 🧙♂️ #buildinpublic #AI @_buildspace
Does anyone know which AI model TikTok's AI portraits are using? The results are as good as DreamBooth, but they train with just one photo and within seconds.