- Videos 38
- Views 866,047
koiboi
Australia
Joined 4 Sep 2022
What Actually is A.I.?
======= Things to Look Into =======
The course I'm teaching through Charles Sturt University: itmasters.edu.au/about-it-masters/free-short-courses/free-short-course-practical-ai-for-non-coders/
Peepa's RSS Feed (he's the most up-to-date person I know where ML is concerned): pipinstallyp.in/
======= Sources =======
Good long modern article on the definition of A.I. link.springer.com/chapter/10.1007/978-3-031-21448-6_2#Fn1
The best paper I could find on buzzwords, the Introduction section is very good in particular: www.ncbi.nlm.nih.gov/pmc/articles/PMC8203706/
MMC report on A.I. startups: www.stateofai2019.com/
Our World In Data A.I. Article with really good graphs: ourworldindata.org/brief-hi...
Views: 2,851
Videos
🧠 Mind-Reading Stable Diffusion Paper
7K views · 1 year ago
A walkthrough of a recent research paper which had participants view images, and then reconstructed those images using Stable Diffusion and fMRI readings of the participants' brains. This was made possible by excellent work from the Natural Scenes Dataset team. Links: High-resolution image reconstruction with latent diffusion models from human brain activity: www.biorxiv.org/content/10.1101/2022.1...
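The core of the paper's pipeline is surprisingly linear: ridge regression maps fMRI voxel activity into the diffusion model's latent/conditioning space, and Stable Diffusion then decodes the predicted latents into an image. A minimal NumPy sketch of that regression step (array names are illustrative, not from the paper's code):

```python
import numpy as np

def fit_ridge(X, Y, lam=1.0):
    """Closed-form ridge regression: W = (X^T X + lam*I)^-1 X^T Y.

    X: (n_samples, n_voxels) fMRI features,
    Y: (n_samples, latent_dim) target diffusion latents.
    The returned W maps brain activity to latents, which the
    diffusion model can then decode into an image."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)
```

In the real setting the regularization strength matters a great deal, since voxels vastly outnumber training samples.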
🤔 Ok, but what IS ControlNet?
39K views · 1 year ago
A high-level overview of the excellent ControlNet research paper, which has recently been used to grant Stable Diffusion users highly fine-grained control over the image generation process. Discord: discord.gg/CNTQPUqK Links: ControlNet paper: arxiv.org/abs/2302.05543 Huggingface ControlNet models: huggingface.co/lllyasviel/ControlNet/tree/main/models Huggingface Depth-2-img models: huggingface.co...
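A key trick from the paper is the "zero convolution": the trainable copy of the network is attached to the frozen base model through 1×1 convolutions initialized to zero, so at the start of training the control branch contributes exactly nothing and the base model's behavior is preserved. A toy NumPy sketch of that idea, modeling the 1×1 conv as a plain matrix:

```python
import numpy as np

def zero_conv(channels):
    """A 1x1 convolution as a (channels x channels) matrix, zero-initialized."""
    return np.zeros((channels, channels))

def controlnet_add(base_features, control_features, zc):
    """Base model features plus the (initially zero) control contribution."""
    return base_features + zc @ control_features
```

Because the zero conv starts at zero, training can only gradually move the output away from the base model, which is part of why ControlNet fine-tuning is so stable.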
Offset Noise: Midjourney Dethroned
31K views · 1 year ago
We explain the new Offset Noise discovery which allows latent diffusion model trainers to get vastly improved results by changing a single line of code. We also compare images generated by offset noise models to pre-offset noise images and Midjourney images. This is probably the first time since the release of the v4 model in December 2022 that the stable diffusion community has achieved parity...
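The "single line of code" refers to adding a small amount of per-channel constant noise to the training noise, which lets the model learn global brightness shifts instead of always averaging toward mid-gray. A hedged NumPy sketch of the modified noise sampler (the 0.1 coefficient follows the original offset-noise write-up):

```python
import numpy as np

def sample_training_noise(batch, channels, height, width, offset=0.1, seed=None):
    """Diffusion training noise with 'offset noise' added."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((batch, channels, height, width))
    # The one-line change: one random value per channel, broadcast across
    # the whole image, so the model can learn to shift overall brightness.
    noise += offset * rng.standard_normal((batch, channels, 1, 1))
    return noise
```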
Optimal Deforum Animation Settings for Quality/Coherence
25K views · 1 year ago
I performed a search over strength vs camera movement in Deforum animations and I found that 0.6 strength with camera movement within 3% of stationary is a good starting point. Discord: discord.gg/s8rVscu2pM Links Spreadsheet: docs.google.com/spreadsheets/d/1KJ2IIOa61T8IE1YMX0oDvEEcZSnz48pIPyOW4vuShjc/edit?usp=sharing Videos: drive.google.com/drive/folders/1q3vPOSqfoZvcLGfZCsq9MlqjnfuKQItI?usp=...
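Translated into a Deforum settings fragment, that starting point might look like the following. This is an illustrative sketch, not a tested preset: the key names follow the Deforum settings schema, but verify them against your Deforum version.

```python
# Hedged starting point based on the strength/movement sweep described above.
deforum_settings = {
    "strength_schedule": "0: (0.6)",  # the sweep's sweet spot
    "translation_x": "0: (0)",
    "translation_z": "0: (1)",        # small movement, within ~3% of stationary
    "rotation_3d_y": "0: (0)",
    "fps": 15,
}
```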
Easy Audio-Reactive Music Videos with Deforum/Automatic1111
30K views · 1 year ago
I made a small tool for easily creating audio-reactive music animations with stable diffusion using Deforum and the automatic1111 webui. Big thanks to dreamingtulpa for his advice which led to the "skip" mode, and cac0e for sharing his models and embeddings. 00:00 - Summary 00:50 - How does audio reactive animation work? 06:32 - Downloading Song 07:01 - Install Deforum 1...
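At its core, audio-reactive animation means computing a per-frame loudness value from the song and feeding it into a Deforum schedule string. A minimal NumPy sketch of that mapping (function and parameter names are my own, not the tool's):

```python
import numpy as np

def audio_to_zoom_schedule(samples, sr, fps=15, base=1.0, gain=0.05):
    """Map per-frame RMS loudness of an audio signal to a Deforum-style
    zoom schedule string like "0: (1.050), 1: (1.023), ..."."""
    hop = sr // fps  # audio samples per video frame
    frames = [samples[i:i + hop] for i in range(0, len(samples) - hop + 1, hop)]
    rms = np.array([np.sqrt(np.mean(f ** 2)) for f in frames])
    rms = rms / max(rms.max(), 1e-9)  # normalize loudness to 0..1
    return ", ".join(f"{i}: ({base + gain * r:.3f})" for i, r in enumerate(rms))
```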
😕LoRA vs Dreambooth vs Textual Inversion vs Hypernetworks
156K views · 1 year ago
There are 5 methods for teaching specific concepts, objects, or styles to your Stable Diffusion model: Textual Inversion, Dreambooth, Hypernetworks, LoRA and Aesthetic Gradients. The question is: which one should you use? In this video we review 3 key research papers, look at the underlying mathematical mechanics behind each method, and analyze data from civitai to arrive at an informed and final conclusi...
Easy AI Art animation tutorial | Automatic1111, Stable Diffusion
19K views · 1 year ago
We're going to do some stable diffusion animation using a driving video and the Automatic1111 webui. This is a super easy tutorial walkthrough that anyone can follow to do animation easily. Discord: discord.gg/s8rVscu2pM Links ezgif: ezgif.com/video-to-jpg ffmpeg install: ffmpeg.org/download.html Birme Image Cropping: www.birme.net da vinci resolve: www.blackmagicdesign.com/products/davincireso...
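The ezgif step (splitting the driving video into frames) can also be done locally with ffmpeg. Here the command is built as an argument list you could pass to `subprocess.run`, assuming ffmpeg is installed on your PATH:

```python
def extract_frames_cmd(video_path, out_dir, fps=15):
    """Build an ffmpeg command that splits a video into JPEG frames at the
    given frame rate, e.g. frames/00001.jpg, frames/00002.jpg, ..."""
    return ["ffmpeg", "-i", video_path, "-vf", f"fps={fps}", f"{out_dir}/%05d.jpg"]
```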
Textual Inversion with Automatic1111 (I Read The Paper)
16K views · 1 year ago
Textual inversion is very similar to Dreambooth: in both cases you use 3-5 sample images to teach stable diffusion about a concept or style, which the model then learns to generate. Textual inversion has two key advantages: (1) it is non-destructive and does not affect the original model, and (2) it produces a highly portable embedding rather than a new model. We talk about how Textual Inversion w...
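What makes textual inversion so portable is that the model itself stays frozen: the only trainable tensor is the embedding vector for one new pseudo-token (say `<my-style>`). A toy sketch of that training loop, with a simple quadratic loss standing in for the real diffusion loss:

```python
import numpy as np

def train_embedding(target, dim=8, lr=0.1, steps=200, seed=0):
    """Optimize ONLY a new token's embedding vector; everything else is frozen.
    `target` stands in for the point the (real) diffusion loss pulls toward."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)   # the new token embedding, randomly initialized
    for _ in range(steps):
        grad = 2 * (v - target)    # gradient of a stand-in quadratic loss
        v -= lr * grad             # only this one vector is ever updated
    return v
```

The real method backpropagates the diffusion denoising loss through the frozen U-Net and text encoder into this single vector, which is why the result is a tiny embedding file rather than a new model.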
A.I. Art is More Ethical than Human Art | Data Scientist/Philosophy Grad
3.9K views · 1 year ago
If the AI is cheaper and has the same quality then your decision is actually about how to spend money. On the one hand you allocate the money to a human artist to spend as they wish. On the other hand you pay a small amount for the AI, and then you are free to spend the rest however you like. If you really want to be ethical, you should always use the AI, and then spend the rest in an optimally...
7GB RAM Dreambooth with LoRA + Automatic1111
36K views · 1 year ago
The day has finally arrived: we can now do local stable diffusion dreambooth training with the automatic1111 webui using a new technique called LoRA (Low-rank Adaptation for Fast Text-to-Image Diffusion Fine-tuning). We walk through how to use this new method to create some improved quality images. Discord: discord.gg/s8rVscu2pM Parameters: training steps per img: 150 batch size: 1 lora unet lea...
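LoRA's trick is to freeze the base weight matrix W and learn only a low-rank update B·A (rank r), which is why it fits in 7GB where full Dreambooth does not. A NumPy sketch of a LoRA-wrapped linear layer; note that B starts at zero, so the wrapped layer initially behaves exactly like the base model:

```python
import numpy as np

class LoRALinear:
    """y = W x + (alpha / r) * B (A x), with A: (r, d_in), B: (d_out, r)."""
    def __init__(self, w, r=4, alpha=4, seed=0):
        rng = np.random.default_rng(seed)
        d_out, d_in = w.shape
        self.w = w                                      # frozen base weights
        self.a = 0.01 * rng.standard_normal((r, d_in))  # small random init
        self.b = np.zeros((d_out, r))                   # zero init: no change at start
        self.scale = alpha / r

    def __call__(self, x):
        return self.w @ x + self.scale * (self.b @ (self.a @ x))
```

Only A and B are trained (r·(d_in + d_out) parameters instead of d_in·d_out), which is where the memory savings come from.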
Easy Background Cropping with Stable Diffusion 2.0 and AUTOMATIC1111
18K views · 1 year ago
We walk through how to create a photorealistic image with stable diffusion 2.0 and remove the image background with the new automatic1111 depth mask cropping extension. Discord: discord.gg/s8rVscu2pM Links URL for Extension: github.com/Extraltodeus/depthmap2mask Misc Negative prompt used: disfigured, kitsch, ugly, oversaturated, grain, low-res, Deformed, blurry, bad anatomy, disfigured, poorly ...
New Stable Diffusion Dreambooth Archive
4.7K views · 1 year ago
Civitai is an (awesome!) archive for stable diffusion dreambooth finetunes and it's really easy to work with. In this tutorial we walk through the process of downloading a model from Civitai and generating some images with it! Discord: discord.gg/s8rVscu2pM Links The archive: civitai.com/ Music Music from freetousemusic.com ‘Daily’ by ‘LuKremBo’:ruclips.net/video/Tchb1Q4V-nc/видео.html ‘Late Mo...
Automatic1111 Stable Diffusion 2.0 Install (easy as)
16K views · 1 year ago
A quick (and unusually high energy) walkthrough tutorial for installing and using stable diffusion 2.0 with the Automatic1111 webui. Automatic1111 Install Guide: github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#stable-diffusion-20 Automatic1111 repo: github.com/AUTOMATIC1111/stable-diffusion-webui Xformers Install Tutorial: ruclips.net/video/ZVqalCax6MA/видео.html&ab_channel=koiboi...
1-Click Stable Diffusion Install for my Dad
4K views · 1 year ago
Hey, dad! Follow this tutorial and you can finally try all those things you keep suggesting I do yourself! Finally there's a GOOD one-click stable diffusion installer. It doesn't require any coding or computer knowledge at all! This stable diffusion install tutorial is designed to be simple enough for non-technical boomers like my dad to follow along, if it helps anyone else out that would be r...
What makes Midjourney v4 so much Better?
4.4K views · 1 year ago
New Easy VAE Workflow (Stable Diffusion)
15K views · 1 year ago
What the heck is CLIP Skip and when do I use it?
34K views · 1 year ago
Is the new Image-to-Music algorithm Really That Good?
7K views · 1 year ago
How to use Aesthetic Gradients: Stable Diffusion Tutorial
27K views · 1 year ago
Stable Diffusion: The Ultimate GPU Guide
62K views · 1 year ago
AI Art moves really fast: How to stay current without endless scrolling
3K views · 1 year ago
New Stable Diffusion Virtual Reality Tech
9K views · 1 year ago
Install XFormers in one click and run Stable Diffusion at least 1.5x faster
119K views · 1 year ago
Visual Paper Summary: Progressive Distillation | Imagen, Stable Diffusion, Dall E
2.5K views · 1 year ago
(OUTDATED) How to install XFormers and run Stable Diffusion 1.5x faster on 4GB RAM
45K views · 1 year ago
NovelAI Leak: The first big Stable Diffusion Community Drama and the Banning of AUTOMATIC1111
13K views · 1 year ago
Cross-Attention: Prompt based image editing for Stable Diffusion (colab notebook included)
4.6K views · 1 year ago
Key takeaways from the Dreambooth paper for better Stable Diffusion results ft. Hasan Piker
2.2K views · 1 year ago
this is such a great video, would you be able to make a video on IPAdapters?
It doesn't work! After March and the update to 1.9.0++, a lot of things have disappeared in SD.
Man this video is so helpful. Ty
0:48 yeah current world. let bloat internet and get likes. its youtube that make more and you get worthless wall piece
I appreciate the technical explanation but, in all honesty, I boiled it down after a single picture: ControlNet prioritizes image-to-image for denoising the subject, and text-to-image fills in the details.
I'm not sure the numbers about Dreambooth downloads are accurate. It seems he got that number from the number of "checkpoints" (aka full size models) downloaded from civit ai, but I'm not so sure most of those are made with Dreambooth, a lot (if not the majority) are model merges which is not the same thing. Just thought I'd mention that.
Fkn love this
on training, I got this error: Exception training model: 'type object 'LoraLoaderMixin' has no attribute '_modify_text_encoder''. any thoughts?
outdated?
thanks for the short explanation. Loved it!
There's a few different ways to remove backgrounds in the stock AUTOMATIC1111 interface. Is there a reason you downloaded an additional script? Are the stock functions not working? Just curious, because while I was previously able to change backgrounds now it seems I get a mixed bag of results, which by no means compare to your time of 11 minutes.. Would be interesting video or just a comment as to why this is the background method you're choosing. Thanks for the howto bro. Peace
Excellent !!!!
Niiiiiceee!! Very comprehensive
I couldn't follow these instructions because the launch.py file calls a separate modules\launch_utils.py and the setup of that file is different than here...BUT... if I just added the argument --xformers to the webui(.bat) command line, it did the same thing... so just launching SD by typing 'webui --xformers' without the quotes
This worked for me thanks! I added 'webui --xformers' in to the file 'webui-user.bat' so it is automatically run when SD is started. As a sidenote, the reason why I needed xformers: My 24GB RTX 3090 was running out of memory in images even with dimension under 500x500, but now I can upscale no problem even 100x size images
I really like your videos btw
What about the argument that the art is stolen? Stable Diffusion, etc, were trained on images without artists consent.
my intuition says hypernetwork is better than lora. Hypernetwork would have more layers than Lora.
Are they training for a specific sampler if so how?
it is a fat dislike for this kind a tutorial. Looks you are smart enough. But why did you place your talking head in upper left corner ? That is a big fail.
huh. im creating a js app that lets me do the same thing and be able to edit the entire process with drag and drop within the environment, but im not using ai or an image, but opencv and live camera streams instead
Do you understand that you are a legend?
What exactly does learning rate do?
Taking the course! Ofcourseeeeeeeeeeeee
Best video I ever seen. Best vibes! Thanks so much
man it's been a wild west out there a year later.
However can textual inversion create something that the model has never learned before and has no similar shape than anything it learned before? Well a corgi is a dog and SD learned many different dogs. What if SD was never trained on images with bicycles and motorbikes. Could it create an image of a bicycle with 'just' textual inversion?
why would you use zoom < 0 anyway...?
So entertaining and insightful, thank you koiboi. Really enjoyed it!
bro, who is Hasan? :D
All I had to do was edit a line in webui.bat... this video is such nonsense.
Why does this say "one click" but its 13 mins long? Stupid
My GPU, a 1050, only has 2GB of VRAM. It seems impossible to run Stable Diffusion sigh. I have been experimenting with AI-generated content for two years on sites like PixAi, Google Colab, Nai, and more. Currently, I am contemplating generating content on my own PC, but it looks like I'll need to save money to buy the latest GPU
Hey, you're that guy from IT Masters!
Even though I should not write a comment, here it is. Thanks for your trial and error. I feel like higher strength is better for consistency; otherwise the videos flicker a lot. The FPS you're working at may also matter: with more frames per second you can't afford too much change, but with fewer frames per second a bigger change won't flicker too much.
We need to install both of plugin and krita right?
short, precise, perfect video! 👍
thank you for getting straight to the point with the results.
outdated.. Not useful anymore.
Thanks for all your hard work, Is dev dead?
I really like how you approached this tut and really hand held people to help them through this process. I will not yell at you about the PIP. I do want to know why you think we need to see you on screen? I ask this of everyone who PIPs.
Thank you 🙂
Can you show exactly which setting and "how" to edit that in Deforum? Because I'm not good at math lol.
This guy has super powers that I need
Bro im starting studying this subject, im lost in all of these terms LoRa checkpoint, and everything. Is there any guide to understand this. beginners guide to follow and study.
Very cool.
Still works, one year later, with cuda 12.3
More videos please
Very much enjoyed your presentation. With thanks from ENG-land :)