Why is ComfyUI faster? A digest of Reddit discussion comparing the speed of ComfyUI, Automatic1111 (A1111), and Forge.

So yeah, like people say on here, your negatives are just too basic. "(Composition) will be different between ComfyUI and A1111 due to various reasons." The weights are also interpreted differently.

I'm always on a budget, so I stored all my models on an HDD.

I'm new to ComfyUI and to Stable Diffusion in general. I need help (I just want to install normal SD, not SDXL).

"flux1-dev-bnb-nf4" is a new Flux model that is nearly 4 times faster than the Flux Dev version and 3 times faster than the Flux Schnell version.

To compare the diffusion process, I first made three outputs at 10, 20, and 30 steps, then tested with CFG 8, 6, and 4. Generally, the ComfyUI images are worse if you use CFG > 4.

Hi :) I am using AnimateDiff in ComfyUI to output videos, but the speed feels very slow.

Why is everyone saying Automatic1111 is really slow with SDXL? I have it, and it even runs 1-2 seconds faster than my custom 1.5 checkpoint on the same PC, though the quality, at least comparing a few prompts, is another question. I updated it and loaded it up like normal using --medvram, and my SDXL generations are only taking about 15 seconds. Apparently, the earlier slowness was because of the errors logged at startup: possibly some custom nodes, or a wrongly installed startup package like torch or xformers.

Just tested it: Forge is definitely faster and gives (almost) the same results. If I understood correctly, even the extensions are the same; if so, I'll probably replace A1111 with Forge, but Comfy is still much faster.

CUI is also faster. The CPP version overheats my computer MUCH faster than A1111 or ComfyUI. I believe I have fast RAM, which might explain it.

Maybe I am doing it wrong, but ComfyUI inpainting is a bit awkward to use: you have to draw a mask, save the image with the mask, then upload it to the UI again to inpaint.

Hyperthreading doesn't make anything go any faster; it just allows for twice the interrupts, which helps alleviate waiting. Essentially, under 100% load from one app that only uses the actual number of physical cores, the CPU shows as 50% utilization because the other half of the logical cores are not in use by the app, yet the CPU is doing all the actual work it can.

For DPM++ SDE Karras, I selected the Karras scheduler.

Yeah, looks like it's just my Automatic1111 that has a problem; ComfyUI is working fast. I still need to fix Automatic1111 though, might have to reinstall. Had a similar experience when I started with ComfyUI.

I want to switch from A1111 to ComfyUI. Is infinite zoom possible in ComfyUI? Any experience or knowledge on any of the above is greatly appreciated.

Hi! Does anyone here use ComfyUI professionally for work, and if so, how and why? Also, why do you prefer it over alternatives like Midjourney, A1111, etc.?

Yes, you can do it using the ComfyAPI: see the "Controlling ComfyUI via Script" article by Yushan777 (Medium, Sep 2023), and also "ComfyUI: Using the API, Part 1". Once you have built what you want in Comfy, find the references in the JSON. Relatedly: I am running ComfyUI on a machine with 2x RTX 4090 and am trying to use the ComfyUI_NetDist custom node to run multiple copies of the ComfyUI server, each using a separate GPU, to speed up batch generation.
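The scripting and multi-GPU comments above are straightforward to act on. Below is a minimal sketch, assuming a ComfyUI server on the default 127.0.0.1:8188 (and, for a NetDist-style setup, a second instance on another port), and a workflow exported with "Save (API Format)" from the UI; the filename and the node id "3" are placeholders for whatever your own exported JSON contains.

```python
import json
import urllib.request

# Load a workflow exported from the ComfyUI menu via "Save (API Format)".
# "workflow_api.json" is a placeholder filename.
with open("workflow_api.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

# One URL per running ComfyUI instance. On a 2x4090 box you could start one
# server per GPU (each pinned via CUDA_VISIBLE_DEVICES) on its own port.
servers = ["http://127.0.0.1:8188", "http://127.0.0.1:8189"]

for i, server in enumerate(servers):
    # "3" is a placeholder id for the KSampler node in the exported JSON;
    # vary the seed so each instance renders a different image.
    workflow["3"]["inputs"]["seed"] = 1000 + i
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(server + "/prompt", data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        print(server, resp.read().decode())  # returns a prompt_id on success
```

POSTing to /prompt only enqueues the job; the results land in the server's output folder, or can be fetched afterwards through the /history and /view endpoints.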
When I first saw ComfyUI I was scared by how many options there are and what can be set. Don't get scared by the noodle forests you see on some screenshots.

While the kohya training samples were very good, the ComfyUI tests were awful. Tested the "failed" loras with A1111 and they were great; then I tested my previous loras with ComfyUI and they sucked as well. Asked Reddit what was going on, and everyone blindly copy-pasted the same answer over and over.

I find that much faster. I've found A1111 is still useful for many things, like grids, which Comfy can do but not as well. This is why I have and use both.

It's just the nature of how the GPU works that makes it so much faster.

At the moment there are three ways ComfyUI is distributed. Standalone: everything is contained in the zip, so you could use it on a brand-new system; easier to install and run. There is also a standalone beta build which runs on Python 3.11. This does take 20 to 30 minutes.

ComfyUI and AnimateDiff: "It's so fast!" (LCM LoRA + ControlNet OpenPose + AnimateDiff.)

I also use CTRL+B and CTRL+M on various nodes to toggle which ControlNet nodes are applying to my clip (with fast-bypass and fast-mute nodes connected to them to quickly toggle individual node state). When the path changes, the "deactivated" path gets a small blank image as its input, so that path processes faster as a result.

To me, Comfy feels like something better suited for post-processing rather than image generation. There is no point using a node-based UI just to generate an image, but layering different models for upscaling or feature refinement is the main reason Comfy is actually good after the image-generation part. At the moment, using Loras and TIs is a PITA.

I started with Easy Diffusion, then moved to Automatic1111, but recently I installed ComfyUI and drag-and-dropped a workflow from Google (Sytan's workflow), and it is amazing. So far the images look pretty good.

Nodes in ComfyUI represent specific Stable Diffusion functions. In the GitHub Q&A, the ComfyUI author had this to say: "Why did you make this? I wanted to learn how Stable Diffusion worked in detail."

I read that Forge increases speed for my GPU by 70%.

If it allowed more control, more people would be interested, but it just replaces dropdown menus and windows with nodes.

I'm getting issues with my ComfyUI when loading this custom SDXL Turbo model.

I have a 4090 rig, and I can 4x the exact same images at least 30x faster than using ComfyUI workflows.

Once you get comfy with Comfy, you don't want to go back.

The original workflow doesn't use LCM as the sampler; I just use it to make the generation faster. In this case he also uses the ModelSamplingDiscrete node from the WAS node suite, supposedly for chained loras; in my tests that node made no difference whatsoever, so it can be ignored as well.

But then I realized: shouldn't it be possible (and faster) to link the output from one into the next instead?

Two broad answers recur. A: faster and/or more resource-efficient. B: more flexible and powerful for the deep-diving workflow crafters, code nerds who make their own nodes, and wonks.

Introducing "Fast Creator v1.4", a free workflow for ComfyUI. Hey everyone! I'm excited to share the latest update to my free workflow; it includes new features and improvements to make your image-creation process faster and more efficient.
The ComfyUI target audience is mainly engineer-minded, high-tech people (heck, I've been dealing with PCs for almost 24 years and I scratched my head multiple times on some workflows). This is advertised like it's targeted at families and kids.

Colab does break in my normal operation. I regularly get several hours before it breaks; I merely stop and restart the Jupyter script.

So, while I don't know specifically what you've been watching, the short version is that ComfyUI enables things that other UIs can't. Whether that applies to your case really depends on what you're trying to do.

After all, the more tools there are in the SD ecosystem, the better for SAI, even if ComfyUI and its core library are the official code base for SAI nowadays. Healthy competition, even between direct rivals, is good for both parties: SD and MJ are pushing themselves ahead faster and further because of each other.

Only the LCM Sampler extension is needed, as shown in this video. Sampling method on ComfyUI: LCM; CFG scale: 1 to 2; sampling steps: 4. This combo is just as fast as the DDIM one I was using. Thanks for implementing this so quickly! Messing around with it, though, I feel like the hype was a bit too much.

Shouldn't you be able to reach the same-ish result faster if you just upscale with a 2x upscaler? Is there some benefit to this upscale-then-downscale approach, or is it just related to the availability of 2x upscalers?

[Please help] Why is a bigger image faster to generate? This is a workflow I made yesterday, and I've noticed that the second KSampler is about 7x faster, even though it processes a larger image. It uses the exact same prompt, model, and image generated by the previous one, so why? For me it seems like adding more steps to the previous sampler would achieve similar results.

Also, "octane" might invoke "fast render" instead of "octane style".

A1111 isn't very polished in terms of UX/UI, but it's still a lot more intuitive. No spaghetti, no figuring out why this latent needs these 4 nodes and why one of them didn't work since the last update.

On my machine, Comfy is only marginally faster than 1111.

There is also experimental stable-fast and TensorRT support: see the gameltb/ComfyUI_stable_fast project on GitHub; you'll need to follow its guide to enable the stable-fast node. All it takes is a little time to compile the specific model with the resolution settings you plan to use, and it should then be at least as fast as the A1111 UI.

I'm still experimenting and figuring out a good workflow. Thank you for the advice.

Comfy is faster than A1111, and you have a lot of creative freedom to play around with latents, mix and match models, and do other crazy stuff in a workflow that can be built and reused.

Comfy doesn't really do "batch" modes; it just adds individual entries to the queue very quickly, so adding a batch of 10 images is exactly the same as clicking the "Queue Prompt" button 10 times.

For heavier jobs I use a cloud GPU, which I rent with a Vast.ai account and a Jupyter Notebook, for when I'm trying out new things, want or need to work fast, and for img2img batch iterative upscaling.

ComfyUI also uses xformers by default, which is non-deterministic. And I think the noise is generated differently too: A1111 uses the GPU by default, while ComfyUI uses the CPU.
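That noise comment is easy to verify: the same seed fed to a CPU RNG and a CUDA RNG produces different values, so two UIs that sample the initial latent on different devices can never match images, even with identical settings. A minimal sketch (requires PyTorch; the GPU half needs a CUDA build):

```python
import torch

shape = (1, 4, 64, 64)  # SD latent shape for a 512x512 image

gen_cpu = torch.Generator(device="cpu").manual_seed(42)
noise_cpu = torch.randn(shape, generator=gen_cpu, device="cpu")

if torch.cuda.is_available():
    gen_gpu = torch.Generator(device="cuda").manual_seed(42)
    noise_gpu = torch.randn(shape, generator=gen_gpu, device="cuda")
    # Same seed, different RNG implementation: the tensors do not match,
    # so the denoised images won't match either.
    print(torch.allclose(noise_cpu, noise_gpu.cpu()))  # False
```

Sampling noise on the CPU is the slower-but-portable choice: it makes a given seed reproduce the same image across different GPUs.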
Curiously, it is like 25% faster running an SD 1.5 checkpoint.

As you get comfortable with ComfyUI, you can experiment and try editing a workflow.

But everything goes smooth and fast only on a 4090. Despite the complex look, it's 2.5 to 3 times faster than Automatic1111.

I will say: don't dump Automatic1111.

With my 8 GB RX 6600, which could only run SDXL under SD.Next (out of memory after 1-2 runs, and at the default 1024x1024), I was able to use it in ComfyUI, but only at 512x512 or 768x512 / 512x768 (memory errors even with these from time to time). Do I have to use another workflow, and why are the images not rendered instantly, or why do I have these image issues? I somehow got it to magically run with AMD despite the lack of clarity and explanation on the GitHub and literally no video tutorial on it.

Fast ~18-step images (2s inference time on a 3080).

Initially I was put off by the messy-looking workflows in Comfy, but now I love it and it's all I use. Takes a minute to load.

If you are interested in learning how things work behind the scenes, you're better off investing the time into learning ComfyUI, though you will have to learn Stable Diffusion more deeply. If you are looking for a straightforward workflow that leads you quickly to a result, then Automatic1111; but if you want to go into more detail and have complete control over your composition, then ComfyUI.

I heard that ComfyUI generates faster.

As far as I understand, as opposed to A1111, ComfyUI has no GPU support for Mac; you have to run it on CPU (add --cpu after main.py when launching it in Terminal; this should fix it). Finally got ComfyUI set up on my base Mac M1 Mini and, as instructed, ran it on CPU only: "%python3 main.py --cpu". I test-ran a simple 512x512 image with no Lora etc., and it still took forever, almost 3 minutes. It will also be a lot slower this way than A1111, unfortunately, and sorry to say it won't be much faster even if you overclock the CPU.

ComfyUI Weekly Update: faster VAE, speed increases, early inpaint models, and more.

Except I have all those CSV files in the root directory ComfyUI indicates they need to be in.

DMD2 aims to create fast, one-step image generators that can produce high-quality images with much less computational cost than traditional diffusion models, which typically require many steps per image. Key improvements over DMD: it eliminates the need for a regression loss and expensive dataset construction.

ComfyUI takes 1:30; Auto1111 takes over 2:05.

On Runpod, don't load the ComfyUI template; load Fast Stable Diffusion. Within that, you'll find RNPD-ComfyUI.ipynb in /workspace.

Save up for an Nvidia card; it doesn't have to be the 4090. Some of the ones with 16 GB VRAM are pretty cheap now.

While ComfyUI already provides fast rendering, there are several techniques you can implement to further enhance its performance. This stuff gets complex really fast, especially in Comfy!

Maybe it's got something to do with the quantization method? The T5 FP8 + Flux Q3_K_S obviously don't fit together in 8 GB VRAM, and still the Flux Q3_K_S was loaded completely, so maybe I'm just not reading the console right. The Flux Q4_K_S just seems to be faster than the smaller Flux Q3_K_S, despite the latter being loaded completely; the quality compared to FP8 is really close.
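Some back-of-the-envelope arithmetic supports the 8 GB observation. The parameter counts and GGUF bits-per-weight figures below are rough assumptions (Flux.1-dev is on the order of 12B parameters, T5-XXL around 4.7B; Q4_K_S and Q3_K_S average roughly 4.5 and 3.5 bits per weight), not exact file sizes:

```python
GiB = 1024 ** 3

flux_params = 12e9  # assumed Flux.1-dev transformer size (~12B params)
t5_params = 4.7e9   # assumed T5-XXL text encoder size (~4.7B params)

sizes = {
    "Flux Q4_K_S (~4.5 bpw)": flux_params * 4.5 / 8,
    "Flux Q3_K_S (~3.5 bpw)": flux_params * 3.5 / 8,
    "T5-XXL FP8 (8 bpw)": t5_params * 8.0 / 8,
}
for name, nbytes in sizes.items():
    print(f"{name}: {nbytes / GiB:.1f} GiB")

# Weights alone for T5 FP8 + Flux Q3_K_S come to roughly 9.3 GiB, before
# activations, CLIP-L, and the VAE, so both cannot sit in 8 GiB of VRAM at
# once; something has to be offloaded or swapped between steps.
total = sizes["T5-XXL FP8 (8 bpw)"] + sizes["Flux Q3_K_S (~3.5 bpw)"]
print(f"T5 FP8 + Flux Q3_K_S: {total / GiB:.1f} GiB")
```

One plausible reading of the Q4-faster-than-Q3 report: the smaller K-quant costs extra dequantization work per step without saving any swapping, since neither variant fits alongside the text encoder anyway.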
On my 12GB 3060, A1111 can't generate a single SDXL 1024x1024 image without using RAM as VRAM at some point near the end of generation, even with --medvram set.

Have you checked out SD Forge? It has the Automatic1111 interface with a backend more like Comfy (i.e., faster).

I think, for me at least, with my current laptop, using ComfyUI is the way to go for now.

That's why people are having trouble using LCM in Comfy now, and also the new 60% faster SDXL (both only support diffusers).

But can it be used with ComfyUI? In my site-packages directory I see "transformers" but not "xformers".

Don't know if inpainting works with SDXL, but ComfyUI inpainting works with SD 1.5.

Comfy is basically a backend with a very light frontend, while A1111 is a very heavy frontend. A1111 is like ComfyUI with prebuilt workflows and a GUI for easier usage, but those prebuilt structures aren't optimized for low-end hardware specifically.

I have a 1060 with 6 GB of VRAM.

PSA: RealPLKSR is a new, FANTASTIC (and fast!) 4x upscaling architecture.

The workflow posted here relies heavily on useless third-party nodes from unknown extensions.

On the NetDist question: I have tried it (a) with one copy of SDXL running on each GPU and (b) with two copies of SDXL running per GPU. Definitely no nodes before that that quickly flick green before the KSampler?

ComfyUI also has faster startup and is better at handling VRAM, so you can generate larger images, or so I've heard. The big current advantage of ComfyUI over Automatic1111 is that it appears to handle VRAM much better: the speed difference is far more noticeable on lower-VRAM setups, as ComfyUI is way more efficient with RAM and VRAM. CUI can do a batch of 4 and stay within the 12 GB.
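A quick way to test these VRAM claims on your own machine is to watch peak allocations around a generation. The sketch below only sees PyTorch allocations in the current process (and needs a CUDA build), so it is indicative rather than a full picture of what a UI uses:

```python
import torch

def report_vram(tag: str) -> None:
    # Peak bytes PyTorch has allocated on the current CUDA device, plus how
    # much of the card is free overall according to the driver.
    peak = torch.cuda.max_memory_allocated()
    free, total = torch.cuda.mem_get_info()
    print(f"[{tag}] peak allocated: {peak / 2**30:.2f} GiB, "
          f"free: {free / 2**30:.2f} / {total / 2**30:.2f} GiB")

torch.cuda.reset_peak_memory_stats()
# ... run one generation here (a ComfyUI queue item, an A1111 job, etc.) ...
report_vram("after SDXL 1024x1024")
```

If peak allocated approaches the card's total, the NVIDIA driver may start paging into system RAM ("shared GPU memory"), which is exactly the late-generation slowdown the 3060 comment describes.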
I've been using ComfyUI as my go-to for about a month, and it's so much better than 1111.

Running on an M2, so if it works, a good speed increase is always great.

And it produces better results than I ever get in A1111 somehow. Am I doing something wrong with A1111, or is ComfyUI just that much faster and better? With Comfy you can optimize your stuff how you want.

I have an RTX 2070 + 16GB RAM, and ComfyUI had been working fine, but today, after a few generations, it slows down from about 15 seconds per image to a minute and a half.

Hey r/comfyui, I just published a new video going over the recent updates as ComfyUI reaches the end of the year. The video covers: the new SD 2.1 Turbo model; front-end improvements like group nodes, undo/redo, and rerouting primitives; and experimental features.

ComfyUI weights prompts differently than A1111. It also seems like ComfyUI is way too intense about heavier weights on (words:1.2) and just gives weird results; for instance, (word:1.1) in ComfyUI is much stronger than (word:1.1) in A1111. A1111 does a lot behind the scenes with prompts, while ComfyUI doesn't, making it more sensitive to prompt length. The sampler shouldn't affect this, but I always use Euler normal; try it out. I was facing similar issues when I first started using ComfyUI: try adjusting the CFG scale to 5, and if your prompts are big like in A1111, add a token-merging node.
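The weighting complaints are easier to see with toy numbers. The two formulas below are simplified models of the strategies as they are commonly described in these threads (ComfyUI keeps the raw multiplication; A1111 multiplies and then restores the tensor's overall mean), not the actual implementations of either UI:

```python
import torch

# Toy stand-ins for token embeddings: 4 tokens, 8 dims, all positive so the
# mean-rescale below is numerically stable. Token 2 gets weight 1.3.
z = torch.linspace(0.1, 1.0, 32).reshape(4, 8)
w = torch.tensor([1.0, 1.0, 1.3, 1.0]).unsqueeze(1)

# "ComfyUI-style" (simplified): multiply the weighted token, keep the result.
z_comfy = z * w

# "A1111-style" (simplified): multiply, then rescale the whole tensor so its
# mean matches the unweighted mean, which dilutes the emphasis across tokens.
z_mult = z * w
z_a1111 = z_mult * (z.mean() / z_mult.mean())

print("comfy-style shift: ", (z_comfy - z).norm().item())   # larger
print("a1111-style shift:", (z_a1111 - z).norm().item())    # smaller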
ComfyUI is a bitch to learn at first, but once you get a grasp of it and build the workflows you want for what you're doing, you are on a plateau and it's really easy. The one thing I would add is that a lot of the time you spend learning ComfyUI, you will also be learning about the underlying technologies, since you can combine anything together.

Fooocus would be even faster.

Using ComfyUI was a better experience: the images took around 1:50 to 2:25 for 1024x1024 / 1024x768, all with the refiner.

I had previously used ComfyUI with SDXL 0.9, and it was quite fast on my 8GB VRAM GPU (RTX 3070 Laptop).

After noticing the new UI without the floating toolbar and the top menu, my first reaction was to instinctively revert to the old interface. Instead of complaining about it, I decided to give it a try.

When you drag an image to the ComfyUI window, you will get the settings used to create THAT image, not the batch.
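That drag-and-drop trick works because ComfyUI embeds the full graph in the PNG's text chunks: "prompt" holds the API-format graph and "workflow" the editor layout. A small sketch with Pillow ("output.png" is a placeholder path):

```python
import json
from PIL import Image

im = Image.open("output.png")   # any image saved by ComfyUI's SaveImage node
prompt = im.info.get("prompt")  # API-format graph, stored as a JSON string

if prompt:
    graph = json.loads(prompt)
    for node_id, node in graph.items():
        # Print the sampler settings baked into this specific image.
        if node.get("class_type") == "KSampler":
            print(node_id, node["inputs"])
else:
    print("No ComfyUI metadata; the image may be from another tool.")
```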
I like the web UI more, but ComfyUI just gets things done quicker, and I can't figure out why; it's breaking my brain.

In my experience, ComfyUI is 4x faster than A1111.

#ComfyUI #Ultimate-upscale: a faster upscale, same quality. What are your normal settings for it? I am upscaling a long sequence (batch count) of images, one by one.

The big difference is that, looking at Task Manager (on different runs, so as not to influence results), my CPU usage is at 100% with the CPP version, with low RAM usage, while in the others my CPU usage is very low with very high RAM usage.

A few weeks ago I did a "spring-cleaning" on my PC and completely wiped my Anaconda environments, packages, etc.

I accidentally tested ComfyUI for the first time about 20 minutes ago and noticed I had clicked on the CPU .bat file (my bad 🤦‍♂️).

Left one is Forge, right one is A1111: 7 it/s vs 5 it/s.

That could easily be why things are going so fast; I'll have to test it out and see whether it's an issue with generation quality. But yeah, it goes fast in ComfyUI.

Comfy does launch faster than Auto1111, though.

It is also useful when you want to quickly try something out, since you don't need to set up a workflow.

The main problem is that moving large files to and from an SSD repeatedly will wear it out pretty fast. If cost is not a constraint and you have enough space to back up your files, move everything to an SSD.

I only found Comfy quicker in super-simple generations or small automated processes for pumping out tons of pictures quickly.

And with the introduction of SDXL and the push for ComfyUI, I fear that it is heading in that direction even faster.

ComfyUI is a great sandbox environment for people with advanced knowledge of SD and AI, but for people who aren't as read-up on all the different systems, it gets overwhelming fast. Here's the thing: ComfyUI is very intimidating at first, so I completely understand why people are put off by it.

How would I specify it to use the venv instead of the system Python? I didn't quite understand the part where you can use the venv folder from another webui, like A1111, to launch it instead. I'm currently a bit confused with ComfyUI.

However, you can also run any workflow online: the GPUs are abstracted, so you don't have to rent a GPU manually, and since the site is in beta right now, running workflows online is free. It also runs on CPU.

Definitely the width on your resolution: too much and you get side-by-side people; too high on the height and you get multiple heads. Lower the resolution, and if you have to go widescreen, use outpainting or the amazing Photoshop beta.

To learn ComfyUI faster, I recommend installing the ComfyUI Manager extension; with it you can grab other custom nodes easily.
Turbo SDXL LoRA: Stable Diffusion XL "faster than light", a few seconds per image; tested on ComfyUI, workflow on my civitai page.

Hadn't messed with A1111 in a bit and wanted to see if much had changed. 😁

I'll stay on ComfyUI since it works better for me: it's faster, more customizable, looks better (in that I can arrange nodes where I want), its updates don't completely break the install the way A1111's always do, and most importantly it lets me actually generate the images I want without constant out-of-memory errors.

You can lose the top 4 nodes as they are just duplicates; link them back to the original ones.

By being a modular program, ComfyUI allows everyone to make workflows to meet their own needs or to experiment on whatever they want. You define the complexity of what you build. Everything in AI is changing left and right, so a flexible approach is best, IMHO; it's the perfect tool to explore generative AI, and nothing gets close to ComfyUI here.

When ComfyUI just starts, the first image generation will always be fast (1 minute at best), but the second generation and so on will always be slower, almost 1 minute slower. If I restart the app it is fast again, and then subsequent generations slow down again.

It is actually faster for me to load a lora in ComfyUI than in A1111. Plus, Comfy is faster, and with the ready-made workflows a lot of things can be simplified; I'm learning what works and how from them.

On my rig, it's about 50% faster, so I tend to mass-generate images on ComfyUI, then bring any images I need to fine-tune over to A1111 for inpainting and the like. There are many anecdotes on this subreddit that ComfyUI is much faster than A1111, though without much info to back them up.

This is something I posted just last week on GitHub: when I started using ComfyUI with the PyTorch nightly for macOS at the beginning of August, the generation speed on my M2 Max with 96GB RAM was on par with A1111/SD.Next. Progressively, it seemed to get a bit slower, but negligibly. In general, image generation on MPS is slow, even on an M2 Max.

I use a script that updates ComfyUI and checks all the custom nodes.

ComfyUI allows you to build an extremely specific workflow with a level of control that no other system in existence can match. Take it easy! 👍

The speed is very fast, and you can enable xFormers for even faster speed on Nvidia cards.

When I was using Automatic1111, it would let me adjust the RAM-usage options and add a bunch of command-line arguments to the batch file directly; I can't seem to find such a file under ComfyUI.

Seems to have everything I need for image sampling.

I watched more carefully, and the reason for the speed difference should have been blatantly obvious to me the first time: the A1111 run was done using the Euler a sampler, while the ComfyUI run used DPM++ 2S a, which is about half as fast.
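That last observation generalizes: wall-clock time tracks UNet calls, not "steps", and second-order samplers make two calls per step. The per-call latency below is an assumed figure for illustration only; measure your own:

```python
# Rough throughput model: seconds = steps * UNet calls per step * call latency.
# Euler a and DPM++ 2M evaluate the model once per step; the second-order
# DPM++ 2S a and DPM++ SDE evaluate it twice.
samplers = {"Euler a": 1, "DPM++ 2M": 1, "DPM++ 2S a": 2, "DPM++ SDE": 2}
steps = 20
unet_seconds = 0.15  # assumed per-call latency; depends entirely on your GPU

for name, calls in samplers.items():
    total = steps * calls * unet_seconds
    print(f"{name}: {steps * calls} UNet calls ~ {total:.1f}s")
```

So an "A1111 vs ComfyUI" timing comparison is meaningless unless both runs use the same sampler; Euler a vs DPM++ 2S a alone accounts for a 2x difference.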
UPDATE: In Automatic1111, my 3060 (12GB) can generate a 20 base-step, 10 refiner-step 1024x1024 Euler a image in just a few seconds over a minute. In ComfyUI using Juggernaut XL, it would usually take 30 seconds to a minute to run a batch of 4 images.

Now I've been on ComfyUI for a few months and I won't turn A1111 on anymore. ComfyUI is still way faster on my system than Auto1111, like 20-50% faster in terms of images generated per minute. It doesn't have all the features, and for those I occasionally have to switch back, but the node-style editor in Comfy is so much clearer.

No idea why, but I get like 7.13 s/it on ComfyUI, and on WebUI I get like 173 s/it. I've tried everything: reinstalled drivers, reinstalled the app, and I still can't get WebUI to run quicker.

Hi everybody, I am running A1111, ComfyUI, EasyDiffusion, and Fooocus-MRE in a virtual machine (explaining why at the bottom of this post).

One of the strengths of ComfyUI is that it doesn't share the checkpoint across all the tabs; in A1111, when you change the checkpoint, it changes for all the active tabs. It also allows fun things like quickly generating an image with 1.5 and then using the XL refiner. It adds additional steps, but I will also experiment with the fast speed; then again, I have to ask myself if that is really faster.

ComfyUI makes things complicated, but people become bored.

Here are my pros and cons so far for ComfyUI. Pros: standalone and portable, almost no requirements or setup, starts very fast, SDXL support, and it shows the technical relationships of the individual modules. Cons: a complex UI that can be confusing, and without advanced knowledge of AI/ML it is hard to use or create workflows.

However, the engine unloading caused by VAE decoding can greatly slow down the overall generation speed.

I started using ComfyUI with ReActor Fast Face Swap. Most of the time, I use A1111.

Commercial product background replacement: high resolution, fast and effective.

I've been generating (mostly with ComfyUI) on a 3070 Ti laptop (8GB VRAM), and I want to upgrade to a good GPU for my desktop PC.

That said, Upscayl is SIGNIFICANTLY faster for me; no matter what, Upscayl is a speed demon in comparison. I've played around with different upscale models in both applications, as well as settings.

I have yet to find anything that I could do in A1111 that I can't do in ComfyUI, including XYZ plots. Hope I didn't crush your dreams.

Try using an fp16 model config in the CheckpointLoader node; that should speed things up a bit on newer cards. The floating-point precision of fp16 is very poor for very, very small decimals, while bf16 is capable of much better representation of very small decimals.
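The fp16/bf16 remarks are easy to demonstrate: fp16 trades range for precision, bf16 the reverse. A minimal check in PyTorch:

```python
import torch

tiny = 1e-10  # far below fp16's smallest subnormal (~6e-8)

print(torch.tensor(tiny, dtype=torch.float16))   # tensor(0.) - underflows
print(torch.tensor(tiny, dtype=torch.bfloat16))  # ~1e-10 - still representable

# Smallest normal positive value per format: fp16 gives up exponent range for
# mantissa bits; bf16 keeps fp32's exponent range but has only ~2-3
# significant decimal digits of precision.
print(torch.finfo(torch.float16).tiny)   # ~6.1e-05
print(torch.finfo(torch.bfloat16).tiny)  # ~1.2e-38
```

That is why an fp16 model config speeds things up on cards with fast half-precision math, while bf16 is the safer dtype when tiny values must not flush to zero.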
VRAM optimization throughout means you can run ED with very little memory and still have access to all the features.