ComfyUI workflow JSON examples (Reddit digest)

Is there a way to load a workflow from an image within ComfyUI? Yes: images generated by the ComfyUI frontend carry the full workflow in their metadata, so Comfy can rebuild the graph from the image as long as everything necessary is installed. Reddit strips that metadata on upload, though, so if you are willing to share, upload the PNG to a host that preserves it (civitai.com or imgur.com) and post a link back here instead.

One shared workflow, provided in JSON format, is perfect for animating hair while keeping the rest of the face still, as you can see in the examples, and works with SD 1.x and 2.x. While I have you: where is the best place to insert the base LoRA in that workflow? From an Italian post (translated): the downloadable file contains a JSON to import into ComfyUI with two ready-to-use workflows, one built around Portrait Master for portraits and one where you enter the positive and negative prompts manually.

Other pointers from the thread: the official ComfyUI Examples repo, a ComfyUI Tattoo Workflow on OpenArt, a ControlNet inpaint example, and the One Button Prompt nodes. One author notes there were a few mistakes in version 3.1 of his workflow that are now corrected; the first of his two workflows is very similar to the old one and is just called "simple". That said, even for making apps, I believe using ComfyScript is better than directly modifying JSON, especially if the workflow is complex; and if you want to automate things, there are Python packages that can read information out of a file such as a ComfyUI workflow JSON.

Practical tips: always refresh your browser and click Refresh in the ComfyUI window after adding models or custom nodes. The setting names in Krita's JSON don't match what's in Comfy's JSON at all, so you can't simply copy them across. One user can't get dynamic prompts to work because the random prompt string won't link to the CLIP Text Encode node the way the diagram on the GitHub page indicates. Image saving and postprocessing in one shared workflow needs was-node-suite-comfyui installed. I really love how lightweight and flexible ComfyUI is, and I would also love to see a repo of actual JSON files or images. One PixArt workflow loads t5xxl_fp8_e4m3fn.safetensors (5 GB, from the infamous SD3) instead of PixArt's default 20 GB T5 encoder.

For video, I had very good results keeping AI-generated layers in Resolve and doing the rest as standard VFX. Resolutions of 512x512, 600x400 and 800x400 are the limit I have tested; I don't know how it will work at higher resolutions. A caching trick: disconnect your upscale section and put a Load Image node at its start; generate a batch with a fixed seed, and if you like one image, load it there and re-queue - because the seed hasn't changed, ComfyUI skips the unchanged nodes. The video in the post shows a rather simple layout that proves out the building blocks of a mute-based, context-building workflow. You may plug the shared graphs into SD 1.5 base models and modify the latent image dimensions and upscale values to taste.

When you save a workflow you are actually "downloading" the JSON, so it lands in your browser's default download folder (on Ubuntu, Downloads), which is inconvenient when working from a work PC or a tablet. Has anyone else messed around with the GLIGEN nodes much? Finally, it's pretty easy to prune a workflow JSON before sending it to ComfyUI.
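On that last point, a minimal pruning sketch, assuming an API-format export (a flat dict of node id to node); PreviewImage nodes are safe to drop because nothing consumes their output:

    import json

    def prune(src, dst, drop_types=("PreviewImage",)):
        # API-format file: {"3": {"class_type": "KSampler", "inputs": {...}}, ...}
        with open(src) as f:
            graph = json.load(f)
        kept = {nid: node for nid, node in graph.items()
                if node.get("class_type") not in drop_types}
        with open(dst, "w") as f:
            json.dump(kept, f, indent=2)

    prune("workflow_api.json", "workflow_api_pruned.json")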
A repository of well documented, easy-to-follow workflows: cubiq/ComfyUI_Workflows. It is divided into macro categories; in the root of each directory you'll find the basic JSON files, plus an experiments directory with more advanced examples. Many workflow examples can be copied either visually or by downloading a shared file containing the workflow, and ComfyUI is good for anyone who wants a reproducible workflow that can output multiple images of the same kind with the same steps. The node graph might seem daunting at first, but you don't need to fully learn how everything is connected; if you're willing to put in the effort to thoroughly learn a complex game and enjoy the process, learning ComfyUI shouldn't be that much of a challenge.

On caching: when changing several nodes in a workflow, only specific ones rerun, such as the KSampler; changing values in nodes like Canny Edge or DW Pose Estimator doesn't always trigger a rerun.

Shared pieces in this thread include a group that lets the user perform a multitude of blends between image sources and add custom effects to images, a Flux Dev workflow, and an All-in-One FluxDev workflow that combines various techniques for generating images with the FluxDev model, including img2img and txt2img. EZ way for Flux: just download the all-in-one checkpoint from https://civitai.com/models/628682/flux-1-checkpoint and run it like another checkpoint.

A few months ago, I suggested the possibility of a frictionless mechanism to turn ComfyUI workflows, no matter how complex, into simple and customizable front-ends for end users. AP Workflow 6.0 for ComfyUI now supports SD 1.5 and HiRes Fix, IPAdapter, a Prompt Enricher via local LLMs (and OpenAI), a new Object Swapper + Face Swapper, FreeU v2, XY Plot, ControlNet and ControlLoRAs, SDXL Base + Refiner, Hand Detailer, Face Detailer, Upscalers, ReVision, and more. rgthree's tooling does similar things, and I've written CLI tools to the same end.

An example of the mute-based flow: input your choice of checkpoint and LoRA in their respective nodes in Group A, click New Fixed Random in the Seed node in Group A, mute the two Save Image nodes in Group E, then click Queue Prompt to generate a batch of 4 image previews in Group B.

I recently discovered the GLIGEN nodes in ComfyUI and shared some images made with them (more in the civitai post link). One user's request for a natural-language workflow builder reads: "Help me create a ComfyUI workflow that takes an input image, uses SAM to identify and inpaint watermarks for removal, then applies various methods to upscale the watermark-free image." Another simply wants to do a slow zoom out on an image. There is also an official list of SDXL resolutions as defined in the SDXL paper.

Two quirks worth noting: the nodes in the official example are sometimes not the same as what you get when adding them yourself, and one setup will load a workflow from JSON via the Load menu but not by drag and drop. It would be great to have a set of nodes that can further process image metadata, for example extract the seed and prompt to re-use in the workflow.
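Until such nodes exist, a few lines of Python get you there. ComfyUI writes the queued graph into a PNG text chunk named "prompt" (and the editor layout into "workflow"), so Pillow can read both; note that an input may hold a [node_id, output_index] link instead of a literal when it is wired from another node:

    import json
    from PIL import Image

    img = Image.open("ComfyUI_00001_.png")
    graph = json.loads(img.text["prompt"])  # API-format graph embedded by ComfyUI

    for node in graph.values():
        if node["class_type"] == "KSampler":
            print("seed:", node["inputs"]["seed"])
        elif node["class_type"] == "CLIPTextEncode":
            # "text" is a literal string unless it is fed by another node
            print("prompt:", node["inputs"]["text"])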
Can someone give examples of what you can do with the IPAdapter in general, beyond what's in the videos? I've used it a little and it feels like a way to have an instant LoRA for a character. One shared workflow has four main sections: Masks, IPAdapters, Prompts, and Outputs.

Is there a way to copy normal webUI parameters (the usual PNG info) into ComfyUI directly with a simple Ctrl+C/Ctrl+V? Dragging and dropping A1111 PNGs into ComfyUI works most of the time.

I've been using SD / ComfyUI for a few weeks now and find myself overwhelmed with the number of ways to do upscaling - what is the best workflow you know of? The default workflow works out of the box, and I definitely appreciate all the examples for different workflows; if you can, post the JSON or a picture generated from the workflow so it's easier to set up (the exact input image is in the Downloads folder on Ubuntu). When rendering human creations, I still find significantly better results with SD 1.5 models like epicRealism or Juggernaut, but once more models come out on the SDXL base we'll see incredible results.

For image-to-video there is a simple workflow for the new Stable Video Diffusion model; I used the one kindly provided by u/LumaBrik, mainly playing with parameters like CFG guidance, augmentation level, and motion bucket. It requires quite a few custom nodes and models, including PhotonLCM_v10.ckpt, and achieves high FPS using frame interpolation with RIFE (for 12 GB of VRAM the max is about 720p). It's built to be as fast as possible, to get the best clips and upscale them later.

SDXL Turbo is an SDXL model that can generate consistent images in a single step; the proper way to use it is the new SDTurboScheduler node, though it might also work with the regular schedulers (it does not work with SDXL for me at the moment). Flux Schnell is a distilled 4-step model, and there is a simple Flux AI workflow: just load your image and prompt and go. Note that one inpainting workflow uses the masquerade custom nodes, which are a bit broken, so I can't be totally sure of it.

These are examples demonstrating img2img: it works by loading an image, converting it to latent space with the VAE, and sampling on it with a denoise lower than 1.0. The denoise controls the amount of noise added to the image, and you can use more steps to increase the quality. One complex workflow with a lot of variables is annotated to explain what is going on, and you can turn each process on or off for each run. Support for SD 1.x, 2.x, SDXL, LoRA, and upscaling is what makes ComfyUI flexible.

For file placement: the Flux Schnell checkpoint goes in your ComfyUI/models/checkpoints/ directory, while the weights-only flux1-dev.sft file goes in your ComfyUI/models/unet/ folder.
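Since the notes above scatter files across several folders, here is the typical ComfyUI model layout those paths refer to; exact subfolders vary by install and installed extensions, so treat this as orientation rather than gospel:

    ComfyUI/
      models/
        checkpoints/        full merged checkpoints (e.g. the all-in-one Flux file)
        unet/               diffusion-model-only weights (e.g. flux1-dev.sft)
        vae/                standalone VAEs
        loras/              LoRA files
        controlnet/         ControlNet models
        ultralytics/bbox/   detectors such as face_yolov8m.pt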
If you have the SDXL 0.9 leaked repo, you can read the README.md file yourself and see that the refiner is in fact intended as img2img, basically as you see being done in the ComfyUI example workflow someone posted. Start simple; then you finally have an idea of what's going on, and you can move on to control nets, IPAdapters, detailers, CLIP Vision, and the twenty other things. Plus, you want to upscale in latent space if possible. Sytan's SDXL official ComfyUI 1.0 workflow, with Mixed Diffusion and a reliable high-quality HiRes Fix, is now officially released; check ComfyUI itself at https://github.com/comfyanonymous/ComfyUI.

AP Workflow 7.0 for ComfyUI adds support for Stable Video Diffusion, a better Upscaler, a new Caption Generator, and a new Inpainter (with inpainting/outpainting). Another share includes two workflows, mainly designed to make or change an initial image (from a prompt or from a folder) before sending it to the sampler - nobody needs all that, LOL.

All images generated by the main ComfyUI frontend have the workflow embedded, so you can just download one, drag it inside ComfyUI, and get the same workflow you see above (right now anything generated through the ComfyUI API doesn't embed it). For each of the sequences, I generated about ten of them and then chose the one I liked best.

Other scattered notes: the ComfyUI Fooocus Inpaint with Segmentation workflow; an AutoCinemagraph workflow for animating still images; and the SDFX setup step - rename sdfx.config.json.example to sdfx.config.json, and verify or edit the paths to your model folders. In A1111's image-to-image you can batch load all frames of a video, plus ControlNet images and even masks, and as long as they share the same names as the main video frames they are associated automatically during batch processing; the ComfyUI workflow is just a bit easier to drag and drop and get going right away. With ComfyUI Workflow Manager, can I change or modify where my JSON workflows are stored and saved? Yes, that feature was just enabled.

There are both a latent-upscale workflow and a pixel-space ESRGAN workflow in the examples. ComfyUI is a completely different conceptual approach to generative art, and one year passes very quickly - progress is never linear or promised. Hey all, I'm attempting to replicate my workflow from A1111 and SD 1.5 by using XL in Comfy; load the .json file to follow along.
A video snapshot is a variant on this theme, and a search of the subreddit didn't turn up any answers to my question. It's just not intended as an upscale from the resolution used in the base model stage.

People run bots that generate art and post it automatically to Discord and other places all the time. I tried to find a good inpaint workflow and just found a bunch of wild ones that wanted a million nodes and bundled a bunch of different functions; doing it properly would require many specific image-manipulation nodes to cut an image region, pass it through the model, and paste it back.

I've also added a TaraApiKeySaver node to my setup. I downloaded the example IPAdapter workflow from GitHub and rearranged it a little bit to make it easier to look at, so I can see what the heck is going on. This is an example of an image generated with the advanced workflow: it's not meant to overwhelm anyone with complex, cutting-edge tech, but rather to show the power of building modules/groups as blocks and merging them into a workflow through muting.

There is also an animation workflow by Future Thinker, and a simple workflow for the new Stable Video Diffusion model for image-to-video generation. Oh, and if you would like to try out the workflow, check out the comments - I couldn't put it in the description because my account awaits verification.
The idea of one tall-image workflow is that it creates a tall canvas and renders 4 vertical sections separately, combining them as it goes.

I'm looking for a workflow (or tutorial) that enables removal of an object or region (generative fill) in an image; I've tried an empty positive prompt (as suggested in demos) and describing the content to be replaced, without success. There are plenty of ready-made workflows to be found, and the WAS suite has some workflow stuff in its GitHub links as well. I am trying to find a workflow to automate the manual steps I currently do (Blender plus other tools). There is also an UltimateSDUpscale node suite as an extension.

From a Chinese post (translated): the expression code is modified from ComfyUI-AdvancedLivePortrait; for the face-crop model see comfyui-ultralytics-yolo, and download face_yolov8m.pt or face_yolov8n.pt into models/ultralytics/bbox/.

If you download custom nodes, their example workflow .json files usually ship in the repo. Load the .json, or drag and drop a workflow image into the UI (the image must not come from Reddit, since Reddit removes the metadata). I made an open-source set of ControlNet and T2I-Adapter ComfyUI workflow examples; note that in these the raw image is passed directly to the ControlNet/T2I adapter, whereas each adapter really needs its input in a specific format (depth maps, canny maps, and so on) depending on the model if you want good results - look for the example that uses ControlNet lineart. In SwarmUI, the Comfy Workflow tab becomes your version of ComfyUI, custom nodes included.

This is the workflow I use in ComfyUI to render 4k pictures with the DreamShaper XL model. If a JSON won't load, open it in VS Code, which can tell you whether the file is valid. As far as I can see from your workflow, you sent the full image to clip_vision, which basically turns the full image into an embedding. Pick an image that you want to inpaint. I didn't have the original .json file, so I roughly reproduced the workflow shown in the video on the GitHub site, and it works - maybe even better than before, since I'm getting good results with fewer samples. There is also a guide on how to set up ComfyUI on your Windows computer to run Flux. (Not a specialist, just a knowledgeable beginner.)

Now I've enabled Developer mode in Comfy and managed to save the workflow in JSON API format, but I need help setting up the API: in ComfyUI, go into Settings and enable the dev mode options, and a Save (API Format) entry appears in the main menu.
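For reference, a heavily abbreviated sketch of what such an API-format export looks like: a flat map of node ids to class types and inputs, where each input is either a literal value or a [source_node_id, output_index] link. Node ids, the model filename, and the prompt text below are illustrative:

    {
      "4": {"class_type": "CheckpointLoaderSimple",
            "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
      "5": {"class_type": "EmptyLatentImage",
            "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
      "6": {"class_type": "CLIPTextEncode",
            "inputs": {"text": "a photo of a cat", "clip": ["4", 1]}},
      "7": {"class_type": "CLIPTextEncode",
            "inputs": {"text": "blurry, watermark", "clip": ["4", 1]}},
      "3": {"class_type": "KSampler",
            "inputs": {"model": ["4", 0], "positive": ["6", 0],
                       "negative": ["7", 0], "latent_image": ["5", 0],
                       "seed": 42, "steps": 20, "cfg": 7.0,
                       "sampler_name": "euler", "scheduler": "normal",
                       "denoise": 1.0}},
      "8": {"class_type": "VAEDecode",
            "inputs": {"samples": ["3", 0], "vae": ["4", 2]}},
      "9": {"class_type": "SaveImage",
            "inputs": {"images": ["8", 0], "filename_prefix": "ComfyUI"}}
    }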
All of these were generated using this simple Comfy workflow (linked in the original post). The Ultimate SD Upscale is one of the nicest things from Auto1111: it first upscales your image using a GAN or any other old-school upscaler, then cuts it into tiles small enough to be digestible by SD, typically 512x512, with the pieces overlapping each other. I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part with a two-step upscale using the refiner model via Ultimate SD Upscale, as mentioned. If no workflow is attached, please change the flair to "Workflow not included".

Here is an example of 3 characters, each with its own pose, outfit, features, and expression; I'm adding LoRAs in my next iteration. Another share is basically an image loader combined with a whole bunch of little modules for various tasks, like building a prompt from an image, generating a color gradient, or batch-loading images; the workflow can use LoRAs, ControlNets, negative prompting with the KSampler, dynamic thresholding, inpainting, and more. In one code-first interface you create the workflow as you do in ComfyUI and then switch to that interface - an interesting implementation of the idea, with a lot of potential. The drawback of ComfyUI is that it cannot change the topology of the workflow once it has already started running. ComfyUI provides a powerful yet intuitive way to harness Stable Diffusion through a flowchart interface, though the ComfyUI manual needs updating, in my opinion.

I use a Google Colab VM to run ComfyUI, so every time I reconnect I have to load a pre-saved workflow to continue where I started. Does anyone else here use the Photoshop plugin? I managed to set up the sdxl_turbo_txt2img_api JSON file that is described in the documentation, and I just tried a few things: it looks like the only way to make this work is to use the "Save (API Format)" button in Comfy and then upload the resulting file.
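Setting up "the API" is then mostly a matter of HTTP: the stock ComfyUI server accepts a POST of that file. A minimal sketch, assuming a default local server on port 8188 (the polling pattern mirrors ComfyUI's bundled script examples):

    import json, time, urllib.request

    BASE = "http://127.0.0.1:8188"

    with open("workflow_api.json") as f:
        graph = json.load(f)

    # queue the graph
    req = urllib.request.Request(
        BASE + "/prompt",
        data=json.dumps({"prompt": graph}).encode("utf-8"),
        headers={"Content-Type": "application/json"})
    prompt_id = json.loads(urllib.request.urlopen(req).read())["prompt_id"]

    # poll history until the job shows up, then list the output images
    while True:
        with urllib.request.urlopen(f"{BASE}/history/{prompt_id}") as r:
            hist = json.loads(r.read())
        if prompt_id in hist:
            break
        time.sleep(1)

    for out in hist[prompt_id]["outputs"].values():
        for image in out.get("images", []):
            print(image["filename"], image["subfolder"], image["type"])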
Also, it's possible to share a setup as a project of some kind so others can fine-tune this workflow. The only references I've been able to find for this inpainting model use raw Python or Auto1111. For a prompt, for now I've got: "A gorgeous woman with long light-blonde hair wearing a low-cut tank top, standing in the rain on top of a mountain, highly detailed, concept art, sharp focus, illustration, art by Artgerm and Alphonse Mucha."

The ComfyUI Launcher runs each workflow in its own isolated environment, which prevents your workflows from suddenly breaking when you update a workflow's custom nodes or ComfyUI itself; it also exports workflows in a "launcher.json" format that anyone using the Launcher can import with 100% reproducibility. There are a couple of abandoned suites that claim to do similar things, e.g. Endless Nodes, but I couldn't find anything that can still be installed and works. You can use subfolders in model names too, e.g. cascade/clip_model.safetensors vs 1.5/clip_some_other_model.safetensors - it makes things easier to remember.

In one config for DreamShaper, the Load VAE and Load LoRA nodes are not plugged in. Instructions and a listing of the necessary resources are in the Note files of the starting workflow. More on point 3: I know people would say "just right-click on the image and save it", but this isn't the same at all. This is just a slightly modified ComfyUI workflow from an example provided in the examples repo - all the images in that repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create them. Upload your JSON workflow so that others can test it for you (this one is a JSON uploaded to PasteBin, link also in comments); please let me know if you have any questions (Discord: jojo studio). A longer-term idea is using natural-language descriptions to automatically produce the corresponding JSON configurations.

From the ComfyUI_examples there are two different two-pass (HiRes fix) methods, one latent scaling and one non-latent scaling; now there's also a PatchModelAddDownscale node. The new version is much more coherent and relies heavily on the IPAdapter source image, as you can see in the gallery. One GUI adds nodes as needed if you enable LoRAs or ControlNet or want refinement at 2x scale, and it can output your workflows as Comfy nodes if you ever want that. But I actually got the same problem as with "euler" - just very wildly different results, like in the examples above. With some nervous trepidation, I released my first node for ComfyUI, an implementation of the DemoFusion iterative mixing sampling process.

To turn a video into frames for processing: ffmpeg -i my-cool-video.mp4 -vf fps=10/1 frame%03d.png
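The inverse command stitches processed frames back into a clip; match the framerate to what you extracted with, and note the codec and pixel-format flags here are the usual safe defaults rather than anything the original poster specified:

    ffmpeg -framerate 10 -i frame%03d.png -c:v libx264 -pix_fmt yuv420p out.mp4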
For Stable Cascade, you can download the models from here for ease, and place them as follows (file sizes recovered from the thread; the Stage C filename was cut off in the original):

    Stage A >> \models\vae\SD Cascade stage_a.safetensors (73.7 MB)
    Stage B >> \models\unet\SD Cascade stage_b_bf16.safetensors (3.13 GB)
    Stage C >> \models\unet\SD Cascade (stage C checkpoint)

Learned from the video "Stable Cascade in ComfyUI Made Simple" (6m 56s, posted Feb 19, 2024 by the How Do? channel on YouTube). Do you have ComfyUI Manager? It makes fetching models much easier.

Here are the models you will need to run one video workflow: the Loosecontrol model, ControlNet_Checkpoint, v3_sd15_adapter.ckpt, v3_sd15_mm.ckpt, sd15_lora_beta.safetensors, and sd15_t2v_beta.safetensors. Hands are still bad, though.

Heya - I've been working on an Ultimate Starter Workflow for about a month and it's finally ready, so I also made a tutorial on how to use it. This workflow needs a bunch of custom nodes and models that are a pain to track down; if necessary, updates will be made available on GitHub. When I saw a certain Reddit thread, I was immediately inspired to test and create my own PIXART-Σ (PixArt-Sigma) ComfyUI workflow. See also Think Diffusion's "Stable Diffusion ComfyUI Top 10 Cool Workflows" and the ComfyUI SUPIR upscale workflow; you can find the workflows and more image examples below.

The workflow is saved as a JSON file, and you can load it again from that file; as always, remember this one is designed for learning how to build a pipeline and how SDXL works. Now think about the scenario where you have generated, say, 1000 images with a randomized prompt at low quality settings, selected the 100 best, and want to recreate those at high quality.
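Combining the two earlier sketches covers exactly that scenario: pull each keeper's embedded graph, raise the quality settings while the seed stays put, and resubmit. (queue() is a hypothetical thin wrapper around the /prompt POST shown earlier, and the node names assume a stock KSampler graph.)

    import json, urllib.request
    from pathlib import Path
    from PIL import Image

    def queue(graph):
        # hypothetical wrapper around the /prompt POST shown earlier
        req = urllib.request.Request(
            "http://127.0.0.1:8188/prompt",
            data=json.dumps({"prompt": graph}).encode("utf-8"),
            headers={"Content-Type": "application/json"})
        return json.loads(urllib.request.urlopen(req).read())["prompt_id"]

    for png in Path("keepers").glob("*.png"):
        graph = json.loads(Image.open(png).text["prompt"])
        for node in graph.values():
            if node["class_type"] == "KSampler":
                node["inputs"]["steps"] = 40  # redo at higher quality, same seed
        print(png.name, "->", queue(graph))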
On reading other people's JSON: either you maintain a ComfyUI install with every custom node on the planet installed (don't do this), or you use some code that consumes the JSON and draws the workflow for you. Custom node packs that keep coming up in these threads: ComfyUI-Image-Selector, rgthree-comfy, ComfyUI-Custom-Scripts, and ComfyUI-Impact-Pack. Here are approximately 150 workflow examples of things I created with ComfyUI and AI models from Civitai; I've moved my workflow hosting.

As mentioned above, ComfyScript lets you write workflows in code instead of separate files, use control flow directly, call Python libraries, and cache results across different workflows; it encapsulates the difficulties and idiosyncrasies of Python programming by breaking the problem down.

One ComfyUI workflow removes or replaces backgrounds, a must for anyone wanting to enhance products by either removing a background or replacing it with something new. Do you want to save the image? Choose a Save Image node and you'll find the outputs in the folders, or right-click and save that way. On regional conditioning: if you look at the ComfyUI examples for area composition, they just use the nodes Conditioning (Set Mask / Set Area) -> Conditioning Combine -> the positive input on the KSampler; even with 4 regions and a global condition, they combine them all two at a time.

Subreddit rules, consolidated once: please share your tips, tricks, and workflows for using this software to create your AI art; keep posted images SFW; and above all, BE NICE - a lot of people are just discovering this technology and want to show off what they created, and belittling their efforts will get you banned.

I did Install Missing Custom Nodes, Update All, and so on, but there are many issues every time I load the workflows, and it looks pretty complicated to solve. A sequence of stills can create the impression of watching an animation when presented as an animated GIF or other video format. One caveat on a shared SDXL workflow: it has POS_G and NEG_G prompt windows but none for POS_L, POS_R, NEG_L, and NEG_R, which are part of SDXL's trained prompting format. If anyone sees any flaws in my workflow, please let me know.
I'm new to ComfyUI - does the sample image work as a "workflow save", as if it were a JSON with all the nodes? (Yes, see above.) I couldn't decipher one shared graph either, but I found something that works: you can just use someone else's SDXL 0.9 workflow (search YouTube for "sdxl 0.9 workflow"; the one from Olivio Sarikas's video works just fine) and simply replace the models with 1.5 ones. The example pictures do load a workflow, but they don't carry a label indicating whether they're version 3.1 or not, and the diagram itself doesn't load into ComfyUI, so I can't test it. SECOND UPDATE - HOLY COW I LOVE COMFYUI EDITION: look at that beauty, spaghetti no more.

Edit: the KSampler is where the image generation takes place, and it outputs a latent image. Creating such a workflow with the default core nodes of ComfyUI is not possible at the moment. And here's the big issue AI-only driven techniques face for filmmaking: because an infinite number of things can happen in front of a virtual camera, generative models face an infinite number of variables and scenarios.

You can just open another tab of ComfyUI and load a different workflow in there. A drag-and-drop bug report: no errors in the shell, nothing on the page updates at all; tried multiple PNG and JSON files including known-good ones, pulled latest from GitHub, removed all custom nodes, and tried another browser (both Firefox and Chrome) - still have the problem. "ComfyUI won't load my workflow JSON" threads are common.

On animation control, from the author's notes: if you reduce the linear_key_frame_influence_value of the Batch Creative Interpolation node, to 0.85 or even 0.50, the graph shows the lines more "spaced out", meaning the frames are more distributed. In the ComfyUI Manager, select Install Model and scroll down to the ControlNet models to download the second ControlNet tile model (the description specifically says you need it for tile upscaling); you can apply poses with it in the same workflow. One big dependency list reads: ComfyUI Path Helper, MarasIT Nodes, KJNodes, Mikey Nodes, AnimateDiff, AnimateDiff Evolved, IPAdapter plus.

Feature bullets from one helper extension: ability to load prompt information from JSON and PNG files; save full metadata for generated images (as JSON or embedded in PNG, disabled by default); change default values of UI settings (loaded from settings.json); change default paths (loaded from paths.json - use paths-example.json as a template).

While I was kicking around in LtDrData's documentation today, I noticed the ComfyUI Workflow Component. Also: I got some exciting updates to share for One Button Prompt (tutorial at https://youtu.be/ppE1W0-LJas) - it now officially supports ComfyUI and there is a new Prompt Variant mode. ComfyUI had been one of the two repos I keep installed, along with the SD-UX fork of Auto: it makes it really easy to regenerate an image with a small tweak, or just to check how you generated something. Per one tip, the ComfyUI/web folder is where you want to save/load .json files. Tidying up a ComfyUI SDXL workflow to fit a 16:9 monitor is its own small art, and a toggle for "workflow loading" when dropping an image into ComfyUI has been requested. Then there's a full render of the image with a prompt that describes the whole thing; I also combined ELLA in the workflow to make it easier to get what I want. It's a bit messy, but if you want to use it as a reference, it might help you.

Honestly, the real way this needs to work is for every custom node author to ship a JSON file that describes each node's inputs, outputs, and general functionality. Such files could then be processed automatically across multiple repos to construct an overall map of everything.
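Nothing standard exists in that shape today, so purely as a hypothetical illustration (the schema and field names are invented for the sake of the example), such a per-node descriptor could look like:

    {
      "node": "ImageBlend",
      "summary": "Blends two images with a selectable blend mode.",
      "inputs": {
        "image1": {"type": "IMAGE"},
        "image2": {"type": "IMAGE"},
        "blend_factor": {"type": "FLOAT", "min": 0.0, "max": 1.0, "default": 0.5},
        "blend_mode": {"type": "CHOICE", "options": ["normal", "multiply", "screen"]}
      },
      "outputs": [{"type": "IMAGE"}]
    }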
[If for some reason you want to run something that is less than 16 frames long, all you need is this part of the workflow.] You can achieve the same thing in A1111; Comfy is just awesome because you can save the workflow 100% and share it with others.

On the Photoshop plugin again: all of the other API workflows listed in the Custom ComfyUI Workflow dropdown in the plugin window are non-functional, giving variations of "ComfyUI Node type is not found" errors. The clip workflow is deliberately as fast as possible, to get the best clips and upscale them later; the good thing is that in some cases no upscale is needed. Last but not least, I have the JSON template for the SDXL Turbo examples, and the All-in-One FluxDev workflow mentioned earlier is a good companion to it.

Making a bit of progress this week in ComfyUI. Tip: add a node to your workflow quickly by double-clicking the canvas and typing - for "FaceDetailer" just type "Face", and for other detailer types just type "Detailer". For the ControlNet Depth workflow, download the .json file, change your input images and your prompts, and you are good to go. I was confused at first by several YouTube videos from Sebastian Kamph and Olivio Sarikas where they simply drop PNGs into an empty ComfyUI - that works because of the embedded metadata. I understand how outpainting is supposed to work in ComfyUI from the example workflow, using its JSON as a template.
For example, it would be very cool if one could place the nodes on a numbered grid for easier reference. This is the input image that will be used in this example, and here is an example using a first pass with AnythingV3 plus the ControlNet, and a second pass without the ControlNet using AOM3A3 (Abyss Orange Mix 3) and its VAE. Is there a tutorial for a workflow with face restoration in ComfyUI? I downloaded the Impact Pack, but I really don't know how to go from there; in the original post there is a YouTube link where everything is explained while zooming in on the workflow.

If you have previously generated images you want to upscale, you'd modify the HiRes section to include the img2img nodes. You can pull PNGs from Automatic1111 for the creation of some Comfy workflows, but as far as I can tell it doesn't work with ControlNet or ADetailer images, sadly. Save an output image, then load it or drag it onto ComfyUI to get its workflow back; most of the time, though, I only want the prompt and seed to be reused while keeping the layout of my nodes unchanged.

There's also "Merging 2 Images", part of a collection of simple but powerful ComfyUI workflows for Stable Diffusion with curated default settings, and an Updated IP Adapter Workflow Example (screenshot and .json in the post; I found sdxl_styles.json along the way - hopefully this is useful to you). For the inpainting workflow, change your input images and prompts in the .json file and you are good to go. When a workflow opens with missing nodes, download the dependent nodes by pressing "Install Missing Custom Nodes" in the ComfyUI Manager. Get ComfyUI itself at https://github.com/comfyanonymous/ComfyUI.

I have updated the comfy_api_simplified package: it can now be used to send images, run workflows, and receive images from a running ComfyUI server. I am personally using it as a layer between a Telegram bot and ComfyUI, to run different workflows and get results from the user's text and image input.

On the Junction node: natsort is not actually involved in Junction at all. The _offset field is a way to quickly skip ahead over data of the same type. Say the junction carries [11, 22, 33]: by default you "pluck" starting from the first element, so the first pin with type INT outputs 11; when _offset holds something like "INT,1", the first INT pin outputs 22 instead.

There are a lot of upscale variants in ComfyUI - the nodes/graph/flowchart interface is made for experimenting. Still, great on OP's part for sharing the workflow.
The way ComfyUI is built, every image or video it saves carries the workflow in its metadata, so once an image has been generated with ComfyUI you can simply drag and drop it back in to get that complete workflow. Future tutorials planned: prompting practices, post-processing images, batch trickery, networking ComfyUI, using a 1.5 LoRA with SDXL, and upscaling.

On prompt tooling: for example, we take a simple prompt, create a list, verify it against the guideline, improve it, and then send it to TaraPrompter to actually generate the final prompt we submit. The new version of one faces workflow uses two ControlNet inputs: a 9x9 grid of openpose faces and a single openpose face. We will walk through a simple example of using ComfyUI, introduce some concepts, and gradually move on to more complicated workflows - the UI feels professional and directed. The best workflow examples are through the GitHub examples pages.

For vid2vid: run the step-1 workflow ONCE; all you need to change is where the original frames are and the dimensions of the output you wish to have. There's also Flux.1 ComfyUI install guidance with a workflow and example, and I improved on my previous expressions workflow by replacing the attention-couple nodes with area composition ones. This is my workflow for testing different methods to improve image resolution: I played with hi-diffusion in ComfyUI with SD 1.5 models and it easily generated 2k images without any distortion, which is better than kohya deep shrink; stick to sizes like 1024, 1280, 1536, or 2048.

First, I generated a series of images in a 9:16 aspect ratio, some in ComfyUI with SDXL and others in Midjourney, then tried to open SuperBeasts-POM-SmoothBatchCreative-V1.json. The second workflow in that pack is called "advanced", and it uses an experimental way to combine prompts for the sampler. Like many XL users out there, I'm also new to ComfyUI and very much a beginner in this regard; I've got 3 tutorials that can teach you how to set up a decent ComfyUI inpaint workflow - a good place to start if you have no idea how any of this works - and mine do include workflows, for the most part in the video description. (Does anyone know why ComfyUI produces images that look like this? Important: this is the output I get using the old tutorial.) It's the kind of thing that's a bit fiddly to use, so someone else's workflow may be of limited use to you.
I'm not going to spend two and a half grand on high-end computer equipment, then cheap out by paying £50 for some crappy SATA SSD that maxes out at 560 MB/s. Currently the extension still needs some improvement - for example, you can only use resolutions divisible by 256. And one open gap: I have searched far and wide but could not find a node that lets me save the current workflow to a JSON file from within the graph itself.