The workflow section contains workflows that I have personally used to create my AI-generated videos, or at least toyed around with. You need some basic knowledge of ComfyUI and/or stable-diffusion-webui by Automatic1111: how they work, and where to put the weights, LoRAs, etc.
CogvideoX Image to Video
The zip file contains the workflow in JSON and PNG format, plus the source image that was used to create the video. 959 KB
Howto
Drag an initial image into the image node, adjust the prompt, and press Queue. See the red-marked nodes; the rest should fit as is.
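If you want to render many variations, the same steps can also be driven headlessly through ComfyUI's HTTP API instead of the browser UI. A minimal sketch, assuming ComfyUI runs on its default port 8188 and that the workflow was exported via "Save (API Format)"; the node id "10" and the filenames are placeholders that you must replace with the values from your own JSON:

```python
# Hedged sketch: queue the CogVideoX workflow via ComfyUI's HTTP API.
# Assumes a running ComfyUI instance on the default port 8188 and a
# workflow exported in API format. The node id "10" is hypothetical --
# look up the id of your LoadImage node in the exported JSON.
import json
import urllib.request

def set_input_image(workflow: dict, node_id: str, filename: str) -> dict:
    """Point the LoadImage node at a file in ComfyUI's input folder."""
    workflow[node_id]["inputs"]["image"] = filename
    return workflow

def queue_prompt(workflow: dict, host: str = "127.0.0.1:8188") -> None:
    """POST the workflow to ComfyUI's /prompt endpoint."""
    data = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(f"http://{host}/prompt", data=data)
    urllib.request.urlopen(req)

# Usage (filename is a placeholder for your exported workflow):
# with open("cogvideox_i2v_api.json") as f:
#     wf = json.load(f)
# queue_prompt(set_input_image(wf, "10", "source.png"))
```

Note that the image file must already sit in ComfyUI's input folder for the LoadImage node to find it.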
Description
An image-to-video ComfyUI workflow with CogVideoX. Tested with CogVideoX Fun 1.1 and 1.5. Note that the motion LoRA only works with the 1.1 model, not with Fun 1.5.
This workflow also contains a CogVideoX motion LoRA for the camera movement, and you can add further instructions in the prompt. CogVideoX relies on motion information in text form.
It also has a very simple upscaling method built in. I am still on my journey to figure out a dedicated upscaling workflow, but for some this might already be useful. It is super fast compared to upsampling with another KSampler.
CogVideoX's creation size is limited. The old version 1.1 is fixed to a 16:10 format at 720×480 resolution. The new version 1.5 goes up to double that size, but the motion LoRA that I use here does not work with it.
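As a quick sanity check before queueing, the limits above can be restated in a tiny helper. This only encodes the numbers from the text; the 1.5 ceiling of 1440×960 ("double size") is an assumption, not an official specification:

```python
# Check a requested render size against the CogVideoX limits described
# above. The 1.5 upper bound (1440x960) is an assumption based on the
# text's "double size" remark.
LIMITS = {
    "1.1": (720, 480),   # fixed resolution
    "1.5": (1440, 960),  # assumed upper bound
}

def valid_size(width: int, height: int, version: str = "1.1") -> bool:
    if version not in LIMITS:
        raise ValueError(f"unknown CogVideoX version: {version}")
    max_w, max_h = LIMITS[version]
    if version == "1.1":
        # 1.1 only renders at exactly 720x480
        return (width, height) == (max_w, max_h)
    return width <= max_w and height <= max_h
```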
There are some collapsed Note nodes beside the important nodes. Click on them to expand them. The Note nodes contain further information and, in the case of the models, also links to the models and where to put them.
Time
Creation time for the six-second example video was around 8 minutes on a 4060 Ti with 16 GB VRAM.
Requirements
This workflow was created with 16 GB VRAM. The minimum requirement is 12 GB VRAM. You might get it to work with low-VRAM settings, but I could not get CogVideoX to work on my old 3060 Ti with just 8 GB VRAM.