As requested, this is a 2D to side-by-side (SBS) video converter.
I took an image converter and added the video combine step. If I stole it from you and you want credit, let me know; I'll be happy to add it.
This has background removal for Augmented Reality. I'm working on a better version, but for now avoid dark features, since passthrough will cull out the dark details.
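To illustrate why dark features get culled: if the background removal keys on low luminance, anything near-black becomes transparent in passthrough. This is a minimal sketch of that behavior, not the workflow's actual node logic; the function name and threshold are illustrative assumptions.

```python
import numpy as np

def passthrough_mask(frame, threshold=0.15):
    """Return an alpha mask that drops near-black pixels.

    `frame` is an H x W x 3 float array in [0, 1]. Pixels whose
    luma falls below `threshold` get alpha 0, so dark details
    disappear along with the background. Hypothetical values.
    """
    # Rec. 709 luma approximation
    luma = (0.2126 * frame[..., 0]
            + 0.7152 * frame[..., 1]
            + 0.0722 * frame[..., 2])
    return (luma > threshold).astype(np.float32)
```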
Tested, and it works well with Meta Quest 2/3/3S or the cross-eye method.
The default generation takes a while on a 4090 (22-44 min). You might need to feed it a smaller-resolution video or lower the upscale value. But it's not like you are regenerating the render.
You have to set the aspect ratio for the upscale manually. I might update this in the future to upscale by a factor instead. The notes list resolutions compatible with a 2:3 ratio.
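Until the upscale-by-factor update, a small helper can pick a target size that keeps the 2:3 ratio from the notes. This is a sketch under assumptions: the function name is made up, and the divisible-by-16 rule is a common video-model/encoder convention, not something the workflow states.

```python
def snap_to_ratio(width, height, ratio_w=2, ratio_h=3, multiple=16):
    """Snap a requested upscale size to the nearest resolution that
    keeps an exact ratio_w:ratio_h aspect (2:3 per the workflow notes)
    and stays divisible by `multiple`. Illustrative helper only.
    """
    # One "unit" is the smallest step that preserves the exact ratio.
    step_w, step_h = ratio_w * multiple, ratio_h * multiple  # 32 x 48
    units = max(1, round(width / step_w))
    return units * step_w, units * step_h

print(snap_to_ratio(640, 960))  # → (640, 960), already 2:3
```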
Tested on a clean install of ComfyUI. All nodes download and install automatically, so there's no tracking them down.
Comments (6)
Wow so cool! Are the left and right takes filmed independently from different camera positions? How could Wan possibly sync the animations so perfectly? (if there is one take for the left and another for the right how can they sync up?)
It is 2D to 2D SBS; the novelty is the background remover, so I guess that's the whole point of this.
OK, here's more insight: it uses the SideBySide_Stereoscope node pack.
"1.1.0
5 nodes
Create immersive 3D stereoscopic images and videos! Transform your ComfyUI generations into stunning side-by-side 3D visuals for videos and image sequences. Powered by Depth-Anything-V2, no external depth maps needed. Perfect for VR, 3D displays, and cross-eyed viewing - no special glasses required!
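For anyone curious how a depth map turns into an SBS pair, here is a toy sketch of the core idea: shift each pixel horizontally by a disparity proportional to its depth, once per eye. This is an illustration only, not the SideBySide_Stereoscope node pack's actual implementation; real nodes also fill the occlusion holes that this naive version leaves.

```python
import numpy as np

def depth_to_sbs(frame, depth, max_disparity=16):
    """Build a parallel side-by-side frame from an image and a depth map.

    frame: H x W x 3 array; depth: H x W, normalized to [0, 1]
    (1 = near). Nearer pixels get a larger horizontal shift.
    """
    h, w = depth.shape
    disparity = (depth * max_disparity).astype(int)
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    rows = np.arange(h)[:, None].repeat(w, axis=1)
    # Sample each eye's view by shifting source columns left/right.
    left = frame[rows, np.clip(xs + disparity, 0, w - 1)]
    right = frame[rows, np.clip(xs - disparity, 0, w - 1)]
    # Parallel SBS layout: left-eye image on the left half.
    return np.concatenate([left, right], axis=1)
```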
If you guys are into this, you might also want to check out Auto Depth Image Viewer on Steam. It opened up a whole world for me, and it also streams 2D videos into stereo 3D right away (uses Depth Anything 2, among other great models). As for your workflow and the nodes: nice approach! I love it when people work toward stereoscopic 3D stuff. The 3D distance does not seem to work for me, though. Thank you anyway :-)
So I tested this with PSVR on a PS4 Pro and the RAD app with sideloading, and surprisingly it actually works. You have to use parallel SBS mode in the workflow. A 121-frame video clip with default settings took about 33 minutes to finish on a 5060 Ti 16 GB; then I did a frame rate change and 2x upscale with Topaz Video AI. The result is good, similar to what you get in 3D movies; the depth is there.
This says "Hunyuan Video" in the model, what does it actually do with Hunyuan video?