I was honored to receive early Gen-1 access over the weekend. There is a learning curve to using it effectively, but my first experiments seem to have gone well; at the very least, I have had some unique results.
The top two montages were mostly experiments with various video inputs and depth-blur levels, as well as various pieces of my art merged with Midjourney. I find lower blur levels tend to follow the movement of the video more closely, while higher levels tend to match the reference image more and produce less fluid motion. However, the input, the seed, and whether or not an image is upscaled can make a big difference.
I am told that if you grab a still frame, either paint over it or run it through ControlNet for Stable Diffusion, and then reintegrate it, you get a lot more control. So far I have tried it once with pretty good results, but I am saving that approach for a more gargantuan undertaking. In the meantime, I have had a lot of fun with the wild chaos that emerged from my own experiments.
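For anyone who would rather script that ControlNet step than use a GUI, here is a minimal sketch using the Hugging Face diffusers library. The checkpoints, filenames, and prompt are placeholder assumptions for illustration, not my actual workflow.

```python
# A minimal sketch: restyle a grabbed still frame with ControlNet (canny edges)
# so it can be fed back into Gen-1 as an image input.
import numpy as np
import torch
import cv2
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

# Load a canny-edge ControlNet and attach it to a Stable Diffusion 1.5 pipeline.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# A still frame exported from the video (hypothetical filename).
frame = Image.open("still_frame.png").convert("RGB")

# Edge map that ControlNet uses to preserve the frame's composition.
edges = cv2.Canny(np.array(frame), 100, 200)
edge_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

# Restyle the frame; the prompt is just an example.
result = pipe(
    "oil painting of the same scene, vivid colors",
    image=edge_image,
    num_inference_steps=30,
).images[0]

result.save("restyled_frame.png")  # reintegrate this as the Gen-1 image input
```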
I cannot wait for Gen-2. They should have some kind of contest for early access. As an artist, the image-to-video feature in the white paper excites me more than text-to-image.
Does anyone else have access, or is anyone having luck with similar programs, such as EbSynth? I see so many tutorials on YouTube now that it is overwhelming.