I’m happy with the result but much of the process is manual so I may have to spend some time slapping together some automation for the next installment.
I’ve been thinking about producing a more immersive rendition of /sectionb. I’ve also been thinking that producing a “Parapsychological Spy Thriller” via conventional means is not the correct approach. It needs to be a little more artsy, interpretive, associative. Unfortunately, illustration and animation aren’t really my thing.
Although I can draw some basic proportions and I try to pay attention to composition and colour, I can’t produce the type of visual output that modern artificial intelligence can. But as it happens I also dabble in code, so it wasn’t long before I was fucking around with Stable Diffusion and similar software. Unfortunately, if I wanted to use the AI to produce short films, the still images it spat out would need to be animated using something like morphing — doable but laborious.
By one propitious circumstance, a fairly recent upgrade to Stable Diffusion by Deforum popped up in my search results one day, and as soon as I saw a few samples I got giddy. Not only is the animated output of Deforum Stable Diffusion (DSD) dream-like and trippy, which is very apropos for /sectionb, it also improvises around the periphery of supplied prompts/themes in surprising ways, which is also quite apropos.
Initially I tried adding voice narration, but it just didn’t fit, so instead I converted the text to subtitles/closed captions, chucked in some original music, and after that the video basically just produced itself.