Pexote Jack: Ohio! - REVIEW & INTERVIEW

Pexote Jack: Ohio!



Director, Writer, Producer:
Hoyt Dwyer
Runtime: 5 minutes
Genre: Animation
16:9 - Digital - Color
Language: English
Country: USA
2025


Storyline
A washed-up 1930s cartoon jackalope learns he’s been digitized & optimized into a viral, hyper-performant AI double.




Pexote Jack is an American animated short film directed by Hoyt Dwyer with the help of AI, a modern concept that deserves attention for an aesthetic that serves its narrative.
More precisely, it is a closed-room drama set in Hollywood, in the office of a large production company, featuring two very different characters.
The main character, Jack, has the look of an old cartoon hero from the 1930s. Uncolored, he is a sort of physical cross between Mickey Mouse and Bugs Bunny, and he stands in stark contrast to the crisp, bright colors of the background. A paradox quickly emerges, particularly with his interlocutor: a well-dressed, polished female bureaucrat, firmly rooted in her time and equipped with digital tools. She appears almost robotic. Jack, on the other hand, seems out of step with the era, yet paradoxically more human in his naturalness and naivety.


© H.Dwyer


Deprived of any power, he is dispossessed of his own image, which is exploited at his expense by a production company working with AI. The mise en abyme with Hoyt's own creative process is blatant, and that is what makes the film fascinating. The director asks us: what if all the figures of past animation were actually the future? Is human imagination now limited? Is it enough to simply draw on what already exists in order to innovate? Through these two characters, we witness a confrontation between two eras.
The director builds his screenplay on the themes of temporality and the history of cinema: the American dream is no longer what it was. The blue sky and the Hollywood palm trees now appear smooth and immaculate, not to say artificial.
AI in the service of Man, to denounce the exploitation of Men by... AI. The key to the film lies here.



INTERVIEW

Interview conducted remotely, by email exchange with the director, Hoyt Dwyer.
In parallel, the director reveals the different stages of his work through several stills.


© H.Dwyer

HFF: Why did you choose to make a film with the help of AI?


HD: Technology gave birth to Cinema. From the first moving image, to sound, to color, to CGI, and now AI - all of these technical leaps have created new possibilities for filmmakers to tell new types of stories. 


With the release of Veo 3 earlier this year, I felt that generative moving imagery as a technical innovation had reached an inflection point where it could finally be used to create Hollywood-level imagery at a fraction of the budget.


But what excited me most was pondering the types of imagery AI could create that simply couldn't have been created before: imagery that was impractical for budgetary reasons, and the "we don't know what we don't yet know" kind, things we can't even dream of yet.


And so rather than lashing out at the idea of “AI” as many creatives have, I wanted to instead learn to work with this new medium in the form of a short film to fully understand the consequences of this new technology, as well as the possibilities. 



HFF: Have you already completed similar projects before?


HD: I began my career on screen as a writer, and have always loved the ability of CGI to help you tell a story that is "familiar, yet different." I then began learning CGI myself to help bring my scripts to life, first with Cinema 4D and eventually Unreal Engine.


When generative imagery first became consumer-facing, I was both excited and slightly terrified at the implications, but I began tinkering with the possibilities in early 2022 on TikTok.


Instead of making what we now call "slop," I've always been keen on using these tools to tell a story, and at the time, TikTok was a good outlet to do so in bite-sized sketches that the tech of the time could handle. It's funny though, because all of those filmic experiments look like ancient relics now, just 3 years later.


Anyway, having directed professional 2D cel animation before without any AI, I thought it would be a good litmus test to make a 100% AI film that nobody could tell was made with AI.




HFF: How do you work on a film made with AI? The script, the settings, the colors, the voice dubbing: tell us more.


HD: The most challenging part of working with AI is that it is essentially mediumless. But it’s also what makes it so exciting. To explain:


If traditional filmmaking is akin to Newtonian Physics, then generative AI filmmaking is akin to Quantum Physics. There are just so many similarities. 


The first similarity is that with generative AI, there is no physical act of creation. There is no blocking, no gaffing; not even the physical act of moving a Wacom pen in Toon Boom or your mouse in Unreal.


Like Quantum Physics, everything is a thought experiment. You can’t physically will anything into existence. You just have to patiently iterate until you get to where you want to be. 


Because of this unique aspect, this new process is very spontaneous. Much like the "many worlds" of quantum physics, the first frame can go in an infinite number of directions, even with the "quantum entanglement" of the last-frame approach.


So when making a film intended to look like 2D Cel art, a ton of thought was given to specific mediums. Ensuring that the backgrounds looked like they were made with watercolor and gouache was a massive challenge. Thousands of images were created, and multiple composites made to achieve the shadow directions, reflections, etc. 


Then, the characters were "animated" with Google Veo on chroma-key backgrounds and composited over the backgrounds. For more specifics, I go into detail alongside each of the photos below.
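The keying-and-compositing step described here can be sketched in a few lines of NumPy. This is a minimal illustration of the general chroma-key technique, not the film's actual pipeline; the pure-green key color and the tolerance value are assumptions:

```python
import numpy as np

def chroma_key_composite(fg, bg, key=(0, 255, 0), tol=60):
    """Composite a foreground over a background by keying out
    pixels close to the chroma-key color (here: pure green).

    fg, bg: HxWx3 uint8 arrays of the same shape.
    tol: Euclidean RGB distance below which a pixel is treated
         as green screen and replaced by the background.
    """
    dist = np.linalg.norm(fg.astype(int) - np.array(key), axis=-1)
    mask = dist < tol              # True where the key color shows
    out = fg.copy()
    out[mask] = bg[mask]           # pull the background through the key
    return out

# Tiny 2x2 demo: left column is green screen, right is the "character."
fg = np.array([[[0, 255, 0], [10, 10, 10]],
               [[5, 250, 5], [200, 200, 200]]], dtype=np.uint8)
bg = np.full((2, 2, 3), 128, dtype=np.uint8)
print(chroma_key_composite(fg, bg)[:, 0])  # keyed pixels become background
```

A production keyer would also soften the matte edge and suppress green spill, but the hard threshold above captures the core idea.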


But I will say that, of all the areas where AI is helpful, the script, or story, is still the most important aspect: it ensures that all of these spectacles work toward luring the viewer in, letting them suspend their disbelief, and ultimately delivering catharsis.




1 - "Experimenting with AI as creative sparring partners, I assigned different LLMs the personas of Charlie Kaufman, the Coen Brothers, and Dan Harmon, then let them critique my first draft, then rewrite it, then critique each other's drafts before distilling their chaos into the finished script. Probably around 3 to 4 lines of dialogue came directly from this process, which is a lot in a script this size."
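The multi-persona critique loop described in this caption could be orchestrated along these lines. This is a hypothetical sketch, not the director's actual setup; `ask_llm` is a stand-in for whatever real model API would be called:

```python
# Hypothetical orchestration of a "persona sparring" loop: each persona
# critiques the draft, rewrites it, then critiques the others' rewrites,
# and all the notes are gathered for a final distillation pass.
def ask_llm(persona, instruction, text):
    # Placeholder: a real implementation would call an LLM API here.
    return f"[{persona}] {instruction}: {text[:40]}..."

def sparring_round(draft, personas):
    critiques = {p: ask_llm(p, "critique", draft) for p in personas}
    rewrites = {p: ask_llm(p, "rewrite", draft) for p in personas}
    # Each persona critiques every other persona's rewrite.
    cross = {p: [ask_llm(p, "critique", rewrites[q])
                 for q in personas if q != p]
             for p in personas}
    notes = list(critiques.values()) + [c for cs in cross.values() for c in cs]
    return rewrites, notes

personas = ["Charlie Kaufman", "Coen Brothers", "Dan Harmon"]
rewrites, notes = sparring_round("INT. STUDIO OFFICE - DAY ...", personas)
print(len(notes))  # 3 critiques + 6 cross-critiques = 9
```

With three personas, each round yields three rewrites and nine notes; the human writer then distills those into the next draft.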


HFF: Tell us about this character of Jack. How was he born?


HD: While hiking in Joshua Tree years ago, I saw a jackrabbit and got this idea for a southwestern Alice in Wonderland, where the rabbit is a jackalope instead. And instead of Alice falling down the rabbit hole, maybe the jackalope eats pexote and falls into a wormhole of his own consciousness.


I played with this idea with a 3D short I made in Unreal in early 2024, and although I loved the cactus character, it felt kinda limited. 


Then, when exploring the idea of making a film with AI, I was thinking about the tendency for AI to "hallucinate," and then landed on the idea of Jack being public domain, since he was a rubber hose animation anyway. 


All of these themes are explored in the subsequent writing I’ve done so far. Whether it’s released as a series and this short is the pilot, or it’ll all become a feature film, is still up in the air. 





2 - "Me generating concept art in ComfyUI with Flux Kontext & Canny ControlNets."



HFF: How long did it take you to make this film? 


HD: 4 months. I started on June 1st, 2025, and after working on it all day, every day, I finished at the end of September.


To be honest, so much of that time was spent learning this new process, like Comfy UI, open source models, etc.


I imagine the next episode would probably take me 3 weeks now, since I have the character designs, background styles, etc.


The worldbuilding is a huge part of the process. 







3 - "Jack's transition from my initial sketch, to a 3D AI-generated model, to a textured model, then back into a refined 2D rubber hose character."




HFF: There is a kind of mise en abyme, with Jack exploited by AI in a movie you made with AI. What message do you want to convey?


HD: Exactly! I’m so glad you picked up on that. 


Like many filmmakers, my biggest fear with AI is that it will eliminate the need for a creator. 


When consumption determines creation, there is a great danger of a recursive loop forming, where what is watched determines what is made.


The algorithm creating the algorithm, in a sense.


Expressing this fear through a "public domain" character with no legal rights on one side, and a technocrat on the other spouting an absurdist blend of social-media-strategist buzzwords and Gen Z slang, felt like a good way to let kinetic humor reveal the potential horror.






4 - "Background art was generated with Midjourney as photorealistic images; then, using them as ControlNet inputs, they were style-transferred into 2D cel-style animation backgrounds (with many, many iterations to get to the final pixel backgrounds)."




HFF: How do you imagine the future of animated cinema? 


HD: My greatest hope is that AI ushers in an “era of the Auteur” for all forms of Cinema.  



HFF: Are you nostalgic for the traditional animation cinema of the 1930s? 


HD: Of course! There was something kinda dark about early rubber hose animation. It was all for adults initially, and so it explored darker themes specific to its era. 


Underneath the slapstick, those cartoons often reflected the spirit of the Great Depression, and expressed the fantasy of bending reality, when life itself was inflexible.

And so in this short, Pexote Jack finds himself staring into the face of a new iteration of the Industrial Age, where workers aren't just exploited but completely discarded.





5 - "Characters were animated with JSON prompts in Google Veo 3 on chroma key backgrounds."
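Veo 3 accepts free-text prompts; writing that text as a structured JSON object is a community convention rather than an official schema. A hypothetical prompt in that style, with field names chosen purely for illustration, might look like this (sketched in Python so it can be serialized):

```python
import json

# Illustrative only: these keys are not an official Veo schema, just one
# common way of organizing a prompt as structured text before pasting it in.
prompt = {
    "subject": "1930s rubber hose jackalope, black and white, ink outlines",
    "action": "shrugs, ears flopping, classic rubber hose follow-through",
    "camera": "locked-off medium shot",
    "background": "flat chroma green, evenly lit, no shadows",
    "style": "2D cel animation, grainy film texture",
}
prompt_text = json.dumps(prompt, indent=2)
print(prompt_text)
```

Keeping the background field pinned to flat chroma green across every generation is what makes the later keying-and-compositing step possible.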





HFF: Do you think AI is simply a tool in the service of humans?


HD: “Everything has been figured out, except how to live.”



HFF: Have you participated in public screenings with your film? Do you plan to release it? 


HD: Pexote Jack’s world premiere will be on Nov 2 in Tokyo, in competition for Best AI Animated Film at the AI Filmfest Japan. I’m beyond excited about it, and can’t wait to attend!

After it premieres in Tokyo, I’ll decide when to release it at large.




6 - "Then these animations were upscaled with Topaz Astra."



7 - "Then everything was composited together in DaVinci Resolve."




HFF: Do you have other projects in sight? 


HD: I’m currently working on a short that blends Unreal Engine with AI, which I hope to announce soon. 



HFF: Do you plan to work only with AI?


HD: No. I can’t wait to collaborate with other talented humans again.




8 - "Voices were designed in ElevenLabs."




9 - "All resulting in the final image above."



Thank you Hoyt! 


Pexote Jack: Ohio! is in competition at the 20th edition of the Hallucinea Film Festival. Results on October 29, 2025.





By Hallucinea Film Festival
