Inside the Runway Gen:48 Aleph - How I Created a Short Film with AI
How I created a short film in 48 hours with AI tools like Runway, Midjourney, ElevenLabs & Suno. Creativity amplified
This post is going to be a little different from what I usually do. Normally, I focus on big-picture insights and strategic thinking around AI. But today, I want to take you behind the scenes of a personal adventure - participating in Gen:48 Aleph Edition, a 48-hour short film competition powered by Runway’s generative video technology.
For me, this was more than a challenge; it was an opportunity to push AI to its creative limits. I’d tried this before, and the biggest test has always been making the technology work while still telling a story that feels human. This time, I wanted to take it further — to build an entire short film with AI as my co-creator across every stage: script, characters, video, sound, and music.
What is Gen:48 Aleph Edition?
Gen:48 Aleph Edition is a unique short film competition where creators have just 48 hours to produce a 1–4 minute film. All generative video content must be created using Runway’s Aleph technology.
There’s no fixed theme, but each participant must include one element from three categories:
Inciting Incident
Archetype
Location
The challenge is both structured and open - it gives you constraints to spark creativity, but freedom to shape your own story. Films are judged on how well they use Aleph to drive storytelling, and participants get 200,000 free credits to experiment with images and video.
Preparation
Every challenge begins with learning. Before I started, I spent half a day diving into the Runway Academy tutorials to refresh myself on the tools. This gave me a quick overview of what was possible and set the stage for experimenting.
Choosing Topics
From the required categories, I selected:
Inciting Incident: World flickering between realities
Archetype: Wanderer
Location: Void
These choices gave me room to explore deep, surreal storytelling — the kind of narrative where technology and imagination could merge.
Script / Story
I began with my own custom GPT trained on different storytelling frameworks. It generated two potential storylines, both promising but not quite satisfying. So I set them aside, took a break, and came back with a new idea: a fusion of both drafts that turned into something entirely fresh. That moment reinforced an important lesson: sometimes the best ideas come when you stop forcing them.
Once the story was finalised, I used another custom GPT to create the storyboard from it, giving me a detailed description of every scene in the film, including all the actions.
Creating the Character
The “Wanderer” archetype became my central character. I wanted to show him across different stages of life, as the script and storyboard called for it:
As a baby
As a soldier
As a man in his 30s
As a man past 60
I began with a 30-year-old male as the base reference, then used AI to generate versions across ages. The hardest? Creating the baby. Many results came back with mustaches or beards - surreal, but not what I needed. After a few prompt iterations, I managed to create four distinct life stages of the Wanderer.
Creating Videos
Video creation was the most complex part. I used three different methods in Runway:
Image to Video: Start with an AI-generated image (character, environment) and extend it into motion.
Video to Video: Record real video references (like lying on a bed, looking at the ceiling) and then transform them into new environments using Aleph.
Video Extension: Take an existing AI-generated clip and generate new angles or story continuations.
Each generation produced 5-second clips. Once satisfied, I upscaled them to 4K for final editing.
Voiceover and Sound Effects
For audio, I turned to ElevenLabs. I generated a synthetic voice for the 30-year-old Wanderer, then adapted it into different ages:
20 years old
65 years old
Childlike voice (created by pitch-shifting the 20-year-old voiceover directly in video editing software, since direct child voices are restricted for safety reasons)
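For readers curious what that pitch-shifting step actually does, here is a minimal sketch in pure NumPy: a naive pitch shift by resampling, demonstrated on a synthetic 220 Hz tone standing in for the real voiceover. The function name and semitone count are my own illustration, not the exact settings I used; a real editor's pitch effect is more sophisticated (it usually preserves duration).

```python
import numpy as np

def pitch_shift_resample(y, semitones):
    """Naive pitch shift by resampling: raising the pitch this way
    also shortens the clip, like speeding up a tape."""
    factor = 2 ** (semitones / 12)          # frequency ratio per semitone
    idx = np.arange(0, len(y) - 1, factor)  # fractional sample positions
    return np.interp(idx, np.arange(len(y)), y)

# Demo on a synthetic 220 Hz tone instead of a real recording.
sr = 22050
t = np.arange(sr) / sr
voice = np.sin(2 * np.pi * 220 * t)
child = pitch_shift_resample(voice, 4)  # ~4 semitones up, roughly 277 Hz
```

Shifting four or five semitones up and nudging the playback speed is a crude but effective way to make an adult voice read as childlike.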
Sound effects were also AI-generated. ElevenLabs let me describe effects, layer them, and produce multiple variations up to 22 seconds long. Personally, I'd say this is the easiest way to generate a sound effect, provided you know what you're looking for and can describe it clearly in your prompt.
Music
One of the most fun parts was music creation. I used Suno, my go-to AI music generator, to compose the soundtrack. Once the song was ready, I cut it down to a 20–30 second sequence to fit the film’s ending.
Main Challenges
Of course, it wasn’t all smooth. Three big challenges stood out:
Time Consumption: Video generation is slow on its own, and upscaling every clip to 4K doubled the waiting time.
Child Characters: Every tool I used for image, video, and voice generation restricts content involving children. That's understandable - it's part of each platform's safety policy.
Time Pressure: Working solo meant constant multitasking. I used waiting times productively, generating audio or story elements while videos processed. In the end, I managed to upload the video 39 seconds before the final bell.
Stats and Tools
Here’s the final tally of my production marathon:
36 hours of work
180 video cuts generated
410 images generated
AI Tools Used:
ChatGPT – Story and scene descriptions
MidJourney – Image generation
Runway – Video and image generation
ElevenLabs – Voiceovers and sound effects
Suno – Music generation
Final Video
After all the iterations, challenges, and discoveries, here’s the finished piece:
👉 Watch the Final Video on YouTube
Final Words
This project reminded me why I explore AI in the first place. Yes, it’s technical. Yes, it’s challenging. But more than anything, it shows how AI opens new doors for creativity — allowing us to tell stories, solve problems, and create art in ways that weren’t possible before.
The Gen:48 challenge also proved what happens when human imagination and AI work side by side. It wasn’t always smooth — the waiting times, the technical limits, the race against the clock — but those obstacles pushed me to be more resourceful.
The result is more than just a short film. It’s proof that AI doesn’t replace creativity — it amplifies it. With the right tools, one person can now do what once took entire teams. And this is just the beginning.