Welcome to Brian Sykes | AI LAB, the go-to channel for creative professionals looking to integrate generative AI into their workflows—while retaining their human touch.
Brian Sykes, author of six books on AI, simplifies complex tools and techniques with practical how-to videos, product reviews, and actionable insights designed to empower designers, writers, marketers, and more.
Discover how to harness AI effectively to amplify your creativity, streamline your workflow, and stay ahead in your field.
👉 What to Expect:
• How-to videos for mastering AI tools
• Tips for creating with intention and maintaining your creative voice
• Reviews of the latest AI products for creative professionals
Subscribe now and hit the notification bell to explore AI as a creative partner—not a replacement.
Handle: @theBrianSykes
Brian Sykes | AI LAB
SOLVED: The "Two-Character Problem" in AI Video
How to create a seamless conversation between two AI characters without the "glitches" or "hallucinations."
We have seen increasing success in generating images with consistent characters.
Midjourney’s OmniReference was fantastic for keeping a solo character consistent - but the moment we made that two people, we ran into issues of blended faces.
Then Runway’s Gen-4 Image References let us do some pretty amazing things, and develop some complex storytelling possibilities of people in set places.
Now Google’s Nano Banana has provided a way to really orchestrate some varying camera angles and present our characters from other vantage points, with unlimited variability in action and composition - even adding multiple people to the same scene.
But one thing has still proven a challenge… getting both characters into the scene at the same time and letting each have their own voice. There are a couple of ways to do that. One is the Mac vs. PC style of video (which I did before using Hedra: two characters on white, produced separately and blended together in a video editor).
But what if you wanted a more natural environment? Something that felt like they were not just in the same space, but in the same room.
In this video -
🛠️ The WORKFLOW COVERED:
Scripting: How to use Google Gemini to rewrite transcripts with ElevenLabs v3 tags in mind (controlling emotion, sighs, and pacing) - which also comes in handy for Creative Directing LTX!
Assets: How to generate consistent character reference sheets that maintain clothing and facial details.
The Glitch: Why standard tools like Hedra struggle with multi-speaker shots.
The Fix: The specific “Bracket Scripting” technique in LTX Studio that isolates speakers and forces perfect lip-sync.
Atmosphere: How to generate “Indy Bar” background audio using Suno to sell the vibe.
If you found this valuable, be sure to give my YouTube video a LIKE!
1 month ago | [YT] | 2
Brian Sykes | AI LAB
RAMP up your learning with the power of AI... follow along with my step-by-step process!
1 year ago | [YT] | 1
Brian Sykes | AI LAB
This was one of my favorite interviews to be a part of yet. So much room to just converse on the subjects of design and the integration of AI. I think you all might genuinely LOVE this listen and gain some fresh insights from it. Enjoy - Brian Sykes
1 year ago | [YT] | 0
Brian Sykes | AI LAB
Honored to be on ‪@packagingunboxd‬ with EVELIO MATTOS to discuss my creative process and how I teach my fellow Creative Professionals HOW to integrate gen-AI into their own workflows.
1 year ago | [YT] | 1
Brian Sykes | AI LAB
‪@theBrianSykes‬ • If you missed this interview, it's totally worth the watch as I chatted with Jordan Wilson. Enjoy!
2 years ago | [YT] | 0
Brian Sykes | AI LAB
@AI.Explore - the interview I had with Chris Do.
2 years ago | [YT] | 0
Brian Sykes | AI LAB
A.I. Explore: Collaborations - Book 2 was released April 1st (no April Fools joke, either!). To grab your digital copy, visit: brianwsykes.gumroad.com/
2 years ago | [YT] | 0