From start to finish, it took the team roughly a month and a half to generate 28 final videos, with 14 minutes of run time adapted for seven screen types at the venue.
Starting with a baseline of shapes, gradients and colors that have always been the foundation for INBOUND graphics, Garcia-Lopez fed those initial images to Midjourney.
That allowed him to generate initial ideas and options to present to the Global Events team.
With AI, the person doing the prompting becomes the director, he says.
“You’re telling the AI, ‘Here’s my vision. Now, go out and do that,’” he says. “It takes a while, but if you compare that to a team, you still need somebody to drive the vision.”
Once both teams agreed on the final 2D images for each stage and location, Garcia-Lopez headed to Runway for mini video shorts and animation.
“It took me roughly two days to create the Boston mini paper city, generating all the flat images and creating a storyboard, going into Runway and animating all of them,” he says. “Then, I went into Adobe Premiere to edit the whole thing.”
He also used Topaz, an AI-powered editing tool that enhances the quality of AI-generated videos.
I wondered: How much would it cost if the team worked with a vendor to create these assets the old-fashioned way?
Garcia-Lopez estimates requesting a 30-second clip would take at least two weeks and thousands of dollars.
That said, Midjourney wasn’t exempt from the oddities that happen when you use AI to mimic reality.
“What I presented to the team was the most presentable, the cleanest options, but behind the curtains, there were many generations that were not coming out well,” Garcia-Lopez says. “It was very choppy. You would have something weird happen – buses going into each other, buses running into people, like all these weird things.”
What took a lot of time was identifying the best takes and cleaning up inconsistencies, he says. Thankfully, AI works fast.
“We were iterating in a matter of days. We’d have new options in a couple hours, so it was very, very fast,” Garcia-Lopez says. “As a designer, we would’ve needed a big team to deliver all of these assets, even an illustration for all the variation of styles.”
With Midjourney’s assist, he was able to create 25 distinct styles – a result he calls “almost unthinkable” in the time they had.
“You need a big team with specific skills to accomplish a specific style,” he says, “and we were able to go wild and choose what we wanted.”
AI isn’t without its limitations.
When I asked Garcia-Lopez about the design challenges that come with leveraging AI, he said there’s a big one people often forget.
“Editing something is actually quite hard. With an editable file, like a vector-based design in Adobe Illustrator, you can change every little detail,” he says. “With Midjourney and these AI tools, it’s not that easy.”
It’s a myth that editing with AI is quick, he says. For example, color-wise, you might not always get the exact colors you’d achieve from a defined color palette, but you can get something pretty close.
Aaver echoes that sentiment.
“It was a big learning experience for us, as the approvers and reviewers,” she says, “learning what we can and can’t give feedback on, what’s an easy change, what’s not so easy.”
In addition, contrary to popular belief, AI isn’t doing the bulk of the work. “There is a lot of bringing it back to Adobe and then feeding it back into the AI model to get the results that you want,” Garcia-Lopez says.