
Runway’s Gen-2 shows the limitations of today’s text-to-video tech

by WeeklyAINews

In a recent panel interview with Collider, Joe Russo, the director of tentpole Marvel films like “Avengers: Endgame,” predicted that, within two years, AI will be able to create a fully fledged movie.

I’d say that’s a rather optimistic timeline. But we’re getting closer.

This week, Runway, a Google-backed AI startup that helped develop the AI image generator Stable Diffusion, released Gen-2, a model that generates videos from text prompts or an existing image. (Gen-2 was previously in limited, waitlisted access.) The follow-up to Runway’s Gen-1 model launched in February, Gen-2 is one of the first commercially available text-to-video models.

“Commercially available” is an important distinction. Text-to-video, being the logical next frontier in generative AI after images and text, is becoming a bigger area of focus, particularly among tech giants, several of which have demoed text-to-video models over the past year. But those models remain firmly in the research stages, inaccessible to all but a select few data scientists and engineers.

Of course, first isn’t necessarily better.

Out of personal curiosity and service to you, dear readers, I ran a few prompts through Gen-2 to get a sense of what the model can — and can’t — accomplish. (Runway’s currently providing around 100 seconds of free video generation.) There wasn’t much of a method to my madness, but I tried to capture a range of angles, genres and styles that a director, professional or armchair, might like to see on the silver screen — or a laptop, as the case may be.

One limitation of Gen-2 that became immediately apparent is the framerate of the four-second-long videos the model generates. It’s quite low, and noticeably so, to the point where it’s nearly slideshow-like in places.


What’s unclear is whether that’s a problem with the tech or an attempt by Runway to save on compute costs. In any case, it makes Gen-2 a rather unattractive proposition off the bat for editors hoping to avoid post-production work.


Beyond the framerate issue, I’ve found that Gen-2-generated clips tend to share a certain graininess or fuzziness, as if they’ve had some sort of old-timey Instagram filter applied. Other artifacting occurs in places as well, like pixelation around objects when the “camera” (for lack of a better word) circles them or rapidly zooms toward them.

As with many generative models, Gen-2 isn’t particularly consistent with respect to physics or anatomy, either. Like something conjured up by a surrealist, people’s arms and legs in Gen-2-produced videos meld together and come apart again while objects melt into the floor and disappear, their reflections warped and distorted. And — depending on the prompt — faces can appear doll-like, with glossy, emotionless eyes and pasty skin that evokes cheap plastic.


To pile on further, there’s the content issue. Gen-2 seems to have a tough time understanding nuance, clinging to particular descriptors in prompts while ignoring others, seemingly at random.


One of the prompts I tried, “A video of an underwater utopia, shot on an old camera, in the style of a ‘found footage’ film,” led to no such utopia — only what looked like a first-person scuba dive through an anonymous coral reef. Gen-2 struggled with my other prompts too, failing to generate a zoom-in shot for a prompt specifically calling for a “slow zoom” and not quite nailing the look of your average astronaut.

Could the issues lie with Gen-2’s training data set? Perhaps.

Gen-2, like Stable Diffusion, is a diffusion model, meaning it learns how to gradually subtract noise from a starting image made entirely of noise to move it closer, step by step, to the prompt. Diffusion models learn by training on millions to billions of examples; in an academic paper detailing Gen-2’s architecture, Runway says the model was trained on an internal data set of 240 million images and 6.4 million video clips.
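
For the unfamiliar, the core idea can be sketched in a few lines of Python. To be clear, this is an illustrative toy, not Runway’s code: denoiser and prompt_embedding are hypothetical stand-ins, and the update rule is heavily simplified compared to real samplers like DDPM or DDIM.

```python
import torch

def sample(denoiser, prompt_embedding, steps=50, shape=(1, 3, 64, 64)):
    """Toy reverse-diffusion loop: start from pure noise, denoise stepwise."""
    x = torch.randn(shape)  # an "image" made entirely of Gaussian noise
    for t in reversed(range(steps)):
        # The trained network predicts the noise present in x at step t,
        # conditioned on the text prompt's embedding.
        predicted_noise = denoiser(x, t, prompt_embedding)
        # Subtract a fraction of that predicted noise, nudging x closer
        # to a clean, prompt-consistent image. (Real samplers use
        # carefully derived noise schedules, not a flat 1/steps.)
        x = x - predicted_noise / steps
    return x
```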


Diversity in the examples is key. If the data set doesn’t contain much footage of, say, animation, the model — lacking points of reference — won’t be able to generate reasonable-quality animations. (Of course, animation being a broad field, even if the data set did have clips of anime and hand-drawn animation, the model wouldn’t necessarily generalize well to all types of animation.)


On the plus side, Gen-2 passes a surface-level bias test. While generative AI models like DALL-E 2 have been found to reinforce societal biases, generating images of positions of authority — like “CEO” or “director” — that depict mostly white men, Gen-2 was the tiniest bit more diverse in the content it generated — at least in my testing.


Fed the prompt “A video of a CEO walking into a conference room,” Gen-2 generated a video of men and women (albeit more men than women) seated around something like a conference table. The output for the prompt “A video of a doctor working in an office,” meanwhile, depicts a woman doctor, vaguely Asian in appearance, behind a desk.

Results for any prompt containing the word “nurse” were less promising, though, consistently showing young white women. Ditto for the phrase “a person waiting tables.” Evidently, there’s work to be done.
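
To be clear, “test” here means nothing rigorous: I ran a handful of occupation prompts and eyeballed the outputs. In code, such a probe amounts to something like the sketch below, where generate_video is a hypothetical stand-in for however you call the model (Gen-2 was only accessible through Runway’s web interface at the time of testing).

```python
import os

# Occupation prompts of the kind used in the informal probe above.
OCCUPATION_PROMPTS = [
    "A video of a CEO walking into a conference room",
    "A video of a doctor working in an office",
    "A video of a nurse in a hospital",
    "A video of a person waiting tables",
]

def run_probe(generate_video, out_dir="bias_probe"):
    os.makedirs(out_dir, exist_ok=True)
    for i, prompt in enumerate(OCCUPATION_PROMPTS):
        clip_bytes = generate_video(prompt)  # assumed to return MP4 bytes
        path = os.path.join(out_dir, f"clip_{i}.mp4")
        with open(path, "wb") as f:
            f.write(clip_bytes)
        print(f"{prompt!r} -> {path}")  # outputs are then reviewed by eye
```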

The takeaway from all this, for me, is that Gen-2 is more a novelty or toy than a genuinely useful tool in any video workflow. Could the outputs be edited into something more coherent? Perhaps. But depending on the video, doing so could well require more work than shooting footage in the first place.

That’s not to be too dismissive of the tech. It’s impressive what Runway has done here, effectively beating tech giants to the text-to-video punch. And I’m sure some users will find uses for Gen-2 that don’t require photorealism — or a lot of customizability. (Runway CEO Cristóbal Valenzuela recently told Bloomberg that he sees Gen-2 as a way to offer artists and designers a tool that can help them with their creative processes.)


I did myself. Gen-2 can indeed understand a range of styles, like anime and claymation, which lend themselves well to the lower framerate. With a little fiddling and editing work, it wouldn’t be impossible to string a few clips together into a narrative piece.
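
The stitching part, at least, is the easy bit: an off-the-shelf library like moviepy handles it in a few lines. The file names below are placeholders for whatever clips Gen-2 hands you.

```python
from moviepy.editor import VideoFileClip, concatenate_videoclips

# Placeholder file names for a handful of downloaded Gen-2 clips.
clips = [VideoFileClip(name) for name in ("clip_0.mp4", "clip_1.mp4", "clip_2.mp4")]

# "compose" pads clips of mismatched resolutions instead of erroring out.
final = concatenate_videoclips(clips, method="compose")
final.write_videofile("narrative_piece.mp4", fps=24)  # resample to a steadier framerate
```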


Lest the potential for deepfakes concern you, Runway says it’s using a combination of AI and human moderation to prevent users from generating videos that include pornography or violent content or that violate copyrights. I can confirm there’s a content filter — an overzealous one, in fact. But of course, these aren’t foolproof methods, so we’ll have to see how well they work in practice.


But at least for now, filmmakers, animators, CGI artists and ethicists can rest easy. It’ll be at least a couple of iterations down the line before Runway’s tech comes close to generating film-quality footage — assuming it ever gets there.

