
Epic Games releases new MetaHuman Animator iPhone app

"Epic Games launches MetaHuman Animator to capture high-quality human faces with an iPhone" -  GamesBeat ( News Article ) 

 

Epic Games has released a new piece of tech for Unreal Engine 5 (UE5) to help animators and game developers animate faces.

 

 

Traditionally, a dedicated facial motion capture rig was needed; now the same capability is available to hobbyists and anyone else through a new iPhone app, making facial animation for film, TV, and games easier.

There is also an option to use it with a stereo head-mounted camera (HMC) for greater visual fidelity.

 

This also works with MetaHuman Creator, for an easy pipeline to creating, and now animating, characters more quickly and easily than before.

 

Quote

"The new feature set uses a 4D solver to combine video and depth data together with a MetaHuman representation of the performer. The animation is produced locally using GPU hardware, with the final animation available in minutes. " - GamesBeat (News Source , link below )

Quote

Epic Games said: "Even better, it’s simple and straightforward to achieve incredible results—anyone can do it."
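For context on what "video and depth data" means on the iPhone side: the TrueDepth camera already exposes synchronized colour and depth frames to apps through Apple's ARKit. This is only my rough sketch of what the capture half could look like (the 4D solver itself is Epic's proprietary tech, so the hand-off at the end is a hypothetical placeholder):

import ARKit

// Rough illustrative sketch: collect the per-frame video + depth pair
// that a facial solver would consume. Requires a TrueDepth-equipped iPhone.
class FaceCaptureSession: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let video = frame.capturedImage                           // colour frame (CVPixelBuffer)
        guard let depth = frame.capturedDepthData else { return } // TrueDepth depth map
        // Hypothetical hand-off: a real pipeline would feed these, plus a
        // MetaHuman representation of the performer, into the solver.
        _ = (video, depth, frame.timestamp)
    }
}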

 

You can check out this demo video of it being used, by Unreal Engine (link below).

 

What I think   

This new technology looks great. For an animator like myself, facial animation, once a tedious task without expensive hardware, will become a breeze, and this will definitely come in handy. Epic and UE5 have a reputation for high-quality tools like this, like their Quixel Megascans asset library. I also think that with Epic Games' budget, they have an advantage that competitors may not be able to beat.

This will for sure be seen in new game titles, and I'm excited to try it out for myself. I think this will be a huge step toward giving smaller artists and studios cool, cheaper motion capture solutions.

 

 

Sources

GamesBeat (news article about this topic): https://venturebeat.com/games/epic-games-launches-metahuman-animator-to-capture-high-quality-human-faces/

 

Unreal Engine article / product page: https://www.unrealengine.com/en-US/metahuman

 

Demo video - Unreal Engine: https://youtu.be/bIGnx2jvrbg

 

 

(Let me know if there is anything I missed or got wrong; this is my first time posting a tech news article 🙂)

 

 

 


Cool stuff, this thing is quite detailed.

| Ryzen 7 7800X3D | AM5 B650 Aorus Elite AX | G.Skill Trident Z5 Neo RGB DDR5 32GB 6000MHz C30 | Sapphire PULSE Radeon RX 7900 XTX | Samsung 990 PRO 1TB with heatsink | Arctic Liquid Freezer II 360 | Seasonic Focus GX-850 | Lian Li Lanccool III | Mousepad: Skypad 3.0 XL / Zowie GTF-X | Mouse: Zowie S1-C | Keyboard: Ducky One 3 TKL (Cherry MX-Speed-Silver) | Beyerdynamic MMX 300 (2nd Gen) | Acer XV272U | OS: Windows 11 |


On 6/15/2023 at 3:44 PM, Abstract1 said:

 

What I think   

This new technology looks great. For an animator like myself, facial animation, once a tedious task without expensive hardware, will become a breeze, and this will definitely come in handy. Epic and UE5 have a reputation for high-quality tools like this, like their Quixel Megascans asset library. I also think that with Epic Games' budget, they have an advantage that competitors may not be able to beat.

This will for sure be seen in new game titles, and I'm excited to try it out for myself. I think this will be a huge step toward giving smaller artists and studios cool, cheaper motion capture solutions.

 

 

There's a lot of context left out here. VTubers have been using AR tech for years now, and the common refrain is that "the tech exists, and it's no replacement for proper animation."

 

In 3D space, you have to hit the right point between "realistic" and "unrealistic": if it's too realistic, it creeps people out, and if it's too stiff, it's derided for being awful and ugly.

 

In 2D space, mapping 3D onto 2D requires so much approximation that 2D is often unsuitable for these systems beyond "two-camera sitcom" style animation from the shoulders up.

 

There's also a lot of mockery of "metahuman" as being any kind of replacement for actual modeling. Its intent and usefulness will be largely pushed to background characters that you don't interact with very much, because you won't be scrutinizing their facial expressions. When you interact with a character on a screen, it is often missing cues that a "real" human would have when it's driven by the computer. Whereas when two VTubers in VRChat talk to each other, a lot of those cues are there, because eyes often move with tone (e.g. rolling up or to the side to express sarcasm).

 

One of the things that 3D modeling is often extremely terrible at is when the mouth is open. ARKit can detect the tongue being out, but the render environment often doesn't know what to do with that information. Are they sticking their tongue straight out? Are they trying to touch the tip of their nose? Is it hanging out to lick something?
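To make that concrete: ARKit collapses "tongue out" into a single 0-to-1 blendshape coefficient, so the renderer literally receives one number and has to invent everything else. A quick sketch, assuming the standard ARSCNViewDelegate setup:

import ARKit
import SceneKit

// ARKit's face anchor exposes ~52 blendshape coefficients, each a scalar
// from 0 (neutral) to 1 (maximum). That's all the renderer ever sees.
class FaceMirror: NSObject, ARSCNViewDelegate {
    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let face = anchor as? ARFaceAnchor else { return }
        let tongueOut = face.blendShapes[.tongueOut]?.floatValue ?? 0 // how far out: 0...1
        let jawOpen   = face.blendShapes[.jawOpen]?.floatValue ?? 0
        // Direction, curl, intent? Not in the data. The render environment
        // has to guess: straight out, reaching for the nose, or licking?
        _ = (tongueOut, jawOpen)
    }
}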

 

Overall, it just moves the problems to different stages of the pipeline. If you are creating a game, you go from having possibly "too much data" that you have to reduce until it fits the space allocated in the game, multiplied across X many named NPCs, to maybe letting MetaHuman extrapolate on top of the same template, so that every character except the hero is using all the same animation; they're just all wearing different masks. This is why there are so many medieval-fantasy games and not very many space sci-fi games. It's the same problem that television has: the budget limitations imposed often don't let you create fantastic creatures/aliens/monsters that stray far from "human in a rubber mask". If you want to make a game from the POV of anything not human-shaped (e.g. a cat), you have to mocap all of that traditionally, or use vanilla manually animated data.

 

I'm not saying MetaHuman is bad. I'm saying that people are again expecting way too much from a tool designed to solve one specific problem. MetaHuman's competition is more like SideFX Houdini's character FX tools. Houdini can do a lot more, but it's still very much a GIGO (garbage in, garbage out) process. You're not going to go out and grab 1,000 people to borrow their faces for the game. You're maybe going to do this for the 10 characters you want to look like a specific actor who also voices the character, and everything else is going to be procedurally generated.

 

This tool is literally:

Quote

MetaHuman presets are based on pre-existing scans of real people and only physically plausible adjustments can be made—so creating realistic digital humans is easy. With a huge range of facial features and skin complexions, plus many different choices for hair, eyes, clothes, and more, you can create a truly varied array of characters.

...

With MetaHuman Animator, you can reproduce facial performances as high-fidelity animation on any MetaHuman. There’s no fancy hardware required—it even works with an iPhone. Every nuance of the actor’s performance is captured and transferred onto your digital human, enabling you to wow audiences with believable characters that deliver immersive experiences.

 

Again, just to put it back into perspective: a gamedev studio is likely going to use its own existing staff for this, not hire people to provide their appearance and performance.

 

Looking outside of game development (e.g. Unreal used in low-budget television shows), you'll find the same problem.

 

It's neat, but let's not pretend this is anything new. You get no stylization or exaggeration. You get nothing but "only what a human can do."


7 hours ago, Kisai said:

It's neat, but let's not pretend this is anything new.

Maybe I'm not awake enough yet, but the main benefit of this seems to be doing things faster and cheaper.

 

Since you mentioned VTubers, this does seem to have much more complexity than that.

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


9 hours ago, Kisai said:

VTubers have been using AR tech for years now, and the common refrain is that "the tech exists, and it's no replacement for proper animation."

I agree; VTubers and even Snapchat filters have been using AR tech for years. But as was said:

2 hours ago, porina said:

the main benefit of this seems to be doing things faster and cheaper.

 

Since you mentioned VTubers, this does seem to have much more complexity than that.

Epic Games' version of this is much more complex and built much more with the industry pipeline in mind (traditional modeling software, and of course UE5). Yes, a big-time film or game studio is most likely not going to use this as much as an individual artist will, along with MetaHumans, but bringing this tech down and giving these options to whoever wants them is, in my opinion, a huge step in the right direction. And now that we have 'realistic results' using Unreal Engine as a hub for all of these (Quixel Megascans, MetaHuman Creator, and MetaHuman Animator) for free, I'm not going to complain 😂.

 

But there are definitely a lot more use cases for MetaHuman Animator beyond final production, too. Now an artist at a studio can visualize what the final animation of a talking sequence would look like, by themselves at a desk, using an iPhone, saving time, resources, and money in the process. So there is a high chance big companies and studios could use this technology, which again is integrated nicely into their pipeline already (sorry for drifting off a little there).

 

Overall, I don't think this is a replacement, especially for traditional-style animation etc., but rather a new tool to add to the belt to help out. And it's for sure good tech to have for game characters; like you say, most probably NPCs (at this point in time, anyway).

 

 

 

