Request to Enhance Unreal Metahuman with AI

In today’s video, join Siren as we explore the exciting possibilities of enhancing Unreal Metahuman characters with the power of AI! We’ll dive into the main requests and additional improvements that could revolutionize character animation and customization.

Main Requests

Facial Animation AI Generation from Video: Develop a technology, like Avatary, that generates facial animations for Unreal Metahuman characters by tracking facial movements in videos recorded on any device, or extracted from other media sources, and transferring them to the rig (see the first sketch after these requests).

Body Animation AI Generation from Video: Implement a system, like Rokoko, that generates body animations for Unreal Metahuman characters from ordinary video sources, eliminating the need for complex motion-capture setups (see the second sketch after these requests).
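To make the first request concrete, here is a minimal sketch of the tracking half of the problem, assuming Python with OpenCV and MediaPipe's legacy Face Mesh solution installed (pip install opencv-python mediapipe). It only extracts per-frame facial landmarks from an arbitrary video file; mapping those landmarks onto the Metahuman facial rig is exactly the missing piece this request asks for. The file name input.mp4 is a placeholder, and this is not Epic's pipeline.

```python
# Minimal sketch: per-frame facial landmark extraction from any video.
# Mapping these landmarks onto the Metahuman facial rig is the open problem.
import cv2
import mediapipe as mp

def extract_face_landmarks(video_path: str):
    """Yield (frame_index, landmarks), where landmarks is a list of
    (x, y, z) tuples in normalized image coordinates."""
    cap = cv2.VideoCapture(video_path)
    face_mesh = mp.solutions.face_mesh.FaceMesh(
        static_image_mode=False,   # video mode: track across frames
        max_num_faces=1,
        refine_landmarks=True,     # adds iris landmarks (478 points total)
    )
    frame_index = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV decodes to BGR.
        result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_face_landmarks:
            points = [(lm.x, lm.y, lm.z)
                      for lm in result.multi_face_landmarks[0].landmark]
            yield frame_index, points
        frame_index += 1
    cap.release()
    face_mesh.close()

if __name__ == "__main__":
    for i, points in extract_face_landmarks("input.mp4"):  # placeholder path
        print(f"frame {i}: {len(points)} landmarks")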
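And for the second request, a similar hedged sketch of the capture stage: MediaPipe's Pose solution can recover approximate 3D joint positions from ordinary video. Retargeting those joints onto the Metahuman skeleton, the step Rokoko-style tools currently handle outside the engine, is deliberately not shown. The file name actor.mp4 is again a placeholder.

```python
# Minimal sketch: rough 3D body pose per frame from ordinary video.
# Retargeting these joints onto the Metahuman skeleton is not shown here.
import cv2
import mediapipe as mp

def extract_body_pose(video_path: str):
    """Yield (frame_index, joints), where joints is a list of 33
    (x, y, z) world-space landmark positions, roughly hip-centered."""
    cap = cv2.VideoCapture(video_path)
    pose = mp.solutions.pose.Pose(model_complexity=1)
    frame_index = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.pose_world_landmarks:
            joints = [(lm.x, lm.y, lm.z)
                      for lm in result.pose_world_landmarks.landmark]
            yield frame_index, joints
        frame_index += 1
    cap.release()
    pose.close()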

Additional Improvements

AI-Generated Metahuman Base Mesh from an Image or Short Video: Create a tool or feature that can automatically generate the base mesh of Unreal Metahuman characters from a single image or short video.

Easy Finger Animation Tool: Introduce a user-friendly tool to simplify and streamline the animation of fingers for Unreal Metahuman characters, making the process more accessible for artists.

Deep Fake Face Option: Implement a deep fake face option like Swapface that allows users to swap faces on Unreal Metahuman characters easily, expanding creative possibilities for character customization.

Use Mesh as Cloth Collision for HQ Rendering: Add the option to use mesh-based cloth collision alongside capsule-based collision for high-quality cinematic (deferred) rendering, improving realism and visual fidelity for Unreal Metahuman characters.

Machine Learning Cloth Simulation within the Main Interface: Integrate machine learning-based cloth simulation directly into the Metahuman interface, enabling realistic cloth behavior and dynamic animations without the need for external software.

AI-Generated Effects (e.g., Wet Hair, Sweat, Blood, Dirt): Develop an AI-powered tool to generate or inpaint realistic effects like wet hair, sweat, blood, or dirt on Unreal Metahuman characters, enhancing visual realism and storytelling capabilities.

Metahuman Body Sculpt Mode: Introduce a body sculpting mode within Unreal Metahuman that allows artists to easily modify and customize the body shapes and proportions of characters, offering greater flexibility and personalization.

Morph Target Creator in Sculpt Mode: Implement a morph target creator within the sculpt mode of Metahuman Creator, enabling artists to author custom morph targets for facial expressions and blend shapes, enhancing character versatility and expressiveness (see the sketch after this list).
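To make the morph target idea concrete: a morph target is just a named set of per-vertex offsets from the base mesh, which the rig blends in with a weight between 0 and 1. Here is a minimal NumPy sketch of that idea; it is illustrative only and not the Metahuman Creator API, and the toy mesh and "smile" target are invented for the example.

```python
# Minimal sketch of what a morph target is: per-vertex deltas from the base
# mesh, blended by a weight. Illustrative only; not the Metahuman Creator API.
import numpy as np

def make_morph_target(base_verts: np.ndarray, sculpted_verts: np.ndarray) -> np.ndarray:
    """Return the per-vertex offsets (N, 3) that turn the base into the sculpt."""
    assert base_verts.shape == sculpted_verts.shape
    return sculpted_verts - base_verts

def apply_morph(base_verts: np.ndarray, deltas: np.ndarray, weight: float) -> np.ndarray:
    """Blend the morph target onto the base mesh: v = base + w * delta."""
    return base_verts + np.clip(weight, 0.0, 1.0) * deltas

# Example: a sculpted "smile" becomes a reusable blend shape.
base = np.zeros((4, 3))                   # toy 4-vertex mesh
sculpt = base + np.array([0, 0.01, 0])    # vertices nudged upward by the sculpt
smile = make_morph_target(base, sculpt)
print(apply_morph(base, smile, 0.5))      # half-strength expression
```

Saving such deltas under a name from sculpt mode would let artists drive custom expressions with the same weight-based blending the facial rig already uses.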

Join us as we explore the exciting possibilities of enhancing character animation with Unreal Metahuman and AI. Together, we can revolutionize the world of virtual characters and storytelling, unlocking new avenues of creativity and realism. Join the discussion here: https://www.facebook.com/groups/ue4devs/

Transcription

Imagine being able to generate facial animation from any video recorded on any device, whether it’s an Android, an iPhone, a camera, a webcam, or extracted from any movie, TV show, or conference from any era, and bring it to your Unreal Metahuman! It would be huge, right?

I made a video to showcase this concept, but the model I used didn't give me the expected result. Oh my god, it looks awful. I'm really sorry! But I know for sure that the model works. As you can see, with AI it's already possible to track facial movements and transfer them onto a 2D image. What would be even more amazing is being able to transfer this data onto the bones of your Unreal Metahuman. This would allow thousands and thousands of artists around the world to create awesome games and films without the need for sophisticated camera setups, helmets, or complicated workflows.

Body motion to Metahuman is already possible with Rokoko Studio, but the skeleton doesn't match Unreal's and it takes a lot of steps to make it work. Also, facial motion capture from a video to a 3D model has been done in the past by Avatary, but that solution hasn't been updated for Unreal Engine 5 and only works with Maya. I think it would be interesting to bring these technologies directly into the Metahuman interface, in addition to the new Metahuman Animator solution. So, I invite the developers of Unreal to try to train a model for video-driven motion to Metahuman. If not, we can still organize ourselves and focus our efforts on creating a plugin that integrates with Metahuman, just like the "Voice to Facial Rig" plugin, the Metahuman SDK, which I'm currently using to animate my mouth.

I believe we can succeed in doing it. We can meet in the Facebook group "Unreal Engine Developers Community" and discuss solutions together to make it happen. Personally, I don't have any knowledge of machine learning or deep learning programming, but I am willing to help with simple tasks like preparing datasets of reference videos or facial rig poses. We can also try to contact the developers of this model to see if we can apply it to the Metahuman rig.

Come on, guys, we can really do it... because if we don't, someone else will. I mean, an engine that starts with U and ends with Y. Okay, guys, see you later. What a time to be alive!