Open Source Data and Tools for Generative Animation – Introducing M-body.ai: Webinar Recording and Documentation

We are thrilled to announce that the recording and presentation of our recent webinar on generative animation are now available!

Event Highlights

During the webinar, our panel of experts delved into various aspects of the project, including:

  • The project goals, deliverables and team
  • The human motion dataset and the multimodal data streams it comprises: skeletal body animation (including hands, fingers, and head), spoken audio for each separate agent, a timed transcript of the spoken audio, raw facial performance capture, reference video, and topologically consistent body geometry (see the sketch after this list)
  • Capture methodology
  • ML research challenges, sample models and evaluation of the metrics and dataset
  • Software tools: development challenges, goals, integration approaches and components
  • Design around user intention and the toolbox
  • Licences
  • How and why to engage with the project
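
To make the modality list above concrete, here is a minimal sketch, in Python, of how one multi-agent interaction sample could be organized. Every class, field, and file-format name below is an illustrative assumption, not the project's actual schema or delivery format.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class AgentStreams:
        """Per-agent modalities for one captured interaction (illustrative only)."""
        skeletal_animation: str                      # body animation incl. hands, fingers and head (e.g. a BVH/FBX file path)
        spoken_audio: str                            # this agent's separate audio track (e.g. a WAV file path)
        transcript: List[Tuple[float, float, str]]   # timed transcript as (start_s, end_s, text) segments
        facial_capture: str                          # raw facial performance-capture data
        body_geometry: str                           # topologically consistent body mesh

    @dataclass
    class InteractionSample:
        """One multi-agent interaction in the dataset (hypothetical layout)."""
        sample_id: str
        reference_video: str                         # reference video of the full session
        agents: List[AgentStreams] = field(default_factory=list)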

About M-body.ai

M-body.ai is an applied research project powered by a collaboration of four research centres. It is led by Sheridan College’s Screen Industries Research and Training Centre (SIRT) and includes Durham College’s Mixed Reality Capture Studio (MRC) and AI Hub, the Centre de développement et de recherche en intelligence numérique (CDRIN) of Cégep de Matane, and Le Laboratoire en innovation ouverte (LLio) of Cégep de Rivière-du-Loup. M-body.ai is funded by the Natural Sciences and Engineering Research Council of Canada (NSERC).

What We’re Creating

All outputs of the project will be open-source, free, and commercially usable. They will include:

  • A dataset of multi-modal, multi-agent interactions: To support training generative ML models with diverse, high-quality data.
  • Generative animation tools: To improve the efficiency of creating humanoid animations.
  • Open-source software systems: To streamline the integration of generative character performance models into industry-standard content creation tools.

Why Watch the Recording?

Whether you are a researcher or developer interested in generative animation, or a production studio or animator looking to integrate generative AI into your workflows, this webinar offers practical insight into the project’s dataset, tools, and research.

So, don’t miss out on this opportunity to enhance your understanding of the project with the full recording of the webinar.

Don’t hesitate to give us feedback about the webinar here. You can also click here to access the slide deck.

A version with French subtitles will also be available online soon.

How to engage?

Join us by testing, or even co-developing with us. Fill out this form to get involved! We look forward to your feedback and are excited to continue this journey with you. Stay tuned for more updates and upcoming events!

Got questions? We have answers! Don’t hesitate to send us an email.