VFXRIO Live 2021 launched its second free digital edition on March 21. The event brings together leading Brazilian and international specialists in visual effects production for cinema and TV, games, and new technologies. The program includes lectures and workshops with supervisors, directors, and artists involved in the creation of visual effects and immersive media. VFXRIO is sponsored by TV Globo, supported by Foundry and Intel, and associated with ACM SIGGRAPH.
This year’s event kicked off with a keynote presentation by Pinscreen founder and CEO Hao Li. Regarded as one of the world’s foremost deepfake artists, Li focused his talk on digital humans, artificial intelligence, and the future of technology. He is known for his seminal work in real-time facial performance capture, hair scanning, and dynamic full-body capture.
Li’s work in facial animation was the basis of Animoji on Apple’s iPhone X. He worked at Weta Digital, led research at Industrial Light & Magic / Lucasfilm, and collaborated on the technology used for Paul Walker’s digital re-creation in the film Fast and Furious 7. Li is an associate professor of computer science at the University of Southern California, as well as director of the Vision and Graphics Lab at the USC Institute for Creative Technologies.
“We are living under lockdown and can’t have physical conferences or meetings, and all flights are cancelled,” explained Li. “I believe meetings, conferences, and presentations will remain virtual even after the pandemic. Devices such as Microsoft’s HoloLens will change the way we collaborate, interact, and communicate. You have this idea of people being able to teleport themselves into a common space. The idea of digital humans goes beyond video games or VFX: We are looking at a future where virtual humans can become the centre of our everyday lives.”
“We believe Hao Li’s research is seminal and needs to be communicated,” commented VFXRIO director Matteo Moriconi. “Deepfake technology is here to stay and will be changing the way we see the world and pushing the boundaries in terms of creativity, communication, and ethics. We are proud to host his keynote at VFXRIO LIVE.”
Deepfakes use a form of artificial intelligence called deep learning to fabricate images of events that never happened. The main machine learning methods used to create them involve training generative neural network architectures. Digital humans, created with similar technology, are designed to see and listen to users in order to understand the meaning behind their words. They can then use their own tone of voice and body language to hold lifelike human conversations.
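A common face-swap deepfake architecture pairs one shared encoder with a separate decoder per identity: the encoder learns identity-independent facial structure, and each decoder learns to render one person's appearance. The toy sketch below illustrates that idea with single-layer linear "networks" and random data standing in for face images; it is an illustration of the training and swapping pattern only, not a production deepfake pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "faces": flattened 8x8 grayscale patches for two identities, A and B.
# (Random data here; a real system would use aligned face crops.)
faces_a = rng.normal(size=(100, 64))
faces_b = rng.normal(size=(100, 64))

# One SHARED encoder, one decoder PER identity (single linear layer each).
latent_dim = 16
W_enc = rng.normal(scale=0.1, size=(64, latent_dim))
W_dec_a = rng.normal(scale=0.1, size=(latent_dim, 64))
W_dec_b = rng.normal(scale=0.1, size=(latent_dim, 64))

def train_step(x, W_dec, lr=1e-3):
    """One gradient step on the mean-squared reconstruction error."""
    global W_enc
    z = x @ W_enc          # encode
    x_hat = z @ W_dec      # decode
    err = x_hat - x
    grad_dec = z.T @ err / len(x)
    grad_enc = x.T @ (err @ W_dec.T) / len(x)
    W_dec -= lr * grad_dec  # updated in place
    W_enc -= lr * grad_enc

for _ in range(200):
    train_step(faces_a, W_dec_a)  # the encoder sees both identities...
    train_step(faces_b, W_dec_b)  # ...each decoder sees only its own

# The "swap": encode a face of A, then decode it with B's decoder,
# producing identity B wearing A's expression and pose.
fake_b = faces_a[0] @ W_enc @ W_dec_b
print(fake_b.shape)  # a 64-pixel patch, same shape as the input faces
```

Real systems follow the same pattern but with deep convolutional autoencoders (or GANs) and large datasets of aligned face images; the crucial design choice is that the encoder is trained on both identities while each decoder is not.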