Snap Teases Upcoming AR Experiences, Generative AI Tools

Snap unveiled an early version of its real-time, on-device image diffusion model, which will power AR experiences on users' smartphones.

The company also introduced generative AI tools for AR creators at the Augmented World Expo on June 18.

Snapchat (Souvik Banerjee via Unsplash)

Snap Advances AI Integration with AR Experiences, Tools

Snap co-founder and CTO Bobby Murphy presented an AI model that can generate and re-render frames in real time from a text prompt. Lenses built with the generative model will roll out to users in the coming months.

"This and future real-time on-device generative ML models speak to an exciting new direction for augmented reality, and is giving us space to reconsider how we imagine rendering and creating AR experiences altogether," Murphy said.

Snap AR creators will soon be able to create selfie Lenses with highly realistic ML face effects. The platform will also allow creators to generate 3D assets and integrate them into their Lenses.

Snap Introduces Lens Studio 5.0 for Developers

During the event, Murphy also launched Lens Studio 5.0 for developers. The updated suite offers access to new generative AI tools that help creators build AR effects significantly faster.

Through Lens Studio, AR creators can generate custom stylization effects that apply realistic transformations to a user's face, body, and even surroundings in real time.

Snap also shared that creators can use text and image prompts to generate characters, such as aliens, using the company's Face Mesh technology. Other assets, like face masks, can also be rendered within minutes.

An AI assistant will be built into the latest version of Lens Studio to guide AR creators through the new features.

© 2024 iTech Post All rights reserved. Do not reproduce without permission.
