Startup Stability AI is making waves with an AI-powered video tool, applying a diffusion technique more commonly used to generate images and audio.
The company calls the tool "Stable Video Diffusion" and describes it as "a latent video diffusion model for high-resolution, state-of-the-art text-to-video and image-to-video generation." In plainer terms, it is a foundation model for generating video, built on the model Stability uses to generate AI images.
Stable Video Diffusion differs from most AI video models in that it doesn't only generate video from text prompts. It can also turn a single image into a 14-frame clip that can be played back at anywhere from 3 to 30 frames per second.
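To put those numbers in context, the frame count and chosen frame rate together determine how long a generated clip plays. A minimal sketch in plain Python (the function name is illustrative, not part of Stability's code):

```python
def clip_duration_seconds(num_frames: int, fps: float) -> float:
    """Playback length of a clip with num_frames shown at fps frames per second."""
    return num_frames / fps

# A 14-frame clip spans just under half a second at 30 fps,
# or nearly five seconds at the slowest supported rate of 3 fps.
print(round(clip_duration_seconds(14, 30), 2))  # 0.47
print(round(clip_duration_seconds(14, 3), 2))   # 4.67
```

In other words, the 3-to-30 fps range trades smoothness for duration: the same 14 frames can play as a brief fluid motion or a longer, choppier sequence.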
Alongside announcing the tool (which is still in testing), Stability also published a paper laying out its vision for the future of generative video and released the model's code on GitHub. The company says it plans to build on the model to serve a broader range of users.