A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.
Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.
This community’s icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.
It’s not the best idea to call it SVD, as it already stands for Singular Value Decomposition.
I don’t think you’re ever going to find an acronym that doesn’t have some other meaning in a different field.
Singular Value Decomposition is widely used in machine learning, image processing, natural language processing, recommender algorithms…
Stable Video Diffusion is a good marketing name, but SVD is quite confusing from an academic point of view.
Heh, so you can take a still photo of a deceased loved one and turn it into a “live photo”
This has already been done for a while. You can probably find videos of it on YouTube and TikTok where people get all teary-eyed seeing their grandparents in a fake video.
My immediate thought was to have a more jazzed up character portrait for D&D.
I am not a clever man.
Hey don’t be too hard on yourself, that’s a pretty cool idea!
🤖 I’m a bot that provides automatic summaries for articles:
It’s an open-weights preview of two AI models that use a technique called image-to-video, and it can run locally on a machine with an Nvidia GPU.
Last year, Stability AI made waves with the release of Stable Diffusion, an “open weights” image synthesis model that kick-started a wave of open image synthesis and inspired a large community of hobbyists who have built on the technology with their own custom fine-tunings.
They can operate at varying speeds from 3 to 30 frames per second, and they output short MP4 video clips (typically 2–4 seconds long) at 576×1024 resolution.
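The 2–4 second figure follows directly from the arithmetic of frame count and playback rate: the models emit a fixed number of frames, and the chosen frames-per-second setting determines how long the clip plays. A minimal sketch of that relationship (the 14-frame count matches the generation described below; other values are illustrative):

```python
# Sketch: clip duration = number of generated frames / playback rate.
# SVD outputs a fixed batch of frames; fps only changes playback speed.

def clip_duration(num_frames: int, fps: int) -> float:
    """Return playback length in seconds for a generated clip."""
    return num_frames / fps

# A 14-frame generation played back across the supported 3-30 fps range:
for fps in (3, 7, 14, 30):
    print(f"{fps:>2} fps -> {clip_duration(14, fps):.2f} s")
```

So the same 14-frame generation can read as a brief slow-motion shot at 3 fps or a half-second burst at 30 fps.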
In our local testing, a 14-frame generation took about 30 minutes to create on an Nvidia RTX 3060 graphics card, but users can experiment with running the models much faster on the cloud through services like Hugging Face and Replicate (some of which you may need to pay for).
We’ve previously covered other AI video synthesis methods, including those from Meta, Google, and Adobe.
Stability AI says it is also working on a text-to-video model, which will allow the creation of short video clips using written prompts instead of images.
Saved 66% of original text.
More accurately: it can attempt to animate any still image