This Startup Uses AI to Automatically Trim Videos
OpusClip can clip video highlights for social media based on expected virality or, in a beta version, freeform user prompts.
Plenty of people and organizations are producing online video, but editing it for posting to different social media platforms can be a laborious task.
A startup called OpusClip offers AI-powered tools that can automatically turn longer videos into short-form, vertical video clips designed for social media, and even post them directly to popular platforms like TikTok, Instagram, LinkedIn, and X. Since its launch last June, OpusClip has been used by roughly six million users, including customers such as billboard.com, Telefónica, and Univision.
OpusClip lets users specify the format of video they want to generate, including how long clips should be, how the screen should be split for discussions between multiple speakers, and what fonts should be used for auto-generated captions. Users can make tweaks to the generated clips within the tool, or export them for heavier-duty edits in a third-party platform like Adobe Premiere, but cofounder and CEO Young Zhao says a big part of the product’s appeal is that it’s essentially fully automated, with minimal learning curve.
“It’s not a video editor,” says Zhao, who previously ran a talent agency in Shanghai and Singapore. “It’s your autonomous video editing agent that actually does the work for you.”
A free plan lets users process up to 60 minutes of video per month, while paid plans at $15 or $29 per month add more processing time and features like enhanced social media scheduling, access to AI-generated B-roll material, and silence removal.
Now, a new feature called ClipAnything, currently in beta, will give users more flexibility to control how the AI clips videos. Users will be able to indicate the type of material they’re editing, such as sports or interview content, and enter a freeform text prompt specifying what they’re looking for—like all the shots of a particular athlete scoring, scenes where a food reviewer reacts to a particular dish, or even the parts most likely to go viral on social media.
And unlike other tools on the market that use AI to edit video based solely on transcribed dialogue, ClipAnything also analyzes visual elements and audio beyond voice to understand actions, emotions, and events taking place in videos. The tool can even detect the funny bits of a video, Zhao says.
The company, which announced Tuesday it’s received a total of $30 million in funding, including a Series A round led by venture capital firm Millennium New Horizons, continually trains its models based on user feedback and on how generated videos perform on platforms like YouTube, says cofounder and CTO Jay Wu. That can help the algorithm get better over time at understanding particular kinds of content, like music videos or footage of specific video games.
“These data help us actually train our models to understand what clips are good, and how to find better and better highlight moments,” he says. “So that is how we gain better and better results based on the data we have.”
Already, the product can save hours of work manually editing long videos down to short ones—and enable video creators who previously didn’t have the time, skills, or software to create short clips. “People just don’t have resources, or it wastes so much time to do this before,” says Grace Wang, OpusClip’s cofounder and chief marketing officer.
ClipAnything is slated to emerge from beta within a few months after some more refinements, Zhao says. And in the future, users can expect more features to easily enhance video material without having to master a full-fledged editing tool, he says.
“The automation of the workflow is something we invest the majority of our resources on,” he says.