GoPro is the leading video camera for outdoor adventurers - from hobbyist weekend warriors to pros - across a myriad of activities on land, in the water, and in the sky. They use GoPro to capture their experiences and, potentially, share them with friends and family. Quik is GoPro’s app for transforming raw footage into the exciting, polished short-form reels we are accustomed to seeing from GoPro ambassadors.
Most GoPro users are not video editing experts. The aim of GoPro AI Editing is to bring a professional video editor's touch to every adventurer - the user just has to give feedback.
The primary goal is to empower users to create high-quality action videos of themselves without needing advanced video editing skills.
Metric(s):
1. # of times AI recommendations are applied
2. # of AI-recommended edits that are published
Today, the Quik app already suggests reels that users can edit. I talked to existing users about the GoPro experience - from adventuring in the wild to the in-app experience. Below are my findings about opportunities for an AI editor to further empower action video creation.
I gathered and prioritized current user pain-points that hinder them from creating more content, so that AI Editing can help do more of the heavy lifting:
The concept is an interactive video editor, as if the user were working 1:1 with a professional video editor in real time. The user can tell the editor bot exactly what they are envisioning, or select from suggested prompts to get started. The AI editor makes the suggested changes, and users can easily revert them.
For an initial lightweight test of the AI Editor, I inserted prompts into the feed where users normally see pre-made highlight reels. This helps with discoverability.
This is an MVP concept that explores the interactions and capabilities of what AI-assisted editing could look like for GoPro's Quik app. Early usability testing is positive, with user feedback that it is easy to use and that they feel empowered. For next steps, there are more complexities to resolve: what kinds of edits to suggest, how to encourage expanded creative expression rather than convergence on similar-looking videos, loading interactions while an edit is processing, and when (or whether) to point users back to manual editing workflows.