AI + Video

A Practical Look at AI in a Real Editing Workflow

People ask me all the time, “How do you use AI in video?” And yes, I do use it. But it’s working behind the scenes, in quiet ways. You could watch my work and never notice anything AI-generated. Instead, I use AI to optimize my workflow and make repairs faster than I could manually. Here’s how.

LLMs

I use LLMs to organize my thoughts and for quick troubleshooting.

ChatGPT has become my notepad in a sense. I can quickly type out thoughts and have them summarized or organized for me.

Gemini, in my experience, is great for fast troubleshooting. Just recently, I couldn’t find something in Premiere Pro after a major update (Premiere 26). Gemini already knew where it was. That was powerful for me, because I didn’t have to go digging through docs or videos.

Claude helps when I need something more technical or complex, like writing JavaScript expressions in After Effects. Claude can knock that out quickly.
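To give a sense of what that looks like, here’s a sketch of the kind of small expression an LLM can draft: a layer opacity that fades in and out over one second. Inside After Effects, `time`, `inPoint`, `outPoint`, and `linear()` are supplied by the expression engine; in this sketch they’re stubbed with hypothetical values so the logic can run on its own.

```javascript
// Stand-ins for values After Effects normally provides:
const inPoint = 0;   // layer start, in seconds (stubbed for this sketch)
const outPoint = 10; // layer end, in seconds (stubbed)
const fade = 1;      // fade duration in seconds

// Mirrors AE's linear(t, tMin, tMax, v1, v2): clamped linear interpolation.
function linear(t, tMin, tMax, v1, v2) {
  if (t <= tMin) return v1;
  if (t >= tMax) return v2;
  return v1 + (v2 - v1) * (t - tMin) / (tMax - tMin);
}

// Opacity at a given time: ramp 0→100 over the first second,
// 100→0 over the last second, full opacity in between.
function opacityAt(time) {
  const fadeIn = linear(time, inPoint, inPoint + fade, 0, 100);
  const fadeOut = linear(time, outPoint - fade, outPoint, 100, 0);
  return Math.min(fadeIn, fadeOut);
}

console.log(opacityAt(0));   // 0   — start of fade-in
console.log(opacityAt(0.5)); // 50  — mid fade-in
console.log(opacityAt(5));   // 100 — fully visible
console.log(opacityAt(9.5)); // 50  — mid fade-out
```

In After Effects itself, only the three lines inside `opacityAt` (minus the stubs and function wrapper) would go on the Opacity property. It’s trivial code, but having it written for me instead of looked up is exactly the time-saver I mean.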

I use all three of these LLMs for different reasons. In all cases, I use free accounts and ensure client details stay private.

Audio Repair

AI is being used in powerful ways for audio repair, and as an editor, I’m leaning into it, especially for dialogue. Early noise removal tools often left dialogue sounding thin and unnatural. You could hear the voice, but you could tell it had been altered. Newer AI-driven tools are getting much better.

Final Cut Pro’s Voice Isolate feature is excellent when used subtly. The default is 50%, but I find 35% to be a sweet spot before you really start “hearing” the effect.

Premiere’s tools in Essential Sound can be great, but I’ve experienced inconsistencies on export that make them hard to trust. I’m sure this will improve with time.

My go-to tool is iZotope RX11. It’s a beast. I typically use it directly inside Premiere Pro and often place it on an audio track rather than individual clips. I keep the settings minimal. A little goes a long way. RX11 is CPU-intensive, but the results are worth it.

Generative AI

This is usually what people mean when they ask if I use AI in video editing. I have used generative AI, but you’d probably never notice it. In one case, I had an interview shot with a terrible flicker in the background. The talent moved through the flicker, so cropping wasn’t an option.

The solution was Final Cut Pro’s auto-mask, which found and tracked the talent, separating them from the background into two layers. I then took the background into Photoshop and used generative fill to repair the flicker. The result was a clean, usable shot.

Another tool I’ve used is Descript’s eye contact feature, which subtly adjusts the talent’s gaze toward the camera. Used sparingly, it can be very effective. Used too much, it’s clearly fake. As always, balance matters.

Auto-generated captions are incredibly helpful, but they’re not 100% reliable. My workflow is to drop videos into Descript, let it generate a transcript, and then manually review and correct everything. Starting from a transcript instead of a blank page saves a huge amount of time.

So, is AI coming for my job?

No. Not yet.

Incredible tools are being built, but stories still need to be told. Each edit is intentional. It carries weight and meaning. I don’t see AI replicating that decision-making process anytime soon.

For example, AI can now generate vertical versions of videos. But it’s still working from wide source material. A human editor will thoughtfully recreate a piece for vertical, just like a graphic designer creates different artboards for different uses.

I’m choosing to embrace AI and the tools it offers me as an editor. I benefit from it every day. Not because it replaces my work, but because it supports it.

The tools are changing.
The craft still matters.