Been experimenting with AI-generated song covers lately: feeding original vocals through different models to emulate various artists, or flipping genre styles altogether. It’s wild how convincing some of them get, especially when you pair the right timbre with a fitting instrumental. But what really makes or breaks it, I’ve found, is post-processing. Even the best model outputs usually need a bit of cleanup to feel cohesive.
I’ve been using a small tool called audiomodify to tweak timing, EQ, and blend things better. It wasn’t made specifically for AI covers, but it’s been surprisingly useful in stitching things together, especially when mixing AI stems with real instruments or sampled loops. Kind of sits in that sweet spot between fast edits and deeper resampling work.
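For anyone curious what that kind of cleanup looks like under the hood, here's a rough sketch of the two basics: gain-blending stems and taming harsh highs with a simple one-pole low-pass. To be clear, this is not audiomodify's API (I don't know its internals), just a generic pure-Python illustration with made-up function names, operating on float sample lists:

```python
import math

def mix(vocal, inst, vocal_gain=1.0, inst_gain=0.6):
    """Blend two equal-rate stems, padding the shorter one with silence."""
    n = max(len(vocal), len(inst))
    vocal = vocal + [0.0] * (n - len(vocal))
    inst = inst + [0.0] * (n - len(inst))
    return [v * vocal_gain + i * inst_gain for v, i in zip(vocal, inst)]

def low_pass(samples, cutoff_hz, rate=44100):
    """One-pole low-pass filter -- softens the brittle high end AI vocals often have."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / rate
    alpha = dt / (rc + dt)
    out, prev = [], 0.0
    for s in samples:
        prev = prev + alpha * (s - prev)  # smooth toward each new sample
        out.append(prev)
    return out
```

Real tools obviously do far more (proper biquad EQ, time-stretching, crossfades), but even this much -- pulling the AI stem down a few dB and rolling off above ~10 kHz -- goes a long way toward making a cover sit in the mix.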
Curious what others are using to refine AI covers after generation. Do you run everything through a DAW? Batch process with something else? I feel like we’re just scratching the surface of making these feel more “finished” and less like cool demos.