A conversation popped up on another platform about the role of AI in music production, generally as it's used in the mastering process. Now, I'm not sure how much AI that actually involves; I see it more as a set of rules that map your song to a contemporary 'good mix'… basically controlling the EQ, RMS/peak levels and LUFS. Things like this are becoming more and more prominent on music hosting sites.
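(As an aside, if you want to see what those numbers actually are without any AI involved, here's a minimal sketch of measuring them yourself. It assumes Python with the soundfile and pyloudnorm libraries, a placeholder "mix.wav", and a -14 LUFS target that's just a commonly quoted streaming figure; it's obviously not what these services actually run, only the measurement side.)

```python
# Rough sketch: measure integrated loudness (LUFS) and sample peak of a mix
# and compare against a target. "mix.wav" and -14 LUFS are placeholders.
import numpy as np
import soundfile as sf          # pip install soundfile
import pyloudnorm as pyln       # pip install pyloudnorm

data, rate = sf.read("mix.wav")                # hypothetical file

meter = pyln.Meter(rate)                       # ITU-R BS.1770 loudness meter
loudness = meter.integrated_loudness(data)     # integrated loudness in LUFS
peak_db = 20 * np.log10(np.max(np.abs(data)))  # sample peak in dBFS

print(f"Integrated loudness: {loudness:.1f} LUFS")
print(f"Sample peak:         {peak_db:.1f} dBFS")

TARGET_LUFS = -14.0  # assumed target; adjust to taste
print(f"Gain to target:      {TARGET_LUFS - loudness:+.1f} dB")
```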
I do use AI in some processing, as I use software like Steinberg's SpectraLayers to 'un-layer' and un-mix tonal qualities and so on, but I don't use it in mastering. I do that the old-fashioned way.
Your thoughts…? Yay or nay…?
I figured you'd be concerned about music-generating AI.
Personally, I’m all for it. In the early days of image generation AI you’d see lots of wild abstract outputs from the intermediate layers/processes. If we can get music to sound like that I think it would be amazing.
The thing about algorithmically generated art is that it's basically a faucet: turn it on and get as much as you want. So there's a danger of oversaturation, but that's basically already the case in a lot of genres of human-generated music. It makes valuing art tricky.

The faucet (tap… I'm English) analogy is perfect, and yeah… there are so many of us making music now that the arena is stuffed… maybe AI-generated music has a place…? I dunno. Not for me… yet.
I would like to experiment with AI-created music… are there any you would recommend?
Unfortunately I haven’t found any AI-generated music that seems interesting.
I have the same opinion as the other comment. It's useful as a guide or for checking for issues, but I wouldn't trust it completely for mixing. That applies especially to the kind of music I make, which doesn't fit into the typical genres AI is most likely trained on, so the result might differ from what was intended. For general stuff like peaks and LUFS, it will definitely be useful.
I don't work in a typical or mainstream genre either. My own mixing methods are unorthodox and I generally master 'un-loud', so things like Ozone wouldn't help me anyway. My guides are still reference tracks, but yes, I can see these tools helping a great deal in some production for some people.
In Ozone you can actually load a reference track and it makes some adjustments that nudge your master in the right direction. I often use that feature to see what I can fix in the mix.
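(Just to illustrate the general idea rather than Ozone's actual process: here's a toy sketch that compares the broad spectral balance of a mix against a reference track, assuming Python with numpy, scipy and soundfile, and placeholder file names. Real tools do far more than this; it only shows the 'compare to a reference' part.)

```python
# Toy illustration of reference comparison (not Ozone's algorithm): average
# spectral energy of mix vs. reference in a few broad bands, so you can see
# where your mix sits relative to it. File names are placeholders.
import numpy as np
import soundfile as sf
from scipy.signal import welch

def band_levels(path, bands):
    data, rate = sf.read(path)
    if data.ndim > 1:                      # fold stereo to mono
        data = data.mean(axis=1)
    freqs, psd = welch(data, fs=rate, nperseg=8192)
    levels = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        levels.append(10 * np.log10(psd[mask].mean() + 1e-12))
    return np.array(levels)

BANDS = [(20, 120), (120, 500), (500, 2000), (2000, 8000), (8000, 16000)]
mix = band_levels("mix.wav", BANDS)        # hypothetical files
ref = band_levels("reference.wav", BANDS)

for (lo, hi), diff in zip(BANDS, mix - ref):
    print(f"{lo:>5}-{hi:<5} Hz: {diff:+.1f} dB vs reference")
```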
Yeah, I guess that's a useful thing to have. It seems that not many people even use a reference track for their mix these days. I do still use them, and when I'm mixing for other people I ask if they have one… just to get an idea of what they're looking for. If Ozone works, it has to be a good thing. I just don't trust it, to be honest.
Fair enough. It’s always best to use your ears I guess, but I don’t always trust my ears either lol
I use Landr for a quick check during mixing to get an idea of any potential problems with the mix.
I think that’s probably the best use of it… as some kind of guide.
It has its uses as a tool, but as has been mentioned already, I wouldn’t use it to generate music.
I do see it as being useful in sample management software like Sononym to analyze and classify sample sounds.
AI can definitely 'see' things I can't in a spectral layer. It's not perfect (none of them are), but mopping up after them is getting easier as they improve. I just know there's going to be a day when I can't distinguish between a human tune and an AI one, and I find that terrifying.
Thankfully, neither you nor I are making that kind of modern ‘homogemastered’ mainstream stuff.
@OneBlindMouse @flockofnazguls Athwart.