I have wondered this too. Paul Van Dyk has his Vonyc Sessions on Spotify basically as albums, and it's a great experience, since you can skip between the tracks and you have all the metadata, as if it were a real mix CD like they used to sell back in the day. But I've seen zero other artists that have their mix shows on Spotify like this, and I'd love to hear of other examples of DJs that have their mix shows set up as albums.

I use plugdata for this now, but mainly what I do is add additional features to controllers, so basically Max or plugdata acts sort of like Bome's MIDI Translator, if you know what that does.
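
To make the idea concrete, here's a rough Python/mido sketch of that kind of translation layer - sitting between a controller and the DAW and rewriting messages on the way through. The port names and CC numbers are placeholders for illustration, not anything from my actual setup:

```python
import mido  # assumes the python-rtmidi backend is installed

# placeholder port names - list yours with mido.get_input_names() / get_output_names()
inport = mido.open_input('My Controller')
outport = mido.open_output('IAC Driver Bus 1')

for msg in inport:
    if msg.type == 'control_change' and msg.control == 1:
        # example "added feature": remap the mod wheel (CC 1) to CC 74
        msg = msg.copy(control=74)
    outport.send(msg)  # everything else passes through untouched
```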

I've tried that for the last 3 years; it did not stop the ear fatigue (yes, I know it sounds weird).

Andrea Botez, Kenya Grace, and all the TikTok DJs out there...

Wow, thanks for the response; I did not expect to get any useful info on this at this point. I ended up actually writing a patch for plugdata that sits between the controller and Bitwig, and since the purpose was to have a reset button for my DJ effects, it was able to do the job just fine.

Turning down the highs to avoid ear fatigue

As someone who makes heavy bass music, I've dealt with ear fatigue after sessions for a long time. Being very cautious about hearing loss, I always mix at as low a volume as possible. Even listening at relatively low levels (definitely under 75 dB), I can notice some distortion in my hearing after mixing for as little as an hour. This goes away in time, of course.

I've had a sense that this might be from the extreme amount of distortion and high-frequency content in a lot of bass music, so I finally decided to try turning the highs above 1.6 kHz down by 5 dB using a system-wide EQ, and lo and behold, I can actually listen without the same kind of ear fatigue, even at louder volumes.
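
For anyone who wants to try the same cut offline on a bounce, this is roughly the idea as a Python sketch using scipy - the 1.6 kHz corner and -5 dB are just my settings, the filenames are placeholders, and the coefficients are the standard RBJ cookbook high shelf rather than whatever the system-wide EQ actually uses:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import lfilter

def high_shelf(fs, f0=1600.0, gain_db=-5.0, slope=1.0):
    """Biquad high-shelf coefficients (RBJ Audio EQ Cookbook)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / 2 * np.sqrt((A + 1 / A) * (1 / slope - 1) + 2)
    cosw, sqA = np.cos(w0), np.sqrt(A)
    b = np.array([A * ((A + 1) + (A - 1) * cosw + 2 * sqA * alpha),
                  -2 * A * ((A - 1) + (A + 1) * cosw),
                  A * ((A + 1) + (A - 1) * cosw - 2 * sqA * alpha)])
    a = np.array([(A + 1) - (A - 1) * cosw + 2 * sqA * alpha,
                  2 * ((A - 1) - (A + 1) * cosw),
                  (A + 1) - (A - 1) * cosw - 2 * sqA * alpha])
    return b / a[0], a / a[0]

fs, x = wavfile.read('mix.wav')           # placeholder filename
x = x.astype(np.float32) / 32768.0        # assumes 16-bit PCM input
b, a = high_shelf(fs)
y = lfilter(b, a, x, axis=0)              # filter along time, mono or stereo
wavfile.write('mix_shelved.wav', fs, (y * 32767.0).astype(np.int16))
```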

My question is, first of all: has anyone else tried this? Secondly, if you have, do you mix with the quieter highs as well, since that's how you're used to listening to music, or do you turn them back up to mix? I feel like if I'm used to the quieter highs, turning them back up to mix will probably just sound terrible.

I also feel like this goes against the general wisdom of trying to listen in the flattest way possible, but I literally can't deal with these high frequencies anymore from all the distortion/clipping. Maybe I'm just getting old, but that's the way it is.

This question also hits me hard. I used to produce while either drinking or staying up late, and I was having a TON OF FUN, but I eventually realized that I was actually not very productive most of the time - I'd have maybe an hour of really productive time within a 4-5 hour span. So I would eventually get tracks done, but it would take me like a year to finish an EP of 4-5 instrumental tracks.

Eventually I took someone's advice to try producing in the morning instead, and to add some kind of structure around when I'd be finishing tracks for regular releases, etc. - aka deadlines. Fast forward to now: I'm working on the 3rd album I've made since the start of the pandemic, so I'm definitely getting a lot more music finished. It's been a long process to get to this point, though.

I googled "Xilent Choose Me II dance video" and this was the 12th result down. You're welcome. https://hlamer.ru/video/220625-Xilent-Choose_Me_II_DUBSTEP

It's probably just a music video from another song, so it got copyright claimed or something, which means it might disappear again - so probably go ahead and download it from that site with something like Download Helper for Firefox, in case you can't find it again.

Now, if someone can figure out what the ORIGINAL song that had that video was, that would be cool.

Xilent used it, idk if he’s still making music or changed his alias or what but he definitely was one of the first big artists I saw using it

Push 2/3 gets my vote for making music without needing to look at the computer screen at all, but that's kind of a specific use case. Every controller has its pros/cons. Push 2/3 has the best support via its own screen of the ones I've seen, though. Downsides: no keys and no faders, if you want those things. I personally like keys/faders, so I also have a Keylab 61 that is my daily driver.

I'm already starting to see that people who maybe aren't quite invested in becoming producers yet are taking pause when it comes to trying to learn to produce. Now they can quickly make songs that sound better than what they could do if they put years of effort in. They can start to DJ original songs right now, without years of work. Most of us learned to produce because it was literally the only way we could make original music; now there is an alternative that is a hell of a lot easier.

I feel like this maybe won't affect people who are already in the game and enjoy doing it, but it will affect how many people try to get into the game, and most mass-market products rely much more on the masses of people who might buy that Ableton Push, never really use it that much, but hold onto it and mess with it every once in a while.

There's already AI that can vastly improve the quality of a recording (Adobe Podcast Studio, I think one is called), and it will get better over time. Why invest in a $1000 mic when you can record something with your phone and run it through an AI to get something studio quality?

For some reason I vividly remember Adam Sessler reviewing Alter Echo on X-play, specifically a part about the character being named Nevin, and he did not give it a glowing review and I did not play it…

Indiana Jones and the Fate of Atlantis - I lent it to my friend, his dad figured it out and showed him, and he in turn showed me how to beat it.

Ah, modulating a delay, smart. This is exactly the info I needed.

I will try this. What I tried was routing the audio-rate modulator into a wavetable oscillator set to the 'DC offset' wavetable, which is basically a wavetable that's a straight line going from -1 to 1 over 256 frames, simulating the DC Offset device, if that makes sense. This did not work because the actual oscillator is not making any sound, so changing its phase does nothing. I'll try your thing now.
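
Just to make concrete what that wavetable looks like, here's a rough numpy sketch that generates one - each frame is a constant (DC) value, stepping from -1 to 1 across 256 frames. The 2048-sample frame length and the filename are assumptions for the sake of the example, not anything Bitwig-specific:

```python
import numpy as np
from scipy.io import wavfile

FRAMES = 256       # number of wavetable frames, as described above
FRAME_LEN = 2048   # samples per frame - an assumption, not a Bitwig spec

# each frame holds a constant value, sweeping linearly from -1 to +1
levels = np.linspace(-1.0, 1.0, FRAMES)
table = np.repeat(levels, FRAME_LEN).astype(np.float32)

wavfile.write('dc_offset_wavetable.wav', 48000, table)  # placeholder filename
```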

Apply FM/PM to audio in real time?

Seems like it should be possible in The Grid, or by some other means, to apply FM to audio in real time and have the operators be key-tracked appropriately. It's been possible with FM8 for a while, so there shouldn't be any technical reason why it couldn't be done.
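
Since modulating a delay line turned out to be the trick here, here's a rough offline numpy sketch of the idea - reading the input through a delay whose time is swept at audio rate, which is effectively phase modulation of arbitrary audio. The carrier, modulator, and depth values are made up for illustration; a real-time version would do the same thing per block:

```python
import numpy as np

def pm_via_delay(x, mod, base_delay=32.0, depth=20.0):
    """Phase-modulate audio x by reading it through a variable delay line.
    x, mod: 1-D float arrays (mod roughly in -1..1); delays are in samples.
    Uses linear interpolation for the fractional read positions."""
    n = np.arange(len(x))
    read_pos = np.clip(n - (base_delay + depth * mod), 0, len(x) - 1)
    i0 = np.floor(read_pos).astype(int)
    i1 = np.minimum(i0 + 1, len(x) - 1)
    frac = read_pos - i0
    return (1 - frac) * x[i0] + frac * x[i1]

# toy example: a 220 Hz saw "carrier" phase-modulated by a 110 Hz sine
fs = 48000
t = np.arange(fs) / fs
carrier = 2 * (t * 220 % 1) - 1
modulator = np.sin(2 * np.pi * 110 * t)
out = pm_via_delay(carrier, modulator)
```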

For anyone who might have this same issue, I was able to make a patch for plugdata that solved this for me. Basically, I mapped the controls to a MIDI CC device in Bitwig and put a plugdata instance after it, programmed with logic so that a control's CCs will not be sent again until a value of 0 is received for that control. Then I used an instance of HW Instrument to send that MIDI out to an IAC driver and finally back into Bitwig via a Generic Flexi controller script, which I mapped to the controls I needed to control. It does seem like a lot, but I had similar things going on in my template already, so it only took me about 90 minutes to set this up.
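
The gating part boils down to something like this - written here as a Python/mido sketch rather than the actual Pd patch, with placeholder port names and CC numbers, just to show the logic: after the reset button fires, each effect CC is swallowed until that knob reports 0 again, and only then passes through.

```python
import mido  # assumes the python-rtmidi backend

# placeholder port names - check mido.get_input_names() / get_output_names()
inport = mido.open_input('Traktor Z1')
outport = mido.open_output('IAC Driver Bus 1')

RESET_CC = 20              # assumption: CC number of the reset button
EFFECT_CCS = {21, 22, 23}  # assumption: CC numbers of the effect knobs

blocked = set()            # knobs currently waiting to pass back through zero

for msg in inport:
    if msg.type != 'control_change':
        outport.send(msg)
    elif msg.control == RESET_CC and msg.value > 0:
        blocked |= EFFECT_CCS   # reset pressed: gate every effect knob
        outport.send(msg)
    elif msg.control in blocked:
        if msg.value == 0:
            blocked.discard(msg.control)  # knob reached zero, un-gate it
            outport.send(msg)
        # otherwise swallow the message so the effect doesn't jump
    else:
        outport.send(msg)
```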

I still like to play the old LucasArts adventure games, Brutal Doom, and the 16-bit console classics like Super Mario World and Sonic 3 & Knuckles.

Haven't heard of this one; the one I've seen so far is https://www.acestudio.ai/ Someone should do a comparison video.

MIDI takeover modes, but on a per-control or per-controller basis

So, I've got an issue. I very much need the Immediate takeover mode for some of my MIDI inputs, but for others, Catch or Relative scaling would be much better. I've got a reset button mapped for my effects, but the knobs I'm using for the effects won't magically jump back to 0, so when I move a knob again, the effect jumps to the value it was at before it was reset - not good for filters or other effects that need to come in gradually. I can't use Catch or Relative scaling globally, though, because I need Immediate for other MIDI data coming into the set. Any way to do this? Maybe run the controls through a modulator somehow to get the effect of a takeover mode?

I'm open to ideas; otherwise I need to switch to a different MIDI controller that has endless encoders, but I need one that is compact with a crossfader, or at least some kind of hardware fader, so I have very few options (currently using a Traktor Z1 for this purpose).

I never heard of Suno before, but I haven't looked into this for a few months. The stuff I had heard a few months ago was getting there, but the vocals definitely weren't there yet. I just checked out some examples on the site, and they are really impressive - pretty scary stuff. I've been trying to learn how to sing over the past year because I figured that the sound design / production / composition stuff would be mastered by AI within a short while, and human singers would be the last thing it would be able to replicate...

So, after fooling around with Suno for a few minutes: it's even crazier than I thought it was. Are those songs just outputs directly from Suno, or are you somehow bringing them into a DAW and working on them from there? If you DID want to bring the stems into a DAW, could you get separated stems from Suno, or would you need to try to use AI to separate them?

How was AI used in the music production? Were the vocals generated by AI? How? Was the music generated by AI? How much of it? How much editing did you do to the music, if it was AI generated? What AI tools did you use to generate the video?

I would say that I have had some luck using AI tools to help write lyrics - initially to help create a starting point, but then with specific prompts to help revise as I rewrite them.

Just use FreeTube to download them onto your device. I put a ton of mixes on my Apple Watch for when I go running without my phone; works great.

Absolutely amazing - wow, this needs more upvotes ASAP. Incredible work. I'd love to create music videos like this for my music!