Forums > Max For Live

Building a MIDI mixer

June 18, 2014 | 5:33 am

Hi all,

I’m trying to build a patch that takes two MIDI patterns as input, differentiated by channel, and then lets me scale each pattern according to a controllable mixer parameter.

So like a DJ’s audio mixer, but instead of each audio channel’s gain reacting to the mixer’s fade, the velocities of each input MIDI pattern react to a fade instead.

I hope that makes sense. I’m slightly out of my depth programming-wise, but theoretically I’ve got it down… I think I need to have a control that ‘weights’ a per-tick averaging of each pattern’s velocity towards one pattern or the other?

Sorry if that wasn’t clear, I’d be super grateful for any help or advice on how I would go about programming this.

Thank you



dtr
June 18, 2014 | 6:09 am

That’d be very easy. The notein object gives you the note number, channel, and velocity of incoming MIDI notes. Scale (multiply) the velocity by the value coming out of your fader (range: float 0.–1.), and output it where it needs to go via noteout or whatever.
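Just to illustrate the math (this is a Python sketch of the logic, not Max code; the function name is mine):

```python
def scale_velocity(velocity, fader):
    """Scale a MIDI note-on velocity (1-127) by a fader value in 0.0-1.0,
    clamping the result to the valid range. We floor at 1 because a
    velocity of 0 would be interpreted as a note-off."""
    scaled = round(velocity * fader)
    return max(1, min(127, scaled))

scale_velocity(100, 0.5)  # -> 50
```

In a Max patch this is just a *~-style multiply on the velocity outlet of notein before it reaches noteout, with the clamp handled by a clip object.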



dtr
June 18, 2014 | 6:11 am

Perhaps you’ll need to check which pattern a note belongs to. Use the MIDI channel number with, for example, the route or gate object.
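The route/gate idea, sketched in Python (again just the logic; the channel assignments are an assumption):

```python
def route_note(channel, note, velocity):
    """Dispatch an incoming note to pattern A or B by MIDI channel,
    mimicking what Max's route/gate objects do. Channels 1 and 2
    are assumed here; anything else falls through."""
    if channel == 1:
        return ('pattern_a', note, velocity)
    if channel == 2:
        return ('pattern_b', note, velocity)
    return ('ignored', note, velocity)
```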


June 18, 2014 | 6:21 am

Ah yes, that all makes sense so far, but the tricky part is if both patterns play the same note at the same time with different velocities. In this case the two notes’ velocities need to be averaged for the mixer to output a correctly mixed note.

For instance, when the fader is dead center, the notes’ velocities would be averaged evenly, but when the fader is more to one side the averaging would have to be weighted towards that side, etc. Without this averaging, the mixer would output the note with the greatest velocity (I think?) and wouldn’t produce a smooth mix between the patterns. It’s this weighted averaging that I’m finding particularly unapproachable.
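For what it’s worth, the weighted averaging being described here is just a linear interpolation between the two velocities, which could be sketched like this (Python for clarity, not Max code; names are made up):

```python
def crossfade_velocities(vel_a, vel_b, fader):
    """Weighted average of two velocities for the same note hit by both
    patterns at once. fader = 0.0 means all pattern A, 1.0 all pattern B,
    0.5 an even average. Result clamped to the valid note-on range."""
    mixed = round((1.0 - fader) * vel_a + fader * vel_b)
    return max(1, min(127, mixed))

crossfade_velocities(100, 60, 0.5)  # -> 80
```

At fader = 0.5 this gives the even average the post asks for, and as the fader moves it smoothly collapses to one pattern’s velocity or the other.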

I hope that made sense, thanks v. much for your help



dtr
June 18, 2014 | 7:45 am

Hmmm, that’s less trivial, especially because there is no such thing as ‘at the same time’ in computing terms. Notes playing at the same time will be processed one after the other, with a slight timing difference. Since notes are synced to MIDI clock there might be a straightforward way of catching them together, but I’m not enough of a Max MIDI pro for that. MIDI is so ’80s ;)
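One common workaround for the no-such-thing-as-simultaneous problem is a small coincidence window: treat two note-ons of the same pitch as "the same hit" if their timestamps are within a few milliseconds. A rough Python sketch of that pairing logic (my own assumption, not an established Max recipe):

```python
def pair_simultaneous(notes_a, notes_b, window=0.005):
    """Pair note-ons from two patterns that share a pitch and whose
    timestamps differ by no more than `window` seconds.
    Each note is (timestamp, pitch, velocity).
    Returns (pairs, unmatched_a, unmatched_b), where each pair is
    (pitch, velocity_a, velocity_b)."""
    pairs, solo_a = [], []
    unused_b = list(notes_b)
    for t, pitch, vel in notes_a:
        match = next((n for n in unused_b
                      if n[1] == pitch and abs(n[0] - t) <= window), None)
        if match is not None:
            unused_b.remove(match)
            pairs.append((pitch, vel, match[2]))
        else:
            solo_a.append((t, pitch, vel))
    return pairs, solo_a, unused_b
```

In a real-time patch the same idea would mean briefly holding each incoming note (e.g. with a short delay) to see whether its twin arrives from the other channel before passing it on.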


June 18, 2014 | 8:07 am

Righttt I see I see. Perhaps I’ll try sequencing and mixing patterns in OSC, and then converting the resulting data to MIDI. Thanks again! :)


June 18, 2014 | 2:57 pm

depends on what you’re controlling and how it responds to MIDI commands. a fair amount of synths have an actual MIDI volume control, which is the only thing that’d allow you to do smooth(ish, as the MIDI resolution might make it steppy) fades. if you fade the note velocity, that will only take effect at the time of your note-ons. fun too, but it’s closer to fake compressor ducking than a fade, i think. also, note velocity can have a large effect on the sound itself and might change more than just the volume.
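The "actual MIDI volume control" here is Control Change #7 (channel volume in the MIDI spec). Building that message by hand is trivial, e.g. (Python sketch; in Max you’d use the ctlout object instead):

```python
def cc_volume(channel, level):
    """Build a raw MIDI Control Change #7 (channel volume) message.
    channel: 1-16, level: 0-127. CC status bytes are 0xB0-0xBF,
    with the low nibble holding the zero-based channel."""
    status = 0xB0 | (channel - 1)
    return bytes([status, 7, level])
```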

tbh, i usually wanna combine both midi volume and velocity.

> For instance, in the state where the fader is dead center, the notes velocities would be averaged evenly, but in the state where the fader is more to one side the averaging would have to be weighted towards that side etc. Without this averaging, the mixer would output the note with the greatest velocity (I think?)

i don’t think that would be normal. (not ime at least) what did you try it on? both note on velocities should be sent out of their respective channels.

dead center would be like max velocity for both and one would scale down as you moved to the left or right while the other didn’t change. the dj mixers i’ve used have had track level controls independent of the crossfader, dunno.

i mean in terms of ear volume perception you probs wanna do logarithmic rather than linear tho.
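A log taper can be done by mapping the linear fader position to decibels first. One common convention (the -60 dB range here is my assumption; the exact curve is a design choice):

```python
def log_fader(position):
    """Map a linear fader position (0.0-1.0) to a gain value using a
    -60 dB-range logarithmic taper: 0 dB at full, -60 dB just above
    zero, hard zero at the bottom of the travel."""
    if position <= 0.0:
        return 0.0
    db = -60.0 * (1.0 - position)
    return 10.0 ** (db / 20.0)
```

Multiplying velocities (or CC7 levels) by this instead of by the raw fader position gives fades that sound much more even to the ear.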

  • This reply was modified 2 months by  jonah.

June 18, 2014 | 5:23 pm

Jonah,

Yes, velocity controlling more than just volume is integral to this idea in fact. I’m hoping that this would work as a more ‘musical’ way of fading between musical parts than in audio, so like, between two snare roll rhythms played on a physical modeling synth for instance.

I’ve managed to automate between 2 short patterns on a step sequencer device that had knobs for each step’s velocity, like some hardware modular sequencers do, and it sounds really musical, like the way that some tribal drumming slowly changes. So I know that it’s possible, but it’s really inconvenient doing it this way.

Hmmm, I just set up 3 MIDI channels in Live, 2 are playing clips containing some of the same notes. Both of these channels are sending to the third, which is recording. The recorded clip actually appears to have chosen the notes with the smallest velocity when two of the same play at once, as shown below.

Ah yes logarithmic makes sense.

Thanks for the help :)

Attachments:
  1. Screen-Shot-2014-06-19-at-01.19.52
  2. Screen-Shot-2014-06-19-at-01.19.25
  3. Screen-Shot-2014-06-19-at-01.19.13
