I had a bash at this, with somewhat limited success using [borax], [zl group] and [zl sort 0]. I guess the degree of success depends on how dense the polyphonic texture is, and how regular/irregular those textures are. It might help if you posted an example patch, cos we're kind of in the dark here.
So why aren't you just using ddg.mono right "out of the box"? If you look at its help file, you can basically remove the kslider and integer boxes and connect the notein directly to ddg.mono. I haven't tried this myself, but I would think it should work.
I'm late to this thread, but I have an abstraction I call cbm.mono (after ddg.mono) that takes a MIDI stream and makes a MIDI-compatible mono stream (last-note priority). It can be found here: http://www.xfade.com/max/examples/
I just did some testing with this, and even changed the code to force a note-off at every note-on (a requirement for dealing with an intrinsically polyphonic synthesizer). Even with some code changes, we run into a difficult issue: the serial nature of MIDI notes.
For example, if we hit a chord stack on a keyboard containing C, E, G, D, the notes aren't sent to the notein object as a chord stack; they are sent to notein as four individual notes in some arbitrary order. Let's assume that the order is as shown (C, E, G, D). ddg.mono sees each note come in, storing and sounding each in turn (if last-note priority is chosen). When you release the keys, the note-off messages are also sent serially (in some arbitrary order), and earlier notes may re-sound as notes are popped off the storage stack and each in turn becomes the 'current' note.
In doing some basic testing with Live, it appears that chord stacks are sent in low-to-high note order. Thus, even with code changes to support input from a polyphonic synthesizer, you get faux polyphony: the individual notes are all received, sounded, stored, and then potentially re-sounded on key release.
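To make the failure mode concrete, here's a rough Python simulation of that C, E, G, D sequence. This is my own sketch of the stack logic described above (including the forced note-off at every note-on), not ddg.mono's actual code:

```python
# Rough simulation of last-note-priority mono handling with a forced
# note-off at every note-on. Pitches: C=60, E=64, G=67, D=62.
held = []     # notes currently held, oldest first; last is the sounding note
events = []   # ("on"/"off", pitch) messages the mono layer would emit

def note_on(pitch):
    if held:
        events.append(("off", held[-1]))   # forced note-off of current note
    held.append(pitch)
    events.append(("on", pitch))

def note_off(pitch):
    if pitch not in held:
        return
    was_current = (held[-1] == pitch)
    held.remove(pitch)
    if was_current:
        events.append(("off", pitch))
        if held:
            events.append(("on", held[-1]))  # previous note re-sounds

for p in (60, 64, 67, 62):      # the "chord" arrives serially
    note_on(p)
for p in (60, 62, 64, 67):      # releases also arrive serially
    note_off(p)
```

Tracing this through, G (67) sounds a second time when D (62) is released, because G is the most recent note still on the stack at that moment. That re-sound on key release is exactly the faux polyphony described above.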
Eliminating this problem would require either some sort of speedlim-like behavior or receipt-time encoding/decoding - both of which are too fragile to consider at this time. If someone has a good theoretical alternative, I'd be willing to try to code it up.
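For what it's worth, the speedlim-like idea could look something like this: buffer note-ons and treat any that arrive within a short window as a single chord, so only one note per chord ever sounds. This is just a hedged sketch; the 10 ms window and the event format are my assumptions, not anything from ddg.mono:

```python
WINDOW = 0.010  # seconds; note-ons closer together than this count as one chord
                # (an assumed value -- in practice it would need tuning)

def group_chords(events):
    """events: list of (time_sec, pitch) note-ons, in arrival order.
    Returns a list of chords, each a list of pitches."""
    chords = []
    current = []
    last_t = None
    for t, pitch in events:
        if last_t is not None and (t - last_t) > WINDOW:
            chords.append(current)   # gap exceeded the window: close the chord
            current = []
        current.append(pitch)
        last_t = t
    if current:
        chords.append(current)
    return chords
```

Once the notes are grouped, a note-priority rule (last, highest, lowest) could pick one pitch per chord. The obvious fragility is the one mentioned above: any window is a guess, and the latency it adds may be audible.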