Parsing Python pose classification data to Max/MSP
Hi, I was wondering if anyone had any advice/experience regarding a project I am working on.
I am creating an interactive sound space using human gestures as inputs to a granular synthesiser.
I want to implement the gesture recognition and classification in Python using MediaPipe BlazePose and Keras classification models (I am favouring a Python implementation over JS due to familiarity). I then want to send the data from the Python model to Max/MSP over OSC, so that when an OSC message is received, the relevant routing command is triggered in my Max granular synthesis patch, e.g., pitch/volume/grain size.
Does anyone have experience reading Python data into Max, or perhaps using a JavaScript API to do so in real-time applications?
Many thanks!
It is totally possible to do this. You just need to compose the message.
From a project I recently made that used MediaPipe:
from pythonosc import udp_client

ip = "127.0.0.1"
sendPort = 9900  # port Max listens on with [udpreceive 9900]
inPort = 8000    # port for OSC coming back into Python (unused here)

# Sender
client = udp_client.SimpleUDPClient(ip, sendPort)
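As a sanity check before wiring up Max, you can confirm that plain UDP datagrams reach a local port using only the standard library; SimpleUDPClient is ultimately doing one sendto() per message over a socket like this (the listener below just stands in for Max's [udpreceive]):

```python
import socket

# Listener standing in for Max's [udpreceive]; an ephemeral port for the demo
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))
recv.settimeout(2.0)
port = recv.getsockname()[1]

# Sender side: one datagram per message, same as the OSC client underneath
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"hello", ("127.0.0.1", port))

data, _ = recv.recvfrom(1024)  # b"hello" arrives over loopback
recv.close()
send.close()
```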
Some mediapipe and CV code here....
Sending the hand landmarks (left hand in this case) in my main while loop:
for id, lm in enumerate(handLms.landmark):
    currentHand = results.multi_handedness[0].classification[0].label
    h, w, c = img.shape
    cx, cy = int(lm.x * w), int(lm.y * h)  # normalized coords -> pixel coords
    cz = lm.z * 1000                       # scale up the relative depth value
    coordinates = [cx, cy, cz]
    # e.g. address "/Left/8" with payload [cx, cy, cz]
    client.send_message("/" + str(currentHand) + "/" + str(id), coordinates)
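For anyone following along, here is a rough, self-contained sketch of what that loop produces per landmark. The helper name, frame size, and landmark values below are invented for illustration; only the address/scaling scheme mirrors the code above:

```python
# Hypothetical helper mirroring the address and scaling scheme above.
def landmark_to_osc(hand_label, landmark_id, x, y, z, w=640, h=480):
    """Build the OSC address and coordinate list for one landmark.

    x/y arrive from MediaPipe normalized to 0..1, so they are scaled to
    pixel coordinates; z is a relative depth value, scaled up by 1000.
    """
    address = "/" + hand_label + "/" + str(landmark_id)
    coords = [int(x * w), int(y * h), z * 1000]
    return address, coords

# Example: index fingertip (id 8) of the left hand, made-up values
addr, coords = landmark_to_osc("Left", 8, 0.5, 0.25, -0.02)
# addr   -> "/Left/8"
# coords -> [320, 120, -20.0]
```

In Max you would then catch "/Left/8" (and the other 20 landmark addresses) after [udpreceive 9900].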
Really helpful, thank you Zancudo.
I know Lysdexic Audio has done some Node for Max handpose work before: https://github.com/lysdexic-audio/n4m-handpose
Might be something to check out!
One thing I forgot in my previous response.
The receiving patch in Max:
The [route] object matches the address string built in the client.send_message()
call in Python, leaving the XYZ coordinates on its output.
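In case it helps to see what [udpreceive] is actually decoding, here is a hedged sketch of the OSC 1.0 binary message layout using only the standard library. A real project should stick with pythonosc; this is just to illustrate the wire format that carries the address and the int/float arguments:

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate a string and pad to a 4-byte boundary (OSC 1.0 rule)."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def encode_osc(address: str, args):
    """Encode one OSC message with int32 ('i') and float32 ('f') arguments."""
    tags = "," + "".join("i" if isinstance(a, int) else "f" for a in args)
    msg = osc_pad(address.encode()) + osc_pad(tags.encode())
    for a in args:
        msg += struct.pack(">i" if isinstance(a, int) else ">f", a)
    return msg

packet = encode_osc("/Left/8", [320, 120, -20.0])
# Layout: "/Left/8\x00" (8 bytes) | ",iif" padded (8 bytes) |
#         three big-endian 4-byte values (12 bytes) = 28 bytes total
```

So on the Max side, [udpreceive] unpacks exactly this: the address goes to [route], and the remaining list is your three coordinates.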
Thank you Zancudo. Works really well