Posenet into Dict
Hello all!
I've got the "Posenet for Installations" package up and working.... yay!
The output messages I receive have a huge amount of data to parse (see the attached patch), and it's waaay beyond what I can wrap my head around... I'm used to parsing *much* simpler OSC messages that can be handled with route.
This data seems an ideal candidate to put into a dict. I've been trying to regexp the data into something dict will read properly, but have had no success. How would I go about pulling out (for example) the leftKnee x/y position?
Thanks very much for any pointers in the right direction !!
jd
You may find this example I made useful; it uses the posenet node-for-max project and parses features out of the returned dictionary:
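As a minimal sketch of that kind of parsing (not Rob's actual patch; it assumes the usual tfjs PoseNet keypoint shape of part / score / position.x / position.y, and a top-level "poses" key that may not match the real dict), a [js] object fed the dict could pull out leftKnee like this:

// Minimal sketch for a [js] object in Max (assumptions noted above).
// Send the posenet dict into the object's inlet; it outputs leftKnee x/y.
function dictionary(name) {
    var d = new Dict(name);                  // wrap the incoming named dict
    var data = JSON.parse(d.stringify());    // convert it to a plain JS object
    var poses = data.poses || [data];        // handle single- or multi-pose layouts
    for (var i = 0; i < poses.length; i++) {
        var kps = poses[i].keypoints || [];
        for (var j = 0; j < kps.length; j++) {
            if (kps[j].part === "leftKnee") {
                outlet(0, i, kps[j].position.x, kps[j].position.y);  // pose index, x, y
            }
        }
    }
}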
Awesome Rob!
Thanks Rob for the example.
I have a question about the multi-pose mode: is it possible to use that option, and how do I use the data from the different persons? The dict.view data doesn't change if I switch from single-pose to multi-pose. Besides the data of one person, I get an error in my Max window: dict.view: extra arguments for message "dictionary". The multi-pose dataset looks the same as single-pose, but I suppose I should also see // pose 1, // pose 2 to separate the different persons, right?
thanks a lot in advance!
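(For reference, the multi-pose path of the underlying tfjs PoseNet model returns an array of pose objects, one per detected person, roughly shaped like the sketch below; the numbers are only illustrative placeholders.)

// Roughly the shape returned by estimateMultiplePoses() (placeholder values).
const poses = [
    {
        score: 0.92,    // overall confidence for this person
        keypoints: [
            { part: "nose", score: 0.99, position: { x: 301.4, y: 122.8 } },
            { part: "leftKnee", score: 0.87, position: { x: 280.1, y: 410.6 } }
            // ...17 keypoints per pose in total
        ]
    }
    // one entry per detected person
];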
Hi Rob,
I can't find the posenet node-for-max project anymore, is there somewhere I can still find it?
I just updated my post to point to the correct repository - https://github.com/yuichkun/n4m-posenet
Thanks Rob for the great example!
I'm also struggling with the multi-pose dataset, trying to slice the dictionary into pose 1, pose 2... to detect the different persons.
A hint on how to use multi-pose would be great.
Thanks in advance!
same thing here...
Also, on Mac, if Max goes into the background while sending data via OSC to another application, the patch stops working.
Hi folks,
I've figured out how to keep the window processing when it's in the background. Works on my OSX (Mojave).
It's a hacky workaround, since Electron no longer seems to support the backgroundThrottling: false flag working with requestAnimationFrame(), from the looks of it on GitHub.
You can change main.js to this to allow it to work in the background, but it'll be rendered as a texture so you won't be able to change any of the UI elements in the GUI.
const { app, BrowserWindow } = require("electron");

function createWindow() {
    console.log("hello");
    // Create the browser window.
    const win = new BrowserWindow({
        width: 800,
        height: 600,
        webPreferences: { offscreen: true }
    });
    // and load the html of the app.
    win.loadFile("./camera.html");
}

app.on("ready", createWindow);
fantastic share!
Hi! Is multi-pose possible?
Hi, any hint on how to read out multi-pose from the dictionary? I'm not sure if yuichkun/n4m-posenet sends proper multi-pose data, maybe I'm wrong. Would be glad! 8]
Hi, any hint on how to read out multi-pose from the dictionary? yuichkun/n4m-posenet sends the multi-pose data in sequence, with no clear beginning and end per person ID. If there are two persons, I can alternate the dict and see two different persons. But if there are suddenly three persons, I can't keep them apart; I'd have to know first that three persons were detected. I'd be glad to be able to split the different tracked persons clearly! Thanks 8]
Hi, I was able to read the multi-pose data into a dict. It seems to be a bit jumpy / swaps persons when the number of persons changes. I only tried it with printed images in front of the camera, and I didn't spend a lot of time on it.
Let me know if someone finds an improvement or a conclusion!
Thanks
code right after node.script index.js:
and for visualisation:
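A rough sketch (not the poster's attachment) of how index.js could forward multi-pose data via the max-api module so that Max always knows how many persons were detected; the dict name "posenet_poses" and the "keypoint" selector are made up for illustration, and the tfjs PoseNet result shape is assumed:

const maxApi = require("max-api");

// Call this with the array returned by net.estimateMultiplePoses().
async function sendPoses(poses) {
    // Write the whole result into a named dict, including the person count,
    // so the Max patch knows up front how many poses to expect.
    await maxApi.setDict("posenet_poses", { count: poses.length, poses: poses });

    // Also emit one message per keypoint: <poseIndex> <part> <x> <y> <score>.
    for (let i = 0; i < poses.length; i++) {
        for (const kp of poses[i].keypoints) {
            await maxApi.outlet("keypoint", i, kp.part, kp.position.x, kp.position.y, kp.score);
        }
    }
}

module.exports = { sendPoses };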
Hi !
I'd like to try the example posted by Rob but it doesn't work anymore.
I've been able to update the dependencies to fix an Electron module version that was no longer available, but I'm stuck with the Electron window showing only the message "loading the model".
And if I go to the web inspector in Electron, I find an error "require is not defined..." in camera.js.
Any idea on how to make it work ?
regards,
Jérémie
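(For what it's worth, "require is not defined" in the renderer usually points to a newer Electron, where nodeIntegration now defaults to false and contextIsolation to true. One way to keep the old example running, sketched under that assumption, is to re-enable them when creating the window in main.js:)

// Sketch: re-enable Node APIs in the renderer so camera.js can use require().
// Only relevant if the failure really is the newer Electron defaults.
const win = new BrowserWindow({
    width: 800,
    height: 600,
    webPreferences: {
        nodeIntegration: true,    // exposes require() to camera.js again
        contextIsolation: false   // needed alongside nodeIntegration in Electron 12+
    }
});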
use this one instead - https://github.com/robtherich/jweb-mediapipe