Drawing in 3D in real time from sound?
Is anyone else drawing in 3D from soundwaves? I posted a few videos here:
I'm doing some basic forms of that. jit.catch and jit.gl.graph stuff. Also using oscillators to drive 3D particle animations. Spiralab looks cool. Would love to play with it. Are you sharing it?
(Btw, the video quality in your player is awful, doesn't do it much justice ;)
Nice work, Ernest.
I did a 3D image/sound patch a while ago, but I'm not drawing from soundwaves, I'm generating both (audio, image) from a central matrix (http://vimeo.com/29404191).
On a side note, I have to agree that the video quality really isn't the best.
To make it better, maybe you could lower the resolution. Depending on your capture method, the frame rate would probably improve, and since the low data rate of the compression doesn't let the high resolution come through anyway, you wouldn't lose any perceived quality.
Thanks for the suggestion. Regarding quality, this was captured from a rendering of many fast-moving single-pixel-width lines at 1920x1080 resolution, and the videos were made to share on Facebook. The poster image was captured from video, so it is also fuzzy, but I will capture some still images over the next month showing the actual quality.
I would be interested in sharing it with some other folks, yes. The entire design is very large, and I'm not sure which snippets to share. It includes its own FM synthesizer in Gen, and my next task is to make a custom poly voice allocator. I plan sustained tones alongside many small notes, and the existing poly allocator truncates the sustained tone because it assigns poly resources round-robin style. I started a custom poly voice allocator with LRU (least-recently-used) assignment, and it is more difficult than I expected. Here are some screenshots.
The standard poly~ doesn't cut it? Voices can include Jitter objects.
Yes, I implemented the Jitter in poly~. Say there are three instances in the poly. When receiving notes, the poly object seems to assign the incoming notes in cyclic, or round-robin, order: to instances 1, 2, 3, 1, 2, 3, etc.
Imagine the first note is still sustained when the fourth arrives, but the second and third notes have already been released. As things stand, the fourth note gets assigned to instance 1 and cuts off the sustained note. A least-recent-note algorithm would instead assign it to the voice whose note was released longest ago. What surprises me is that Max has been around for, what, 30 years now, and I still don't know of any patch that assigns new notes to poly instances in anything but cyclic order. Trying to build it in Max, I found it more difficult than I expected, so I am thinking of splitting it out into a separate project, and simply using cyclic voice assignment with a separate monophonic sequencer for each rendered instance, at least for now, unless anyone has a better suggestion. A rough sketch of the assignment rule I mean is below.
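To make the rule concrete, here is a minimal sketch of the allocation logic in plain Python (not Max code; the class and names are just illustrative): prefer the free voice that was released longest ago, and only steal the oldest sounding voice when nothing is free.

class LRUVoiceAllocator:
    """Route note-ons to the least-recently-used voice instead of
    cycling through voices in fixed round-robin order."""

    def __init__(self, num_voices):
        self.clock = 0                      # monotonic event counter
        self.active = [False] * num_voices  # is the voice sounding?
        self.pitch = [None] * num_voices    # pitch held by each voice
        self.stamp = [0] * num_voices       # last note-on/note-off time

    def note_on(self, pitch):
        self.clock += 1
        free = [v for v in range(len(self.active)) if not self.active[v]]
        if free:
            # Prefer the free voice that was released longest ago,
            # so recently released tails keep ringing as long as possible.
            voice = min(free, key=lambda v: self.stamp[v])
        else:
            # All voices busy: steal the oldest sounding note.
            voice = min(range(len(self.active)), key=lambda v: self.stamp[v])
        self.active[voice] = True
        self.pitch[voice] = pitch
        self.stamp[voice] = self.clock
        return voice  # route the note to poly~ instance voice + 1

    def note_off(self, pitch):
        self.clock += 1
        for v in range(len(self.active)):
            if self.active[v] and self.pitch[v] == pitch:
                self.active[v] = False
                self.stamp[v] = self.clock
                return v
        return None

In the three-instance example above, the sustained first note keeps its voice: voices 2 and 3 are free when the fourth note arrives, so it lands on one of them instead of stealing voice 1.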
Oh, I did actually get it to work, but then I tried to modify it so that when changing to a new sound, notes already playing would keep the old sound, and it went wrong somewhere; roughly what I was going for is sketched below.
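For what it's worth, the intended behavior can be sketched the same way (again hypothetical Python, building on the LRUVoiceAllocator sketch above, not the actual patch): latch the currently selected sound into the voice at note-on, so a later sound change only affects notes allocated after it.

class SoundLatchingAllocator(LRUVoiceAllocator):
    def __init__(self, num_voices):
        super().__init__(num_voices)
        self.current_sound = 0               # the sound selected right now
        self.voice_sound = [0] * num_voices  # the sound each voice latched

    def change_sound(self, sound):
        # Affects only voices allocated from now on; voices already
        # sounding keep whatever sound they latched at note-on.
        self.current_sound = sound

    def note_on(self, pitch):
        voice = super().note_on(pitch)
        self.voice_sound[voice] = self.current_sound
        return voice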