Oculus Rift
If using the oculus (from max_oculus) object, did you set the oculus config utility to treat the HMD as a second display (i.e. disable "direct mode")?
Otherwise, if using the oculusrift object (from max_worldmaking), you do need to set it to direct mode. Honestly that is what I would recommend; it works with the latest Oculus drivers and it has better performance too.
Just checking if anyone can replicate... I'm finding the 360video example can no longer autoload (or load) the comp_bg.tif image.
However, the one on this site works: http://panotool.com/panotool/images/ShinjukuEquirectangular.jpg
So I'm wondering if there may be a bug regarding .tif support in the latest Max version?
Trying the HTC Vive object help patch with a new laptop that has a GTX1060 GPU. I've tested this with other VR apps and have had great performance, including the SteamVR performance test, which rates the GPU as "high quality" and reports all performance over 90fps.
Unfortunately I can't achieve greater than 45fps in the help patch -- any recommendations as to how I might improve the frame rate? This is consistent between the 64bit and 32bit versions of Max. OS is Win10 Professional, 64bit.
As far as I can tell, the 1060 is right on the fence, as in, it may be fine as a user / runtime card but maybe not as a developer card.
It's incredibly easy to get the drop to 45fps with the Vive object in Max, simply because Max is so non-optimized and non-optimizable (even in standalone builds).
One thing I've noticed for sure: any media playback will do better off an SSD. 64-bit tends to see certain issues disappear with media playback as well, meaning extra memory is useful. Again for media playback, CPU will matter for any codec besides hap.
For pure CGI computation and rendering, I suppose it still mainly comes down to the video card. Have you had a look at overclocking it?
That could make the difference for a card on the fence between user/developer appropriate.
I tried overclocking the card tonight, but it does not appear to be possible with this model.
I'm testing with the included help patch for the htcvive object, so I'm not playing any media. The SSD I'm using is quite fast, so I don't think that would be a bottleneck in any case.
From what I understand the 1060 is faster than a borderline card, but apparently not fast enough to run at 90fps in Max. FWIW, TechPowerUp GPU-Z reports only 25% GPU Load when running the patch -- I'm not sure if that's a true indicator, but it may indicate that Max is not fully utilizing the card. Not sure why that would be.
When I run Unigine Valley as a benchmark I see GPU load from 50-75% depending on quality settings. Wondering if there's something limiting the GPU load for Max, or perhaps the htcvive object itself?
Hi
I recently spent quite a bit of time hunting for fps in my Vive patch, and found a couple of tricks that helped with optimizing and staying as close as possible to the much-desired 90 fps... :
- htcvive.maxhelp : in this help patch, the bottleneck is without a doubt the 512 "rock boxes" : try reducing the number of cubes, or simply delete this jit.gl.multiple, and you'll get your 90 fps !
- displaylist : the displaylist attribute (available on almost all 3D objects in jitter) is very, VERY useful !
("Cache in displaylist flag (default = 0) This feature may be used to speed up rendering time by creating and storing a list of gl drawing commands on the graphics card. This will have no effect if matrixoutput turned on")
enabling displaylist really optimized my patches A LOT (especially when using big meshes / models)
(of course, don't use it if your object is constantly modified / redrawn : for example a jit.gl.mesh receiving a new vertex matrix on each frame..
but it works fine if the object is just moved / scaled / rotated)
for example : in htcvive.maxhelp, enabling displaylist on the "rock boxes" gridshape
jit.gl.gridshape vive_world @automatic 0 @shape cube @name cubik @dim 8 6 @displaylist 1
should give a much better framerate...
- CPU : most of the time, Max does not take advantage of the multiple cores of our recent CPUs (it's a different story in MSP / poly~ ...)
that means that if you have a quad-core CPU, Max will only use one of them, which is why you probably won't reach more than 25% CPU load.
Hyper-threading : enabled by default on most Intel CPUs :
"For each processor core that is physically present, the operating system addresses two virtual (logical) cores and shares the workload between them when possible"
for example : I have a quad-core i7-6700K : Windows "sees" 8 cores (8 threads) : Max (when running only a pure Jitter patch) can only use 12.5 % of my total CPU resources !
I tried disabling Hyper-threading in my BIOS settings and ran some FPS tests : it's better !
I can't really tell "how much better"... but for sure, I got a better framerate...
(not twice as much ! but sometimes, only a couple of FPS makes the difference and lets you run a patch @ 90 fps instead of 45 ...)
and Max is now able to use 25 % of my CPU.
... in order to use the remaining 75 % of CPU :
split processes as much as possible between several instances of Max / or standalone.
If anyone has other advice in the same vein, it's very welcome !
Mathieu
Hi Mathieu -
I'll look into the @displaylist attribute, that's interesting. Frankly I don't think a scene with 512 cubes should trouble this card, but I'll check to see if removing or using the displaylist helps.
Regarding CPU load, I'm very familiar with these techniques. I was referring to GPU load in my post.
It's interesting to hear that disabling hyperthreading has resulted in higher fps for you. Not sure why that would be, but perhaps there's overhead in maintaining the virtual cores. I suppose choosing this would depend on what other processes you have running along with Max.
Best, Jesse
The hyperthreading fix may or may not be due to saving the trouble of making bogus requests between CPU cores, given that Max can't make use of multiple cores. Disabling it may ensure that Max claims the maximum allocation of a single core at process start-up, likely the core that is most underused at that time.
Mathieu's comments on CPU may still apply in this case - there is less of a guarantee in these Max patches that the bulk of the work is being done on the GPU; it all depends on which version of OpenGL Max is making use of. There is all of the control-level code arbitrating the various jit.gl objects, and it will potentially be a bottleneck to the work which is actually done on the GPU, whereas an optimized C++ project in pure, recent OpenGL will have very little of this.
Graham had a few tips when I asked a similar question perhaps one page back in the thread, see if you can find it.
If you strip down even the example patch to the bare minimum, a totally empty scene perhaps, maybe that will help to pinpoint the exact addition to the scene which is causing the frame rate drop. Doing this would be a great help as it can be hard to find these issues if Graham and I are both developing on machines which do not tend to drop to 45fps - I'll have another look today and see if I can do something small that triggers the drop.
Hi,
I tried to get the Max_Worldmaking_Package to run but Max throws me an "Error 126 loading external htcvive". I guess there is a mismatched (or missing?) openvr_api.dll. I tried several combinations of 32bit and 64bit versions and even downloaded the version from https://github.com/ValveSoftware/openvr but no luck.
My SteamVR version is from Oct 11, version 1476136918 - if this is of any relevance.
cheers, marius.
Hey All,
camera mode for the vive object depends on @projection_mode frustum but this ignores near and far clipping. I have a particularly LARGE scene and this is causing objects that are far to be cut off or totally disappear. Any tips for giving my scene a deeper depth than what is provided through frustum?
near and far clipping must be set using htcvive's near_clip and far_clip attributes.
apparently, the message "configure" must then be sent to htcvive in order to apply the new clipping...
seems quite unusual... maybe it's a bug ? Graham ?
@MARIUS.SCHEBELLA: The [htcvive] object is currently built against the OpenVR 1.0.2 SDK, I guess there could be some breaking changes with the latest driver, but I don't see anything listed in https://github.com/ValveSoftware/openvr/releases nor any related issues on https://github.com/ValveSoftware/openvr so I doubt it. The openvr_api.dll is in the worldmaking package's support folder, and should be picked up automatically by Max if the package is placed in My Documents/Max 7/Packages -- but that may depend on using Max 7.2.5 or later for it to work, I'm not sure. If you already are, could you try installing the VS2015 runtime libraries (they're free) from https://www.microsoft.com/en-us/download/details.aspx?id=48145, restart, and tell me if that fixes it? If so then there's something I've set wrong in the VS project and I should fix that.
@THOMAS JOHN MARTINEZ / @MATHIEU CHAMAGNE: Yeah that looks like a bug, should be fixable by triggering configure whenever those attributes are set. I've added the issue to https://github.com/worldmaking/Max_Worldmaking_Package/issues/15. Hopefully get some time this week to look at outstanding issues. In the meantime, as you say, sending the 'configure' message should be enough.
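For the curious, the fix inside the external would look something like this -- just a sketch using the standard Max SDK attribute accessors, with hypothetical names (t_htcvive, htcvive_configure), not the actual package source:

```c++
// Hypothetical sketch: give near_clip a custom setter that re-runs configure.
// Assumes the standard Max SDK (ext.h / ext_obex.h); type and function names are placeholders.
t_max_err htcvive_nearclip_set(t_htcvive *x, t_object *attr, long argc, t_atom *argv)
{
    if (argc && argv) {
        x->near_clip = atom_getfloat(argv);
        htcvive_configure(x);   // re-derive the projection matrices with the new clip plane
    }
    return MAX_ERR_NONE;
}

// in the class setup:
// CLASS_ATTR_FLOAT(c, "near_clip", 0, t_htcvive, near_clip);
// CLASS_ATTR_ACCESSORS(c, "near_clip", NULL, (method)htcvive_nearclip_set);
```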
Hi Graham and Mathieu that works for me. Thanks!
Hi Graham, thanks for your reply. I am using the 64bit version of Max 7.3.1 and installed the Microsoft Visual C++ 2015 Redistributable (x64) package. Still the same error. Is it possible to get more information from Max - like which dll is causing the error or additional dependencies?
Any tips on rendering vive_world to a secondary monitor so people in the installation can see what the VR headset is seeing -- without two cameras, just one? I created another rendering context and window and can get the scene on a videoplane, but only by running through a matrix object, which decreases performance (and it's still two cameras). Does anyone have an example of this?
The easier and cheaper solution is to display only half of the picture in a square window, using @tex_scale_x 2 on the jit.gl.videoplane used to monitor the mirror texture :
and if you don't like a square window (or don't have a square screen !), you can use a 3rd jit.gl.camera that you can place wherever you like (or copy the position & orientation of the HMD)
with this solution, you can tweak lens angle, camera position, add shaders / slabs before displaying to external monitor, ... :
Mathieu I'm using the second patch you attached. Perfect. It was easy to get lost in node land but you sorted me out.
Attached is a version that allows you to fullscreen this "audience" display. The normal fullscreen message to the window object was messing up the rendering and stalling max. This takes advantage of the "pos" message. Works on a 1080p display or projector so adjust to fit your setup - but this should be pretty standard.
Thanks again.
Hi Thomas, Mathieu,
The glitch with fullscreen is because on Windows going into fullscreen re-creates the OpenGL context, so the Vive's internal textures become invalid. This can be fixed by sending a "connect" message to the [htcvive] object after entering or exiting fullscreen.
For showing only a single eye on the desktop display, the best thing is to re-scale the videoplane to fit the window. Adding another camera is probably best avoided if possible, as it means rendering your scene yet another time (using up 33% of your rendering time), increasing the chance of getting a lower frame rate.
Both of these are added to the patcher below:
@Marius,
I've just pushed a new build of the package which might fix your troubles -- could you try downloading from this link (the development version): https://github.com/worldmaking/Max_Worldmaking_Package/tree/devel
Thanks!
By the way, this might be a handy tip for anyone struggling with 45fps performance on the Vive. It's not a solution, but it helps. In the latest SteamVR beta (instructions on how to get it in the link below) there's a new interleaved reprojection method that does a lot to smooth the experience when the scene can't render at 90fps. After installing the beta, pop open the SteamVR settings and, under Performance, enable "Allow interleaved reprojection".
https://www.reddit.com/r/Vive/comments/4kk9he/how_to_download_beta_steam_vr/
Hey Graham
Is there any support for haptic feedback from the controls? Would be awesome to have some vibration for collisions!
Not yet, but I've ticketed that and will see if I can add it. Shouldn't be too hard.
And indeed it wasn't -- 'vibrate ' message now added to htcvive :-)
oooh baby
thanks Graham that was fast will try it out in the next couple of days
Pushed a few updates to [htcvive] too: in addition to controller vibrations, now there's support for getting the camera feed, and the controllers output tracking data in world space as well as tracking space (so they properly follow you as you move around), with a navigation example.
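Under the hood the 'vibrate' message presumably boils down to OpenVR's haptic pulse call, roughly like this (a sketch only; the actual message arguments in [htcvive] may differ):

```c++
#include <openvr.h>

// Sketch: map a 0..1 strength onto OpenVR's haptic pulse, which is capped at
// roughly 3999 microseconds per call (assumption about how [htcvive] wraps this).
void vibrate_controller(vr::IVRSystem *hmd, vr::TrackedDeviceIndex_t device, float strength)
{
    unsigned short usec = (unsigned short)(strength * 3999.f);
    hmd->TriggerHapticPulse(device, 0 /* axis id */, usec);
}
```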
Graham
many thanks for this update and for your last messages !
everything is smoother with the interleaved reprojection trick,
getting the HMD camera as a gl texture is much more efficient than using jit.grab
and the new controller output in tracking/world space is definitely VERY useful ! (I had to 'emulate' this feature with complicated and ugly combinations of anim.nodes and lots of localtoworld messages... it worked for me, but I never took the time to share it and ask for a better solution... now it's much clearer and more efficient ! thanks, I can clean up my patches :)
+ the vibrating controllers are quite fun to use !!
a small FR : would it be possible to get the battery state of the controllers ? (so I could have a notification in my patch / installation saying it's time to plug in the controller to charge it ...)
many thanks !
Mathieu
Cheers. Added a ticket for battery state to the github. (In future, everyone please feel free to add issues/tickets there!)
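For reference, OpenVR already exposes a battery property, so the eventual implementation should be roughly along these lines (a sketch, not committed code):

```c++
#include <openvr.h>

// Sketch: read a controller's battery level (0..1) from an OpenVR device property.
float get_battery_fraction(vr::IVRSystem *hmd, vr::TrackedDeviceIndex_t device)
{
    vr::ETrackedPropertyError err = vr::TrackedProp_Success;
    float pct = hmd->GetFloatTrackedDeviceProperty(
        device, vr::Prop_DeviceBatteryPercentage_Float, &err);
    return (err == vr::TrackedProp_Success) ? pct : -1.f;   // -1 if the device doesn't report it
}
```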
Unless I misunderstood something, I think we're missing one more small thing in the new navigation and tracked controller data : HMD position & quat
for the controllers we now have :
position + tracked_position + quat + tracked_quat
but from the 5th outlet (HMD) we only have tracked_position and tracked_quat
so when using the (very cool !) navigation-example patch , the camera is driven by jit.anim.node vive @name nav
actually, to get the world coordinates of the HMD, we need to combine position and quat from jit.anim.node vive @name nav with tracked_position and tracked_quat
(adding .anim.node.position + tracked_position, and multiplying anim.node.quat * tracked_quat using jit.quat)
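in code terms, the full composition looks something like this (just a sketch of the math, not the package's code; note that strictly the tracked position should also be rotated by the nav quat before adding, which is what the localtoworld message effectively does) :

```c++
// Sketch of composing the nav transform with the tracked pose: world = nav * tracked.
// If the nav node is never rotated, this reduces to the simple addition described above.
struct Vec3 { float x, y, z; };
struct Quat { float x, y, z, w; };

Quat qmul(Quat a, Quat b) {            // Hamilton product, like jit.quat's multiply
    return { a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
             a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
             a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w,
             a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z };
}

Vec3 qrotate(Quat q, Vec3 v) {         // rotate v by unit quat q: q * (v,0) * conj(q)
    Quat p  = { v.x, v.y, v.z, 0.f };
    Quat qi = { -q.x, -q.y, -q.z, q.w };
    Quat r  = qmul(qmul(q, p), qi);
    return { r.x, r.y, r.z };
}

// HMD pose in world space:
// world_quat = qmul(nav_quat, tracked_quat);
// world_pos  = nav_pos + qrotate(nav_quat, tracked_pos);
```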
another method would be to get the camera positions from the 2nd and 3rd outlets (left and right cameras)
HMD.position = (left.eye.position + right.eye.position) / 2
HMD.quat = left.eye.quat
but having HMD position and quat coming from the 5th outlet would be much more straightforward :-)
If you agree, could I add this as a FR on github ?
thanks
Mathieu
Hi,
Took me a moment to figure out what you were asking, but yes, this makes sense. The 'quat' and 'position' messages have now been added to the 5th outlet, and respect navigation in the same way as the controllers. (no need for FR!)
Other thoughts:
- Now that this object has a lot of outlets, and messages to route, I'm wondering if it doesn't make sense to break up the object into several sub-objects, e.g. htcvive.controller, htcvive.hmd, htcvive.render, htcvive.camera etc. What do you all think?
- I'm also wondering if I might be able to merge a lot of the htcvive/oculusrift features into a generic hmd object that can support both...
Graham
Thanks Graham for this new update !
(HMD position & quat + controllers battery : everything works fine ! and my navigation patch is now much cleaner :-)
Breaking apart the external : I'm not convinced that it will bring any significant improvement to our patches...
as it is now : I guess that we all do the same thing : connecting the different outputs (HMD / controller positions & quats, camera texture, ...) and inputs to other Max patches with standard named send / receive pairs...
for me, it works fine like this, and I can't see the benefit of doing the same thing using dedicated externals.
but maybe I'm wrong ? could you imagine a scenario where it could make a real difference ?
of course, this would make patches cleaner and easier to share... but is it really worth the time & energy you would spend on it ?
I can think of an intermediate and maybe simpler solution :
providing a patch in the Max_Worldmaking_package with ready-to-use named send and receive objects connected to all inlets and outlets...
a kind of wrapper.
maybe including a jit.gl.render context // or jit.world ?
(but no stone donuts or cloudy sky ! :-)
So this patch could be the starting point of any VR project.
it could be instantiated as an abstraction, and all parameters and data could be set and retrieved with send/receive (+ with inlets / outlets, and why not with attributes as well...)
that would allow providing different additional patches demonstrating various techniques separately
(I mean : not including everything in the .maxhelp , but as distinct patches to open or instantiate like plugins :
- navigation
- using controllers
- adding a leap motion
- play with physics
- ...
Well, that's my modular way of thinking and patching... maybe not the universal / ideal solution !
I think it allows the user to quickly test different things without leaving their main development patcher (and without reopening the main htcvive.maxhelp all the time, potentially leading to context name conflicts, crashes, ...)
I could post an example / draft, if anyone thinks it's a good idea...
About merging htcvive/oculusrift :
sure, 1 external to rule them all is a great idea !
but will the Oculus Touch and Vive controllers have the same number of outputs, and ranges, and names, ... ?
this will probably imply some sort of naming convention and range normalizing...
and more generally : does it make sense to open a patch made, for example, for a room-scale setup (HTC Vive) with another HMD that provides "only" seated or standing VR ?...
that patch will need some adaptation, for sure... so renaming one external shouldn't be the most complicated thing to do ;-)
You're right, a basic template would be a great thing to have. Or a couple really, e.g. for walking vs. flying type navigation. Simple enough to add a /templates folder to the package and drop them in there, then they'll appear in File > New From Template. Actually that would be pretty awesome. A draft would be great, thanks!
Actually dedicated externals aren't that much work, but I guess the advantage of removing a couple of send/receive and route objects isn't that much either. I'll punt on that for now.
Main benefit of a unified [hmd] object is that patchers don't need much editing -- including the templates. The basic set of properties of Rift/Vive are similar enough at the moment, and OSVR is converging that way too; the seated/standing distinction doesn't seem to matter much in terms of the external's interface. There are just some extra features available for each (e.g. camera & chaperone for vive), and some differences in the buttons on the hand controllers. That said, the [htcvive] object should just work with the Oculus already, since it's just using Openvr/SteamVR. Maybe I should rename it [openvr] or [steamvr].
Hey everyone,
Does anyone have Oculus Touch controllers? I've just committed code updates to the package at https://github.com/worldmaking/Max_Worldmaking_Package to add support for the oculus touch controllers, but I don't have access to the hardware to test it on right now.
Cheers,
Graham
Hi,
Could that change things for Mac Max users and Oculus?
https://cindori.org/vrdesktop/
It would be great if some owner of the required Oculus (Oculus Rift DK2) could report some tests!
Cheers
matteo
Hi Graham and Vive users
would you consider adding support for the brand new Vive trackers ?
(I'll receive a couple of them soon... and I imagine a world of exciting new things to do with these new toys in Max !!)
thanks
Mathieu
Mathieu - where do you order these from? If I can get my hands on some I'd happily add support. They'd be great as references for 3D audio positioning. PS there is a project called Envelop for Live and I'm working with them on a standalone version for Max, it makes good use of the HOA library (higher order ambisonics).
It would be great to deeply connect these projects together for a single 3D audiovisual environment.
They're on backorder currently.
Hi
yes, I ordered trackers at this address https://www.vive.com/us/vive-tracker-for-developer/
I received my 3 trackers yesterday. They look great, very small and light, and they seem to work fine !
(but I couldn't do anything exciting with them except testing them in SteamVR ...)
One point : they appear to be very sensitive to IR light (apparently much more than the Vive controllers) : the IR emitted by my Leap Motion attached to the Vive HMD makes them dance !
but with the Leap disconnected (or oriented in another direction), the trackers do work great.
Thanks KCOUL for pointing out this Envelop project ; I didn't know about it, and it sounds very interesting !
I'm actually doing something quite similar, but using the OculusSpatializer VST plugin in Live, and positioning each source with data coming from my VR world in Max, transformed to local coordinates relative to the HMD position...
and I used my 2 Vive controllers attached to wireless headphones for head tracking (in order to add 2 more listeners to this virtual audio scene..). The Vive trackers will be much more suitable for this job !
Alright, I've ordered one as for me it would be useful for 3D audio calibration. We'll see when it arrives, being on back order and all.
Envelop right now is strongly coupled to Ableton Live as being the sound source/client, so I am working on getting the architecture a bit more modular so it will fit other use cases like purely Max-based scenarios.
Then, between that project and this one we might have a great game concept prototyping environment before work goes on in Unreal or Unity engine, etc. Or for audiovisual media requiring only realtime control and no backing sequencer, it could be really great too.
Hi Mathieu,
My tracker arrived and I have updated the project to the latest SDK of OpenVR to be sure that the API includes support for the tracker. I am having an issue "Error 126" on one of my development workstations so I will try the other one soon.
Graham hopefully the commit is fine for you - I may have updated the MSBuild toolset to VS2017 inadvertently.
cool, great news !
My 3 Vive trackers can't wait to join the party !
please let me know if there's anything I can do to help you.
Thanks
How's your C++ programming? ;)
I got further remembering I needed to also update the openvr dlls in the support folder, which I've pushed to devel. I'm getting a "submit error" on the VRCompositor Submit function, which has something to do with one of the breaking changes they made in OpenVR SDK 1.0.5. I am trying to acquire the OpenGL texture with the new vr::TextureType_OpenGL but am getting some kind of error on Submit. If I could see which error it is I could pursue the problem, but I'm having real trouble attaching the debugger to Max, I forget if there is a special way to do it.
Once this gets cleared up it should be easy enough to add support for the trackers, if they are connected the data is probably just streaming off them to the API so we can just duplicate the way we get data from the controllers and do the same for the trackers. I wonder how many simultaneous trackers are supported.
ok... for now I can only offer encouragement + thankfulness !
and to reply to your last question : "The maximun number of tracking objects can be adopted is 11 pcs of Vive Tracker plus 2 pcs of Vive controller."
(1 USB dongle per tracker is required ; ...and a bunch of USB hubs !)
Wow 1 USB dongle per tracker! I found a very informative guidelines document:
https://dl.vive.com/Tracker/Guideline/HTC_Vive_Tracker_Developer_Guidelines_v1.3.pdf
Just to be clear we are going to stick with Use Case 3 for now! Simply reporting data output by the tracker itself, although it would be interesting to see if anyone could develop an accessory that could communicate with Max as well. That could have some really cool applications.
It's very informative to see that they've put a limit at 11 trackers. k_unMaxTrackedDeviceCount = 16 so you would have 11 trackers + 2 controllers + 1 HMD = 14. I assume the last two are the lighthouses?
Either way, the good news is that I piggybacked single tracker support onto the HMD's output stream... fairly easily once I solved my strange Compositor issue.
That issue was very much an edge case: I have been setting up a development station on an Alienware laptop with a Graphics Amplifier so I can use a desktop-grade GPU, but if Max opens on the laptop's built-in display, the texture is associated with that display rather than the desktop GPU's adapter, resulting in VRCompositorError_TextureIsOnWrongDevice.
I added a switch on the error codes the Submit() function generates, to give better clues as to what's going on behind the scenes when testing texture submission.
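Something along these lines -- a simplified sketch of the idea, not the exact code I committed:

```c++
#include <openvr.h>

// Sketch: report IVRCompositor::Submit() errors by name so failures are visible in Max.
const char *submit_error_name(vr::EVRCompositorError err)
{
    switch (err) {
        case vr::VRCompositorError_None:                          return "none";
        case vr::VRCompositorError_DoNotHaveFocus:                return "do not have focus";
        case vr::VRCompositorError_TextureIsOnWrongDevice:        return "texture is on wrong device";
        case vr::VRCompositorError_InvalidTexture:                return "invalid texture";
        case vr::VRCompositorError_TextureUsesUnsupportedFormat:  return "unsupported texture format";
        case vr::VRCompositorError_SharedTexturesNotSupported:    return "shared textures not supported";
        default:                                                  return "other compositor error";
    }
}

// usage, roughly:
// vr::Texture_t tex = { (void *)(uintptr_t)glTextureId, vr::TextureType_OpenGL, vr::ColorSpace_Gamma };
// vr::EVRCompositorError err = vr::VRCompositor()->Submit(vr::Eye_Left, &tex);
// if (err != vr::VRCompositorError_None) post("vr submit: %s", submit_error_name(err));
```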
For multiple tracker support, I can see how it should probably be done - the trackers should get their own outlet from the vive object, and I should pass a unique ID (I'm going for the serial number for now) so that the data flow of multiple trackers can be routed individually. This is the only solution I can think of right now that works reliably in all cases.
So please verify if you also can get data off a single tracker.
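For reference, the OpenVR side of spotting trackers and reading their serial numbers looks roughly like this (a sketch of the approach, not the committed code):

```c++
#include <openvr.h>
#include <string>
#include <vector>

// Sketch: enumerate connected devices, keep only generic trackers,
// and collect their serial numbers (e.g. to prepend to each tracker's output for routing).
std::vector<std::string> tracker_serials(vr::IVRSystem *hmd)
{
    std::vector<std::string> serials;
    for (vr::TrackedDeviceIndex_t i = 0; i < vr::k_unMaxTrackedDeviceCount; ++i) {
        if (!hmd->IsTrackedDeviceConnected(i)) continue;
        if (hmd->GetTrackedDeviceClass(i) != vr::TrackedDeviceClass_GenericTracker) continue;

        char serial[256];   // serial numbers are short strings
        hmd->GetStringTrackedDeviceProperty(i, vr::Prop_SerialNumber_String,
                                            serial, sizeof(serial));
        serials.push_back(serial);
    }
    return serials;
}
```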
Actually good news! It's not the most elegant solution, but I have found a way for now where you should be able to use multiple Vive trackers right away. See the devel branch's latest help file - I am still piggybacking my single tracker onto the HMD's output stream, but now I am recognizing my tracker's unique serial number and listening for it.
You should be able to take the "doesn't match anything" output from the route object - capture it, observe your own serial numbers, and then append them into additional Route arguments to be able to filter your 3 trackers' data streams from one another. In the end you would have 12 additional Route arguments - 4 for each of the 3 serial numbers you observe.
I want to figure out a way to make this a zero-config solution, but I haven't seen a capability in Max to do things like route data when the header starts with a known prefix but has a unique suffix that isn't known until runtime.
A thousand thanks ! it works very nicely ; it's just great to get my trackers' data in Max !!
using the trackers' unique ids is fine for me (and I think it's really more useful than the way it works with the controllers : first ON = id 1 = right hand ...)
of course, a dedicated outlet, with a message to get a list of current tracker ids, would probably be more user friendly... but for now, I'm really happy with it, there are so many cool things to test with this new setup :-)
thanks, thanks, and thanks again !
all the best
Mathieu
Hey everyone, in case you didn't already hear about it, the [oculusrift] and [htcvive] objects have been merged into a new [vr] object, available in the Package Manager now. Cory wrote about it here:
Most of the functionality is carried over, hopefully the rest will follow in the next couple of weeks, and before long there should be several new features, including a binding to the Oculus Audio SDK :-)
That's cool! I'm also interested in seeing about implementing support for the Valve Audio C API, to get equivalent or better audio in conjunction with the Vive.
https://valvesoftware.github.io/steam-audio/
I'm still working on some standalone Max widgets for the Envelop for Live project which will likely be done much sooner, so there will be lots of good audio options!
Aha! I've been looking at that too, as well as the Oculus Audio SDK.
I had linking trouble with Oculus Audio, waiting for some feedback from them on that, but I did make a little progress with Steam Audio. So far all I did was pan a single source in HRTF. (Weird thing is that I get zipper noise as it moves, not sure if that's a problem with their SDK or how I'm using it.) But there's a lot of interesting stuff in there to explore. Would be great if you can join in when you can.
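The usual fix for that kind of zipper noise is block-level crossfading (the temporal interpolation mentioned): render the block with both the old and the new panning state and fade between them. A generic sketch of the idea, nothing to do with Steam Audio's actual API:

```c++
#include <cstddef>

// Generic de-zippering sketch: crossfade from the block rendered with the previous
// pan/HRTF state to the block rendered with the new state over n samples.
void crossfade_block(const float *oldRender, const float *newRender, float *out, size_t n)
{
    for (size_t i = 0; i < n; ++i) {
        float t = (float)i / (float)n;          // 0 -> 1 ramp across the block
        out[i] = (1.f - t) * oldRender[i] + t * newRender[i];
    }
}
```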
Hey all - I'm incredibly excited for this! I've spent the last year or so hacking Unity and Max together via OSC, so this looks like it's about to be a major step forwards.
Pulled everything down yesterday and started getting set up with an HTC Vive. I'm connecting OK + receiving head-tracking in the help patch, but having trouble rendering to the HMD itself. Is there an obvious fix that I'm missing, or would anyone be able to point me in the right direction? thanks a ton!
Hi Matthew,
This error is coming from the SteamVR driver; my guess is that the Vive is not using the same GPU driver as Jitter -- which would be a problem for sharing the textures generated in Jitter to the display on the Vive.
Do you have more than one GPU? Is the desktop display plugged into a different GPU than your Vive? Or, maybe in the Nvidia control panel (if you're using nvidia) you might need to configure it to tell Max to use maximum performance (to prevent it using the CPU renderer)?
What OS version, GPU, Max version etc. do you have? I saw some people had this error using Windows 7 or certain GPU driver versions. E.g. here they recommend upgrading to win10:
Graham
Thanks Graham!
You were spot on re: Nvidia control panel - switched it to use the Nvidia card with Max and it works like a charm. Very excited to dive in, and will keep my ear to the ground re: Oculus Spatializer integration. Will chime in with any notes from the field if I can contribute anything helpful, too.
mg
Good to hear. Adding notes to the README to that effect.
BTW we now have a vr@cycling74.com mailing list for testing and development, if anybody wants to help testing please let me know (you can PM me your email address).
Thanks!
Hey Graham,
Finally got some bandwidth again so happy to help out. I don't think these forums have PM capability anymore (didn't they use to?) but I'd like to join the mailing list.
I'm looking to modify some interfaces for the Envelop for Live project so that it's more modularized and can be used without Ableton Live. VR-based interfaces where you could move a graphical representation of a 3D sound source around in VR to reposition the sound (and hopefully one day reorient its cone of sound projection) are extremely interesting to me.
I'd like to make an audio raytracer for reorientable 3D sound source experimentation. Audio dealbreakers in immersive games I've played have often been when getting close to a sound source oriented directly away from you, and getting a loud direct signal when it should be, at best, the early reflection(s).
As for the Steam audio API, I'm particularly interested in continuing a project I worked on in uni, using depth scanning (i.e. Kinect) to reconstruct a model of a user's head and upper torso, estimate key measurements, plug the values into this project automatically:
https://cycling74.com/tools/fft-based-binaural-panner/
Then retrieve the closest matching HRTF, and use it with the Steam API and see if it's more accurate for the user than a generic HRTF (maybe set up some kind of a blind test where the user doesn't know which is which and tries to evaluate which is more realistic for them).
I had previously been evaluating without headtracking at all so this would be a great step forward for the project.
I don't think we'll get to the point where we can tailor completely personalized HRTF's into 3D audio solutions anytime soon, but I have faith that a good closest-match finding process could yield promising results.
Ah! glad to know I wasn't alone looking for the DM option. Graham - would love to join the mailing list as well ( matthew.d.gantt@gmail.com )
@Matthew Gantt -- will do.
@Kcoul, can you let me know your email address -- I can't find it. graham@cycling74.com.
A VR editor for spatial sound placement (and movement?) sounds like a great idea, and I agree modeling radiation patterns, even with a simple cone, makes a huge difference. SPAT did that well IIRC.
The Steam Audio API has a raytracer model built in, but it's designed mainly for baking reverberation and occlusion. In fact from the API, their environmental modeling looks pretty interesting. But I've not got as far as that yet, I'm still working through the basic mono and ambisonic to HRTF paths. Honestly I'm finding a few bugs and oddities in the Steam Audio library as I go, I'm not sure they've got the whole API tested yet. For example, the ambisonic rotator has no effect right now (but at least at 1st order that's trivial to implement). The mono panner also has zipper noise when moving fast, I think some temporal interpolation is needed. There's no near-field effect yet. etc.
https://github.com/ValveSoftware/steam-audio/issues
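For what it's worth, here's roughly what that trivial 1st-order rotation looks like -- a sketch assuming ACN channel ordering (W, Y, Z, X), not Steam Audio's code: W passes through unchanged, and the three 1st-order channels rotate exactly like a 3D vector.

```c++
#include <cstddef>

struct Mat3 { float m[3][3]; };   // row-major rotation matrix for the sound field

// Sketch: rotate a 1st-order ambisonic (B-format) signal, ACN ordering: 0=W, 1=Y, 2=Z, 3=X.
void rotate_foa(const float *in[4], float *out[4], size_t n, const Mat3 &R)
{
    for (size_t i = 0; i < n; ++i) {
        float W = in[0][i], Y = in[1][i], Z = in[2][i], X = in[3][i];
        out[0][i] = W;                                           // omni component is invariant
        out[3][i] = R.m[0][0]*X + R.m[0][1]*Y + R.m[0][2]*Z;     // X'
        out[1][i] = R.m[1][0]*X + R.m[1][1]*Y + R.m[1][2]*Z;     // Y'
        out[2][i] = R.m[2][0]*X + R.m[2][1]*Y + R.m[2][2]*Z;     // Z'
    }
}
```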
I'm also looking at the Oculus Audio SDK, which like the Steam one is completely independent of the HMD and driver. The Oculus one is simpler, but so far I'm having no joy even getting it to link on OSX or Windows, I think there's something a bit odd in how they built them, or else they're getting corrupted in the download. Waiting to hear back.
Your idea of using a Kinect scan to find the closest matching HRTF is interesting -- I can imagine that having some real impact if it works well. Straightforward HCI type project :-)
Graham
Thanks, email sent! I did some work on the Kinect project in my CS degree on the most upstream part - capturing the depth map and estimating the measurements based on gradients etc. It is of course one of those things where it's easier to fine-tune the estimation model for a particular subject, but someone with a very differently shaped ear might cause trouble for the same model. Making it work well generally is where the difficulty really lies.
I suspected that Steam Audio would be a bit rough around the edges at this point, so wanted to start with Envelop which is based on the fairly mature HOA library. I think it works quite well at least in the tests I've done so far.
Just wanted to pop on quickly and say a MASSIVE thank you to Graham ! I plugged in my Oculus, and boom, it works in one go - absolutely amazing work. Time to try and get a raymarcher working !
Hi Graham,
Just checking in again about the VR package mailing list. Since the announcement of Live 10 and Max 8, I want to push as hard as possible to get my GestureLab package released in conjunction with or soon after the release of Live/Max. Hopefully I'll get my hands on the Live 10 beta.
I don't think there's much work needed to patch together support for Kinect and HTC Vive, but a significant amount of work will be needed to get native PlayStation Move controller / PS Eye support (I want to write native Max objects for them using PSMoveAPI https://github.com/thp/psmoveapi )
I was using PSMove.me up until now which requires passing through a PlayStation 3 as a server and is a much bigger barrier to adoption than plugging and playing into Max directly would be.
I decided to make the push as the tighter audio passthrough between Live and Max will greatly improve the Envelop for Live project which GestureLab aims to tie together with the VR package.
It would be so great to have a single robust environment for gesturally controlled 3D audio and graphics in VR
Also, I see now more than ever why there was the request to have Vive controllers working without the camera - unlike the Kinect and PS Move controllers, they are great for spatializing sound in full 3D with no blind spot. I'd love to get that working too.
Hey gang - Wanted to ask how everyone is dealing with spatial audio in conjunction with their JitterVR projects?
I've been having some success pairing object co-ordinates and head tracking with ambisonic panning + soundfield rotation using the HOA library and the included binaural encoding, but still feels super kludge-y.
Would be v curious + grateful to hear what approaches everyone else is using!
@Matthew I am working with another interface to the HOA library called Envelop for Live https://github.com/EnvelopSound/EnvelopForLive which was originally written with Ableton Live as the exclusive front end/controller but in no way needs to be. IMO for linear content like 360 video and possibly game trailers/matinees, a DAW front end makes sense, but for prototyping, interactive media arts, realtime AV synthesis etc it's an added component that doesn't need to be there.
It's nice that the way they've set it up helps separate the client from the server, and this is going to get a lot tighter with Ableton 10 and Max 8 thanks to the plugin~ and plugout~ flexible audio routing system.
So if those panner controls (currently M4L devices) are duplicated in pure Max (which is on my to-do list), it should give a nice interface for realtime control. Will just need to make some kind of small framework for managing instancing within Max, binding controls to sound sources etc.
Anyhow I think there will be a great benefit to working together to share components and develop a system that works for everyone's use cases, rather than trying to hack together ad hoc solutions for the various use cases. Hoping to get back on board and generate a lot of momentum; the news about Ableton 10 and Max 8 was what really motivated me, since until those feature sets arrive, the solution architecture I had in mind was *always* going to be kludge-y no matter what we did within our ability (in other words, components like the JACK Audio Connection Kit, Soundflower, etc. were always going to be a weak link).
@kcoul - thanks for the response!
Totally agree with all of the above. I'd been building off of Zach Berkowitz's MASI interface for the HOA library, which seems to take a similar approach to HOA as Envelop (minus the Ableton component).
Might have to dig back into Envelop now that the documentation is a bit more complete, but ditto that Ableton 10/Max 8 will hopefully bring some breakthroughs.
Will share any of my findings in the meanwhile, too!
Hey Gang - has anyone had any issues using jit.gl.material in conjunction with the VR package? I've been working on a few things, and it's been going beautifully, with the exception of trying to put materials on a mesh/gridshape - it works, but I notice I get a bit of 'glitch'/double vision as soon as I use the material - it doesn't seem to be a framerate issue or a problem with the trackers, but I can't seem to nail down what's causing the issue.
Would be super grateful for any perspectives or advice - thanks a ton!
Hey Matthew,
In general there was an issue with lights in sub-nodes, so I've learned to always put jit.gl.material into the root "vr" context rather than in "world" etc; same for textures, jit.gl.pix, etc. where it makes sense. Maybe that helps?
I still sometimes see flickering when adding jit.gl objects to the world context; in each case just editing the jit.gl box to re-instantiate it makes the flickering go away. Not sure what the cause is yet.
Graham
Hey Graham -
Happy to hear I'm not crazy - I've noted the 'flickering' issue as well, but re-instantiating (or sometimes just adding arguments?) seems to fix it up. I actually had some luck after all with the gl.material - turns out the GUI interface for the object was the culprit - will try playing with the 'vr' context vs 'world' context for any future issues - thanks a ton!
mg