Interfacing with USB device at sample rate?

    Oct 31 2013 | 1:06 pm
    I've written an external which interfaces with an Eigenharp over USB. It's working fine, but I want to improve it by getting more frequent updates.
    The interface is simple: in Max I send a bang to the external, and the external pushes out a list of the events which occurred in that time.
    Currently, I use a metro to do this.
    This is not ideal, since not only is this at best every 1ms, but also being on the normal Max thread means timing is not guaranteed.
    What I'd like to do is to get Max to poll the external on the audio thread, so that I can get the updates at sample rate.
    Any tips on how I can achieve this in the external and/or in Max itself? I've looked at the external SDK docs, including the section on threading, but I cannot really see anything appropriate. I'd really like some hints as to what to look at, or even some sample code :)
    note: the process of getting the updates is not costly in CPU (the actual USB comms is on a different thread), so it will not have a detrimental effect on audio, and for me 'responsiveness' of the device is a very high priority.
    Thanks in advance for any pointers. Mark

    • Oct 31 2013 | 2:44 pm
      You can't interact with a non-MSP external at audio rate; you should make an MSP external instead. In principle, you can retrieve data from the Eigenharp in the perform routine and output it as audio signals - if that makes sense for your data. If you output it as messages, you can't have a finer temporal resolution than the Max scheduler's.
      Keep in mind that there are things you should not do in the audio thread (i.e., in the perform routine), including locking threads and allocating memory - Nicolas Danet posted a link to a very good article about this. If retrieving your Eigenharp's data requires this kind of stuff, you will probably need to do otherwise.
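      A minimal sketch of the kind of lock-free handoff being described here - assuming a single USB reader thread and a single audio-thread consumer; the names (`event_ring`, `ring_push`, `ring_pop`) are illustrative, not Max SDK API:

```c
/* Hypothetical single-producer/single-consumer lock-free ring buffer:
 * the USB reader thread pushes events, and the audio-thread perform
 * routine pops them without ever locking or allocating. */
#include <stdatomic.h>
#include <stdint.h>

#define RING_SIZE 256               /* must be a power of two */
#define RING_MASK (RING_SIZE - 1)

typedef struct {
    int key;        /* which key was touched */
    int pressure;   /* 10-bit pressure value */
} key_event;

typedef struct {
    key_event buf[RING_SIZE];
    _Atomic unsigned head;  /* written only by producer (USB thread)   */
    _Atomic unsigned tail;  /* written only by consumer (audio thread) */
} event_ring;

/* Producer side: returns 0 if the ring is full (event dropped). */
static int ring_push(event_ring *r, key_event ev) {
    unsigned head = atomic_load_explicit(&r->head, memory_order_relaxed);
    unsigned tail = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (head - tail == RING_SIZE)
        return 0;                               /* full */
    r->buf[head & RING_MASK] = ev;
    atomic_store_explicit(&r->head, head + 1, memory_order_release);
    return 1;
}

/* Consumer side: returns 0 if the ring is empty. Never blocks. */
static int ring_pop(event_ring *r, key_event *out) {
    unsigned tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    unsigned head = atomic_load_explicit(&r->head, memory_order_acquire);
    if (head == tail)
        return 0;                               /* empty */
    *out = r->buf[tail & RING_MASK];
    atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
    return 1;
}
```

      The producer drops events when the ring is full rather than blocking, which is the usual trade-off for keeping the audio side wait-free.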
      hth aa
    • Oct 31 2013 | 3:42 pm
      Thanks... I couldn't find Nicolas Danet's link... though I wasn't quite sure what I was looking for / what to search for.
      ok, but I checked the MSP docs and it seems straightforward enough... I assume it's only the perform function that runs in the audio thread (i.e. you can have 'lengthy initialisation' in the new function). I could keep this efficient using flip-flops and the like to minimise locking etc. (I'm experienced in multithreaded programming, and assume Max/MSP just has the usual rules.)
      Looks like I could output a number of signals (4), representing key, pressure, roll, yaw etc. - that's cool.
      Is it only the perform method that runs in the high-priority thread, or is the dsp function also in a higher-priority thread? (I can't see at the moment the advantage of one over the other)... or is dsp only used for processing streams, whereas perform() can generate them?
      I guess this only 'buys' me anything, though, if I keep it in the audio domain - e.g. if at some point I shift down to, say, MIDI, then I will need to use snapshot~ to bring it back into the Max domain. Are multiple snapshots (say against my 4 signals, with the same interval) guaranteed to give you the signal values at the same sample time?
      Hmm... in two minds about this route.
      Is there a way to stay in the Max realm with more of a guarantee of timing - some kind of object with a higher scheduling priority? (My fear is basically that some other bit of a Max patch (or VST, AU, etc.) becomes overly greedy with time and starves my controller of processing time - and so I lose responsiveness.)
      Thanks again for the help. Mark
    • Oct 31 2013 | 6:05 pm
      Wow, lots of questions...
      - The dsp method is called in the main thread; its purpose is setting up the perform routine.
      - The perform routine is the only thing that can run in the audio thread; you can receive, process and generate audio-rate signals in it.
      - Multiple snapshot~ objects are definitely not guaranteed to act at the same sample time, especially (but not only) if you have Scheduler in Audio Interrupt off.
      - There is no proper way to schedule messages at higher priority than the scheduler thread's. You could hack something up by creating a custom high-priority messaging thread, but be aware that - although this may work somehow - you would be acting against the rules of the Max SDK and setting up something very unstable and likely to crash Max. I'd rather build long messages, with a lot of "samples" each.
      If you really need responsiveness, why not stick to audio rate as much as you can, and then at some point downsample your streams to Max's control rate in some clever way?
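      As a toy illustration of such 'clever' downsampling - a hypothetical `peak_decimate` helper (my own sketch, not any Max API) that keeps the maximum of each block, so a brief pressure spike survives the drop to control rate, where naive every-Nth-sample decimation could miss it entirely:

```c
/* Peak-hold decimation: reduce n samples to n/factor samples,
 * keeping the maximum of each block of `factor` samples so short
 * transients are not lost. Assumes n is a multiple of factor. */
static void peak_decimate(const double *in, double *out, int n, int factor) {
    for (int i = 0; i < n / factor; i++) {
        double peak = in[i * factor];
        for (int j = 1; j < factor; j++) {
            double v = in[i * factor + j];
            if (v > peak)
                peak = v;
        }
        out[i] = peak;
    }
}
```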
    • Oct 31 2013 | 9:31 pm
      that all makes sense :o)
      thanks for your help, it's saved me a lot of time going down blind alleys; definitely some interesting avenues to explore
      Thanks Mark
    • Nov 03 2013 | 6:37 pm
      Thanks, yeah I'm aware of the 'difficulties'... this is partly why I was asking about which Max/MSP functions are designed to be used within an audio thread.
      Thinking about it more, I've realized I may have difficulty multiplexing the data onto a reasonable number of signals. E.g. I have 140 keys, of which many may be pressed at a single time (let's say 10)... so I'd need to multiplex 10 pressure values (10-bit each, plus key info) onto one signal line in one sample. Yes, I could do it over multiple samples, but that then means any downsampling (or any other processing) further down the chain has to be more intelligent.
      anyway, I think I need to weigh up the complexity and the advantages a bit :o)
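      For illustration, one hypothetical packing scheme (my own sketch, not an established format): 8 bits of key number plus 10 bits of pressure is 18 bits, which fits losslessly in one signal sample (even a 32-bit float's 24-bit mantissa holds it exactly, and MSP signals are wider than that) - though, as noted, 10 simultaneous keys would still need 10 samples or 10 signal lines per frame:

```c
/* Hypothetical per-event packing: key number in bits 10..17,
 * 10-bit pressure in bits 0..9. The combined 18-bit integer is
 * representable exactly even in a 32-bit float mantissa (24 bits). */
#include <stdint.h>

static double pack_event(int key, int pressure) {
    return (double)(((uint32_t)key << 10) | ((uint32_t)pressure & 0x3FFu));
}

static void unpack_event(double sample, int *key, int *pressure) {
    uint32_t v = (uint32_t)sample;
    *key = (int)(v >> 10);
    *pressure = (int)(v & 0x3FFu);
}
```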
      thanks again for your help