Using GPUs with Max/MSP

pointydrip's icon

Does anyone have any experience using GPUs (graphics processing units) to process part of a Max/MSP program in tandem with the CPU in order to increase overall processing capacity?

I've read articles on it, but none that give any resources or how-tos (mostly just enthusiasts who have managed to allocate OS functions to strings of GPUs in order to break benchmarking records through overclocking).

Anthony Palomba's icon

This has always been a fantasy of mine as well: to be able to use the GPU as a DSP. I think there are some significant technical problems that have to be overcome. One is the overhead of sending the data to the GPU, then back to user mode, then down to the audio hardware. That round trip creates a lot of latency, killing any real-time benefit.
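
Just to put a rough number on that round trip, here is a minimal CUDA sketch (nothing to do with Max itself; the 64-sample vector size and the trivial gain kernel are assumptions for illustration) that times the copy to the card, a kernel launch, and the copy back:

```
// Minimal CUDA sketch: time the full round trip for one small signal vector.
// The 64-sample size and the trivial "gain" kernel are assumptions for illustration.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void gain(float *buf, int n, float g) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) buf[i] *= g;                       // trivial per-sample work
}

int main() {
    const int n = 64;                             // one hypothetical signal vector
    float host[n] = {0};
    float *dev;
    cudaMalloc((void **)&dev, n * sizeof(float));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);
    gain<<<1, n>>>(dev, n, 0.5f);
    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("round trip for %d samples: %.3f ms\n", n, ms);
    // For reference: at 44.1 kHz a 64-sample vector lasts about 1.45 ms,
    // so the two copies alone can eat a large share of the real-time budget.

    cudaFree(dev);
    return 0;
}
```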

pointydrip's icon

Yes, I wondered about that. It seems that however you approach it, there will always be some minimum amount of latency when dealing with real-time audio manipulation.

However, I wonder if there is a way to avoid returning to user mode, using the CPU mainly to traffic data to the GPU (or several GPUs) for calculation. I realize this rules out true real-time variability, but it might be possible to get pseudo-real-time variability: for example, if the GPU retained an allotted portion of a Max/MSP program in memory and any parameter changes were sent to it intermittently (the way MIDI does it, but at a larger scale).
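
To sketch what I mean (purely hypothetical CUDA, no connection to how Max actually works): the audio-rate process stays resident on the card, and the host only pushes a tiny parameter struct across the bus when something changes.

```
// Hypothetical sketch of "keep the process resident on the GPU, send only
// parameter changes". Not Max code; every name here is made up.
#include <cuda_runtime.h>

struct Params { float freq; float amp; };         // small "control rate" data
__constant__ Params d_params;                     // lives on the card between calls

__global__ void sine_block(float *out, int n, float sr) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = d_params.amp * sinf(6.2831853f * d_params.freq * i / sr);
}

int main() {
    const int n = 4096;                           // one render block, stays on the device
    float *d_out;
    cudaMalloc((void **)&d_out, n * sizeof(float));

    Params p = {440.0f, 0.5f};
    cudaMemcpyToSymbol(d_params, &p, sizeof(p));  // tiny, infrequent transfer
    sine_block<<<(n + 255) / 256, 256>>>(d_out, n, 44100.0f);   // audio-rate work on the card

    p.freq = 220.0f;                              // a "control change" arrives...
    cudaMemcpyToSymbol(d_params, &p, sizeof(p));  // ...so only a few bytes cross the bus
    sine_block<<<(n + 255) / 256, 256>>>(d_out, n, 44100.0f);

    cudaDeviceSynchronize();
    cudaFree(d_out);
    return 0;
}
```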

What if you could adapt the display output of a GPU to be sent to an external audio controller or soundcard?

The problem is obviously much more complex than this and I'm only beginning to scratch the surface, but to my mind, if a GPU can power the graphical complexity of today's video games, it can be harnessed for audio manipulation.

Please correct me where I'm wrong... I'm very committed to stirring up interest in the idea of expanding the capabilities of Max/MSP, as my projects frequently involve far too much processing for a CPU to handle.

Anthony Bisset's icon

Still, I'm sure a few examples of basic synthesis and processing on 1D matrices (or other matrix representations of audio) would encourage Max users to discover some novel methods. Only a few examples would be needed to get people started (one is sketched after this list), e.g.:

matrix amplitude modulation
matrix ring modulation
matrix delay
matrix flange/phase/chorus (some iterative feedback process)
matrix EQ
matrix compression
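
To take the ring modulation item as an example, here is a rough CUDA-style sketch of the cell-wise math on 1D float buffers (just an illustration with made-up names, not Jitter code):

```
// Rough sketch: ring modulation as a cell-wise product on 1D float buffers,
// the same math a matrix operator would apply per cell. Names and the
// 256-thread launch shape are arbitrary; error checking is omitted.
#include <cuda_runtime.h>

__global__ void ring_mod(const float *carrier, const float *modulator,
                         float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = carrier[i] * modulator[i];       // ring mod = sample-wise multiply
}

// Host side: upload both buffers, multiply on the card, read back.
void ring_mod_block(const float *car, const float *mod, float *out, int n) {
    float *d_car, *d_mod, *d_out;
    size_t bytes = n * sizeof(float);
    cudaMalloc((void **)&d_car, bytes);
    cudaMalloc((void **)&d_mod, bytes);
    cudaMalloc((void **)&d_out, bytes);
    cudaMemcpy(d_car, car, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_mod, mod, bytes, cudaMemcpyHostToDevice);
    ring_mod<<<(n + 255) / 256, 256>>>(d_car, d_mod, d_out, n);
    cudaMemcpy(out, d_out, bytes, cudaMemcpyDeviceToHost);
    cudaFree(d_car); cudaFree(d_mod); cudaFree(d_out);
}
```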

The concern about signal return times from the GPU is valid, but weighed against the power a GPU holds, it might be worth digging in now even if the results are not real-time. Many advanced synthesis techniques started out as offline, "wait to render" effects. Check out SonicWORX or TAU or Phymod.

Also, slow-time / offline rendering has some advantages that I'm surprised mastering engineers and post guys haven't pushed developers for, like adaptive compression leveraging full-song-length look-ahead. Or maybe this already exists in some corner of the world?
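
I'm not claiming this is how such a compressor would work, but as a drastically simplified sketch of the "whole file known in advance" idea (made-up names, using Thrust on top of CUDA): a first pass finds the global peak of the entire track, and only then is a gain applied, which no block-by-block real-time process can do.

```
// Drastically simplified "full look-ahead" sketch using CUDA/Thrust:
// pass 1 finds the global peak of the whole track, pass 2 applies one gain.
// (A real adaptive compressor would be far more involved; names are made up.)
#include <thrust/device_vector.h>
#include <thrust/extrema.h>
#include <thrust/transform.h>
#include <thrust/copy.h>
#include <math.h>
#include <vector>

struct AbsLess {
    __host__ __device__ bool operator()(float a, float b) const {
        return fabsf(a) < fabsf(b);
    }
};

struct Gain {
    float g;
    __host__ __device__ float operator()(float x) const { return x * g; }
};

void normalize_track(std::vector<float> &track, float target_peak = 0.9f) {
    thrust::device_vector<float> d(track.begin(), track.end());

    // Pass 1: global peak -- only possible because the whole track is visible.
    float peak = fabsf(*thrust::max_element(d.begin(), d.end(), AbsLess()));
    if (peak <= 0.0f) return;

    // Pass 2: apply a gain derived from that whole-file knowledge.
    thrust::transform(d.begin(), d.end(), d.begin(), Gain{target_peak / peak});
    thrust::copy(d.begin(), d.end(), track.begin());
}
```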

Anyone have any novel Jitter audio processing tricks or ideas?

mzed's icon

Quote: pointydrip wrote on Fri, 26 September 2008 07:44
----------------------------------------------------
> Does anyone have any experience using GPUs (graphics processing units) to process part of a Max/MSP program in tandem with the CPU in order to increase overall processing capacity?

----------------------------------------------------

As I understand it, it's kind of a pain to get data back from the GPU. In the current state of things, you'd lose a lot of efficiency at that point.

mz

yair reshef's icon

"cuda zone" (nvidia's cuda developer resource) shows several examples
of gpu being order of magnitude faster then cpu.
its already used for signal processing.
http://www.umiacs.umd.edu/~odonovan/Waspaa2007.pdf
http://koonlab.com/CUDA_RealFIR/CUDA%20Real%20FIR.html

As a Jitter user I can say readback is a problem, but it's passable.
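
For anyone who doesn't want to dig through those links, a bare-bones time-domain FIR kernel looks roughly like this (not the code from those pages, just a minimal sketch with simplified buffer handling):

```
// Bare-bones time-domain FIR: each thread computes one output sample.
// Simplified on purpose (no shared-memory tiling; samples before the start
// of the block are treated as zero).
__global__ void fir(const float *x, const float *h, float *y, int n, int taps) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float acc = 0.0f;
    for (int k = 0; k < taps; ++k) {
        int j = i - k;
        if (j >= 0) acc += h[k] * x[j];           // zero-padded history
    }
    y[i] = acc;
}
// Launched, for example, as: fir<<<(n + 255) / 256, 256>>>(d_x, d_h, d_y, n, taps);
```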

On Sat, Sep 27, 2008 at 3:52 AM, mzed wrote:
> As I understand it, it's kind of a pain to get data back from the GPU. In the current state of things, you'd lose a lot of efficiency at that point.

Thijs Koerselman's icon

On Fri, Sep 26, 2008 at 4:44 PM, Chris Maynard wrote:
> Does anyone have any experience using GPUs (graphics processing units) to process part of a Max/MSP program in tandem with the CPU in order to increase overall processing capacity?

The processor itself is not the problem, so once the readback issues have been sorted out I think we're in for some really great DSP developments. The processor used in the original UAD card was a graphics processor; its success kind of proves the potential of graphics processors for DSP applications, I guess.

There are audio shaders in the Jitter shader collection. I gave them a try a little while ago but didn't succeed; there's no documentation and no examples. I was kind of surprised that they're just sitting there, apparently without anybody using them.

Thijs


pointydrip's icon

Thanks... I thought it wouldn't be a bad idea just to get the ball rolling on the subject.

I apologize; this is somewhat more of a Jitter topic.

I'll post more as it comes to me...

Bill Canty's icon

I'm surprised no one's mentioned this yet:

"OpenCL
Another powerful Snow Leopard technology, OpenCL (Open Computing Language), makes it possible for developers to efficiently tap the vast gigaflops of computing power currently locked up in the graphics processing unit (GPU). With GPUs approaching processing speeds of a trillion operations per second, they’re capable of considerably more than just drawing pictures. OpenCL takes that power and redirects it for general-purpose computing."

Sounds like Max'd have to be upgraded to take advantage of it, though.

Cheers, Bill

pointydrip's icon

Cool, thanks for the tip. I'm definitely going to drop the bomb and go with a CrossFire-type system for my next setup...

yair reshef's icon

On Mon, Sep 29, 2008 at 5:12 AM, Chris Maynard wrote:
> Cool, thanks for the tip. I'm definitely going to drop the bomb and go with a CrossFire-type system for my next setup...
Wesley Smith's icon
t's icon

But is it possible to process 32-bit floating-point numbers (audio signal data) on the GPU? Joshua Kit Clayton said in one thread: "Once on the GPU, *everything* is processed as RGBA. There is no concept of UYVY on the GPU or in any shader except for ones custom programmed to work with that sort of data *masquerading* as RGBA." That was a UYVY/RGBA debate, but still: how does processing 32-bit floats on the GPU work?
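
To put the question in CUDA terms (just a sketch of the idea, not actual shader code): would four consecutive 32-bit samples simply ride along as the R, G, B, and A components of one cell, something like this?

```
// Rough CUDA analogy for "audio masquerading as RGBA": four consecutive
// 32-bit float samples packed into one 4-component cell and processed together.
__global__ void gain_rgba(float4 *cells, int ncells, float g) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < ncells) {
        float4 c = cells[i];                      // one "texel" = samples 4i .. 4i+3
        c.x *= g;  c.y *= g;  c.z *= g;  c.w *= g;
        cells[i] = c;
    }
}
// A 4096-sample buffer would be 1024 of these cells, e.g.:
// gain_rgba<<<(1024 + 255) / 256, 256>>>((float4 *)d_buf, 1024, 0.5f);
```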

christripledot's icon

I'm not sure who wrote the audio shaders in the Jitter examples, but if they're reading this, would it be possible to see an example patch of them in use? I've been trying to puzzle them out for ages now.

I suspect it involves: [jit.slab], [jit.gl.shader], [jit.catch~], [jit.release~], [jit.window] and [jit.gl.render]. Sadly, I haven't been able to find a way to make it work.

christripledot's icon

*cough*

Somebody wrote them!

Alessio Cazzaniga's icon