Creating objects for audio synthesis on GPU
I would like to synthesize and process audio signals using the GPU,
and potentially program new objects to do it. Would that be a realistic idea?
I know it has been done using GLSL, so I am wondering if it would be possible to create objects
(shaders) doing this: https://news.ycombinator.com/item?id=19470135
I do not know if the Max/MSP architecture would allow running this kind of code.
Does anyone have any hints about where to start? Or comments about this idea?
Thanks guys.
realistic, yes, but not efficient. take the mention of a 'single-cycle LUT' in the article linked above: a single-cycle LUT takes a similar amount of memory whether it's handled by the GPU or the CPU, but if the samples eventually need to be streamed into an audio thread handled by the CPU (because that's what actually feeds the audio driver), then it doesn't make much sense.
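to make that concrete, here's a minimal sketch (plain C, not Max SDK code; the table size and sample rate are just illustrative assumptions) of a single-cycle LUT oscillator on the CPU. there's so little arithmetic per sample that offloading it buys you nothing once you pay for the readback:

```c
#include <math.h>
#include <stdio.h>

#define TABLE_SIZE 2048      /* illustrative assumption */
#define SAMPLE_RATE 44100.0  /* illustrative assumption */

static double table[TABLE_SIZE];

/* fill the table with one cycle of a sine */
static void fill_table(void) {
    for (int i = 0; i < TABLE_SIZE; i++)
        table[i] = sin(2.0 * M_PI * i / TABLE_SIZE);
}

/* one output sample: a table read with linear interpolation,
   just a handful of adds and multiplies on the CPU */
static double lut_tick(double *phase, double freq) {
    double idx  = *phase * TABLE_SIZE;
    int    i0   = (int)idx;
    int    i1   = (i0 + 1) % TABLE_SIZE;
    double frac = idx - i0;
    double out  = table[i0] + frac * (table[i1] - table[i0]);
    *phase += freq / SAMPLE_RATE;
    if (*phase >= 1.0) *phase -= 1.0;
    return out;
}

int main(void) {
    fill_table();
    double phase = 0.0;
    for (int n = 0; n < 64; n++)  /* one signal vector at 440 Hz */
        printf("%f\n", lut_tick(&phase, 440.0));
    return 0;
}
```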
"Potentially program new objects to do it."
we're at the cusp of discovering what 'tensor processing' actually means. people are finally taking matrix processing seriously at the consumer level, thanks to companies like NVidia offering APIs that help people study all manner of applications on these ever-newer processing units.
but right now, people are still distracted by "AI"... 'tensor processing' is not just about "AI" or shader processing, nor even crypto-mining (and i think, if it's already been used in all those applications, then there are more applications humans have yet to find for this newer consumer-level offering).
this is finally "parallel processing" offered at a price affordable to everyday people, and they are just beginning to understand its value. eventually, someone will make 'tensor audio-processing units' (matrix processing connected more directly to the audio driver). that's when it'll become worthwhile to consider creating new objects in Max/MSP specifically for this.
until then, Max still has uses for the GPU where audio is concerned. i just think it starts with thinking about what kind of data you really need to process: if it ends up as a 1-dimensional 'vector' of audio, it might as well be computed directly on the CPU, if there isn't much other 'matrix/tensor-based' data you need to integrate (the quick arithmetic after this paragraph shows why).
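here's the back-of-the-envelope on the readback side, assuming the usual MSP defaults of 44.1kHz and a signal vector size of 64:

```c
#include <stdio.h>

int main(void) {
    double sr = 44100.0; /* sample rate (assumed MSP default) */
    int    vs = 64;      /* signal vector size (assumed default) */
    double vectors_per_sec = sr / vs;
    double ms_per_vector   = 1000.0 * vs / sr;
    printf("GPU->CPU readbacks per second: %.0f\n", vectors_per_sec); /* ~689 */
    printf("deadline per vector: %.2f ms\n", ms_per_vector);          /* ~1.45 */
    /* each of those readbacks can stall the graphics pipeline, which is
       why streaming synthesized vectors back from the GPU rarely beats
       just computing them on the CPU */
    return 0;
}
```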
but if you DO need to process data in a 'matrix/tensor-based' way, then the GPU is great for organizing/collecting that data into more interesting forms of control and perception before you eventually transfer it to an audio stream on the CPU... words will fail me here (there's a small sketch after these links), but the best example i've seen of this is FFT processing such as what Tadej Drojlc has researched:
and Jack Walters:
and of course, the maestro, Jean-Francois Charles:
(and i'm sorry if i'm leaving so many others out)
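to illustrate the 'spectra as matrices' idea those techniques build on, here's a toy C sketch (naive DFT for brevity; the frame size and per-bin ramp are assumptions, and a real patch would use pfft~ or Jitter's FFT objects instead): each frame's spectrum becomes a row you can treat like image data, hit with a cheap per-'pixel' operation, then resynthesize:

```c
#include <complex.h>
#include <math.h>
#include <stdio.h>

#define N 64  /* frame size, an illustrative assumption */

static void dft(const double *in, double complex *out) {
    for (int k = 0; k < N; k++) {
        out[k] = 0;
        for (int n = 0; n < N; n++)
            out[k] += in[n] * cexp(-2.0 * I * M_PI * k * n / N);
    }
}

static void idft(const double complex *in, double *out) {
    for (int n = 0; n < N; n++) {
        double complex s = 0;
        for (int k = 0; k < N; k++)
            s += in[k] * cexp(2.0 * I * M_PI * k * n / N);
        out[n] = creal(s) / N;
    }
}

int main(void) {
    double frame[N], resynth[N];
    double complex spec[N];

    for (int n = 0; n < N; n++)  /* a test tone to analyze */
        frame[n] = sin(2.0 * M_PI * 4.0 * n / N);

    dft(frame, spec);

    /* the 'image processing' step: scale each bin's magnitude with a
       ramp, exactly the kind of per-pixel op a shader does cheaply */
    for (int k = 0; k < N; k++)
        spec[k] *= (double)k / N;

    idft(spec, resynth);
    for (int n = 0; n < N; n++)
        printf("%f\n", resynth[n]);
    return 0;
}
```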
It's a great idea in general, but GPU tech (as created by humans thus far 👽) is not yet integrated with audio drivers efficiently enough to be any more necessary or attractive than CPU-based audio processing, unless you have a specific 'tensor/matrix'-based context to apply to audio from it.
just my 2cents👽
a generic rule, imho, is to use it only for huge processes (IR convolution?) and keep I/O as low as possible (the sketch below shows why convolution fits).
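a minimal sketch of why a huge IR fits that rule: with direct (non-FFT) convolution the arithmetic per block is block_size * ir_length multiply-adds, while the data crossing any GPU boundary stays one block in, one block out (the toy sizes below are assumptions):

```c
#include <stdio.h>
#include <string.h>

/* y[n] = sum_k ir[k] * x[n-k]; 'hist' carries the previous block's
   tail so the convolution is continuous across block boundaries */
static void conv_block(const float *x, float *y, int block,
                       const float *ir, int ir_len, float *hist) {
    for (int n = 0; n < block; n++) {
        float acc = 0.0f;
        for (int k = 0; k < ir_len; k++) {     /* block * ir_len MACs */
            int idx = n - k;
            acc += ir[k] * (idx >= 0 ? x[idx] : hist[ir_len - 1 + idx]);
        }
        y[n] = acc;
    }
    /* keep the newest ir_len-1 input samples for the next call */
    int keep = ir_len - 1;
    if (block >= keep) {
        memcpy(hist, x + block - keep, keep * sizeof(float));
    } else {
        memmove(hist, hist + block, (keep - block) * sizeof(float));
        memcpy(hist + keep - block, x, block * sizeof(float));
    }
}

int main(void) {
    float ir[]    = {0.5f, 0.3f, 0.2f};  /* toy 3-tap "IR" */
    float hist[2] = {0};                 /* ir_len-1 zeros */
    float x[8]    = {1, 0, 0, 0, 1, 0, 0, 0};
    float y[8];
    conv_block(x, y, 8, ir, 3, hist);
    for (int n = 0; n < 8; n++)
        printf("%f ", y[n]);  /* the IR appears at both impulses */
    printf("\n");
    return 0;
}
```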
the modulation effects of gpuaudio (the company) are a nice demo of their technology, but it won't make much sense in real life.
abusing jitter with shaders seems like the easiest way in max, at least easier than making custom compiled objects. the only problem is that you would be starting from scratch: all the code for FFT, lowpass, interpolation and so on that you can find elsewhere was made for still images, not for videos representing an audio stream... (the sketch below shows one reason that code doesn't transfer.)
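here's a small sketch of the mismatch: an audio one-pole lowpass has to carry its state across every frame/row boundary, which stateless per-pixel image code never does (the names and sizes are assumptions):

```c
#include <stdio.h>

#define ROWS 4
#define COLS 64   /* one signal vector per matrix row */

int main(void) {
    static float matrix[ROWS][COLS]; /* pretend this came from Jitter */
    matrix[0][0] = 1.0f;             /* an impulse at sample 0 */

    float a = 0.1f;  /* smoothing coefficient */
    float z = 0.0f;  /* filter state, the part image code never keeps */

    for (int r = 0; r < ROWS; r++) {
        for (int c = 0; c < COLS; c++) {
            /* y[n] = y[n-1] + a * (x[n] - y[n-1]) */
            z += a * (matrix[r][c] - z);
            matrix[r][c] = z;
        }
        /* note: z deliberately survives the row boundary; a per-pixel
           shader re-run on each row/frame would reset it, which is why
           image-filter code can't be reused for streamed audio as-is */
    }
    printf("decayed impulse tail: %f\n", matrix[ROWS - 1][COLS - 1]);
    return 0;
}
```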