Non real time driver

Aug 17, 2007 at 8:06pm

I’ve managed to create a patch which actually maxes out a quad-core 3 GHz Mac. D’oh. I tried to use the non-real-time driver, but got very odd results. The patch uses pattr to step through a number of presets. When I attempted non-real-time rendering, a 38-minute piece was compressed to 7 seconds! What’s going on?!

Secondly, using pattr vastly increases the CPU load. The sound-making parts of the patch typically run at about 25–40% CPU, but when I start running the automation (and the patch begins interpolating between presets) the load gradually climbs until playback eventually starts to stutter at around 70%. Is there any way to reduce the impact of interpolating between presets?

#33305
Aug 17, 2007 at 10:58pm

If you use the NRT driver, note that your scheduler objects (metro, tempo, line, etc.) are still using the Max scheduler for timing, and not the NRT notion of “time”.

In general it’s better to move all of your scheduling to the MSP world. For NRT it’s a sine qua non.
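[A minimal sketch (in Python, not a Max patch) of the mismatch described above: scheduler objects count wall-clock time, while the NRT driver races through audio time as fast as the CPU allows, so a metro fires only as many events as the render’s wall-clock duration permits. The numbers below are the ones from this thread; everything else is illustrative.]

```python
# Why wall-clock scheduling breaks under the NRT driver: the audio
# timeline advances as fast as rendering allows, but a metro still
# fires in real wall-clock time.

def nrt_event_count(piece_seconds, render_seconds, metro_interval_s):
    """Events a wall-clock metro actually fires during an NRT render,
    vs. the count the finished audio timeline would need."""
    fired = int(render_seconds / metro_interval_s)    # wall clock
    expected = int(piece_seconds / metro_interval_s)  # audio clock
    return fired, expected

# The thread's numbers: a 38-minute piece rendered in ~7 seconds,
# with (say) one scheduler event per second.
fired, expected = nrt_event_count(38 * 60, 7, 1.0)
print(fired, expected)  # 7 2280
```

So only 7 of the 2280 expected events fire before the render finishes, which is exactly the “38 minutes compressed to 7 seconds” symptom.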

#110796
Aug 17, 2007 at 11:53pm

On Aug 17, 2007, at 3:58 PM, Peter Castine wrote:

>
> If you use the NRT driver, note that your scheduler objects (metro,
> tempo, line, etc.) are still using the Max scheduler for timing,
> and not the NRT notion of “time”.
>
> In general it’s better to move all of your scheduling to the MSP
> world. For NRT it’s a sine qua non.

Not necessarily– if Overdrive and Scheduler in Audio Interrupt are
on, scheduler objects will track DSP time even in the NRT driver.

-Randy

#110797
Aug 18, 2007 at 2:28am

Agreed, be sure that those boxes are checked in the DSP window. If it still doesn’t work, get rid of all of your metro and line objects and replace them with recursive processes that trigger the next iteration of the calculation only after the previous one has finished. Interpolation makes things a bit more complicated, but it can be easily managed by tying the interpolation index you feed to pattr to the exact position you are rendering in the output file. This is easily accomplished with the scale and peek~/poke~ objects. If this isn’t clear, email me and I’ll send you a recent NRT example employing this very strategy to do synthesis akin to Csound.
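[A sketch (in Python, not Max) of the “tie the interpolation index to the render position” idea above, using a linear mapping like Max’s [scale] object. The names `total_samples` and `n_presets` are illustrative assumptions, not from the thread.]

```python
# Drive the pattr interpolation index from the output-file position
# rather than from a metro/line running on the scheduler.

def scale(x, in_lo, in_hi, out_lo, out_hi):
    """Linear mapping, same idea as Max's [scale] object."""
    return out_lo + (x - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

def interp_index(render_pos_samples, total_samples, n_presets):
    """Interpolation index as a pure function of render position."""
    return scale(render_pos_samples, 0, total_samples, 0, n_presets - 1)

# Halfway through the file -> halfway between first and last preset.
print(interp_index(22050, 44100, 5))  # 2.0
```

Because the index depends only on the sample position being rendered, it stays correct no matter how fast (or slow) the NRT render proceeds.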

Cheers

#110798
Aug 18, 2007 at 9:09am

Many thanks to you all. I suspected it was something to do with the Max scheduler, but had no idea how to fix it! I’ll try all these things. Best wishes…

#110800
Aug 18, 2007 at 8:21pm

Quote: randall jones wrote on Sat, 18 August 2007 01:53
—————————————————-
>if Overdrive and Scheduler in Audio Interrupt are
> on, scheduler objects will track DSP time even in the NRT driver.
—————————————————-

This is good to know. It’s presumably still more precise to do all timing in MSPland?

#110801
Aug 18, 2007 at 9:23pm

On Aug 18, 2007, at 1:21 PM, Peter Castine wrote:

>
> Quote: randall jones wrote on Sat, 18 August 2007 01:53
> —————————————————-
>> if Overdrive and Scheduler in Audio Interrupt are
>> on, scheduler objects will track DSP time even in the NRT driver.
> —————————————————-
>
> This is good to know. It’s presumably still more precise to do all
> timing in MSPland?

I would say so, but having to rebuild “pure signal” versions of
things like adsr~ and line~ would be a huge pain, and when playing
from any kind of controller (well, just about any) you’re going to
have incoming events limiting your accuracy anyway.

I use qfaker when rendering in nonrealtime mode to make sure that all
events I’ve recorded into a seq~ are played back in the proper
scheduler tick. So my accuracy is just limited by the signal vector
size I use when recording.
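[The vector-size accuracy limit above can be sketched numerically; this is an illustrative Python calculation, not Max code.]

```python
# Events quantized to signal-vector boundaries have a worst-case
# timing error of vector_size / sample_rate seconds.

def vector_quantize(event_time_s, sample_rate, vector_size):
    """Snap an event time to the start of its signal vector."""
    sample = int(event_time_s * sample_rate)
    return (sample // vector_size) * vector_size / sample_rate

def max_error_ms(sample_rate, vector_size):
    """Worst-case quantization error, in milliseconds."""
    return 1000.0 * vector_size / sample_rate

# At 44.1 kHz with a 64-sample vector, worst-case error is ~1.45 ms.
print(round(max_error_ms(44100, 64), 2))  # 1.45
```

Shrinking the signal vector size when recording tightens this bound proportionally.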

-Randy

#110802
Aug 19, 2007 at 11:50am

Randy Jones schrieb:
> Not necessarily– if Overdrive and Scheduler in Audio Interrupt are on,
> scheduler objects will track DSP time even in the NRT driver.

I can confirm that; I use scheduler events in NRT applications all the
time, and it works like a charm. Wonderful to render lengthy pieces faster
than real time into a buffer~ and then save it to disk. (sfplay~ can be
slower than real time, which is also non-real-time… ;-)

Stefan


Stefan Tiedje————x——-
–_____———–|————–
–(_|_ —-|—–|—–()——-
– _|_)—-|—–()————–
———-()——–www.ccmix.com

#110803
Aug 19, 2007 at 4:37pm

Thanks again for all the suggestions guys! Turning on Overdrive and Scheduler in Audio Interrupt did the trick. Unfortunately, processing the soundfile became unbelievably slow – it took an hour to calculate 2’40″ of sound. Took me back to the good old days of Music 11 :-) Maybe I’m doing something else stupid…

The weird thing about it was that the computer was perfectly capable of calculating those initial 3 minutes in *real* time, at a processor load of only about 40%. So why was it taking so long in NRT?

#110804
Aug 23, 2007 at 8:30am

jg schrieb:
> The weird thing about it was that the computer was perfectly capable
> of calculating those initial 3 minutes in *real* time, at a processor
> load of only about 40%. So why was it taking so long in NRT?

You must not use sfplay~ or sfrecord~ in NRT. They will read/write each
byte separately to disk, as if that one byte were all they had to do, which
creates enormous overhead. As I said before, if you use NRT, write to a
buffer~ with record~ and then save the contents of the buffer~. If you
need to adjust the length of the buffer~, use the crop message to
waveform~…
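[A sketch of the overhead being described: one tiny write call per sample versus one bulk write for the whole buffer. This is illustrative Python, not how sfrecord~ is actually implemented internally.]

```python
import io
import struct

def write_per_sample(samples):
    """One write call per sample (the slow pattern)."""
    f = io.BytesIO()
    calls = 0
    for s in samples:
        f.write(struct.pack("<f", s))  # 4 bytes at a time
        calls += 1
    return calls

def write_buffered(samples):
    """One bulk write for the whole buffer (the buffer~ approach)."""
    f = io.BytesIO()
    f.write(struct.pack(f"<{len(samples)}f", *samples))
    return 1

samples = [0.0] * 44100  # one second of audio at 44.1 kHz
print(write_per_sample(samples), write_buffered(samples))  # 44100 1
```

Per-call overhead that is negligible in real time gets multiplied by the sample rate in NRT, which is why buffering the whole render before saving is so much faster.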

Stefan



#110805
Aug 23, 2007 at 5:28pm

Hi Stefan,

Actually, I tried both sfrecord~ and then buffer~ after your previous advice. The buffer~ method oddly seems no quicker (in a test just now, it took 45 minutes of processing to get to the stage of reporting that it had made 2 minutes of sound, which is on a par with the progress I observed using sfrecord~). Secondly, it seems very unreliable: in several cases, nothing was saved in the soundfile (including in my last test), even though everything appeared to be working correctly; this method also crashed Max several times, although not *every* time. I suspect I’ve done something silly elsewhere in the patch that’s causing the whole thing to be so slow, especially given that a method you know can make things run faster is not doing so for me. I’m confident that various other scheduler events are running correctly (the output may be slow, but it is right), so I’m a bit baffled at the moment. Thanks again!
