with what may seem like another n00bish question: in the [make-grain-envelope] part of the rgrano patch in the granularized example, the uzi counts to 512 and then goes through a [- 1] object so the index starts at 0.
if i change it to [uzi 512 0] (and eliminate the [- 1]), it seems to do the same thing. so, my question is: what is the design consideration behind this? is it some sort of efficiency thing? does the efficiency of a maxpatch decrease as the number of objects in the patch increases?
Yes and no. In general, the more objects you include (especially intensive FFT stuff and graphical objects that redraw often, for example), the greater the toll on your computer will be, but in this case the difference will not be noticeable. It’s usually better if you can achieve your goal using fewer objects; it’s tidier for your patching too. But optimising your patch shouldn’t be something to worry about until you get to the stage where you need every last bit of computational power you can scrape together. That said, it is always good to start working with these good habits firmly in place.
thanks for your response.
i’m of the mindset that i should keep efficiency in mind when designing things, even though some say to get the algorithm down first and then optimise. regardless, i’ll take it that the [- 1] object is just there for no real reason, and that [uzi 512 0] will work just as well. this gives me another question: is there some sort of ‘benchmarking’ object or system with which one can test the efficiency of patches? i’m thinking here along the lines of a Max equivalent to SuperCollider’s bench command?
It could also be that Uzi has gone through many iterations, and long ago, when rgrano was first created, it may not have had the second argument to set the base index, so you had to use [- 1]. But that’s just my guess…
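To make the equivalence concrete: here’s a rough sketch, in Python rather than as a Max patch, of why [uzi 512] into [- 1] and [uzi 512 0] produce the same index stream. The `uzi` function below is a hypothetical stand-in that just models the counting behaviour (by default Uzi counts from 1; the optional second argument sets the base index), not real Max code.

```python
def uzi(count, base=1):
    """Hypothetical model of Max's [uzi] right-outlet index:
    fires `count` times, counting upward from `base` (default 1)."""
    return [base + i for i in range(count)]

# [uzi 512] -> [- 1]: indices 1..512, each shifted down by 1
old_style = [n - 1 for n in uzi(512)]

# [uzi 512 0]: the second argument sets the base index to 0 directly
new_style = uzi(512, 0)

# Both yield 0..511, so the [- 1] object is redundant
assert old_style == new_style == list(range(512))
```

Either way you end up with indices 0 through 511, which is what you want when addressing a 512-point table; the second-argument form just does it with one fewer object.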