
coll simple 'output current' option

November 12, 2007 | 10:30 am

Hi all,

If I'm stepping through a coll with 'next', how can I output the entry I've reached -without- jumping on to the next entry? I would expect a bang to do this, but bang seems to behave just like 'next'.

Just making sure I didn’t miss something terribly obvious..

Thanks,
Mattijs


November 12, 2007 | 11:10 am


November 12, 2007 | 11:56 am

That works; the only thing is that requesting an index from coll means searching through the linked list, which can be slow for bigger colls.

In this case we'd like to be able to address the current item directly; a 'current' message would seem like a logical addition to prev and next. As it is, we have to store the output ourselves in order to access it again.

The patch below would be a workaround, but doesn’t win a beauty prize:

#P window setfont "Sans Serif" 9.;
#P window linecount 1;
#P message 171 76 44 196617 current;
#P newex 132 181 32 196617 print;
#P message 171 204 70 196617 symbol tres;
#P newex 171 181 62 196617 prepend set;
#P newex 171 155 36 196617 zl reg;
#P newex 171 99 69 196617 t b next prev;
#N coll ;
#P newobj 200 132 53 196617 coll;
#P connect 2 0 5 0;
#P connect 2 0 3 0;
#P connect 6 0 1 0;
#P connect 3 0 4 0;
#P connect 0 0 2 1;
#P connect 1 0 2 0;
#P connect 1 1 0 0;
#P connect 1 2 0 0;
#P window clipboard copycount 7;
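
In C terms, all the workaround does is cache the most recently output entry so it can be re-sent on demand. A minimal sketch of that idea, assuming a generic singly linked list (nothing to do with coll's actual source; all names here are made up for illustration):

#include <stdio.h>

/* One coll-style entry in a simple singly linked list. */
typedef struct node {
    int index;
    const char *data;
    struct node *next;
} node;

typedef struct {
    node *head;     /* first entry */
    node *cursor;   /* where 'next' will read from */
    node *last_out; /* cache of the most recently output entry */
} store;

/* 'next': output the entry at the cursor, remember it, advance one hop. */
const char *store_next(store *s) {
    if (!s->cursor)              /* wrap around at the end, like coll */
        s->cursor = s->head;
    s->last_out = s->cursor;
    s->cursor = s->cursor->next;
    return s->last_out->data;
}

/* Hypothetical 'current': re-emit the cached entry; no traversal, no advance. */
const char *store_current(const store *s) {
    return s->last_out ? s->last_out->data : NULL;
}

int main(void) {
    node c = {3, "tres", NULL}, b = {2, "dos", &c}, a = {1, "uno", &b};
    store s = {&a, &a, NULL};
    printf("%s\n", store_next(&s));    /* uno */
    printf("%s\n", store_current(&s)); /* uno again, cursor untouched */
    printf("%s\n", store_next(&s));    /* dos */
    return 0;
}

Feeding the stored output back with zl reg / prepend set, as the patch above does, is the patch-level equivalent of store_current.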


November 12, 2007 | 11:19 pm

Mattijs,

Here are two possible workarounds, slightly different. The first one ignores the current location of the "goto" pointer, but keeps track of the "most recently output" address, so that you can output that item as many times as you want. The second one uses the current location of the "goto" pointer, then immediately resets the pointer to where it was.

–Chris

—–

Solution 1 (most recent):

max v2;
#N vpatcher 20 74 620 474;
#P window setfont "Fixedwidth Serif" 18.;
#P window linecount 1;
#P newex 134 236 65 1441810 print;
#P message 261 147 34 1441810 1;
#P message 134 149 54 1441810 next;
#P newex 338 111 131 1441810 prepend set;
#P window setfont Times 18.;
#P comment 228 94 94 1310738 most recent;
#P button 261 114 15 0;
#P button 134 114 15 0;
#P window setfont "Fixedwidth Serif" 18.;
#N coll ;
#T flags 1 0;
#T 1 this is item number one;
#T 2 this is item number two;
#T 3 this is item number three;
#T 5 this is item number five (there is no four);
#T 9 this is item number nine (6-7-8 are missing);
#T 10 this is item number ten (the last item);
#P newobj 134 189 54 1441810 coll;
#P window setfont Times 18.;
#P comment 122 94 41 1310738 next;
#P connect 2 0 6 0;
#P fasten 7 0 1 0 266 181 139 181;
#P connect 6 0 1 0;
#P connect 1 0 8 0;
#P fasten 5 0 7 0 343 141 266 141;
#P connect 3 0 7 0;
#P fasten 1 1 5 0 153 217 472 217 472 106 343 106;
#P pop;

—–

Solution 2 (current):

max v2;
#N vpatcher 30 89 630 489;
#P window setfont "Fixedwidth Serif" 18.;
#P message 268 162 76 1441810 goto 1;
#P newex 268 97 27 1441810 b;
#P message 285 125 54 1441810 next;
#P newex 347 123 186 1441810 prepend set goto;
#P newex 141 247 65 1441810 print;
#P message 141 113 54 1441810 next;
#P window setfont Times 18.;
#P comment 246 58 61 1310738 current;
#P button 268 78 15 0;
#P button 141 78 15 0;
#P window setfont "Fixedwidth Serif" 18.;
#N coll ;
#T flags 1 0;
#T 1 this is item number one;
#T 2 this is item number two;
#T 3 this is item number three;
#T 5 this is item number five (there is no four);
#T 9 this is item number nine (6-7-8 are missing);
#T 10 this is item number ten (the last item);
#P newobj 141 200 54 1441810 coll;
#P window setfont Times 18.;
#P comment 129 58 41 1310738 next;
#P connect 2 0 5 0;
#P connect 5 0 1 0;
#P fasten 8 0 1 0 290 195 146 195;
#P fasten 10 0 1 0 273 195 146 195;
#P connect 1 0 6 0;
#P connect 3 0 9 0;
#P connect 9 0 10 0;
#P fasten 7 0 10 0 352 154 273 154;
#P connect 9 1 8 0;
#P fasten 1 1 7 0 160 228 543 228 543 117 352 117;
#P pop;


November 13, 2007 | 9:31 am

Hi Christopher,

Thanks a lot for your elaborate reply.

From what I see in the code of coll, some messages should be much faster than others. 'next' does nothing but look at the current entry and follow the pointer to the next element in the linked list, which should be very fast. Inputting an index, on the other hand, runs through all the entries from the top until it reaches the required one.
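
For reference, the two access patterns described above look roughly like this in C (a generic linked list, not the actual coll code):

#include <stdio.h>

typedef struct node {
    int index;
    struct node *next;
} node;

/* Lookup by index: walk from the head until the index matches.
   Cost grows with the position of the entry in the list. */
node *lookup_by_index(node *head, int index) {
    for (node *n = head; n != NULL; n = n->next)
        if (n->index == index)
            return n;
    return NULL;
}

/* 'next' from a known node: a single pointer hop, independent of list length. */
node *step_next(node *current) {
    return current ? current->next : NULL;
}

int main(void) {
    node c = {3, NULL}, b = {2, &c}, a = {1, &b};
    node *found = lookup_by_index(&a, 3); /* walks a -> b -> c */
    node *after = step_next(&a);          /* one hop to b */
    printf("%d %d\n", found->index, after->index);
    return 0;
}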

BUT now that I’m trying this, the results don’t make sense (see patch below).

Is there a guru on this forum that can explain why dump is so much faster than uzi->next and why uzi->next is not much faster than uzi->index?

Best,
Mattijs

#P window setfont "Sans Serif" 9.;
#P window linecount 1;
#P comment 148 166 61 196617 my results:;
#P comment 456 166 51 196617 44;
#P comment 337 166 51 196617 2649;
#P comment 211 166 51 196617 2657;
#P comment 498 40 58 196617 with dump;
#P comment 380 40 58 196617 with next;
#P comment 253 40 58 196617 by index;
#P newex 496 78 40 196617 t dump;
#P flonum 456 146 48 9 0 0 0 3 0 0 0 221 221 221 222 222 222 0 0 0;
#P newex 481 58 40 196617 t b b b;
#P newex 456 125 35 196617 timer;
#P button 481 40 15 0;
#P newex 49 146 32 196617 print;
#P newex 377 98 37 196617 t next;
#P flonum 337 146 48 9 0 0 0 3 0 0 0 221 221 221 222 222 222 0 0 0;
#P newex 362 58 40 196617 t b b b;
#P newex 337 125 35 196617 timer;
#P button 362 40 15 0;
#P newex 377 78 56 196617 uzi 20000;
#P flonum 211 146 48 9 0 0 0 3 0 0 0 221 221 221 222 222 222 0 0 0;
#P newex 236 58 40 196617 t b b b;
#P newex 211 125 35 196617 timer;
#P button 236 40 15 0;
#P newex 251 78 56 196617 uzi 20000;
#P newex 49 79 89 196617 sprintf %i test%i;
#P button 3 40 15 0;
#P newex 3 58 56 196617 uzi 20000;
#N coll ;
#P newobj 49 125 53 196617 coll;
#P comment 19 40 57 196617 1) fill coll;
#P comment 236 24 58 196617 2) output;
#P comment 212 182 115 196617 (On a macbook 2.16GHz);
#P connect 4 2 6 0;
#P connect 4 2 6 1;
#P connect 10 2 9 0;
#P connect 9 0 11 0;
#P connect 5 0 4 0;
#P connect 8 0 10 0;
#P connect 10 0 9 1;
#P connect 10 1 7 0;
#P connect 6 0 3 0;
#P fasten 7 0 3 0 256 119 54 119;
#P fasten 23 0 3 0 501 119 54 119;
#P fasten 17 0 3 0 382 119 54 119;
#P connect 3 0 18 0;
#P connect 15 2 14 0;
#P connect 14 0 16 0;
#P connect 13 0 15 0;
#P connect 15 0 14 1;
#P connect 15 1 12 0;
#P connect 12 0 17 0;
#P connect 21 2 20 0;
#P connect 20 0 22 0;
#P connect 19 0 21 0;
#P connect 21 0 20 1;
#P connect 21 1 23 0;
#P window clipboard copycount 31;


November 13, 2007 | 9:46 am

Thanks for your solutions, Christopher.

However, I'm afraid both of them still use an index to access or re-access the data. We're using quite large colls in real-time situations, and we simply don't want to spend CPU time searching through a linked list (http://en.wikipedia.org/wiki/Linked_list) when I presume there could well be a pointer to the current node already.

I don’t remember whether Mattijs used incremental integer indexes for this particular thing, but if so, the only thing we’d need would be an option to turn coll into an array…. j/k :-)


November 13, 2007 | 11:38 am

>BUT now that I’m trying this, the results don’t make sense (see patch
below).
>
>Is there a guru on this forum that can explain why dump is so much
>faster than uzi->next and why uzi->next is not much faster than uzi->index?
>

The connections are wrong; you need to use the 3rd outlet of uzi to get the index.
It seems index is faster than next, and bang is slightly faster than next.
Still, I can't tell why dump is so much faster.
But my suggestion would be to use dump and do the filtering after the coll.

New version of patch here:

#P window setfont "Sans Serif" 9.;
#P window linecount 1;
#P comment 613 185 51 196617 11;
#P comment 494 185 51 196617 2389;
#P comment 383 185 51 196617 2394;
#P comment 239 185 51 196617 1888;
#P comment 150 185 84 196617 (print selected);
#P comment 176 169 61 196617 (print all);
#P user umenu 0 231 74 196647 1 64 247 1;
#X add print all;
#X add print selected;
#P user gswitch 60 379 41 32 1 0;
#P newex 90 320 29 196617 gate;
#P newex 60 427 32 196617 print;
#P number 107 252 35 9 0 0 0 3 0 0 0 221 221 221 222 222 222 0 0 0;
#P newex 90 283 27 196617 ==;
#P comment 383 167 51 196617 2427;
#P comment 425 38 58 196617 with bang;
#P flonum 382 144 48 9 0 0 0 3 0 0 0 221 221 221 222 222 222 0 0 0;
#P newex 407 56 40 196617 t b b b;
#P newex 382 123 35 196617 timer;
#P button 407 38 15 0;
#P newex 422 76 56 196617 uzi 20000;
#P comment 108 169 61 196617 my results:;
#P comment 613 166 51 196617 59;
#P comment 494 166 51 196617 2443;
#P comment 238 169 51 196617 1972;
#P comment 239 204 163 196617 (On a MacBook Pro 2.2 GHz);
#P comment 654 37 58 196617 with dump;
#P comment 536 37 58 196617 with next;
#P comment 279 40 58 196617 by index;
#P newex 652 75 40 196617 t dump;
#P flonum 612 143 48 9 0 0 0 3 0 0 0 221 221 221 222 222 222 0 0 0;
#P newex 637 55 40 196617 t b b b;
#P newex 612 122 35 196617 timer;
#P button 637 37 15 0;
#P newex 533 95 37 196617 t next;
#P flonum 493 143 48 9 0 0 0 3 0 0 0 221 221 221 222 222 222 0 0 0;
#P newex 518 55 40 196617 t b b b;
#P newex 493 122 35 196617 timer;
#P button 518 37 15 0;
#P newex 533 75 56 196617 uzi 20000;
#P flonum 237 146 48 9 0 0 0 3 0 0 0 221 221 221 222 222 222 0 0 0;
#P newex 262 58 40 196617 t b b b;
#P newex 237 125 35 196617 timer;
#P button 262 40 15 0;
#P newex 277 78 56 196617 uzi 20000;
#P newex 75 79 89 196617 sprintf %i test%i;
#P button 29 40 15 0;
#P newex 29 58 56 196617 uzi 20000;
#N coll ;
#P newobj 75 125 55 196617 coll;
#P comment 45 40 57 196617 1) fill coll;
#P comment 262 24 58 196617 2) output;
#P comment 306 98 60 196617 3rd outlet!!;
#P comment 107 233 100 196617 select index to print;
#P connect 4 0 43 1;
#P fasten 4 0 42 1 80 312 114 312;
#P connect 5 2 7 0;
#P connect 5 2 7 1;
#P connect 6 0 5 0;
#P fasten 44 0 43 0 5 371 65 371;
#P connect 43 0 41 0;
#P connect 42 0 43 2;
#P connect 39 0 42 0;
#P connect 11 2 10 0;
#P connect 10 0 12 0;
#P connect 9 0 11 0;
#P connect 11 0 10 1;
#P connect 11 1 8 0;
#P connect 7 0 4 0;
#P fasten 23 0 4 0 657 119 80 119;
#P fasten 18 0 4 0 538 119 80 119;
#P connect 16 2 15 0;
#P connect 15 0 17 0;
#P connect 14 0 16 0;
#P connect 16 0 15 1;
#P connect 16 1 13 0;
#P connect 13 0 18 0;
#P connect 21 2 20 0;
#P connect 20 0 22 0;
#P connect 19 0 21 0;
#P connect 21 0 20 1;
#P connect 21 1 23 0;
#P fasten 8 2 4 0 328 110 80 110;
#P connect 35 1 32 0;
#P connect 35 0 34 1;
#P connect 33 0 35 0;
#P connect 34 0 36 0;
#P connect 35 2 34 0;
#P fasten 32 0 4 0 427 114 80 114;
#P connect 40 0 39 1;
#P connect 4 1 39 0;
#P window clipboard copycount 51;

__________________________________

Jakob Riis
jr.abstractions for MaxMSP
http://www.sonicescape.net/maxmsp/
__________________________________


November 13, 2007 | 12:47 pm

Quote: Jakob Riis wrote on Tue, 13 November 2007 12:38
—————————————————-

> The connections are wrong; you need to use the 3rd outlet of uzi to get the index.

Oops. Thanks.

> It seems index is faster than next, and bang is slightly faster than next.
> Still, I can't tell why dump is so much faster.
> But my suggestion would be to use dump and do the filtering after the coll.

Yeah, I'll see if I can modify my patch to use dump. I only need to output 2 or 3 entries at a time, though, from a possibly long coll. I need to do this every 2 milliseconds, so every performance gain would be welcome. If only there were a 'stopdump' message…

Anyhow, the fact that next is even slower than index can be considered a bug, right?

Mattijs


November 13, 2007 | 6:42 pm

Random idea: use a char matrix with jit.str and use getcell for the data? I haven't been actively following the discussion (apologies for butting in!); it would probably take some time to build a wrapper, but it might perform a bit better.

I'm not sure what type of data the coll will contain, but it might be worth a shot to try.



November 13, 2007 | 7:09 pm

I don’t seem to have a jit.str. There are several others. The closest is
jit.str.op but I don’t see a getcell message.


Cheers
Gary Lee Nelson
Oberlin College
http://www.timara.oberlin.edu/GaryLeeNelson


November 13, 2007 | 7:52 pm

I meant something like:

#P window setfont "Sans Serif" 9.;
#P window linecount 1;
#P message 287 71 30 196617 read;
#P comment 185 32 102 196617 get quick index mb!?;
#P number 206 245 35 9 0 0 0 3 0 0 0 221 221 221 222 222 222 0 0 0;
#P newex 321 433 62 196617 prepend set;
#P newex 253 372 84 196617 unpack 0 0 val 0;
#P newex 253 349 53 196617 route cell;
#P newex 322 401 40 196617 itoa;
#P newex 133 400 99 196617 print COLL_OUTPUT;
#P newex 46 171 147 196617 jit.str.op 1 char 200 @op thru;
#P comment 317 50 171 196617 bang to dump it all!;
#P button 298 45 15 0;
#P newex 355 178 147 196617 jit.str.op 1 char 200 @op thru;
#P newex 357 303 85 196617 print line_output;
#P newex 355 281 82 196617 jit.str.tosymbol;
#P newex 355 208 133 196617 jit.matrix 1 char 200 COLL;
#P comment 321 72 171 196617 import our coll data maybe k thnx ?;
#P number 189 55 35 9 0 0 0 3 0 0 0 221 221 221 222 222 222 0 0 0;
#P message 191 77 41 196617 line $1;
#P newex 190 97 58 196617 jit.textfile;
#P newex 9 109 147 196617 jit.str.op 1 char 200 @op thru;
#P message 307 455 153 196617 m;
#P newex 8 427 32 196617 print;
#P message 181 279 55 196617 getcell $1;
#P button 182 204 30 0;
#P newex 6 405 82 196617 jit.str.tosymbol;
#P newex 10 79 95 196617 jit.str.fromsymbol;
#P message 10 44 146 196617 some message to our faux coll;
#P newex 7 325 133 196617 jit.matrix 1 char 200 COLL;
#P connect 27 0 9 0;
#P connect 25 0 5 0;
#P connect 24 0 7 0;
#P connect 21 0 24 0;
#P connect 23 3 21 0;
#P connect 22 0 23 0;
#P connect 0 1 20 0;
#P connect 0 1 22 0;
#P connect 0 0 3 0;
#P connect 19 0 0 0;
#P connect 9 0 19 0;
#P connect 17 0 9 0;
#P connect 14 0 15 0;
#P connect 16 0 13 0;
#P connect 9 1 16 0;
#P connect 13 0 14 0;
#P connect 11 0 10 0;
#P connect 10 0 9 0;
#P connect 4 0 0 0;
#P connect 5 0 0 0;
#P connect 8 0 0 0;
#P connect 2 0 8 0;
#P connect 3 0 6 0;
#P connect 1 0 2 0;
#P window clipboard copycount 28;



November 13, 2007 | 9:31 pm

Some version of the coll source is given out with the SDK under
example-externs.

Add your method.

b


Barry Threw
Media Art and Technology

San Francisco, CA Work: 857-544-3967
Email: bthrew@gmail.com
IM: captogreadmore (AIM)
http:/www.barrythrew.com


November 13, 2007 | 10:29 pm

Quote: barry threw wrote on Tue, 13 November 2007 22:31
—————————————————-
> Some version of the coll source is given out with the SDK under
> example-externs.
>
> Add your method.
>

I considered that. But the version in the SDK is an old one; coll has been updated since then.

Of course I could roll my own buffer/sync external altogether (that's what I'm after, ultimately), but I was hoping to save time and do it with native Max objects, which one would expect to be possible…

Mattijs


November 13, 2007 | 10:33 pm

Quote: vade wrote on Tue, 13 November 2007 20:52
—————————————————-
> I meant something like:

That’s an interesting approach, vade. In my case though, performance is the primary concern. I’ll post an example of what I’m working on tomorrow, so that all you generous participants of this thread can see why I’m so keen on this.

Mattijs


November 14, 2007 | 4:45 am

Hm. I didn't give it much time/thought today, but 'dumping' the whole coll into a large enough matrix and then getting the cell (index) you want might improve your performance. That's what I was hinting at.

hth!



November 14, 2007 | 2:16 pm

Here is what I’m working on:

This is a pluggo:

#P window setfont "Sans Serif" 9.;
#P window linecount 1;
#N vpatcher 10 59 311 395;
#P window setfont "Sans Serif" 9.;
#P newex 80 134 20 196617 t b;
#N vpatcher 10 59 610 459;
#P outlet 50 131 15 0;
#P window setfont "Sans Serif" 9.;
#P newex 50 111 32 196617 sel 0;
#P newex 50 91 27 196617 > 0;
#P newex 50 50 27 196617 t i i;
#P newex 50 71 27 196617 – 0;
#P inlet 50 30 15 0;
#P connect 0 0 2 0;
#P connect 2 1 1 0;
#P connect 1 0 3 0;
#P connect 3 0 4 0;
#P connect 4 0 5 0;
#P connect 2 0 1 1;
#P pop;
#P newobj 171 109 43 196617 p edge< ;
#P newex 171 89 27 196617 t i i;
#P newex 110 178 30 196617 t ms;
#P newex 50 258 38 196617 zl join;
#P newex 50 155 27 196617 t b l;
#P newex 110 198 57 196617 timestamp;
#P newex 50 238 27 196617 + 0.;
#P newex 50 218 40 196617 / 44.1;
#P newex 50 198 27 196617 – 0;
#P newex 50 178 27 196617 i;
#P newex 80 50 91 196617 loadmess enable 1;
#P newex 80 178 27 196617 i;
#P newex 80 88 40 196617 change;
#P newex 80 108 41 196617 sel 1 0;
#P newex 80 69 118 196617 plugsync~;
#P inlet 50 135 15 0;
#P outlet 50 280 15 0;
#P connect 1 0 12 0;
#P connect 12 0 7 0;
#P connect 7 0 8 0;
#P connect 8 0 9 0;
#P connect 9 0 10 0;
#P connect 10 0 13 0;
#P connect 13 0 0 0;
#P fasten 15 1 7 1 193 175 72 175;
#P connect 5 0 8 1;
#P connect 11 0 10 1;
#P connect 12 1 13 1;
#P connect 6 0 2 0;
#P connect 2 0 4 0;
#P connect 4 0 3 0;
#P connect 16 0 17 0;
#P connect 3 0 17 0;
#P connect 17 0 5 0;
#P fasten 15 1 5 1 193 175 102 175;
#P connect 17 0 14 0;
#P connect 14 0 11 0;
#P connect 2 7 15 0;
#P connect 15 0 16 0;
#P pop;
#P newobj 32 162 73 196617 p prependtime;
#B color 5;
#P newex 32 142 18 196617 t l;
#P message 291 126 49 196617 accurate;
#P newex 74 112 76 196617 prepend param;
#P newex 58 91 58 196617 prepend cc;
#P newex 32 72 67 196617 prepend note;
#P newex 32 52 92 196617 midiparse;
#P newex 32 32 55 196617 plugmidiin;
#P message 291 108 62 196617 windowsize;
#P newex 32 182 120 196617 udpsend 127.0.0.1 1111;
#N pp 1 TimerTest 0 1;
#P newobj 98 32 97 196617 pp 1 TimerTest 0 1;
#P message 233 88 71 196617 capture 1 Init;
#P message 306 88 44 196617 recall 1;
#P message 232 32 214 196617 window size 100 100 370 330 , window exec;
#N thispatcher;
#Q end;
#P newobj 232 49 61 196617 thispatcher;
#N plugconfig;
#C useviews 0 1 1 1;
#C numprograms 4;
#C preempt 1;
#C sigvschange 1;
#C sigvsdefault 32;
#C windowsize;
#C defaultview Interface 0 0 0;
#C dragscroll 1;
#C noinfo;
#C package ????;
#C setprogram 1 Init 0 0. 0. 192. 39. 32.;
#C uniqueid 182 150 109;
#C accurate;
#C initialpgm 0;
#C synth;
#C latency 441;
#P newobj 233 107 53 196617 plugconfig;
#P connect 8 0 9 0;
#P connect 9 0 10 0;
#P connect 12 0 14 0;
#P connect 11 0 14 0;
#P connect 10 0 14 0;
#P connect 14 0 15 0;
#P connect 15 0 6 0;
#P connect 9 2 11 0;
#P connect 5 0 12 0;
#P connect 2 0 1 0;
#P connect 13 0 0 0;
#P connect 7 0 0 0;
#P connect 4 0 0 0;
#P connect 3 0 0 0;
#P window clipboard copycount 16;

And this is the receiving end:

#P window setfont "Sans Serif" 9.;
#P window linecount 1;
#P newex 134 225 148 196617 loadmess open bassdrum3.wav;
#P user ezdac~ 83 297 127 330 0;
#P button 67 129 15 0;
#P comment 174 129 130 196617 format: time (ms) event;
#N vpatcher 1335 332 1656 751;
#P window setfont "Sans Serif" 9.;
#P newex 36 249 27 196617 t b l;
#P newex 55 135 46 196617 t dump i;
#N coll ;
#P newobj 55 161 53 196617 coll;
#N vpatcher 1733 425 1936 726;
#P window setfont "Sans Serif" 9.;
#P newex 50 222 27 196617 + 0.;
#P newex 50 139 60 196617 loadmess 1;
#P newex 50 200 27 196617 – 0.;
#P newex 67 114 48 196617 cpuclock;
#P newex 50 179 48 196617 cpuclock;
#P newex 67 50 48 196617 loadbang;
#P newex 67 70 38 196617 t b ms;
#P newex 95 91 57 196617 timestamp;
#P newex 50 159 46 196617 metro 2;
#P outlet 50 244 15 0;
#P connect 8 0 1 0;
#P connect 1 0 5 0;
#P connect 5 0 7 0;
#P connect 7 0 9 0;
#P connect 9 0 0 0;
#P connect 4 0 3 0;
#P connect 3 0 6 0;
#P connect 6 0 7 1;
#P connect 2 0 9 1;
#P connect 3 1 2 0;
#P pop;
#P newobj 55 115 74 196617 p currentTime;
#B color 5;
#P newex 36 279 27 196617 i;
#P newex 69 181 38 196617 t i i;
#P newex 36 230 29 196617 gate;
#P newex 137 95 37 196617 + 100;
#P newex 69 201 27 196617 < ;
#P newex 137 115 38 196617 zl join;
#P newex 137 75 27 196617 i;
#P newex 137 55 51 196617 zl slice 1;
#P newex 36 299 82 196617 prepend remove;
#N comlet (float , ms) latency;
#P inlet 211 37 15 0;
#N comlet events in;
#P inlet 137 37 15 0;
#N comlet (int) events out;
#P outlet 53 336 15 0;
#P newex 137 135 75 196617 prepend insert;
#P fasten 8 0 10 0 74 224 41 224;
#P connect 10 0 16 0;
#P connect 16 0 12 0;
#P connect 12 0 4 0;
#P connect 11 1 12 1;
#P connect 16 1 1 0;
#P connect 13 0 15 0;
#P connect 15 0 14 0;
#P fasten 4 0 14 0 41 321 26 321 26 157 60 157;
#P connect 0 0 14 0;
#P connect 14 0 10 1;
#P connect 14 1 11 0;
#P connect 11 0 8 0;
#P connect 15 1 8 1;
#P connect 2 0 5 0;
#P connect 5 0 6 0;
#P connect 6 0 9 0;
#P connect 9 0 7 0;
#P connect 7 0 0 0;
#P connect 3 0 9 1;
#P connect 5 1 7 1;
#P pop;
#P newobj 85 177 71 196617 p eventbuffer;
#B color 5;
#N vpatcher 409 247 672 470;
#P window setfont "Sans Serif" 9.;
#P window linecount 1;
#P newex 91 108 32 196617 sel 1;
#P newex 91 88 27 196617 > 0;
#P newex 91 128 21 196617 t 1;
#P window linecount 0;
#P newex 50 68 51 196617 zl slice 1;
#P newex 50 48 56 196617 route note;
#N sfplay~ 1 120960 0 ;
#P newobj 91 148 44 196617 sfplay~;
#P inlet 50 30 15 0;
#P inlet 121 128 15 0;
#P outlet 91 168 15 0;
#P connect 2 0 4 0;
#P connect 4 0 5 0;
#P connect 5 1 7 0;
#P connect 7 0 8 0;
#P connect 8 0 6 0;
#P connect 1 0 3 0;
#P connect 6 0 3 0;
#P connect 3 0 0 0;
#P pop;
#P newobj 85 225 47 196617 p player;
#B color 5;
#P newex 146 61 72 196617 loadmess 100;
#P flonum 146 81 53 9 0 0 0 3 0 0 0 221 221 221 222 222 222 0 0 0;
#P newex 85 249 61 196617 dac~ 17 18;
#P user meter~ 148 249 228 262 50 0 168 0 103 103 103 255 153 0 255 0 0 217 217 0 153 186 0 12 3 3 3 3;
#P newex 85 129 86 196617 udpreceive 1111;
#P comment 202 81 67 196617 latency (ms);
#P connect 1 0 9 0;
#P connect 1 0 7 0;
#P connect 4 0 7 1;
#P connect 6 0 3 0;
#P connect 6 0 3 1;
#P connect 6 0 2 0;
#P connect 5 0 4 0;
#P connect 11 0 6 1;
#P connect 7 0 6 0;
#P window clipboard copycount 12;

This would be too many objects to even start looking at if I were you, but well, just in case you're interested…

This is the basic version. I dump the entire coll every 2 ms. It should be possible to do this more efficiently, knowing that coll is just a linked list that I could step through a few steps at a time with ‘next’, only when needed.
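
A rough C sketch of that 'step only as far as needed' idea, with a sorted event list and a persistent read cursor (all names are placeholders, not Max API; the emit is just a printf):

#include <stdio.h>

typedef struct event {
    double time_ms;       /* when the event is due */
    int    value;         /* the payload, e.g. a note number */
    struct event *next;
} event;

/* Called on every tick (e.g. every 2 ms). Instead of dumping the whole
   list, advance the cursor only past the events that are now due. */
const event *play_due(const event *cursor, double now_ms) {
    while (cursor && cursor->time_ms <= now_ms) {
        printf("emit %d at %.1f ms\n", cursor->value, cursor->time_ms);
        cursor = cursor->next;   /* one hop per emitted event */
    }
    return cursor;               /* remember where we got to */
}

int main(void) {
    event e3 = {6.0, 62, NULL}, e2 = {3.5, 64, &e3}, e1 = {1.0, 60, &e2};
    const event *cursor = &e1;
    for (double now = 0.0; now <= 8.0; now += 2.0)  /* a few 2 ms ticks */
        cursor = play_due(cursor, now);
    return 0;
}

The dump-every-2-ms version does the same job but revisits every entry on every tick.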

Mattijs


November 14, 2007 | 3:47 pm

Mattijs Kneppers schrieb:
> Is there a guru on this forum that can explain why dump is so much
> faster than uzi->next and why uzi->next is not much faster than
> uzi->index?

I am not a guru, but why would you expect that message handling has no
effect on the overhead of a patch?

It's easy to explain why this result is the one to expect. Even though I always prefer abstractions over externals, I know that externals are way faster at what they do. A dump message in particular is handled internally with the highest possible optimisation: it only has to decode and deal with a single message. Using next, bang, or numbers means 20000 messages have to be decoded and routed to their places…

Your computer seems to be 10 times faster than mine with the patch you thought was faster (mine needed more than 25 seconds). The dump needed only about 3 times more than yours… (150 ms).

It seems thinking about efficiency is more dangerous if you have a
faster machine… ;-)

Mattijs Kneppers schrieb:
> Anyhow, the fact that next is even slower than index can be
> considered a bug, right?

To decode a string will always be slower than decoding a number…
I’d call it expected behaviour…

Mattijs Kneppers schrieb:
> Here is what I’m working on:

As you deal only with numbers, you could dump the numbers into several
buffer~s and access them with peek~/poke~.

I don't know why you would want to dump the whole coll every 2 ms; that doesn't make sense to me. You should always know when something has changed and could just take that info…

I also don't see why the existing next message would be a problem, including the workarounds for a current output. It looks like you are dealing with MIDI, and MIDI is comparatively slow…

It could even be feasible to record your data into a soundfile and access that from your clients… just a thought…

Stefan


Stefan Tiedje————x——-
–_____———–|————–
–(_|_ —-|—–|—–()——-
– _|_)—-|—–()————–
———-()——–www.ccmix.com


November 14, 2007 | 5:01 pm

Hi Stefan,

‘To decode a string will always be slower than decoding a number…
I’d call it expected behaviour…’

Could you explain this a little bit more? It could well be true that decoding 'next' takes longer than decoding an int (even though both are 4 bytes, haha), but surely decoding 'next' doesn't take longer than following 10k pointers on average? Or am I missing some of the functionality of the linked list?
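
Just to put numbers on that expectation (assuming next really were a single pointer hop): reading all 20000 entries by incremental index should cost on the order of

\[
\sum_{k=1}^{20000} k \;=\; \frac{20000 \cdot 20001}{2} \;\approx\; 2\times 10^{8}
\]

pointer hops, while 20000 next messages should cost only about 2×10^4 hops, roughly four orders of magnitude less, before any message-decoding overhead is counted.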

Later, Bas.


November 16, 2007 | 12:04 am


November 21, 2007 | 10:46 am

I haven't done a great deal of speed testing, but I know that in another tool we use a single line in (a number of) colls for every frame we render, and we try to avoid rendering compositions much longer than 5 minutes (which is only 300×50 = 6k lines, maybe 20 colls total I think). It appears the thing really slows down towards the end, which makes us believe the thing follows the pointers from the beginning of the coll.

But i would indeed be interested in more info about this, sometime…


November 21, 2007 | 10:06 pm

Coll is incrementally slower at looking up higher indices. The size doesn’t matter, it’s the index position:

#P button 262 55 15 0;
#P window setfont "Sans Serif" 9.;
#P window linecount 1;
#P message 262 126 14 196617 1;
#P number 326 127 69 9 0 0 0 3 0 0 0 221 221 221 222 222 222 0 0 0;
#P newex 262 83 74 196617 b 2;
#P newex 285 125 33 196617 delay;
#P newex 326 104 34 196617 timer;
#P newex 262 103 56 196617 uzi 99999;
#P button 103 56 15 0;
#P newex 103 238 27 196617 i;
#P message 103 128 20 196617 99;
#P number 167 128 69 9 0 0 0 3 0 0 0 221 221 221 222 222 222 0 0 0;
#N coll ;
#T flags 1 0;
#T 1 1;
#T 2 2;
#T 3 3;
#T 4 4;
#T 5 5;
#T 6 6;
#T 7 7;
#T 8 8;
#T 9 9;
#T 10 10;
#T 11 11;
#T 12 12;
#T 13 13;
#T 14 14;
#T 15 15;
#T 16 16;
#T 17 17;
#T 18 18;
#T 19 19;
#T 20 20;
#T 21 21;
#T 22 22;
#T 23 23;
#T 24 24;
#T 25 25;
#T 26 26;
#T 27 27;
#T 28 28;
#T 29 29;
#T 30 30;
#T 31 31;
#T 32 32;
#T 33 33;
#T 34 34;
#T 35 35;
#T 36 36;
#T 37 37;
#T 38 38;
#T 39 39;
#T 40 40;
#T 41 41;
#T 42 42;
#T 43 43;
#T 44 44;
#T 45 45;
#T 46 46;
#T 47 47;
#T 48 48;
#T 49 49;
#T 50 50;
#T 51 51;
#T 52 52;
#T 53 53;
#T 54 54;
#T 55 55;
#T 56 56;
#T 57 57;
#T 58 58;
#T 59 59;
#T 60 60;
#T 61 61;
#T 62 62;
#T 63 63;
#T 64 64;
#T 65 65;
#T 66 66;
#T 67 67;
#T 68 68;
#T 69 69;
#T 70 70;
#T 71 71;
#T 72 72;
#T 73 73;
#T 74 74;
#T 75 75;
#T 76 76;
#T 77 77;
#T 78 78;
#T 79 79;
#T 80 80;
#T 81 81;
#T 82 82;
#T 83 83;
#T 84 84;
#T 85 85;
#T 86 86;
#T 87 87;
#T 88 88;
#T 89 89;
#T 90 90;
#T 91 91;
#T 92 92;
#T 93 93;
#T 94 94;
#T 95 95;
#T 96 96;
#T 97 97;
#T 98 98;
#T 99 99;
#T 100 100;
#P newobj 103 211 53 196617 coll;
#P newex 103 84 74 196617 b 2;
#P newex 126 126 33 196617 delay;
#P newex 167 105 34 196617 timer;
#P newex 103 104 56 196617 uzi 99999;
#P connect 9 0 14 0;
#P connect 9 1 11 0;
#P connect 12 0 9 0;
#P connect 3 0 0 0;
#P connect 0 1 2 0;
#P connect 0 0 6 0;
#P connect 14 0 4 0;
#P connect 11 0 10 1;
#P connect 12 1 10 0;
#P connect 10 0 13 0;
#P connect 15 0 12 0;
#P connect 8 0 3 0;
#P connect 4 0 7 0;
#P connect 6 0 4 0;
#P connect 1 0 5 0;
#P connect 3 1 1 0;
#P connect 2 0 1 1;
#P window clipboard copycount 16;


November 21, 2007 | 10:22 pm

Bas van der Graaff schrieb:
> It appears the thing really slows down towards the end, which makes
> us believe the thing follows the pointers from the beginning of the
> coll.

I hacked together a little test, and I can confirm that it's slower towards the end. Not really slow, but significantly so on my machine.

Up to about 1000 entries it's not measurable; at 20000 it needs 6 or 7 milliseconds… no matter what's in the coll as data (symbols or numbers).

No random access, it seems…

I tested a buffer~ as well. That seems to be the random access to go for… (You might need several of them, but who cares if it's fast… ;-)

Stefan

#P window setfont "Sans Serif" 9.;
#P window linecount 1;
#P message 100 319 76 196617 6;
#P newex 100 296 62 196617 prepend set;
#P number 80 219 50 9 0 0 0 3 0 0 0 221 221 221 222 222 222 0 0 0;
#P number 51 296 39 9 0 0 0 3 0 0 0 221 221 221 222 222 222 0 0 0;
#P newex 80 242 50 196617 t b i b;
#P newex 51 274 39 196617 timer;
#P newex 100 274 88 196617 index~ testbuffer;
#P newex 523 309 89 196617 peek~ testbuffer;
#P newex 564 81 114 196617 buffer~ testbuffer 500;
#P user umenu 37 60 100 196647 1 64 76 1;
#X add refer symboltest;
#X add refer numbertest;
#P newex 459 176 70 196617 random 1000;
#P newex 432 277 37 196617 zl join;
#P newex 432 116 37 196617 t i b;
#N coll numbertest 1;
#P newobj 432 309 88 196617 coll numbertest 1;
#P message 96 191 76 196617 347;
#P newex 96 168 62 196617 prepend set;
#P number 76 91 50 9 0 0 0 3 0 0 0 221 221 221 222 222 222 0 0 0;
#P number 47 168 39 9 0 0 0 3 0 0 0 221 221 221 222 222 222 0 0 0;
#P newex 76 114 50 196617 t b i b;
#P newex 47 146 39 196617 timer;
#P newex 289 260 50 196617 itoa;
#N coll numbertest 1;
#P newobj 96 146 88 196617 coll symboltest 1;
#P message 222 70 50 196617 20000;
#P newex 316 210 98 196617 +;
#P newex 404 186 29 196617 t 97;
#P newex 372 186 29 196617 t 65;
#P newex 372 164 42 196617 sel 1;
#P newex 316 164 54 196617 random 25;
#P newex 289 236 50 196617 zl group;
#P newex 262 285 37 196617 zl join;
#P newex 262 116 64 196617 t i b b;
#P newex 316 139 66 196617 uzi 5;
#P newex 222 91 50 196617 uzi;
#N coll symboltest 1;
#P newobj 262 309 86 196617 coll symboltest 1;
#P fasten 15 2 14 0 121 139 52 139;
#P connect 14 0 16 0;
#P fasten 29 2 28 0 125 267 56 267;
#P connect 28 0 30 0;
#P connect 17 0 15 0;
#P connect 15 0 14 1;
#P fasten 24 1 12 0 132 142 101 142;
#P connect 15 1 12 0;
#P connect 12 0 18 0;
#P connect 18 0 19 0;
#P connect 31 0 29 0;
#P connect 29 0 28 1;
#P connect 29 1 27 0;
#P connect 27 0 32 0;
#P connect 32 0 33 0;
#P connect 11 0 1 0;
#P connect 1 2 3 0;
#P connect 3 0 4 0;
#P connect 4 0 0 0;
#P connect 10 0 5 0;
#P connect 3 1 5 0;
#P connect 5 0 13 0;
#P connect 13 0 4 1;
#P connect 3 2 2 0;
#P connect 2 0 6 0;
#P connect 6 0 10 0;
#P connect 2 2 7 0;
#P connect 7 0 8 0;
#P connect 7 1 9 0;
#P connect 9 0 10 1;
#P connect 8 0 10 1;
#P fasten 1 2 21 0 267 111 437 111;
#P connect 21 0 22 0;
#P connect 22 0 20 0;
#P connect 21 1 23 0;
#P connect 23 0 22 1;
#P fasten 22 0 26 0 437 301 528 301;
#P window clipboard copycount 34;


Stefan Tiedje————x——-
–_____———–|————–
–(_|_ —-|—–|—–()——-
– _|_)—-|—–()————–
———-()——–www.ccmix.com


November 22, 2007 | 10:48 am

Quote: johnpitcairn wrote on Wed, 21 November 2007 23:06
—————————————————-
> Coll is incrementally slower at looking up higher indices. The size doesn’t matter, it’s the index position:
>

That's right, and it makes sense, because coll is simply a linked list that you run through until you reach the index you need.

BUT the 'next' command should only go to the next entry of the linked list, so running through the coll with 'next' should be much faster than entering incremental indices. This is not the case, as you can see in my second message in this thread (#120901), which is very weird.

Mattijs


November 22, 2007 | 2:13 pm

Mattijs Kneppers schrieb:
> BUT the 'next' command should only go to the next entry of the
> linked list, so running through the coll with 'next' should be much
> faster than entering incremental indices. This is not the case, as
> you can see in my second message in this thread (#120901), which is
> very weird.

It seems to be a bad test, if I test it with the patch I posted
recently, a next or bang in the range of 19000 (after a slow goto) is as
fast as a recall in the beginning of the coll…

You measured more of the overhead produced by message decoding than the
actual access…

Stefan


Stefan Tiedje————x——-
–_____———–|————–
–(_|_ —-|—–|—–()——-
– _|_)—-|—–()————–
———-()——–www.ccmix.com


November 22, 2007 | 5:04 pm

Stefan Tiedje schrieb:
> It seems to be a bad test, if I test it with the patch I posted
> recently, a next or bang in the range of 19000 (after a slow goto) is as
> fast as a recall in the beginning of the coll…

I have to correct myself; I made a wrong connection in my test. Next is in the higher range as slow as a direct access…
I'd switch to buffer~/peek~/index~; that seems reasonably fast.
(But it would be tricky if you want to store symbols… ;-)
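
The reason buffer~ wins here is simply that it is array-backed: a lookup is one multiply-and-add instead of a pointer walk. A generic C sketch of that difference (not the actual buffer~ implementation; the peek/poke names are just borrowed for illustration):

#include <stdio.h>

#define N 20000

/* Array-backed storage, roughly what buffer~ gives you for numbers:
   index -> address is one multiply-and-add, so entry 19999 costs the
   same to reach as entry 0. */
static float store[N];

float peek(int index)              { return store[index]; }
void  poke(int index, float value) { store[index] = value; }

int main(void) {
    poke(19999, 3.14f);
    printf("%f\n", peek(19999));   /* constant time, no traversal */
    return 0;
}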

Stefan


Stefan Tiedje————x——-
–_____———–|————–
–(_|_ —-|—–|—–()——-
– _|_)—-|—–()————–
———-()——–www.ccmix.com


November 22, 2007 | 7:36 pm

Quote: Stefan Tiedje wrote on Thu, 22 November 2007 18:04
—————————————————-

> Next is
> in the higher range as slow as a direct access…

Cycling '74, could you comment on this? Does this mean there is no way in Max to walk through a linked list in a proper (efficient) way except for the coll 'dump' message?

Mattijs


November 23, 2007 | 11:07 am

It seems, given how many times the performance of this particular object has come up and how useful coll is, that a version of coll (coll~?) that worked the same way as the current coll but offered better performance for random access of large data sets would be greatly appreciated by many people. I don't know how easy that would be to do, i.e. how tied the functionality is to the linked-list nature that seems to be the source of the performance limitations.

(Alternatives: buffer~ can only hold four numbers per index, ftm’s mat is buggy in my experience, Larray and Lmatrix aren’t functional replacements (nor intended to be), jit.matrix requires, well, Jitter, which I don’t use…)
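
For what it's worth, one standard way such a coll~ could offer both ordered traversal and fast random access, sketched in C purely as speculation (this has nothing to do with how coll is actually implemented): keep the linked list, and maintain an array of node pointers as a position index that is rebuilt when the list changes.

#include <stdio.h>
#include <stdlib.h>

typedef struct node {
    int index;
    struct node *next;
} node;

typedef struct {
    node  *head;    /* linked list: ordered traversal, insert, next/prev */
    node **by_pos;  /* array of node pointers: O(1) access by position */
    int    count;
} hybrid;

/* Rebuild the position index after the list has changed:
   O(n) once, instead of O(n) for every lookup. */
static int reindex(hybrid *h) {
    free(h->by_pos);
    h->by_pos = malloc(sizeof(node *) * (size_t)h->count);
    if (!h->by_pos) return -1;
    int i = 0;
    for (node *n = h->head; n && i < h->count; n = n->next)
        h->by_pos[i++] = n;
    return 0;
}

/* Constant-time lookup by position once the index exists. */
static node *nth(const hybrid *h, int pos) {
    return (pos >= 0 && pos < h->count) ? h->by_pos[pos] : NULL;
}

int main(void) {
    node c = {3, NULL}, b = {2, &c}, a = {1, &b};
    hybrid h = {&a, NULL, 3};
    if (reindex(&h) == 0)
        printf("%d\n", nth(&h, 2)->index);  /* 3, without walking the list */
    free(h.by_pos);
    return 0;
}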


November 23, 2007 | 11:55 am

For Stefan’s numbertest coll, a table would be more efficient.

In general for colls in the form

0, 42;
1, 27;
3, 38;
.
.
.

table is the more appropriate and efficient option. Coll’s forte is for symbol and list data storage.

I almost forgot to point out that there *is* an efficient alternative to coll for managing large lists of arbitrary data sets (ints, floats, symbols, lists). It’s called lattice and is part of iCE Tools. Might be interesting for Chase and Mattijs.

<http://www.dspaudio.com/software/ice/ice_overview.html>


November 24, 2007 | 2:26 am

Quote: Peter Castine wrote on Sat, 24 November 2007 00:55
—————————————————-
> For Stefan’s numbertest coll, a table would be more efficient.

Or funbuff, which is actually somewhat more efficient than table.


November 25, 2007 | 12:05 am

Quote: Peter Castine wrote on Fri, 23 November 2007 12:55
—————————————————-
> I almost forgot to point out that there *is* an efficient alternative to coll for managing large lists of arbitrary data sets (ints, floats, symbols, lists). It’s called lattice and is part of iCE Tools. Might be interesting for Chase and Mattijs.

That's interesting. But lattice is a UI object, no? Unfortunately, user-interface updates are still in the same thread as, for example, Jitter operations (the low-priority queue), and thus have a significant impact on frame rates.

Mattijs


November 25, 2007 | 7:49 pm

Screen updates are in the low priority queue. Processing of bangs is in high priority.

This is like the issue about table updates we had a week or so ago. If you use your ears, these objects are *fast*. It’s only the eye candy that’s slow. Lattice may process a few hundred bangs between screen updates, but the data *does* get processed as fast as your CPU can handle it.

Jitter is a different story.


November 25, 2007 | 10:11 pm

Quote: Peter Castine wrote on Sun, 25 November 2007 20:49
—————————————————-
> Processing of bangs is in high priority.

Uhm, I’m sure you know that that depends on what generates the bangs.

But that was not my point. I assume your objects are properly coded and will process high-priority events correctly, but when I store data in lattice, eventually lattice will want to do a screen update. This will be done on the one processor available to the low-priority queue, which will cost me frame rate.

Of course I would be perfectly willing to sacrifice frame rate if I were actually using lattice's user interface. But I don't need an interface; I only need the linked list.

Best,
Mattijs

