coll simple 'output current' option

Mattijs:

Hi all,

If I'm stepping through a coll with 'next', how can I output the entry I've reached -without- jumping to the next entry? I would expect a bang to do this, but bang seems to behave just like 'next'.

Just making sure I didn't miss something terribly obvious..

Thanks,
Mattijs

Stefan Tiedje:

Bas van der Graaff:

That works, but requesting an index from coll requires searching through the linked list, which can be slow for bigger colls.

In this case, we'd like to address the current item directly. A 'current' message would seem like a logical addition to prev and next. For now we have to store the output ourselves so we can re-access it.
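To make the cost difference concrete, here's a minimal C sketch. This is not coll's actual source, just my assumption of how a linked list with a cached cursor behaves: goto-by-index walks from the head, next follows one pointer, and a hypothetical 'current' message wouldn't traverse at all.

/* Minimal sketch, assuming coll-like storage is a singly linked list
 * with a cached cursor. Not coll's real source. */
#include <stdio.h>

typedef struct node {
    int value;
    struct node *next;
} node;

typedef struct {
    node *head;
    node *cur;                /* what a 'current' message could read directly */
} list;

/* indexed access: O(index) -- walk from the head every time */
node *list_goto(list *l, int index) {
    node *n = l->head;
    while (n && index-- > 0)
        n = n->next;
    return l->cur = n;
}

/* 'next': O(1) -- follow a single pointer */
node *list_next(list *l) {
    if (l->cur)
        l->cur = l->cur->next;
    return l->cur;
}

/* hypothetical 'current': O(1) -- no traversal, no pointer move */
node *list_current(list *l) {
    return l->cur;
}

int main(void) {
    node c = {3, NULL}, b = {2, &c}, a = {1, &b};
    list l = {&a, &a};
    list_next(&l);                                     /* step to entry 2 */
    printf("current: %d\n", list_current(&l)->value);  /* 2, cursor unmoved */
    printf("current: %d\n", list_current(&l)->value);  /* still 2 */
    return 0;
}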

The patch below would be a workaround, but doesn't win a beauty prize:

Max Patch
Copy patch and select New From Clipboard in Max.

Christopher Dobrian:

Mattijs,

Here are two possible workarounds, slightly different. The first one ignores the current location of the "goto" pointer, but keeps track of the "most recently output" address, so that you can output that item as many times as you want. The second one uses the current location of the "goto" pointer, then immediately resets the pointer to where it was.

--Chris

-----

Solution 1 (most recent):

Max Patch
Copy patch and select New From Clipboard in Max.

-----

Solution 2 (current):

Max Patch
Copy patch and select New From Clipboard in Max.

Mattijs:

Hi Christopher,

Thanks a lot for your elaborate reply.

From what I see in the code of coll, some messages should be much faster than others. 'next' does nothing but look at the current entry and follow the pointer to the next element in the linked list, which should be very fast. Inputting an index, on the other hand, runs through all the entries from the top until it reaches the required one.

BUT now that I'm trying this, the results don't make sense (see patch below).

Is there a guru on this forum who can explain why dump is so much faster than uzi->next and why uzi->next is not much faster than uzi->index?

Best,
Mattijs

Max Patch
Copy patch and select New From Clipboard in Max.

Bas van der Graaff:

Thanks, Christopher, for your solutions.

However, I'm afraid both of them still use an index to access or re-access the data. We're using colls that are quite large in real-time situations, and we simply don't want to spend CPU time searching through a linked list (http://en.wikipedia.org/wiki/Linked_list) when I presume there could well be a pointer to the current node.

I don't remember whether Mattijs used incremental integer indexes for this particular thing, but if so, the only thing we'd need would be an option to turn coll into an array.... j/k :-)

Jakob Riis:

> BUT now that I'm trying this, the results don't make sense (see patch below).
>
> Is there a guru on this forum who can explain why dump is so much faster
> than uzi->next and why uzi->next is not much faster than uzi->index?

The connections are wrong; you need to use the 3rd outlet of uzi to get the index.
It seems index is faster than next, and bang is slightly faster than next.
Still, I can't tell why dump is so much faster.
But my suggestion would be to use dump and do the filtering after the coll.
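Conceptually, the filtering stage would look something like this C sketch. The wanted() predicate, the pair layout, and the index range are all invented for illustration; in the patch this stage would simply sit between coll's outlet and the rest.

/* Sketch of "dump, then filter downstream": the coll pours out every
 * (index, value) pair and we keep only the wanted ones. */
#include <stdio.h>

typedef struct { int index; int value; } pair;

/* keep only the 2 or 3 indices we care about (range is made up) */
static int wanted(int index) {
    return index >= 100 && index <= 102;
}

static void on_dump_output(pair p) {   /* called once per dumped entry */
    if (wanted(p.index))
        printf("%d: %d\n", p.index, p.value);
}

int main(void) {
    pair data[] = { {99, 7}, {100, 42}, {101, 27}, {102, 38}, {103, 5} };
    for (int i = 0; i < 5; i++)        /* stands in for the dump stream */
        on_dump_output(data[i]);
    return 0;
}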

New version of patch here:

Max Patch
Copy patch and select New From Clipboard in Max.

__________________________________

Jakob Riis
jr.abstractions for MaxMSP
http://www.sonicescape.net/maxmsp/
__________________________________

Mattijs:

Quote: Jakob Riis wrote on Tue, 13 November 2007 12:38
----------------------------------------------------

> The connections are wrong; you need to use the 3rd outlet of uzi to get the index.

Oops. Thanks.

> It seems index is faster than next, and bang is slightly faster than next.
> Still, I can't tell why dump is so much faster.
> But my suggestion would be to use dump and do the filtering after the coll.

Yeah, I'll see if I can modify my patch to use dump. I only need to output 2 or 3 entries at a time, though, from a possibly long coll, and I need to do this every 2 milliseconds, so every performance upgrade is welcome. If only there were a 'stopdump' message...

Anyhow, the fact that next is even slower than index can be considered a bug, right?

Mattijs

vade:

Random idea: use a char matrix with jit.str, and use getcell for the data? I have not been actively following the discussion (apologies for butting in!), and it might take some time to build a wrapper, but it might perform a bit better.

I'm not sure what type of data the coll will contain, but it might be worth a shot.
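The gist, in a rough C sketch: fixed-width rows in one flat allocation, so fetching row i is pointer arithmetic rather than a list walk. The row width and the getcell() name here are invented for illustration; a Jitter char matrix would play the role of the flat block.

/* Fixed-width rows in one flat block: getcell(i) is O(1) arithmetic. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define ROW_WIDTH 64

static char *cells;

static char *getcell(int row) {           /* O(1), no traversal */
    return cells + (size_t)row * ROW_WIDTH;
}

int main(void) {
    int rows = 20000;
    cells = calloc((size_t)rows, ROW_WIDTH);
    if (!cells) return 1;
    strcpy(getcell(19999), "60 100 480"); /* store a row near the end */
    printf("%s\n", getcell(19999));       /* fetch it: same cost as row 0 */
    free(cells);
    return 0;
}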


Gary Lee Nelson:

I don't seem to have a jit.str. There are several others; the closest is jit.str.op, but I don't see a getcell message.

Cheers
Gary Lee Nelson
Oberlin College
www.timara.oberlin.edu/GaryLeeNelson

vade:

I meant something like:

Max Patch
Copy patch and select New From Clipboard in Max.


barry threw:

Some version of the coll source is given out with the SDK, under example-externs.

Add your method.
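For instance, a 'current' method might look something like this. A hypothetical sketch only: the struct and member names below are invented, so match them to whatever the example source actually defines, and the message binding is old-SDK style.

/* Hypothetical 'current' method for the SDK's example coll source.
 * t_coll members (c_cur, c_out) and t_elem are invented names. */
void coll_current(t_coll *x)
{
    t_elem *e = x->c_cur;                 /* node where next/prev stopped */
    if (e)
        outlet_int(x->c_out, e->e_index); /* output it without advancing */
}

/* bound in main() alongside the existing messages: */
addmess((method)coll_current, "current", 0);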

b


Barry Threw
Media Art and Technology

San Francisco, CA    Work: 857-544-3967
Email: bthrew@gmail.com
IM: captogreadmore (AIM)
http://www.barrythrew.com

Mattijs:

Quote: barry threw wrote on Tue, 13 November 2007 22:31
----------------------------------------------------
> Some version of the coll source is given out with the SDK under
> example-externs.
>
> Add your method.
>

I considered that. But the version in the SDK is an old one; coll has been updated since then.

Of course I could roll my own buffer/sync external altogether (that's what I'm after, ultimately), but I was hoping to save time and do it with Max-native objects, which one would expect to be possible...

Mattijs

Mattijs:

Quote: vade wrote on Tue, 13 November 2007 20:52
----------------------------------------------------
> I meant something like:

That's an interesting approach, vade. In my case though, performance is the primary concern. I'll post an example of what I'm working on tomorrow, so that all you generous participants of this thread can see why I'm so keen on this.

Mattijs

vade:

Hm. I did not give it much time/thought today, but 'dumping' the whole coll into a large enough matrix, then getting the cell(index) you want, might improve your performance. That's what I was hinting at.

hth !


Mattijs:

Here is what I'm working on:

This is a pluggo:

Max Patch
Copy patch and select New From Clipboard in Max.

And this is the receiving end:

Max Patch
Copy patch and select New From Clipboard in Max.

This is probably too many objects to even start looking at, but well, just in case you're interested...

This is the basic version. I dump the entire coll every 2 ms. It should be possible to do this more efficiently, knowing that coll is just a linked list that I could step through a few steps at a time with 'next', only when needed.
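Roughly what I have in mind, as a C sketch. The time field, emit(), and the tick rate are stand-ins for my own data, not anything coll provides.

/* Sketch: keep a cursor and advance it only as far as the clock needs,
 * instead of dumping all entries every tick. Fields are illustrative. */
#include <stdio.h>

typedef struct ev { int time; int value; struct ev *next; } ev;

static void emit(const ev *e) { printf("%d -> %d\n", e->time, e->value); }

/* called every 2 ms with the current clock; cost is O(entries due), not O(N) */
static const ev *tick(const ev *cur, int now) {
    while (cur && cur->time <= now) {
        emit(cur);
        cur = cur->next;       /* a few O(1) 'next' steps per tick */
    }
    return cur;
}

int main(void) {
    ev c = {6, 38, NULL}, b = {4, 27, &c}, a = {0, 42, &b};
    const ev *cur = &a;
    for (int now = 0; now <= 6; now += 2)   /* four 2 ms ticks */
        cur = tick(cur, now);
    return 0;
}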

Mattijs

Stefan Tiedje:

Mattijs Kneppers wrote:
> Is there a guru on this forum who can explain why dump is so much
> faster than uzi->next and why uzi->next is not much faster than
> uzi->index?

I am not a guru, but why would you expect message handling to have no effect on the overhead of a patch?

It's easy to explain why this result is the one to expect. Even though I always prefer abstractions over externals, I know that externals are way faster at what they do. A dump in particular is handled internally with the highest possible optimisation: the object only has to decode a single message. Using next, bang, or numbers means decoding and routing 20000 messages to their places...
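A self-contained toy benchmark of that claim. The strcmp dispatcher merely stands in for Max's message decoding, and real Max adds far more per-message overhead than a string compare, so the timings are illustrative only.

/* Toy benchmark: one "dump" dispatch vs 20000 "next" dispatches. */
#include <stdio.h>
#include <string.h>
#include <time.h>

typedef struct node { int v; struct node *next; } node;

static node *head, *cur;
static volatile int sink;                 /* keeps the loops from vanishing */

static void handle_next(void) { if (cur) { sink = cur->v; cur = cur->next; } }
static void handle_dump(void) { for (node *n = head; n; n = n->next) sink = n->v; }

static void dispatch(const char *msg) {   /* decode + route, per message */
    if      (strcmp(msg, "next") == 0) handle_next();
    else if (strcmp(msg, "dump") == 0) handle_dump();
}

int main(void) {
    enum { N = 20000 };
    static node pool[N];
    for (int i = 0; i < N; i++) {
        pool[i].v = i;
        pool[i].next = (i + 1 < N) ? &pool[i + 1] : NULL;
    }
    head = cur = pool;

    clock_t t0 = clock();
    for (int i = 0; i < N; i++) dispatch("next");  /* N decoded messages */
    clock_t t1 = clock();
    dispatch("dump");                              /* 1 decoded message */
    clock_t t2 = clock();

    printf("next x %d: %.3f ms\n", N, 1000.0 * (t1 - t0) / CLOCKS_PER_SEC);
    printf("dump:      %.3f ms\n", 1000.0 * (t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}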

Your computer seems to be 10 times faster than mine on the patch you thought was faster (it needed more than 25 seconds here). The dump needed only 3 times more here (150 ms)...

It seems thinking about efficiency is more dangerous if you have a
faster machine... ;-)

Mattijs Kneppers wrote:
> Anyhow, the fact that next is even slower than index can be
> considered a bug, right?

Decoding a string will always be slower than decoding a number...
I'd call it expected behaviour...

Mattijs Kneppers wrote:
> Here is what I'm working on:

As you deal only with numbers, you could dump the numbers into several buffer~s and access them with peek~/poke~.

I don't know why you would want to dump the whole coll every 2 ms; that doesn't make sense to me. You should always know when something changed, and could just take that info...

I also don't know why the existing next message would be a problem, including the workarounds for a current output. It looks like you are dealing with MIDI, and MIDI is comparatively slow...

It could even be feasible to record your data into a soundfile and access that from your clients... just a thought...

Stefan

--
Stefan Tiedje------------x-------
--_____-----------|--------------
--(_|_ ----|-----|-----()-------
-- _|_)----|-----()--------------
----------()--------www.ccmix.com

Bas van der Graaff:

Hi Stefan,

'Decoding a string will always be slower than decoding a number... I'd call it expected behaviour...'

Could you explain this a little more? It could well be true that decoding 'next' takes longer than decoding an int (even though both are 4 bytes, haha), but surely decoding 'next' doesn't take longer than following 10k pointers on average? Or am I missing some of the functionality of the linked list?

Later, Bas.

Bas van der Graaff:

I haven't done a great deal of speed testing, but I know that in another tool we use a single line in (a number of) colls for every frame we render, and we try to avoid rendering compositions much longer than 5 minutes (which is only 300x50 = 6k lines, maybe 20 colls total, I think). It appears the thing really slows down towards the end, which makes us believe it follows the pointers from the beginning of the coll.

But I would indeed be interested in more info about this, sometime...

johnpitcairn:

Coll is incrementally slower at looking up higher indices. The size doesn't matter, it's the index position:

Max Patch
Copy patch and select New From Clipboard in Max.
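A quick back-of-the-envelope check of what that costs (assuming every indexed lookup really does walk from the head): fetching index i takes about i pointer hops, so reading indices 1..n in sequence costs roughly n(n-1)/2 hops, on the order of 2 x 10^8 for n = 20000, while a single dump is only n hops. That ratio is consistent with the multi-second uzi timings and the ~150 ms dump reported earlier in the thread.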

Stefan Tiedje:

Bas van der Graaff wrote:
> It appears the thing really slows down towards the end, which makes
> us believe the thing follows the pointers from the beginning of the
> coll.

I hacked together a little test, and I can confirm that it's slower towards the end. Not really slow, but significantly so on my machine.

Up to about 1000 entries it's not measurable; at 20000 it needs 6 or 7 milliseconds... no matter what's in the coll as data (symbols or numbers).

No random access it seems...

I tested a buffer~ as well. That seems to be the random access to go for... (You might need several of them, but who cares, if it's fast... ;-)

Stefan

Max Patch
Copy patch and select New From Clipboard in Max.


Mattijs:

Quote: johnpitcairn wrote on Wed, 21 November 2007 23:06
----------------------------------------------------
> Coll is incrementally slower at looking up higher indices. The size doesn't matter, it's the index position:
>

That's right, and it makes sense, because coll is simply a linked list that you run through until you reach the index you need.

BUT the 'next' command should only go to the next entry of the linked list, so running through the coll with 'next' should be much faster than entering incremental indices. This is not the case, as you can see in my second message in this thread (#120901), which is very weird.

Mattijs

Stefan Tiedje:

Mattijs Kneppers wrote:
> BUT the 'next' command should only go to the next entry of the
> linked list, so running through the coll with 'next' should be much
> faster than entering incremental indices. This is not the case, as
> you can see in my second message in this thread (#120901), which is
> very weird.

It seems to be a bad test. If I test it with the patch I posted recently, a next or bang in the range of 19000 (after a slow goto) is as fast as a recall at the beginning of the coll...

You measured more of the overhead produced by message decoding than the
actual access...

Stefan


Stefan Tiedje:

Stefan Tiedje wrote:
> It seems to be a bad test. If I test it with the patch I posted
> recently, a next or bang in the range of 19000 (after a slow goto) is
> as fast as a recall at the beginning of the coll...

I have to correct myself: I made a wrong connection in my test. Next is, in the higher range, as slow as a direct access...
I'd switch to buffer~/peek~/index~; that seems reasonably fast.
(But it would be tricky if you want to store symbols... ;-)

Stefan


Mattijs:

Quote: Stefan Tiedje wrote on Thu, 22 November 2007 18:04
----------------------------------------------------

> Next is, in the higher range, as slow as a direct access...

Cycling 74, could you comment on this? Does this mean there is no way in Max to walk through a linked list in a proper (efficient) way, except for the coll 'dump' message?

Mattijs

chase:

Given how many times the performance of this one particular object has come up, and how useful the coll object is, a version of coll (coll~?) that worked the same way as the current coll but offered better performance for random access of large datasets would be greatly appreciated by many people. I don't know how easy that would be to do, i.e. how tied the functionality is to the linked-list nature that seems to be the source of the performance limitations.

(Alternatives: buffer~ can only hold four numbers per index, ftm's mat is buggy in my experience, Larray and Lmatrix aren't functional replacements (nor intended to be), jit.matrix requires, well, Jitter, which I don't use...)

Peter Castine:

For Stefan's numbertest coll, a table would be more efficient.

In general for colls in the form

0, 42;
1, 27;
3, 38;
...

table is the more appropriate and efficient option. Coll's forte is symbol and list data storage.

I almost forgot to point out that there *is* an efficient alternative to coll for managing large lists of arbitrary data sets (ints, floats, symbols, lists). It's called lattice and is part of iCE Tools. Might be interesting for Chase and Mattijs.

johnpitcairn:

Quote: Peter Castine wrote on Sat, 24 November 2007 00:55
----------------------------------------------------
> For Stefan's numbertest coll, a table would be more efficient.

Or funbuff, which is actually somewhat more efficient than table.

Mattijs:

Quote: Peter Castine wrote on Fri, 23 November 2007 12:55
----------------------------------------------------
> I almost forgot to point out that there *is* an efficient alternative to coll for managing large lists of arbitrary data sets (ints, floats, symbols, lists). It's called lattice and is part of iCE Tools. Might be interesting for Chase and Mattijs.

That's interesting. But lattice is a UI object, no? Unfortunately, user interface updates are still in the same thread as, e.g., Jitter operations (the low-priority queue), and thus have a significant impact on frame rates...

Mattijs

Peter Castine:

Screen updates are in the low priority queue. Processing of bangs is in high priority.

This is like the issue about table updates we had a week or so ago. If you use your ears, these objects are *fast*. It's only the eye candy that's slow. Lattice may process a few hundred bangs between screen updates, but the data *does* get processed as fast as your CPU can handle it.

Jitter is a different story.

Mattijs:

Quote: Peter Castine wrote on Sun, 25 November 2007 20:49
----------------------------------------------------
> Processing of bangs is in high priority.

Uhm, I'm sure you know that that depends on what generates the bangs.

But that was not my point. I assume your objects are properly coded and will process high-priority events correctly, but when I store data in lattice, eventually lattice will want to do a screen update. That happens on the one processor available to the low-priority queue, which will cost me frame rate.

Of course I'd be perfectly willing to sacrifice frame rate if I were actually using lattice's user interface. But I don't need an interface; I only need the linked list.

Best,
Mattijs