Gen~ Data operator length

jonbenderr:

Hello!

I am currently working with the Electrosmith Daisy Patch Eurorack module, trying to make a sample recorder/playback patcher strictly in gen~. The catch is that I cannot reference a buffer~ outside of the gen~ patch; everything needs to be contained within it.

So when sampling, it appears I am limited to using the data operator, and that the data operator MUST have a size/length set on it to be able to poke/record into it beyond the default 512 samples, correct?

So is there any way to set the length of a data operator dynamically, without using a buffer~ object?

I suppose I could set the size of the data operator longer than I would typically require, and then do some operations to play back just the portion the actual sample occupies?

Just curious if anyone has any input. I apologize if any of this is a bit unclear. I'm just starting to dip my toe in the gen~ water and dealing with limitations of the hardware I am trying to program.

Thanks!

Carlton flores:

Actually, I was also looking for the same thing.

Max Gardener:

You cannot resize a data operator interactively. Instead, set a size larger than you expect to need, keep track of the number of samples yourself (remember, you'll be running at 48 kHz on your Daisy) independently of what the data operator reports, and truncate your reads accordingly.
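For what it's worth, here is a minimal codebox sketch of that approach (my own illustration, not from this thread; the 10-second capacity and names like `buf` and `writepos` are arbitrary):

```
// Pre-allocate a fixed worst-case size; track the used length yourself.
Data buf(480000);      // 10 s at 48 kHz, fixed when the patch is compiled
History writepos(0);   // next write index
History reclen(1);     // number of samples actually recorded
Param record(0);       // 1 = record, 0 = play

if (record > 0) {
	if (writepos < dim(buf)) {
		poke(buf, in1, writepos);   // write incoming audio
		writepos += 1;
		reclen = writepos;          // remember how much was used
	}
} else {
	writepos = 0;   // rearm; reclen keeps the last take's length
}

// in2 = a 0..1 phase (e.g. from a phasor); read only the recorded region
out1 = peek(buf, in2 * reclen);
```

Playback only ever scans the first `reclen` samples, so the unused tail of the pre-allocated data never matters.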

marde riasi:

I am making a MIDI WAV player on the Daisy Seed. I have the same issue:
https://cycling74.com/forums/using-a-variableparam-to-specify-sample-length-on-data

Max Gardener wrote:
> You cannot resize a data operator interactively. Instead, set a size larger than you expect to need, keep track of the number of samples yourself (remember, you'll be running at 48 kHz on your Daisy) independently of what the data operator reports, and truncate your reads accordingly.

Setting a higher value would not be efficient. It is not necessary to resize it interactively every time, but you at least need to be able to "reset" the patch and change the [data] sizes from an external value (the length of the files you are trying to read), so you don't have to enter the number by hand.
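One workaround within that constraint (a sketch under my own assumptions, not something from this thread): keep the data allocated at its maximum size and expose the effective length as a param, which your loading code sets whenever a file is read, instead of editing the patch by hand. `filelen` and the 60-second ceiling are hypothetical:

```
// Fixed allocation; the usable length is set from outside via a param.
Data filebuf(2880000);            // worst case: 60 s at 48 kHz, allocated once
Param filelen(48000, min=1);      // effective length in samples, set externally

len = min(filelen, dim(filebuf)); // never read past what is allocated
out1 = peek(filebuf, in1 * len);  // in1 = 0..1 phase from a phasor
```

The allocation itself never changes; only the region you read from does, which is the same pre-allocate-and-truncate scheme described above.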

jonbenderr:

Hi guys! So I actually managed to figure out on my own what Max Gardener suggested above, and created a simple sampler/looper/slicer.

The download also includes the gen~ patch, so you can get a better sense of how everything works.

Worth noting: this IS for the Daisy Patch. With the included gen~ stuff, you should be able to translate it to other Daisy devices rather easily.

Graham Wakefield:

Pre-allocating a larger size is the right way to go.

Changing the size of the data dynamically would mean bringing a whole smart memory allocator into the library. That would add overhead, eat into the precious code-memory footprint, possibly lead to hard-to-debug errors and degraded performance, and very likely cause audio dropouts while memory is being allocated, depending on the hardware context.

One of the attractive features of the Daisy for me is how well it performs with very small block sizes; with so little time available per block, any allocation stall would make audio dropouts even more likely.

The Daisy has two kinds of memory: a fast internal RAM and a slower but much larger SDRAM. Oopsy is careful to use the faster RAM if the data is small enough to fit, and falls back to SDRAM if there is no internal memory left. That would make a dynamic allocator even more complex to design.

Moreover, in general, when designing for embedded hardware you want to optimize for worst-case performance. The nature of something like the Daisy makes the CPU load very predictable. If a patch works with a small data size but breaks when a larger file is loaded (a CPU overload, which will basically hang the module), that's a pretty bad user experience. But if it works at the largest data size, then you know that using a smaller section of that data will have exactly the same CPU load.

So, pre-allocating the data with the largest size you need is far more reliable.