Forums > MaxMSP

scrambled hackz idea…

April 3, 2006 | 1:27 pm

http://youtube.com/watch?v=eRlhKaxcKpA

for those of you who don't know what it is, just have a look at the link above.
I have a question that might be a bit over my head, but I am trying to create a similar patch in Max for a school project. I'm going to leave out the video part of it, but how would I go about 'searching the database' for similar sounds?
I think I have an idea of how to do the rest, but scanning the buffer in real time for a similar waveform/spectrum is where I am stumped.

any suggestions?


April 3, 2006 | 2:49 pm

I'll quickly note that I have a patch which analyzes for beats, which
I gave up on because once I got down to a 20 ms window, I couldn't
get it to pick the correct sample. If you actually
succeed in doing this, please shoot me an email. (In fact, I feel like
this video beat me to the punch.)

(sorry to thread jack)

. . . . . . . . . . . .
http://www.EstateSound.com
http://ideasforstuff.blogspot.com
. . . . . . . . . . . .


April 3, 2006 | 3:30 pm

I have been trying to figure this out as well.
See the thread Spectral Recontextualization (was very interesting
video…).
I have the saving of FFT data working. The real trick is doing
the comparison. If I had a window size of 512, I would have 512 FFT
data values to look at each time. If I could calculate some kind of
distribution over those values and save the resultant value, I could
then order those values and save them in a coll. This would speed
up the lookup part of things.

So the steps would be…
1. Process the source soundfile and save its FFT data using sfrecord~.
2. Load the source FFT data into a buffer~ and use index~ to access it.
Calculate the FFT distributions and build a data structure for fast
access to the FFT data. Store this data structure in a coll, along
with the index~ position. These values could then be ordered for
faster searching.
3. Analyze the incoming sound, calculate its FFT distribution, and
look up the closest value in the coll.

I am trying to keep this as simple as possible, no jitter, no DB.
This method would consume a lot of RAM though. Step 2 is the step
that needs to be figured out. When I say distribution, I mean some
equation that takes a set of values and then calculates some
identifying value. I am not a mathematician, I was wondering if
someone out there could offer some suggestions. Any ideas?
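One way to sketch this idea (in Python rather than Max, purely as an illustration): collapse each FFT window to a single summary value, keep the values sorted, and binary-search for the nearest one. The mean magnitude used here is just one possible choice of "distribution" statistic; the toy three-bin windows stand in for real 512-value frames.

```python
import bisect

def summarize(window):
    """Collapse one FFT window (a list of bin magnitudes) to a single value.
    The mean magnitude is used here; any 'distribution' statistic would do."""
    return sum(window) / len(window)

# Toy database of three stored FFT windows (real ones would have 512 bins).
source_windows = [[0.1, 0.9, 0.2], [0.5, 0.5, 0.5], [0.9, 0.1, 0.1]]

# Sorted lookup table of (summary value, window index) -- the 'ordered coll'.
table = sorted((summarize(w), i) for i, w in enumerate(source_windows))
keys = [k for k, _ in table]

def closest_window(incoming):
    """Binary-search the sorted summaries for the nearest stored window."""
    v = summarize(incoming)
    pos = bisect.bisect_left(keys, v)
    candidates = table[max(0, pos - 1):pos + 1]
    return min(candidates, key=lambda kv: abs(kv[0] - v))[1]

print(closest_window([0.6, 0.4, 0.5]))  # -> 1 (the stored window with mean 0.5)
```

The sorted-table-plus-binary-search step is what makes the lookup fast once the database grows; the open question raised in the post, choosing a summary statistic that actually discriminates between spectra, is untouched by this sketch.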

Anthony


April 3, 2006 | 3:58 pm

now I’m excited!

first of all, he makes it sound like he's matching the full samples
live. This is impossible because of the age-old "you don't know the
future" rule; in other words, he can't say CHAAH and expect to get a
CHAAH back live, because at the instant it selects the slice, all it's
heard is CH.

also, I've thrown this structure out in an earlier thread, but it
seems applicable. Forgive the repetition. You've rekindled my
interest in it.

STORE
A: analyze each slice with as much metadata as possible…
brightness~, fiddle~, loudness~; try to find some formants, max
pitch, min pitch, etc…
B: save an xml file for each transient (or in this case song with
multiple transient descriptions)
C: upon load, go find all the xml files and load them into a database.

now, rather than 512 FFT bands of unorganized data, you have 30
pieces of useful information.

so to search, you might analyze the input and find the closest match,
i.e. the entry whose average difference of values is closest to zero.

this is just what I'm working on; I don't know if it's how he's doing it.
-matt


April 3, 2006 | 4:06 pm

I think it is easier to work directly with the
FFT data, because that is what you are trying to
match. You're trying to find the closest set of FFT
values to the incoming one, regardless
of loudness, brightness, or pitch.

Anthony


April 3, 2006 | 4:41 pm

On second watch, this looks more probable. I was under the
impression that it reacted to a full syllable; it seems it only
takes the first FFT frame of the input and matches it to an ?average?
of the recorded slice. If no input is received, it just continues
playing. Brute force.

sorry. in creativity world I’m a lonely guy, I get excited when
people even approach being in phase with me.
-matt


April 3, 2006 | 5:34 pm

thanks a lot for the help so far. I'm still thinking in theory right now, as I think I'll drive myself crazy if I'm trial-and-erroring without knowing the outcome.
FFT and bonk~ definitely seem to be the way forward, considering the short amount of time I have to do it. But I would appreciate the help!


April 3, 2006 | 6:55 pm

Yes, I understand that some frequency bins are
not as relevant. But what is the point in trying to
extract all this extra information when all you
are trying to do is find the closest matching spectrum?
The whole point is to keep things simple. By treating
the FFT values as points on a line, you could calculate
the distribution and save that for later comparison.
Looking at the bark~ documentation shows that its
output is at the control rate. I am not sure it would
have the time resolution needed to do spectral
remapping. For this to work you need to be going at the
audio rate.

Anthony


April 3, 2006 | 8:32 pm

the first thing that led me to metadata (aside from the fact that I
work on it) is the speech. He could conjure speech pretty easily,
which means that somewhere it knew that there was a focused FFT bin,
which implies that it's not just the first FFT frame; because if you
say CHAAH, you're analyzing CH.

it could also just be rigged like a trade-show computer to perform well
under x circumstances.

I draw no conclusions. I'm leaning toward FFT for him, but it's still
metadata for me.
you guys rock.
-matt


April 5, 2006 | 7:42 pm

I have hit a roadblock in solving this problem. So far,
I have a way to save all my FFT windows. What is needed
now is a way to represent these FFT windows and
compare them with the FFT windows of an incoming
signal. My first thought would be to do a histogram
on the FFT windows, but I am not sure that would be the
best thing to do. Does anyone have any suggestions
as to what method would accurately
give me a way to compare spectra?

Anthony


April 6, 2006 | 4:13 pm

I have been looking at the following site…
http://www.mat.ucsb.edu/~b.sturm/sand/VLDCMCaR/VLDCMCaR.html
This is a pretty serious implementation of spectral recontextualization.
The author states that the following sound qualities are
considered when comparing spectra:

Number of Zero Crossings – general noisiness, existence of transients
Root Mean Square (RMS) – mean acoustic energy (loudness)
Spectral Centroid – mean frequency of the total spectral energy
Spectral Dropoff – frequency below which 85% of the energy lies
Harmonicity – deviation from a harmonic spectrum
Pitch – estimate of the fundamental frequency

I doubt I could do all of these in real time on my laptop. But I
might be able to get good results if I choose one. Which one
would be the best to use? Does anyone know of an external
that will do any of these? If not, could someone please point
me in the direction of what algorithm to use.
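Two of the listed measures are straightforward to compute directly from FFT magnitudes. A minimal Python sketch, for illustration only: the sample rate and window size are assumed values, and raw magnitudes are used rather than squared energies for brevity.

```python
def spectral_centroid(mags, sr=44100, n_fft=1024):
    """Spectral centroid: magnitude-weighted mean frequency ('brightness')."""
    freqs = [i * sr / n_fft for i in range(len(mags))]
    total = sum(mags)
    return sum(f * m for f, m in zip(freqs, mags)) / total if total else 0.0

def spectral_dropoff(mags, sr=44100, n_fft=1024, fraction=0.85):
    """Frequency below which `fraction` of the spectral energy lies."""
    threshold = fraction * sum(mags)
    running = 0.0
    for i, m in enumerate(mags):
        running += m
        if running >= threshold:
            return i * sr / n_fft
    return (len(mags) - 1) * sr / n_fft

# A spike in bin 1 of a 1024-point FFT at 44.1 kHz sits at ~43.07 Hz:
print(spectral_centroid([0.0, 1.0, 0.0, 0.0]))
print(spectral_dropoff([0.0, 1.0, 0.0, 0.0]))
```

Either value would serve as the single "identifying number" per window asked about earlier in the thread; the centroid in particular tends to track perceived brightness.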

Anthony


April 6, 2006 | 5:02 pm

Actually, you probably could get most of these in real time using the
Tristan Jehan objects. The catch with analysis is generally not doing
the analysis, but having the overhead left to do other things at the same
time… (he says wistfully from a 3-1/2-year-old PowerBook…) analyzer~
will output rather a lot of useful information. For instance, you
could use its bark output with the MLP object if you wanted to
do neural-net matching…

Peter McCulloch


April 6, 2006 | 6:14 pm

So it looks like the author of scrambled hackz stored each chunk in a
vector-space. If you’ve got k bins per window and j windows per
chunk, then you’re talking k * j dimensions, which is huge.
Then it sounds like he stores a reference to the sample in this k * j
space, and the "closeness" is really just the scalar distance. Scalar
distance in more than 3 dimensions is still just the Pythagorean
formula extended: the square root of the sum of the squared
differences along every coordinate.
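For what it's worth, the distance in question takes only a few lines of Python; it is the Pythagorean formula applied across all k * j coordinates at once:

```python
import math

def euclidean(a, b):
    """Straight-line (Euclidean) distance between two points,
    for any number of coordinates -- 2, 3, or k * j."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

print(euclidean([0, 0, 0, 0], [1, 2, 2, 4]))  # -> 5.0
```

When only the *ranking* of distances matters (as in a closest-match search), the square root can be skipped entirely, which saves a little work per comparison.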

So the storage would need to be sparse in k * j space, I would
recommend a list to start. Maybe make a buffer for the imported sound
file and address a chunk by the beginning and ending sample. The
lookup will be faster if you restrict the database size, you can
optimize later. So imagine you’ve got 32 chunks, for each incoming
chunk (that you’re mapping to a sample in the database), you just
iterate through the list and find the chunk that is of least scalar
distance.

To make this simple: imagine that you have only 3 FFT bins per FFT
window, 3 windows per chunk, 8 chunks per bar, and one bar of
input sample. Each chunk would then have 3 * 3 = 9 coordinates,
so each saved chunk would be an entry in a list that looks
like [1.1 1.2 1.3 2.1 2.2 2.3 3.1 3.2 3.3 begin_sample end_sample].
Then you need to write an abstraction that takes a 9-element list (the
incoming chunk's coordinates) and produces a 2-element list (the
outgoing chunk's position in the saved buffer).

This entire process would of course be optimized later, but you could
probably get away with only using a small number of fft bins.
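The abstraction just described (9 coordinates in, [begin_sample end_sample] out) could be sketched like this in Python; the chunk entries and sample positions are made-up illustration data, not anything from the actual patch:

```python
import math

def find_closest_chunk(coords, chunk_list):
    """Given an incoming chunk's 9 coordinates, scan the saved list and
    return the [begin_sample, end_sample] of the nearest stored chunk.
    Each entry: 9 coordinate values followed by begin and end sample."""
    def dist(entry):
        return math.sqrt(sum((c - e) ** 2 for c, e in zip(coords, entry[:9])))
    best = min(chunk_list, key=dist)
    return best[9:]  # -> [begin_sample, end_sample]

# Two hypothetical saved chunks, each pointing into the sample buffer.
chunks = [
    [1.1, 1.2, 1.3, 2.1, 2.2, 2.3, 3.1, 3.2, 3.3, 0, 511],
    [0.1, 0.2, 0.3, 0.1, 0.2, 0.3, 0.1, 0.2, 0.3, 512, 1023],
]
print(find_closest_chunk([0.1, 0.2, 0.3, 0.1, 0.2, 0.3, 0.1, 0.2, 0.4], chunks))
```

This is the brute-force linear scan the post describes; with 32 chunks it is entirely practical, and only larger databases would need the later optimization.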

Does this make any sense? Or am I being stupid?

_Mark


April 6, 2006 | 6:38 pm

The problem is I am already doing the FFT analysis and
saving that data in a file. I then load that data into
a buffer~, take the incoming signal, and find the
closest matching FFT window in my source buffer~ (which,
again, is holding FFT data, NOT samples). I then replace the incoming
FFT window with the data of the source FFT window. This has
to be done at the audio rate. The problem with using bark~ is
that it does its own FFT, so I lose the correlation between
a specific FFT window and the analysis produced by bark~.
Trying to tie my own FFT data to bark~'s output would be really
hard.

Anthony


April 6, 2006 | 7:03 pm

You could save the FFT data in the original soundfile on some
additional channels, so that you can access either. (if you’re doing
2x overlap, you’ll need two extra channels)
There’s all sorts of data striping, etc. that you could do with it, but
it sounds like an SDIF file may be the direction that you’re headed in,
and they are at a higher level of analysis than what you get from an
FFT. I guess it just depends on how much off-line processing you're
looking to do. SPEAR will save SDIF files, IIRC, and there's a set of
objects for playing them back in Max.

You could probably deal with the bark~ latency by compensating with
delay~, and use the scheduler in audio interrupt (and using fftout~ to
convert your buffers back into real-world audio), though. How are you
doing the correlation? (looking at bin delta?) (and is this in
real-time, or NRT?)

Peter McCulloch


April 6, 2006 | 7:11 pm

Mark,

I think he is doing something a little more complicated
when it comes to comparing the spectra; I don't think scalar
distance is a meaningful comparison. For now I am trying to
keep things simple. So I have k bins per window and j windows
per chunk. Let's say k is a window size of 1024; j would vary depending
on the size of the original file. I am not using a DB, so this could
potentially use a lot of RAM.

I would run my source sound file through fft~ and
save the FFT data to disk. I then load this FFT file into buffer~ and
use index~ to get to a specific window's FFT data. Let's just say for
now I am going to do a linear search (later on I know I will have to
optimize this). What I am missing is the piece that takes an FFT window
as input and spits out some meaningful value that I can use for
comparison. Likewise, I would do this same process on the FFT window
of the incoming signal. I then compare values and keep repeating this
process until I find the closest match, and insert the matching FFT
window into the output audio stream.

Perhaps there is some easy way to compare things that
I am just not thinking of, but it sounds like I need something more
specific. Is there any source out there that I could use to calculate
pitch, loudness, and brightness given FFT input?

Anthony


April 6, 2006 | 8:47 pm

I had not thought about a multichannel file. That might be
doable although more complex than I wanted.

The SDIF file sounds fascinating; does Max support SDIF files?
I was looking to do this in real-time. So that is why I am trying
to use existing Max objects like sfrecord~, fft~, buffer~, etc…
I also need to easily do an ifft~ to produce the output audio
stream. So using the native Max representation of fft data is
much easier.

Trying to sync up bark~ with delay~ sounds complicated. If I
use audio-interrupt scheduling, will I be assured
that each output from bark~ corresponds to an FFT frame?
It would be nice not to have to worry about that. Again, I am
trying to keep things simple, but if I could not find
anything else that worked, I could try it.

Anthony


April 6, 2006 | 11:26 pm

If you consider each of these data points as one value of a feature vector:

Number of Zero Crossings – general noisiness, existence of transients
Root Mean Square (RMS) – mean acoustic energy (loudness)
Spectral Centroid – mean frequency of the total spectral energy
Spectral Dropoff – frequency below which 85% of the energy lies
Harmonicity – deviation from a harmonic spectrum
Pitch – estimate of the fundamental frequency

you will have a vector of length 6. With a bunch of vectors, you are
basically populating a 6-dimensional space with data. If you have a
ton of data, this is going to be a lengthy search, as that space is
massive compared to how much data you have. There are a few ways to
do fast searching if you consider your data to be distributed in
specific ways. For instance, if you assume your data to be Gaussian-
distributed, you could use principal component analysis to compare
things in a smaller-dimensional space. Or you could use something
like an approximate nearest neighbor search:
http://www.cs.umd.edu/~mount/ANN/ and find your result very quickly.
That would be a good way to query a database, imho.
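A brute-force version of the 6-dimensional search is easy to sketch in Python. One practical wrinkle worth showing: the six features live on very different scales (Hz, RMS, zero-crossing counts), so normalizing each dimension before taking distances is usually necessary. This is an illustrative sketch with made-up 2-feature data, not any particular library's API:

```python
def normalize(vectors):
    """Per-dimension z-score, so features on different scales (Hz, RMS,
    counts) contribute comparably to the distance."""
    dims = list(zip(*vectors))
    means = [sum(d) / len(d) for d in dims]
    stds = [max((sum((x - m) ** 2 for x in d) / len(d)) ** 0.5, 1e-12)
            for d, m in zip(dims, means)]
    scaled = [[(x - m) / s for x, m, s in zip(v, means, stds)] for v in vectors]
    return scaled, means, stds

def nearest(query, vectors):
    """Index of the stored feature vector closest to the query
    (squared Euclidean distance -- the square root is not needed for ranking)."""
    def d2(v):
        return sum((a - b) ** 2 for a, b in zip(query, v))
    return min(range(len(vectors)), key=lambda i: d2(vectors[i]))

# Made-up 2-feature database (e.g. zero crossings, centroid in Hz):
db = [[0, 1000], [10, 2000]]
scaled_db, means, stds = normalize(db)

# The query must be scaled with the *same* means and stds as the database.
query = [(q - m) / s for q, m, s in zip([1, 1100], means, stds)]
print(nearest(query, scaled_db))  # -> 0 (closest to the first entry)
```

For large databases the linear scan inside `nearest` is exactly what PCA or an approximate nearest-neighbor structure, as suggested above, would replace.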

best,
wes


April 7, 2006 | 7:15 pm

I am not trying to do all of these; I am just trying to
do at least one of them, just to get things working. But
I seem to be running into problems figuring out how to
do the analysis of the FFT windows. There seem to be no
convenient externals that take FFT input and calculate
meaningful things like pitch, loudness, or brightness.

Anthony


April 7, 2006 | 7:57 pm

There aren't any that I know of. If you store the FFT data in extra
channels of the soundfile, you can just send the original sound into
the objects that will calculate all of these things. It may take a
slight amount of extra finagling, but it's definitely the way to go.
(Basically, just test it out, see what the lag is, and then adjust for
it with delay~.) Most of these analysis objects are quite optimized, so
the performance saving isn't necessarily that significant.

Peter McCulloch


April 7, 2006 | 8:06 pm

It is certainly worth a try. What do I use to create
a multichannel file?


April 7, 2006 | 9:17 pm

>It is certainly worth a try. What do I use to create
>a multichannel file?

Shea Ako made this aiffpack command line utility. It’s been working
great for me!

/J

—————- Begin Forwarded Message —————-

On 18/04/05 at 16:34 +0200, Shea Ako wrote:

I recently needed to create multichannel audio files myself and
couldn’t find a solution so I wrote a command line utility for
combining multiple input files into a single multichannel output file.
It’s called ‘aiffpack’, here’s the output of aiffpack -h:

Usage: aiffpack [-h] [-v] [-b | -f | -d] [-w] [-B blocksize] input_files… output_file
Options:
-h help
-v verbose
-b output PCM bytes per sample (-b 2 or 16 bit PCM is default)
-f 32 bit floating point output
-d 64 bit floating point output
-w Microsoft WAV format output (AIFF is default)
-B processing blocksize in samples (default 2048)

aiffpack is a utility for creating multichannel AIFF (or optionally
WAV) sound files from a set of input sound files of varying formats and
resolutions. The sample data type and bit resolution of the output file
can also be specified. The length of the output file is the length of
the longest input file. Shorter input files will create tracks that are
padded at the end with silence. The order of the input files specified
determines the order of the tracks in the output file. The first track
of the output file is the first track of the first input file and the
last track of the output file is the last track of the last input file.

aiffpack version 0.12 is Copyright (C) 2005, Shea Ako and is made
possible by libsndfile-1.0.11 Copyright (C) 1999-2004 Erik de Castro Lopo.

I haven’t had a chance to create an official web page or anything but
you can download an OSX binary with source code here:

http://www.hyperspasm.com/aiffpack-0.12-darwin.tar.gz

I hope that helps,

shea

—————– End Forwarded Message —————–


April 7, 2006 | 9:29 pm

so you mean I have to save the streams to disk separately
(from Max) and then join them with some external utility?
Is this how everyone is doing it? What do I use to read
the multichannel file back into Max? Doing it this way doubles
your memory usage, because you have both the original PCM data and
the FFT data in memory.

Anthony


April 7, 2006 | 10:06 pm

>so you mean I have to save the streams to disk separately
>(from Max) and then join them with some external utility?
>Is this how everyone is doing it? What do I use to read
>the multichannel file back into Max? Doing it this way doubles
>your memory usage because you have both the original PCM data and
>the FFT data in memory.

Maybe I misunderstood you. Have a look at sfrecord~ and sfplay~. I use
aiffpack when I have a number of mono or stereo files I want to join
into an eight-channel file or something. This is Max, everyone might do
it differently :-)

/J


April 7, 2006 | 10:10 pm

Quote: Anthony Palomba wrote on Fri, 07 April 2006 22:29
—————————————————-
> so you mean I have to save the streams to disk separately
> (from Max) and then join them with some external utility?
> Is this how everyone is doing it?

You can record up to four channels at once to RAM using record~,

or

up to 32 channels to disk using sfrecord~.

see help files for details.


April 8, 2006 | 1:25 am

Those are all fuzzy, psychoacoustic phenomena, and your Max patch will
be a fuzzy psychoacoustic Max patch. I'm going to try my scalar
distance idea, since it's simpler and easier to define. I'm guessing my
approach will approximate any that you come up with in the asymptotic
case. I'll let you know how it goes.

_Mark


April 20, 2006 | 3:06 pm

so have we reached the end of the road? I am not particularly skilled in Max and have tried all your suggestions to no avail. Is it time for me to think of a different project topic (and fast!) or is there still hope?


April 20, 2006 | 3:14 pm

Sure there is hope, I am busy implementing the very ideas
that were offered by the forum. I will keep you guys posted.

Anthony


April 20, 2006 | 10:15 pm

I would think that were I to attempt an implementation of scrambled
hackz, it would take me at least a month to do it right. If you don't
know Max, figure in an extra month. Time for you to use a bit of the
old gestalt: is it better that you attempt something really cool and
fail, or that you finish something reasonably cool? There's no glory
to be gained from staying the path to failure. It took me a few class
projects to figure that one out.

On the other hand, if you do decide to stick with the project and
finish, you won’t be able to say that you’re not skilled in Max at the
end :)

_Mark


April 21, 2006 | 1:47 am

I do not understand the question either.
When you want to reuse the files in Max
anyway, why the attempt to write a multi-
channel file to disk?

Multichannel files really only make sense when you
actually encode some industry standard and are
able to open the file after you have made it.

Just write 6 or 8 files, and then read the 6 or 8
files when you want to use them again.
It is a few clicks more, and will not work with
sfplay~ but only with buffers, but otherwise it
will not make any difference.


April 21, 2006 | 1:35 pm

I've limited myself (or perhaps opened up) to bonk~; it seems to be the best way forward. What I had done was route the list it outputs to individual WAV files of drum sounds, depending on the list that was output by each sound I made into the mic.
But the list is so general that it triggered more than one at a time. I plan to read in detail about how the list is processed. I know that it gives a list of 11 numbers giving the signal "loudness" in the 11 frequency bands used, but this information is not enough to accurately trigger a file. I am going to assume I am going about this very wrong, and stand corrected on the whole thing, once I have been.


May 3, 2006 | 3:26 pm

thanks peter, that was quite helpful.

I am currently using funbuff to store the packed data from analyzer~ alongside the 'time' in ms of the sample. So I will have an x,y reference point: x being the time, y being the spectral analysis information coming out of analyzer~.

Because there are 4 useful outputs from analyzer~ (loudness, brightness, noisiness and bark), I have had to pack them and send them as a single list to funbuff. Problem is, funbuff only listens to the first value in the list (in this case, loudness) and uses that as the y reference.

Anyone with a good knowledge of funbuff know how I can get it to listen to all 4 points in the list, or is packing them not the solution? Perhaps separate funbuffs for each output?

if you think posting my patch here would make more sense, just ask…?

thanks in advance!


May 3, 2006 | 4:17 pm

It's not possible; funbuff is made only for x,y pairs. If you're storing
lists like you described, there are better tools for that. coll is one, but
you might want to look into Java for managing your arrays. It'll give you
more speed and flexibility in storing, retrieving and sorting your data.

just my 2cents.

-thijs


May 3, 2006 | 5:28 pm

I just realised that that one doesn't have the output of 'pack' going into funbuff, just the mtof from the pitch output. This is not a mistake; I was just seeing if I could store those values and forgot to change it back.
So just put in the pack output, or maybe mess around and see what you think.

thanks a lot.


May 3, 2006 | 9:23 pm

On 3-May-2006, at 19:25, Declan wrote:

> is it possible to combine all the info into one huge number (i.e.
> have loudness, brightness etc. as one long list without spaces) and
> have it as the x or y? I am adamant about using funbuff because of the
> easy referencing options. the best thing is just to post my patch I
> think…

In principle: yes. Richard Dudas has a nice example of doing this
with the anal/prob objects to simulate higher order Markov Chains. I
think it may be inside the prob.help patch, but it might be elsewhere.

However: some old Max objects are so MIDI-oriented that they max out
at 128 different values. You'll need to check whether funbuff will go
beyond the 0-127 range. I ain't optimistic, but I'll wish you good luck.

– P.

————– http://www.bek.no/~pcastine/Litter/ ————-
Peter Castine +–> Litter Power & Litter Bundle for Jitter

iCE: Sequencing, Recording & |home | chez nous|
Interface Building for |bei uns | i nostri|
Max/MSP Extremely cool http://www.castine.de

http://www.dspaudio.com/


May 3, 2006 | 9:43 pm

I’m not sure what you’re trying to do here. I don’t think funbuff is an
option for storing analysis data in general, and combining the data into one
number really doesn’t sound like a good idea to me.

Also, define a time window for your analysis. It looks like you're streaming
the data from the analyzer directly into the storage; I don't see how this
is going to be useful. You can use coll to store all the numbers at one index. I
don't get what you're trying to do with "pack" in front of the colls. Don't
you need time and sample-id information with the analysis data too?

I can't really help you, because this patch doesn't make a lot of sense to
me; I'm not sure what the actual problem is. No offense, but I get the
feeling that you could really use a little more study of the help files and
docs.

good luck,

-thijs


May 4, 2006 | 8:29 pm

I know, I am definitely a newbie in Max, but I do believe I can do it with the help of my tutor. Excuse the stupid mistakes and such, and thanks for the help as well.

As for the time, it's going into the funbuff on the other side from the sample itself. I'm re-thinking it at the minute…


May 8, 2006 | 7:32 am

I am receiving messages from Peter McCulloch to maxmsp@cycling74.com. I do
not know this person and this is not my email address. My email is
rubato@telus.net. Can you block this?

Thank you,
Carolynn Cordsen


May 8, 2006 | 5:16 pm

When sending a list into coll, the first item in the list must be an
integer or a symbol; it cannot be a float (coll does not truncate the
float into an int). If you need the extra precision, multiply by 1000.
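The multiply-by-1000 trick, sketched in Python purely for clarity:

```python
def coll_key(t_float, scale=1000):
    """Turn a float (e.g. a time in seconds) into an integer coll index,
    keeping three decimal places, since coll keys must be ints or symbols."""
    return int(round(t_float * scale))

print(coll_key(1.234))   # -> 1234
print(coll_key(0.2607))  # -> 261
```

The same scale factor must of course be applied when looking the value back up, or the keys won't match.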

Peter McCulloch


May 9, 2006 | 3:16 am

I'm Max MSP, and I DEMAND to know what OTHER emails you've received for
me!?!? :)

hahaha


May 9, 2006 | 5:59 pm

OK, I am very close to finishing this patch. Well, I say very close because I have figured out and built everything except the search process itself.

I have one coll with index numbers 0-31 (there are 32 segments in the sample I am using); each index number holds the packed info from analyzer~'s output. I have saved this as an XML file, loaded it into another coll in a patch window, taken the second outlet of coll (the number associated with the data), and scaled it from '0 31' to '0 8210', which is the length of the sample in ms.

I have connected one output from this to 'select start ms' on a waveform~, and connected the other to an object box reading '+ 260.725', which is the length of each segment in ms; this goes to 'select end ms' on the waveform~ object. Now I can scroll up and down between segments using the output from the coll, and I have info on each segment I am scrolling through coming out of the left side of the coll.

Where do I go from here? I have the adc~ data packed and waiting to be compared with the stored data, so the appropriate segment can be played back. Can anyone tell me what object will compare this information for me and output some kind of message when a match is found?
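As a sanity check on the timing math, the index-to-milliseconds mapping described above can be sketched in Python. This uses uniform division of the 8210 ms sample into 32 segments, which gives about 256.6 ms per segment; note that the post's 260.725 ms figure is slightly longer (32 × 260.725 ≈ 8343 ms), so the two don't quite agree.

```python
def segment_bounds(i, total_ms=8210.0, n_segments=32):
    """Start and end time in ms of segment i, assuming equal-length segments."""
    seg_len = total_ms / n_segments
    return i * seg_len, (i + 1) * seg_len

print(segment_bounds(0))   # -> (0.0, 256.5625)
print(segment_bounds(31))  # -> (7953.4375, 8210.0)
```

With uniform segments, the last one ends exactly at the sample's end; with the 260.725 ms constant, segment 31 would run past it.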


May 10, 2006 | 3:00 pm

This may shed some light:

http://en.wikipedia.org/wiki/Euclidean_distance


May 18, 2006 | 3:44 pm

OK, I have solved the problem by using the 'distance' object by Paul Hill (www.paulhill.org.uk), which outputs the index number of the closest match. It's pretty much done and it works OK; a few tweaks are necessary. Shall I post it up for anyone to have a look at and play around with?


May 22, 2006 | 11:08 pm

hooray!
finally… you will probably laugh at the inaccuracy, but I achieved something I never thought I would! To attempt something like this as my first ever Max project and to get even close is a good achievement for me!
It does work, but it can be a little off… just give it a go; even if it's not 100% accurate, it still yields interesting results!

thanks for all your help too. my project is due in tomorrow.

the analyzer~ and bonk~ objects included are both PC; you can get the Mac versions here: http://www.maxobjects.com
the DISTANCE object is also PC, but I have included the Mac version inside the zip file which is in the root zip.
let me know how you find it!

cheers…

EDIT: um, I did attach the file to this message but I don't see it, so here it is: http://www.pureclassforums.com/maxx.zip


October 3, 2007 | 7:30 am

hello everybody,
Putting the old topic on the top again!
OK, I would love to take a look at the patch made by Declan.
I contacted him, but two years on the source is hard to find.
So if there is a crazy Cycling forum archivist, or somebody who took an interest in that work and still has the patch, please forward it ;)

billions of thanks
freeka

PS: any other linked works will be welcome too!!


October 4, 2007 | 10:33 am

I have it on good authority that THERE IS NO PATCH.

He uses Ableton Live for it. I have good reason to believe that
there never was a Pd patch; it was all either faked or performed
using off-the-shelf software.

I did attempt to re-create this idea using "average FFTs" taken over
the length of certain samples, but didn't get very far. It's a fairly
complicated process. What would be really nice is spectral-envelope
tools that could detect and match against different spectral
envelopes, but AFAIK these don't really exist in Max (for free, anyway).

Please prove me wrong…

Cheers
Evan

On Oct 3, 2007, at 8:30 AM, freeka wrote:

>
> hello everybody,
> Puting the old topic on the top again!
> ok, i would love to take a look on the patch maded by Deklin,
> i contacted him but 2 years after, the source are hard to find.
> So if there a crazy cycling forum’s archivist or somebody who had
> interest in that work and still have the patch, please forward ;)
>
> billion of thanks
> freeka
>
> PS:any others linked works’ll be welcome too!!


October 4, 2007 | 10:37 am

yikes!
there is indeed a patch; I found it on my old Mac yesterday. Shall I upload it? It is probably no good to you guys, because all the maths involved in splitting the sample up into transients was based on a specific drum loop which has been lost in the sands of time!

so you can load in any sample you want and give it a go; it will be completely random due to the maths involved, but I guess you can try to put in your own maths once you figure it out.

But yes, there is a patch (I don't think there is any such off-the-shelf software!), so let me create a zip of it and the associated external objects you need, and I'll upload it


October 4, 2007 | 11:45 am

OK guys here you go

the main object 'DISTANCE' which is needed to make it work is PC ONLY. It was made by a guy who was doing the MA of my course at the time, and my lecturer recommended I talk to him about it.
As far as I know, the object was never released online, and I can't even remember the guy's name, though his first name was Paul.

As far as the patch goes, I found the .aif drum loop so I’ll include it. My advice is, don’t unlock it! I did it and I couldn’t even understand it! It was so long ago and was made so carelessly. If you have any questions feel free to ask and hopefully I can answer them.

I have included everything you need, but it’s all PC ONLY
I’m actually on a Mac now so I can’t try it out myself but there are instructions included on the GUI.

I didn’t use Ableton Live by the way, I don’t even know how one would do that…

Here you go: http://www.box.net/shared/400k3eatro

EDIT: After looking at the ZIP I’ve realised there is a Mac build of DISTANCE included in it! Just goes to show how much I remember! Anyway, if you just get the Mac versions of the other objects (they’re all pretty easy to find) it’ll work on Mac then, but I haven’t tried it…


October 4, 2007 | 12:19 pm

Hey, thanks Declan!
I will take a look at it tonight!
If you are still interested in that concept, I heard about "soundspotter"; I think you would enjoy it!

;)

freeka


October 4, 2007 | 12:49 pm

There’s also CataRT by Diemo Schwarz, which will be worth a look:

http://imtr.ircam.fr/index.php/CataRT

As someone pointed out in the original thread, the segmentation strategy
is pretty important – scrambled hackz didn’t seem to be storing
individual fft frames, but ‘musically meaningful’ sub-divisions of
material. CataRT will let you supply SDIF files with segment markers,
but you still need to generate them somehow…


Owen

freeka wrote:
> Hey thanks Declan!
> i will take a look on it tonite!
> if you still interest by that concept, i heared about "soundspotter" , i think you should enjoy it!


October 4, 2007 | 1:27 pm

Yes, another amazingly cool guy told me about it;
I downloaded it on my laptop and tried it,
but honestly it doesn't make sense to me.
"but you still need to generate them somehow…"
is this what they call a corpus?
booyaka

freeka



October 4, 2007 | 2:53 pm

A corpus is any database of sound segments in CataRT – you add sounds to
a corpus by dragging files or folders onto the desired bank.

The segmentation dictates how the app stores and analyses units – it has
a built-in capability to segment files into regular chunks of a given
length in ms, or to import an SDIF file with segment markers in it. For
anything rhythmic and non-quantized the latter is of considerably more
use (obviously), but you need some app that can generate such files
(e.g. AudioSculpt).

Booyaka back atchya.


Owen

freeka wrote:
> Yes an other amazingly cool guy, told me about,
> i downloaded it on my laptop and tried it.
> but honestly i doesn’t make sense for me,
> "but you still need to generate them somehow…"
> is this what they call corpus?
> booyaka
>
> freeka
> There’s also CataRT by Diemo Schwarz, which will be worth a look:
>
> http://imtr.ircam.fr/index.php/CataRT
>
> As someone pointed out in the original thread, the segmentation strategy
> is pretty important – scrambled hackz didn’t seem to be storing
> individual fft frames, but ‘musically meaningful’ sub-divisions of
> material. CataRT will let you supply SDIF files with segment markers,
> but you still need to generate them somehow…
>


October 4, 2007 | 3:05 pm

BooyOwen!
you rules


October 7, 2007 | 11:50 am

hi declan,

sorry for the misunderstanding – i didn’t mean that about your patch,
i meant it about the original scrambled hackz app/demo (which was
supposedly done in Pure Data).

if anyone has that version i’d love to see it.

cheers
evan

On Oct 4, 2007, at 11:37 AM, Declan wrote:

>
> yikes!
> there is indeed a patch, i found it on my old Mac yesterday. Shall
> I upload? It is probably no good to you guys because all the maths
> involved in splitting the sample up into transients were made based
> on a specific drum loop which has been lost in the sands of time!
>
> so you can load in any sample you want and give it a go, it will be
> completely random due to the maths involved, but I guess you can
> try and put in your own maths once you figure it out.
>
> But yes, there is a patch (i dont think there is such on-the-shelf
> software!) so let me create a zip of it and the associated ext
> objects you need and I’ll upload it


Viewing 66 posts - 1 through 66 (of 66 total)