
additive synthesis as a unified theory and system for sound design and analysis

March 17, 2007 | 11:19 pm

What work has been done (if any) toward using additive synthesis as a unified theory and system for sound design?

I know about the Fourier transform and SPEAR. SPEAR seems like a great start at a system for working with additive synthesis, but it seems lacking in functionality compared to what I'd imagine an ideal system would have.

I’ve heard about Kyma, but need to learn more to see if it has more functionality than SPEAR for additive synthesis and resynthesis. Does anyone know if it does, and if so what?

It seems that most people don't work with additive synthesis using sine waves. However, there's no denying that additive synthesis using sine waves is the most versatile method of sound synthesis (albeit much more CPU-expensive, and with a bewilderingly large number of control parameters).

Although most people don't use it, are there any people who do? Perhaps in the design of waveforms to be sampled and then used as wavetables in commercial synthesizers? If so, what system for additive synthesis is used for this? Are there any functions or other rules that are used to reduce the number of parameters the user has to deal with?
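(To make concrete what I mean by the parameter problem: a bare-bones, static sine bank as an offline sketch in Python with numpy; the partial frequencies and amplitudes are made up. Even this frozen version takes two numbers per partial, and time-varying envelopes multiply that enormously.)

    import numpy as np

    SR = 44100                                  # sample rate
    t = np.arange(int(SR * 2.0)) / SR           # two seconds of time

    # made-up partials: (frequency in Hz, amplitude); in a real patch every
    # one of these would also need its own time-varying envelope
    partials = [(110.0 * k, 1.0 / k) for k in range(1, 33)]

    out = np.zeros_like(t)
    for freq, amp in partials:
        out += amp * np.sin(2 * np.pi * freq * t)
    out /= np.max(np.abs(out))                  # normalize to avoid clipping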


March 18, 2007 | 2:36 am

Some random thoughts.

If you have a fast computer you don't need Kyma for additive synthesis when you already have Max.

Realtime modulation is a bit easier with Kyma sounds than with the Max runtime message system (many partials = many parameters!).

If you have Aurora for TDM, you wonder why you ever started programming something like that in Max.

If you do not plan to modulate the partials (or "parts") of your synthesis model, you might as well do the calculation offline and write the result into a wavetable. (The only thing you lose is the original beat of the sound when you loop it; see the thread about "tremolo" on the CoreAudio list.)

A drawbar organ is one kind of additive synthesis.
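A minimal sketch of the offline idea in Python (numpy assumed; the drawbar levels are invented): sum the harmonics once into a single-cycle wavetable that one oscillator can then loop. This is exactly what you trade per-partial modulation for.

    import numpy as np

    TABLE_SIZE = 4096
    phase = np.arange(TABLE_SIZE) / TABLE_SIZE      # one cycle, 0..1

    # made-up drawbar settings: harmonic number -> level (0..8, organ style)
    drawbars = {1: 8, 2: 6, 3: 4, 4: 5, 6: 2, 8: 3}

    table = np.zeros(TABLE_SIZE)
    for harmonic, level in drawbars.items():
        table += (level / 8.0) * np.sin(2 * np.pi * harmonic * phase)
    table /= np.max(np.abs(table))                  # normalized single-cycle wavetable

    # the table could now be loaded into e.g. a buffer~ and looped with cycle~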


March 18, 2007 | 2:36 pm

Quote: maxplanck735@hotmail.com wrote on Sun, 18 March 2007 00:19
—————————————————-
> However, there’s no denying that additive synthesis using sine waves is the most versatile method of sound synthesis

In theory, yes. But getting rich and interesting sounds out of additive synthesis is often too much work, even if you take an existing sound in SPEAR and tweak it from there. In effect you end up manually controlling the three basic dimensions of sound directly: frequency, amplitude and time. That's a lot of data (say, 100 partials, each with frequency and amplitude breakpoints every 10 ms, is on the order of a million values per minute of sound).

I think the reason people use different kinds of synthesis methods is that tools have a great influence on your results. Because of this I personally prefer to create a new, weird but simple synthesizer for every different instrument I need. Whether or not it is based on a known, named synthesis method is not very important.

Mattijs


March 20, 2007 | 12:08 pm

Max Planck wrote:
> What work has been done (if any)

ROFL…

> toward using additive synthesis as a unified theory and system for
> sound design?

Tons of papers, theories, philosophies. If you're interested in a unified theory and system for sound design, look at SDIF. That's what it's about.

> I know about the Fourier transform and SPEAR. SPEAR seems like a
> great start at a system for working with additive synthesis, but it
> seems lacking in functionality compared to what I'd imagine an ideal
> system would have.
>
> I’ve heard about Kyma, but need to learn more to see if it has more
> functionality than SPEAR for additive synthesis and resynthesis.
> Does anyone know if it does, and if so what?

I'd say it does, but I have no idea what you mean by "lacking in functionality". Your imagination unfortunately is yours and no one else's unless you share it…

> It seems that most people don't work with additive synthesis using
> sine waves. However, there's no denying that additive synthesis
> using sine waves is the most versatile method of sound synthesis
> (albeit much more CPU-expensive, and with a bewilderingly large
> number of control parameters).

It's probably the least versatile method, though it's not very CPU-expensive. The bewilderingly large number of control parameters, which have little connection to what you hear, kills its versatility. It's even worse than FM… ;-)

> Although most people don't use it, are there any people who do?

I guess a lot. At CCMIX we have the famous UPIC system, which has been doing it for decades. But there you can draw waveforms other than sine waves, which is already a big step forward… ;-)

Stefan


Stefan Tiedje | www.ccmix.com


March 20, 2007 | 4:08 pm

I think one of the main reasons such a system has not evolved around additive synthesis is because it is, as you pointed out, a lot of work. There are easier ways to get the same results; I would much rather be making music than managing data. Although great advances have been made in algorithms that can tame such huge parameter sets, I have a feeling you would be doing more linear algebra than synthesis.
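(One standard linear-algebra move, as a rough Python sketch with numpy; the analysis data here is invented: run an SVD over the frames of partial amplitudes and keep a few principal components as macro-controls.)

    import numpy as np

    # invented analysis data: 500 time frames x 64 partial amplitudes
    frames = np.abs(np.random.randn(500, 64))

    mean = frames.mean(axis=0)
    U, S, Vt = np.linalg.svd(frames - mean, full_matrices=False)

    k = 4                                   # keep 4 "macro" components
    weights = U[:, :k] * S[:k]              # 4 numbers per frame instead of 64
    approx = mean + weights @ Vt[:k]        # reconstruct full amplitude frames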


March 20, 2007 | 9:32 pm

Quote: maxplanck735@hotmail.com wrote on Sat, 17 March 2007 16:19
—————————————————-
> What work has been done (if any) toward using additive synthesis as a unified theory and system for sound design?
>

This seems like another time to plug the CNMAT Objects and Spectral Tutorials, at:

http://www.cnmat.berkeley.edu/MAX/downloads/

I'm not terribly interested in a "unified theory," but we've been working with additive synthesis for some time now. The CPU usage is not terrible, but it is hard to find a good control mapping.

mz



_j
March 20, 2007 | 9:38 pm

DSP systems really burn out pretty quickly. It's incredible how overpriced Kyma is these days… if you look at just the hardware costs and not the value of the software (suspend, for a moment, the fact that the two are married). As stated, any high-end processor will give you as many partials to work with as Kyma. I remember that on an Athlon 3000+ I was able to get as many partials as the showcase of Kyma's power in their brochure, and that was for a maxed-out, >$10k system. Anyhoo. If I were really, really rich I'd buy Kyma in a heartbeat, since it's fun to work with, which I guess sort of [not really] justifies the price.


March 26, 2007 | 7:23 am

I've worked through the CNMAT spectral tutorials and understand them (thanks, they're great!). I know the SDIF file format, what data it contains and what that data represents.

I've dug through and understood everything regarding additive synthesis and resynthesis that I can find, short of taking the time to learn the math behind the Fourier transform.

I'm trying to find out about work that's been done in this area that I haven't seen yet, if it's out there. In particular, I'm looking for any work toward creating simplifying models that describe sounds' spectra using a minimum of input data (while still retaining the sound's sonic character, i.e. the model sounds like the original sound, or at least similar).

I'm especially looking for any work toward deducing mathematical relationships that would allow a spectral model of a sound to be described with less data than would be necessary to explicitly describe each spectral data point in the sound's spectrum. (Sorry if I'm wording this poorly; here's an example that will hopefully make more sense.)

For example: say we have a recording of one string of an upright bass being plucked, including its decay. If we analyze this recording in SPEAR, we get a set of frequency-vs-time and amplitude-vs-time data points for each partial.

Now, say we find a mathematical relationship among a group of time-varying parameters in our data set. Using this relationship, we can describe the sound's spectrogram with less data than would be required to explicitly specify each spectral data point.

Understanding this simplifying relationship would give us a framework through which to understand that part of the sound and to recognize that type of spectral relationship when we hear it (hopefully even when we hear it, or a variation on it, in other sounds). We could even be clever and modify the mathematical relationship to create different yet somewhat similar sounds… that would be the really interesting part for me, if it's possible.
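(For instance, and this is just a guess at one such relationship: if each partial of the plucked bass decays roughly exponentially, a partial's whole amplitude-vs-time track collapses to two numbers, a start level and a decay rate. A rough Python sketch with numpy, using invented stand-in data:)

    import numpy as np

    # invented stand-in for one partial's SPEAR track: (time, amplitude) pairs
    times = np.linspace(0.0, 2.0, 200)
    amps = 0.8 * np.exp(-3.0 * times) * (1 + 0.02 * np.random.randn(200))

    # if amp(t) ~ a0 * exp(-k * t), then log(amp) is linear in t, so a straight
    # line fit recovers (a0, k): two numbers instead of 200 breakpoints
    slope, intercept = np.polyfit(times, np.log(np.clip(amps, 1e-9, None)), 1)
    a0, k = np.exp(intercept), -slope
    print(f"start level ~ {a0:.3f}, decay rate ~ {k:.3f} per second")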

Does this make sense? Please tell me if it doesn’t and i’ll try to describe better.

If people who know a lot more about math and physics than I do have already ruled out the approach I just described as unworkable, someone please tell me.


March 26, 2007 | 9:56 am

Quote: maxplanck735@hotmail.com wrote on Mon, 26 March 2007 09:23
—————————————————-
> I'm especially looking for any work toward deducing mathematical relationships that would allow a spectral model of a sound to be described with less data than would be necessary to explicitly describe each spectral data point in the sound's spectrum. (Sorry if I'm wording this poorly; here's an example that will hopefully make more sense.)

(..)

> Understanding this simplifying relationship would give us a framework through which to understand that part of the sound and to recognize that type of spectral relationship when we hear it (hopefully even when we hear it, or a variation on it, in other sounds). We could even be clever and modify the mathematical relationship to create different yet somewhat similar sounds… that would be the really interesting part for me, if it's possible.

This is definitely interesting. I happen to have spent a lot of time with Fourier math at university. I have moved toward your idea myself by programming a Java application that reads SDIF files (in text format) and parses the data to make it ready for analysis in any desirable way. I wouldn't want to do this in Max because I really need an object-oriented system to represent the sine data in an accessible way.

I never had the time to use the program the way I wanted, but I could send you the Java code if you want. What I wanted to do was analyse the SDIF data to get basic characteristics of, for example, the 10 most important sines (length, degree of harmonic relation, volume change, etc.) and then find the most closely related sound in a database of existing sounds. I am still quite sure that I would be able to do it, given time.
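The gist, redone as a rough Python sketch (the real code is Java, and the exact layout of your text export may differ; this assumes a SPEAR-style dump where each partial has a header line "index num_points start_time end_time" followed by one line of time/frequency/amplitude triples):

    def read_partials(path):
        """Parse an assumed SPEAR-style text export into lists of
        (time, frequency, amplitude) triples, one list per partial."""
        partials = []
        with open(path) as f:
            lines = [ln.strip() for ln in f if ln.strip()]
        i = 0
        while i < len(lines):
            fields = lines[i].split()
            if len(fields) == 4 and fields[1].isdigit():   # partial header
                n_points = int(fields[1])
                values = [float(x) for x in lines[i + 1].split()]
                partials.append([tuple(values[j:j + 3])
                                 for j in range(0, 3 * n_points, 3)])
                i += 2
            else:
                i += 1                                     # skip file-level headers
        return partials

From there you can compute per-partial statistics (length, mean frequency, amplitude slope) and compare sounds.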

Cheers,
Mattijs


March 26, 2007 | 10:44 pm

This sounds right up my alley; I'd love to see this code and think about ways to use the output. I'll PM you my email address, thanks a lot!

Hopefully other people who have studied more math and CS than I have will be interested in thinking about this too.

What method(s) have you considered for quantifying the degree of harmonic relation between partials?


March 27, 2007 | 8:13 am

Quote: maxplanck735@hotmail.com wrote on Tue, 27 March 2007 00:44
—————————————————-
> This sounds right up my alley; I'd love to see this code and think about ways to use the output. I'll PM you my email address, thanks a lot!

Please mail me instead at the email address on our website http://www.smadsteck.nl. The address I used for this forum is no longer operational, and I haven't gotten a response to my request to change it.. :-/

>
> Hopefully other people who have studied more math and CS than I have will be interested in thinking about this too.
>
> What method(s) have you considered for quantifying the degree of harmonic relation between partials?

Partials have a harmonic relation if their frequencies are multiples of the fundamental. I can recommend Perry Cook's Psychoacoustics and of course Curtis Roads' The Computer Music Tutorial (both MIT Press) for some thorough background info.
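One crude way to put a number on it (a Python sketch; the partial frequencies below are invented): measure how far each partial sits from the nearest integer multiple of the fundamental.

    def inharmonicity(partial_freqs, fundamental):
        """0.0 = partial sits exactly on an integer multiple of the
        fundamental, 0.5 = as far from harmonic as possible."""
        scores = []
        for f in partial_freqs:
            ratio = f / fundamental
            scores.append(abs(ratio - round(ratio)))
        return scores

    # invented example: a slightly stretched string spectrum over 110 Hz
    print(inharmonicity([110.0, 221.0, 333.5, 446.0], 110.0))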

Cheers,
Mattijs


March 27, 2007 | 6:33 pm

> I'm trying to find out about work that's been done in this area that I haven't seen yet, if it's out there. In particular, I'm looking for any work toward creating simplifying models that describe sounds' spectra using a minimum of input data (while still retaining the sound's sonic character, i.e. the model sounds like the original sound, or at least similar).

You might have a look at various cepstral methods (keywords: Mel Frequency Cepstral Coefficients, although mels make me squeamish) and the "modulation spectrum". Essentially, they are data reductions of Fourier analyses.
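Very roughly, and glossing over the details (mel spacing, proper filterbanks), the pipeline is: magnitude spectrum, pooled into a handful of bands, log, then DCT, keeping only the first few coefficients. A Python sketch, numpy and scipy assumed:

    import numpy as np
    from scipy.fftpack import dct

    def cepstral_coeffs(frame, n_bands=26, n_coeffs=13):
        """Crude MFCC-like reduction of one audio frame (linear bands
        instead of mel-spaced filters, to keep the sketch short)."""
        spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
        bands = np.array([b.sum() for b in np.array_split(spectrum, n_bands)])
        return dct(np.log(bands + 1e-9), type=2, norm='ortho')[:n_coeffs]

    frame = np.random.randn(1024)       # stand-in for a real audio frame
    print(cepstral_coeffs(frame))       # 1024 samples -> 13 numbers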

Strong work in this area is going on in the Speech Analysis Community, and is starting to happen in Music, too.

mz


March 27, 2007 | 9:09 pm

That is very interesting stuff. Running an FFT on audio data does not really produce less data. But reducing the number of bands and then running a DCT on the resulting coefficients does shrink things. I wonder what would happen if you took it another step and ran another DCT on the DCT coefficients. Hmmmmm… I think this model describes a very cool way to navigate the sea of spectral data that programs like SPEAR create. It would be cool to be able to draw your own 3rd- or 4th-generation MFCC spectrum and then decode it an appropriate number of times to get to a target audio spectrum. Definitely worth investigating…
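To see the data reduction concretely (a Python sketch with numpy/scipy; the spectrum is invented): keep only the first few DCT coefficients and invert, and you get a smoothed version of the original band data back.

    import numpy as np
    from scipy.fftpack import dct, idct

    # invented stand-in for a log-magnitude spectrum in 64 bands
    bands = np.log(1e-3 + np.abs(np.random.randn(64)).cumsum())

    coeffs = dct(bands, type=2, norm='ortho')
    coeffs[8:] = 0.0                    # keep 8 of the 64 numbers
    smoothed = idct(coeffs, type=2, norm='ortho')

    err = np.max(np.abs(smoothed - bands))
    print(f"64 values -> 8 coefficients, max reconstruction error {err:.3f}")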


March 27, 2007 | 10:41 pm

I really like this way of thinking and was building plans of my own, but spectral analysis and the FFT always suffer greatly soundwise due to their insensitivity to (almost) non-periodic sounds. I believe there are some Pd externals which simulate a noise residue on top of the spectrum, and you could shove in some granular synthesis, but until something along these lines is included, the results will remain academic and certainly not unified. Like the original poster ; ) I'm also a bit out of my league, so I'll gladly stand corrected if I'm wrong. I would be very interested in developing something in this direction. For now I resort to "hyper mapping", getting things connected at a higher level, but this is inherently less coherent parametrization (or something similarly spelled).
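Something like the classic sines-plus-noise idea, as a very rough Python sketch (all signals invented): subtract the sinusoidal resynthesis from the original and treat what is left as a noise residual to be reshaped.

    import numpy as np

    SR = 44100
    t = np.arange(SR) / SR

    # invented "original": one sine plus breathy noise
    original = np.sin(2 * np.pi * 220 * t) + 0.1 * np.random.randn(SR)

    # pretend this came from an additive analysis/resynthesis (sines only)
    resynth = np.sin(2 * np.pi * 220 * t)

    residual = original - resynth       # everything the sine model missed
    # crudest possible residual model: white noise at the residual's RMS level
    noise = np.random.randn(SR) * np.sqrt(np.mean(residual ** 2))

    hybrid = resynth + noise            # sines plus modeled noise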

best

isjtar


March 27, 2007 | 11:22 pm

did anyone say METASYNTH?

Jeez


March 27, 2007 | 11:57 pm

Too bad Metasynth is Mac-only. Does Absynth do something similar?


March 28, 2007 | 1:43 pm

Not really, last version I checked.



March 28, 2007 | 8:58 pm

Absynth is something totally different. Metasynth is more like SPEAR but lets you draw the frequencies, essentially turning an image into a sound. Coagula Light is the PC equivalent, but with far fewer features.
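The basic trick, sketched in Python with numpy (the "image" here is invented random data): treat each pixel row as the amplitude envelope of one sine partial, with columns stepping through time.

    import numpy as np

    SR = 22050
    img = np.random.rand(32, 16)        # invented 32-row x 16-column "image"
    col_dur = 0.1                       # seconds per image column
    freqs = np.linspace(100, 4000, img.shape[0])   # one oscillator per row

    n = int(SR * col_dur * img.shape[1])
    t = np.arange(n) / SR
    cols = np.minimum((t / col_dur).astype(int), img.shape[1] - 1)

    out = np.zeros(n)
    for row, f in enumerate(freqs):
        out += img[row, cols] * np.sin(2 * np.pi * f * t)
    out /= np.max(np.abs(out))          # normalized image-to-sound render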


March 28, 2007 | 10:06 pm

Nicholas C. Raftis III wrote:
> did anyone say METASYNTH?

no…

Stefan


Stefan Tiedje | www.ccmix.com

