The Basics of Synthesis and Sound Design
Synthesizers have been with us since the early 20th century. They started to become popular during the 1960s, when Robert Moog introduced the famous Moog synthesizer.
The history of sound design led to the modern VST plugin versions of synthesizers (VST is a plugin format that lets software instruments and effects run inside your DAW), which are far more popular than analog ones. Some of them emulate real analog synths (like Diva by u-he), but there’s also a huge market of companies creating completely new software synthesizers that, by combining many different techniques and algorithms, allow you to make really complex and advanced sounds.
But, what exactly is a synthesizer?
In a nutshell, a synthesizer is an electronic musical instrument that generates audio signals from scratch. These signals are modified by a chain of tools that shape the sound.
They range from very basic 8-bit style sounds to really sophisticated dubstep growls and atmospheric pads. Knowing the basics of synthesis (combined with experimentation) allows you to create almost any kind of sound you can imagine.
They are the base of electronic music, and they can expand your possibilities to infinity if used correctly; but, on the other hand, relying only on synthesizers can kill your creativity.
In this article, I’ll show you the basic concepts of sound synthesis, along with a very brief explanation of each one. Take it as an introduction, a beginner’s tutorial, or a course for understanding digital sound synthesis.
However, we have more in-depth articles on every concept on this list. We recommend you check those out: there you’ll study the physical and more scientific aspects of how music works, and gain a better understanding of sound design.
Read our dedicated article about waveforms here. There you’ll find even more information and also audio examples.
As you may already know, sound is made of sound waves.
To generate sound, every synthesizer has an oscillator (generally more than one; the most extreme case is Phase Plant, which lets you load up to 32). This module generates a consistent, geometric waveform that constitutes the base of your sound.
A waveform is basically a signal that, when turned into sound waves by a speaker, you perceive as sound. It has a cycle that repeats itself a certain number of times per second; the speaker pushes the air particles back and forth at that rate to reproduce it.
This is what the most basic waveform, the sine wave, looks like:
There are some other basic waveforms that you can find in almost any synthesizer, like the square wave, triangle wave, and saw wave, among others. Nevertheless, all of these waves are made up of a fundamental sine wave plus many higher-frequency sine waves, called harmonics.
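To make that last idea concrete, here’s a tiny sketch in plain Python (the function names `sine_cycle` and `saw_additive` are mine, not from any synth) that builds an approximate saw wave by stacking sine harmonics, each one quieter than the last:

```python
import math

def sine_cycle(n=64):
    """One cycle of a sine wave, sampled at n points."""
    return [math.sin(2 * math.pi * i / n) for i in range(n)]

def saw_additive(n=64, harmonics=20):
    """Approximate one cycle of a saw wave by summing sine harmonics.
    Harmonic k gets amplitude 1/k, which is the classic saw-wave recipe."""
    cycle = []
    for i in range(n):
        t = i / n  # position within the cycle, 0..1
        sample = sum(math.sin(2 * math.pi * k * t) / k
                     for k in range(1, harmonics + 1))
        cycle.append(sample * (2 / math.pi))  # scale roughly into [-1, 1]
    return cycle
```

The more harmonics you add, the sharper the saw’s edge becomes; with just a handful, you can already hear (and see) the ramp shape emerging from pure sines.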
Now, let’s define some concepts that I find essential for understanding waveforms.
Frequency: it’s how many times the wave goes back and forth in a second (in other words: the number of cycles in a second). It is measured in Hertz (Hz). The more times it oscillates in one second, the higher we say the frequency is, and the fewer times, the lower.
Pitch: this concept is closely related to the frequency of the sound. It’s each person’s subjective perception of the sound wave. Although it cannot be directly measured, we generally say that the higher the frequency, the higher the pitch, and vice versa.
Amplitude: it is basically (not technically speaking) how loud the signal is. It’s the distance between a peak or valley and the equilibrium point:
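As a concrete illustration of how frequency and pitch relate, here’s a small Python sketch using the standard equal-temperament formula (A4 = 440 Hz by convention; the helper names `midi_to_hz` and `period_seconds` are my own):

```python
def midi_to_hz(note, a4=440.0):
    """Equal-temperament pitch: each semitone up multiplies the frequency
    by 2**(1/12). MIDI note 69 is A4, conventionally tuned to 440 Hz."""
    return a4 * 2 ** ((note - 69) / 12)

def period_seconds(freq_hz):
    """Duration of one cycle: simply the inverse of the frequency."""
    return 1.0 / freq_hz
```

For example, `midi_to_hz(69)` gives 440.0 Hz, and one octave up, `midi_to_hz(81)` gives 880.0 Hz: doubling the frequency raises the perceived pitch by an octave.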
Now that you know how waveforms work, I’ll continue to explain the fundamentals of sound synthesis.
Read our dedicated article about filters here. There you’ll find even more information and also audio examples.
A sound that consists of more than a single sine wave has more than one frequency. Some waveforms, like the saw wave, are very rich in harmonics. A filter enables us to attenuate some of those frequencies in order to obtain a certain texture.
The resulting sound will depend on the cutoff frequency (the frequency from which the filter starts to attenuate the signal) and the resonance.
Note: to visualize the complete spectrum of the frequencies of a sound, we recommend using a spectrum analyzer plugin. SPAN by Voxengo is a really good one and completely free. If you use FL Studio, you can use Fruity Parametric EQ 2 which has a pretty good and precise spectrum analyzer. These plugins will show each frequency like this (I’m using the stock Ableton Live EQ for this example):
Each “peak” represents a single sine wave, a single frequency. They are sorted from left to right: the lower frequencies on the left, the higher ones on the right. It goes from 20 Hz to 20,000 Hz, since that’s the range of frequencies that humans can hear.
Now I’ll proceed to show the most basic and common types of filters that will most likely be found on every synthesizer.
High Pass Filter
The high pass filter (also called low-cut filter) cuts out the lower frequencies and leaves only the ones that are higher than the cutoff point. It’s often used on instruments that don’t need low frequencies, cutting away muddy low-end content that is probably unnecessary.
Low Pass Filter
The low pass filter (also called high-cut filter) cuts out the higher frequencies and leaves only the ones that are lower than the cut off point. This is the typical effect that happens, for example, when you hear music coming from another room. It just doesn’t have the higher frequencies.
Band Pass Filter
The band pass filter only leaves the frequencies that are within a specific range. It’s like combining a low pass with a high pass, with the difference that you choose a single band of frequencies to let through (hence its name).
Notch Filter
The notch filter (also called band-cut) is basically the opposite of a band pass filter: a band of frequencies is cut while everything else is let through.
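To show the idea behind these four filter types, here’s a rough Python sketch built from one-pole filters (a much gentler slope than the 12 or 24 dB/octave filters in real synths, and without resonance; all function names are mine):

```python
def one_pole_lowpass(samples, coeff=0.2):
    """Crude one-pole low-pass: each output leans toward the previous one,
    so fast (high-frequency) changes get smoothed away.
    coeff near 1 lets more highs through; near 0 filters harder."""
    out, prev = [], 0.0
    for x in samples:
        prev = prev + coeff * (x - prev)
        out.append(prev)
    return out

def one_pole_highpass(samples, coeff=0.2):
    """High-pass as 'input minus low-pass': keep what the LPF removed."""
    low = one_pole_lowpass(samples, coeff)
    return [x - l for x, l in zip(samples, low)]

def band_pass(samples, lo=0.1, hi=0.5):
    """Band-pass by cascading: high-pass first, then low-pass."""
    return one_pole_lowpass(one_pole_highpass(samples, lo), hi)

def notch(samples, lo=0.1, hi=0.5):
    """Notch as 'input minus band-pass': cut the band, keep the rest."""
    band = band_pass(samples, lo, hi)
    return [x - b for x, b in zip(samples, band)]
```

Note how band pass and notch really are just combinations of the two basic filters, exactly as described above.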
Resonance
The resonance is not a filter type, but a property that most filters have. It allows you to boost or attenuate the frequencies near the cutoff point. It corresponds to the “Q” parameter of the filter.
High pass filter with a resonance (Q) of 3.06.
So far we’ve talked about some parameters such as amplitude, pitch, and filter cutoff. These aspects can be modulated: they can be automated in order to change over time. To modulate these parameters you can use tools such as envelopes and LFOs.
Read our dedicated article about envelopes here. There you’ll find even more information and also audio examples.
Envelopes are found in pretty much any synthesizer. An envelope is an automation shape (meaning that it changes a parameter over time) that is triggered every time you press a note on your MIDI keyboard.
They usually modulate the amplitude (the volume) but you can often use them to modulate the pitch and the filters. In some extreme cases, like Serum, you can use envelopes to modulate pretty much any parameter of the synthesizer.
Envelopes generally consist of 4 main parameters, the famous ADSR, which stands for: Attack, Decay, Sustain, and Release.
The attack controls the time it takes the sound to reach its highest volume (or its highest pitch, for example, depending on which parameter you are modulating). The envelope always starts at 0 and rises to 100 (full value).
In terms of amplitude, percussive sounds, plucks, and leads will usually have a really fast attack, while sounds like ambient pads, will most likely have a much longer attack time.
The decay controls the time it takes the sound to transition from its highest value (i.e. its highest volume) to the sustain level, which I’ll explain now. It goes from 100 down to the value established by the sustain knob.
The sustain is the parameter that controls at which amplitude (or pitch, or whatever) the sound will remain constant while you keep pressing the note, after it has attacked and decayed.
The sustain does not control the time as the other ADSR parameters do, but instead it controls the value (from 0 to 100) at which the parameter you are modulating will stay.
Note that if the sustain is at 100, the decay will do nothing: the decay drops to the level the sustain is set to, and since 100 is already full value, it has nowhere to drop.
Continuing with the examples I gave earlier, percussive sounds and plucks usually have little to no sustain. On the other hand, leads and pads will probably have a considerable sustain value.
The release controls the time it takes the sound to fade out after the note is released (after you stop pressing it). It goes from the level at which the key was released down to 0 (if the note is released after reaching the sustain phase, it will go from the sustain level to 0).
It adds a little “tail” to the sound. Every sound will most likely have at least a little bit of release, since it provides some smoothness and prevents clicks. Nevertheless, leads often have really short release times, while pads and atmospheric sounds tend to have really long ones.
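Putting the four stages together, here’s a minimal ADSR sketch in plain Python (times in seconds, levels from 0 to 1 rather than 0 to 100; the function name and default values are my own choices):

```python
def adsr(t, gate_time, attack=0.05, decay=0.1, sustain=0.7, release=0.2):
    """Envelope level (0..1) at time t, for a note held for gate_time seconds.
    attack, decay, and release are durations; sustain is a level."""
    def held_level(t):
        if t < attack:                              # ramp 0 -> 1
            return t / attack
        if t < attack + decay:                      # ramp 1 -> sustain level
            return 1.0 - (t - attack) / decay * (1.0 - sustain)
        return sustain                              # hold while key is down
    if t < 0:
        return 0.0
    if t < gate_time:
        return held_level(t)
    start = held_level(gate_time)                   # level when key was let go
    if t < gate_time + release:                     # fade start -> 0
        return start * (1.0 - (t - gate_time) / release)
    return 0.0
```

With the defaults, a note held for 1 second peaks at the end of the attack, settles to the 0.7 sustain level, and fades to silence 0.2 seconds after release, matching the stages described above.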
That’s it for the fundamentals of ADSR Envelopes. However, if you are interested in learning more about this tool, we’ve got you covered with our dedicated article on Sound Envelopes: Shape Your Sound Using ADSR.
Low Frequency Oscillator (LFO)
LFOs are another way to modulate the parameters we’ve been talking about (amplitude, pitch, filters, and sometimes even more).
They’re like envelopes, but with two differences: they use waveforms to automate the sound (low-frequency waveforms, as the name indicates; sometimes even lower than 1 Hz, well into infrasound, the range of frequencies of 20 Hz or less that humans can’t hear), and they cycle indefinitely rather than triggering on every note.
The rate of the waveform can be controlled either using Hz or syncing it with the tempo of the song. The most usual waveforms used for LFOs are the basic ones, but some synths let you control exactly what waveform you want.
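As a sketch of how an LFO modulates a parameter, here’s a minimal tremolo (amplitude modulation) in plain Python; the function names and the 0-to-1 scaling are my own assumptions:

```python
import math

def lfo(t, rate_hz=2.0, depth=0.5, shape=math.sin):
    """LFO value at time t, scaled so the output swings between
    (1 - depth) and 1. shape is the waveform function to cycle through."""
    return 1.0 - depth * (0.5 + 0.5 * shape(2 * math.pi * rate_hz * t))

def tremolo(samples, sample_rate=44100, rate_hz=5.0, depth=0.5):
    """Multiply the signal's amplitude by the LFO: the classic tremolo."""
    return [s * lfo(i / sample_rate, rate_hz, depth)
            for i, s in enumerate(samples)]
```

The same `lfo` value could just as easily be fed to a filter cutoff or a pitch offset; tremolo is simply the amplitude case.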
Read our dedicated article about effects here. There you’ll find even more information and also audio examples.
Last but not least, most synthesizers have an effects module. If the one you are using doesn’t, you can load external effects in your DAW (which is sometimes an even “cleaner” way to use effects).
Although they aren’t exactly part of the synthesis of the sound (they are something you add, not the base of the sound), they have a huge impact on it, and using them aggressively can completely change its timbre. Here are some of the most common effects.
Saturation & Distortion
Both saturation and distortion add higher harmonics to the sound, making it fuller or warmer. They are useful on every kind of sound, especially the foreground ones.
Delay
The delay repeats the sound, each repetition quieter than the last, until it fades out. It’s also called echo. It’s useful for filling up a mix and giving it more ambience and atmosphere.
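A basic feedback delay can be sketched in a few lines of Python; this is a generic echo, not any particular plugin’s algorithm (names are mine):

```python
def delay(samples, sample_rate=44100, time_s=0.25, feedback=0.4, mix=0.5):
    """Feedback delay (echo): each repeat comes back 'feedback' times
    quieter, so the echoes die out instead of repeating forever."""
    d = max(1, int(time_s * sample_rate))   # delay length in samples
    buf = [0.0] * d                         # circular delay line
    out, pos = [], 0
    for x in samples:
        echoed = buf[pos]                   # signal from time_s ago
        buf[pos] = x + echoed * feedback    # write input + feedback back in
        out.append(x + echoed * mix)        # dry signal + echo
        pos = (pos + 1) % d
    return out
```

Feeding it a single click produces a train of echoes spaced `time_s` apart, each one `feedback` times quieter than the previous, which is exactly the fading repeat described above.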
Reverb
Reverb gives a sense of space to the sound, as if it were played in a specific type of room. It can be used either in small amounts (to give some sense of physical space) or in huge amounts (to create really big atmospheres).
This is one of the cases in which it’s often better to apply the effect outside the synthesizer (that is, in the plugin’s effect chain in your DAW), since that way you’ll achieve a much cleaner sound. But it’s still sometimes useful to have it included in the synthesizer.
Chorus
Chorus makes copies of the sound and plays them simultaneously, but delays and detunes them a little bit. It gives the sound some 80s vibes.
So these are the effects that you will most likely find in the synthesizer you are using. If you want to dig deeper, you can read our dedicated article on effects where you’ll find a long list of other effects that aren’t that common but that are necessary to understand.
Now that we’ve covered the essentials of sound synthesis, I’ll show you a brief definition of the most important types. You can, of course, read the dedicated article that we have covering each of them:
Subtractive Synthesis (for example, Sylenth1): it works by taking a basic waveform rich in harmonics (such as the saw wave or the square wave, among others) and shaping it by running it through a chain of filters and envelopes.
FM Synthesis (Sytrus, Ableton’s Operator): it works by taking a waveform and modulating its pitch with another waveform (it’s like modulating the pitch with an LFO of a very high frequency). This adds more harmonics to the sound, which usually then runs through envelopes, filters, LFOs, and effects.
Additive Synthesis: it works by adding together multiple sine waves to create a more complex waveform. The result then runs through envelopes, filters, LFOs, and effects.
Wavetable Synthesis (Serum, Massive, Ableton’s Wavetable): it uses a table with various switchable waveforms within which you can scroll. You can modulate the position of the wavetable with an LFO or envelope, and it will smoothly change the waveform (these waveforms are often really complex, and very useful for genres in which movement through time is a key component, such as dubstep).
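To illustrate the FM idea from the list above, here’s a two-operator sketch in plain Python; like most “FM” synths, it technically modulates the phase, which sounds equivalent (the function name and default values are mine):

```python
import math

def fm_sample(t, carrier_hz=220.0, mod_hz=110.0, index=2.0):
    """Simple two-operator FM: the modulator wiggles the carrier's phase.
    index controls the modulation strength; more index = more harmonics."""
    return math.sin(2 * math.pi * carrier_hz * t
                    + index * math.sin(2 * math.pi * mod_hz * t))
```

With `index` at 0 this collapses back to a plain sine wave; turning it up adds sidebands around the carrier, which is where FM’s characteristic metallic and bell-like timbres come from.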
Almost every synthesizer and VST plugin operates using these basic principles that we’ve explained. Knowing all of this, plus a lot of experimentation, you will eventually be able to recreate exactly the types of sound that you want.
There are different types of synthesis, and I recommend choosing a synthesizer that uses the one that sounds best to you and starting to experiment with it.
Experimentation is really important because it’s how you will understand how every parameter affects the sound, and how the sounds you hear in your favorite music are created. We have an article about the importance of experimentation and practice; click here to read more about this topic.
As we said earlier, you can read more in-depth articles about every concept that we’ve covered in this one if you want to dig deeper into sound design. And complementarily, there are a lot of books about sound synthesis that contain limitless amounts of knowledge and will teach you the specific physics of synthesizers.