Understanding Every Audio Effect

In our article about the basics of synthesis and sound design, we mentioned effects: ways to modify a sound after it has been synthesized. They are simply additional tools for shaping the sound.

Audio effects can either be included within the synthesizer, or loaded onto a mixer track in your DAW. Still, some effects work better in certain contexts when loaded outside the synthesizer, even if the synth includes them; reverb and delay are typical examples.

In this article, we’ll cover the most common effects in electronic music, especially in sound design, with a brief explanation of each. Some of them even have their own dedicated article, so you may want to check those out.

These effects are:

  • Compression (and multiband compression)
  • Distortion & saturation (and multiband distortion)
  • Bitcrusher
  • Frequency shifter
  • Flanger
  • Chorus
  • Phaser
  • Stereo imaging (and multiband stereo imaging)
  • Delay
  • Reverb

Also, some synthesizers treat EQs and filters as audio effects, but both are huge topics on their own, so I wouldn’t call them “effects” here.

The dry/wet knob

The most common parameter that you will find in almost every effect plugin is the dry/wet or mix knob. It determines the balance between the original (clean) signal and the affected signal: basically, how much of the effect is applied.

For example, if you set the mix knob to 100%, the original sound will be affected entirely by the effect, whereas if you set it to 0%, the signal will remain clean and no effect will be applied.

Almost every VST plugin includes a mix knob, but some plugins don’t. If that’s the case, you can create your own dry/wet parameter if you use Ableton.

That’s why we say a sound is dry when it’s “clean”, with no effect applied; on the other hand, we say a sound is wet when it has at least some amount of the effect applied.
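If you’re curious what that knob actually does, it’s just a crossfade between the untouched signal and the processed one. Here’s a rough sketch in Python (the `apply_effect` helper and the hard-clip “effect” are made up purely for illustration):

```python
import numpy as np

def apply_effect(dry, effect, mix):
    """Blend a dry signal with its processed (wet) version.

    mix = 0.0 -> only the dry signal; mix = 1.0 -> only the wet signal.
    """
    wet = effect(dry)
    return (1.0 - mix) * dry + mix * wet

# Example: a crude "effect" that just hard-clips the signal.
sr = 44100
dry = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)   # one second of a 440 Hz sine
half_wet = apply_effect(dry, lambda x: np.clip(x * 4, -1, 1), mix=0.5)
```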

Compression

Compression is a dynamics-based effect that is used to decrease the dynamic range of a sound.

This means that the loudest parts of the sound are turned down and, with make-up gain, the overall level can then be brought back up, which effectively makes the quietest parts louder.

You can think of it as a very fast, automatic volume automation that lowers the volume at the highest points and turns it up at the lowest ones.
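To make the “fast volume automation” idea concrete, here’s a heavily simplified compressor sketch in Python: a plain downward compressor with a threshold, a ratio, and make-up gain, and none of the attack/release smoothing a real plugin has. The parameter values are arbitrary.

```python
import numpy as np

def compress(signal, threshold_db=-20.0, ratio=4.0, makeup_db=6.0):
    """Turn down everything above the threshold, then add make-up gain."""
    level_db = 20 * np.log10(np.abs(signal) + 1e-12)    # instantaneous level in dB
    over_db = np.maximum(level_db - threshold_db, 0.0)  # how far above the threshold
    gain_db = -over_db * (1.0 - 1.0 / ratio)            # gain reduction (the "volume automation")
    return signal * 10 ** ((gain_db + makeup_db) / 20)

# A loud burst followed by a quiet tail: the burst gets turned down,
# and the make-up gain then brings the quiet tail up.
x = np.concatenate([np.ones(100), 0.1 * np.ones(100)])
y = compress(x)
```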

Ableton Compressor

Ableton‘s stock compressor used on a drum loop. The yellow line is the “volume automation”. Notice how it goes down during the peaks, and it goes up during the parts where the volume is low.

Compression drawing

Uncompressed signal vs. a compressed one. Notice that it’s not only an overall change in volume: the parts that were originally at a low volume will stay at that volume. Only the high peaks will be reduced.

This is an example of an uncompressed drum loop vs. a compressed one:

Note: this amount of compression is really extreme and you will probably never want to compress a drum loop like that. It’s just to make it obvious for this example. We talk about this type of experimentation in this article.

Multiband Compression

As its name suggests, multiband compression is the same as compression, but with different settings applied to different parts of the frequency spectrum.

For example, while mastering your song, you may want to compress the low frequencies differently from the mids or the highs.
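Conceptually, a multiband compressor splits the signal into bands, compresses each band with its own settings, and sums them back together. Here’s a rough sketch, reusing the `compress` function from the compression sketch above and SciPy’s Butterworth filters for the split (a real plugin would use proper crossover filters; the crossover frequencies here are arbitrary):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def split_bands(signal, sr, low_cut=200.0, high_cut=2000.0):
    """Split a signal into low / mid / high bands."""
    low = sosfilt(butter(4, low_cut, 'lowpass', fs=sr, output='sos'), signal)
    mid = sosfilt(butter(4, [low_cut, high_cut], 'bandpass', fs=sr, output='sos'), signal)
    high = sosfilt(butter(4, high_cut, 'highpass', fs=sr, output='sos'), signal)
    return low, mid, high

def multiband_compress(signal, sr):
    low, mid, high = split_bands(signal, sr)
    # Each band gets its own settings, then the bands are summed back together.
    return (compress(low, threshold_db=-24, ratio=4)    # squash the low end harder
            + compress(mid, threshold_db=-18, ratio=2)  # gentler on the mids
            + high)                                     # leave the highs untouched
```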

FL Studio Maximus

This is Maximus, FL Studio’s multiband compressor par excellence.

Distortion & Saturation

Both saturation and distortion add higher harmonics to the sound, making it sound fuller or warmer. They are useful on almost every kind of sound, especially foreground elements.

This can be used as an obvious effect, like on electric guitars in genres like rock or metal, but it can also be used subtly, to add a little bit more warmth to a drum loop or a lead sound, for example.
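If you wonder where those extra harmonics come from: a very common way to model saturation digitally is to push the signal through a gentle waveshaping curve such as tanh. The harder you drive it, the more the waveform bends and the more harmonics appear. A minimal sketch (the drive values are arbitrary):

```python
import numpy as np

def saturate(signal, drive=2.0):
    """Soft-clip the signal with a tanh curve; higher drive = more harmonics."""
    return np.tanh(drive * signal) / np.tanh(drive)   # normalized so peaks stay at +/-1

t = np.arange(44100) / 44100
sine = np.sin(2 * np.pi * 110 * t)      # a clean 110 Hz sine
warm = saturate(sine, drive=1.5)        # subtle warmth
crushed = saturate(sine, drive=10.0)    # obvious, aggressive distortion
```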

SoundToys Devil-Loc

Devil-Loc distortion by SoundToys; or, as they call it, an “audio level destroyer”. Most saturation plugins are really simple, with a very small number of knobs. The most important one is usually the “drive” knob (here labeled “crush”), which determines the amount of saturation applied.

This is an example of a clean sound vs. a highly distorted one:

Multiband Distortion

Multiband distortion is the same as multiband compression but, obviously, with distortion. This is useful, for example, if you want to distort the higher frequencies, but leave the low-end clean.
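In code it’s the same band-split idea as multiband compression, just with a waveshaper per band instead of a compressor. A sketch of exactly that use case, reusing the `split_bands` and `saturate` helpers from the earlier sketches (crossover and drive values are arbitrary):

```python
def distort_highs_only(signal, sr):
    """Saturate the mids and highs, leave the low end clean."""
    low, mid, high = split_bands(signal, sr)   # from the multiband compression sketch
    return low + saturate(mid, drive=3.0) + saturate(high, drive=6.0)
```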

One of the best plugins for achieving this effect is Fabfilter’s Saturn:

Fabfilter Saturn

Fabfilter Saturn: the three columns represent the three different bands of frequencies that can be distorted separately with the “drive” knob.

Bitcrusher

A bitcrusher reduces the resolution of a sound by lowering its bit depth and/or its sample rate, deliberately degrading its quality. This gives an 8-bit or lo-fi kind of sound, reminiscent of classic Nintendo-era music.

It is also sometimes useful for achieving a “vowel” sound, especially for dubstep growls, to make your sound “talk”. This works particularly well when combined with filters before or after the bitcrushing.

Some plugins include this effect as a type of distortion, like Serum does, which is actually kind of accurate.
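If you want to see how simple the idea really is, here’s a bare-bones bitcrusher sketch: the bit-depth reduction quantizes the amplitude to a limited number of steps, and the sample-rate reduction just holds each sample for several sample periods.

```python
import numpy as np

def bitcrush(signal, bits=8, downsample=4):
    """Crush a signal in the range [-1, 1] by reducing bit depth and sample rate."""
    levels = 2 ** bits
    crushed = np.round(signal * (levels / 2)) / (levels / 2)   # quantize the amplitude
    # Crude sample-rate reduction: keep every Nth sample and hold it.
    crushed = np.repeat(crushed[::downsample], downsample)[:len(signal)]
    return crushed
```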

Ableton Bit Crusher

Ableton‘s bitcrusher, which is called Redux.

This is an example of the same drum loop clean vs. bitcrushed:

Frequency Shifter

A frequency shifter is an effect you won’t run into that often. It raises or lowers every frequency in the sound by a fixed number of Hz.

Unlike a pitch shifter (which is very similar), a frequency shifter does not maintain the harmonic relationships between the partials that make up the sound, which can make it sound very unnatural. That’s why it sounds especially good on non-pitched sounds such as drums.
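The classic digital way to build one is single-sideband modulation: take the analytic signal (via a Hilbert transform) and multiply it by a complex sinusoid at the shift frequency. A sketch using SciPy, with the +400 Hz shift matching the audio example below:

```python
import numpy as np
from scipy.signal import hilbert

def frequency_shift(signal, shift_hz, sr):
    """Shift every frequency component by a fixed number of Hz.

    Unlike pitch shifting, this breaks harmonic relationships: partials at
    100/200/300 Hz shifted by +400 Hz land at 500/600/700 Hz, which are no
    longer whole-number multiples of each other; hence the unnatural sound.
    """
    analytic = hilbert(signal)                   # signal + its Hilbert transform
    t = np.arange(len(signal)) / sr
    return np.real(analytic * np.exp(2j * np.pi * shift_hz * t))
```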

Ableton Frequency Shifter

Ableton‘s Frequency Shifter. You control the tuning with the “frequency” knob, and an included LFO can vary the frequency over time.

This is the same drum loop as before, completely clean vs. frequency-shifted by +400 Hz:

Flanger

A flanger is a modulation-based effect that produces a characteristic, oscillating metallic sound, especially in the mid and high frequencies.

This is achieved by duplicating the original sound: one copy remains clean, while the other is delayed by a tiny amount (generally less than 5 milliseconds). This de-phasing delay usually varies over time, creating the oscillating effect.

This effect is mostly used in very subtle ways, and the most common parameters in a flanger plugin/module are:

  • Delay: determines how much the duplicated wave is delayed with respect to the original sound.
  • Frequency: determines how fast the delay time of the duplicated wave varies (how fast the oscillating effect is).
  • Depth: determines how much of the duplicated wave is blended with the original one.

KiloHearts Flanger
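For the curious, here’s what those three parameters do in code: a bare-bones flanger where an LFO sweeps the delay time of the duplicated copy (no interpolation, arbitrary parameter values, so treat it as a sketch rather than a usable plugin):

```python
import numpy as np

def flanger(signal, sr, delay_ms=3.0, rate_hz=0.5, depth=0.7):
    """Mix the signal with a copy whose short delay is swept by an LFO."""
    n = np.arange(len(signal))
    lfo = 0.5 * (1 + np.sin(2 * np.pi * rate_hz * n / sr))   # sweeps between 0 and 1
    delay_samples = lfo * delay_ms * 1e-3 * sr               # 0 .. delay_ms of delay
    read_idx = np.clip(n - np.round(delay_samples).astype(int), 0, len(signal) - 1)
    return signal + depth * signal[read_idx]                 # clean + delayed copy
```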

This is an example of a clean and simple chord vs. the same chord but with a flanger applied:

Chorus

Chorus is pretty similar to the flanger. It creates duplicated versions of the original sound that, in addition to being delayed, are also slightly detuned.

This effect is useful for achieving the classic sound of the 80s.
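Under the hood, a chorus can be sketched just like the flanger above, only with a longer delay (tens of milliseconds) that slowly wobbles; that wobble is what produces the slight detuning of the copy. A rough sketch with arbitrary values:

```python
import numpy as np

def chorus(signal, sr, base_delay_ms=20.0, rate_hz=0.8, depth_ms=5.0):
    """Add a delayed copy whose delay time slowly wobbles, slightly detuning it."""
    n = np.arange(len(signal))
    delay_ms = base_delay_ms + depth_ms * np.sin(2 * np.pi * rate_hz * n / sr)
    read_idx = np.clip(n - np.round(delay_ms * 1e-3 * sr).astype(int), 0, len(signal) - 1)
    return 0.7 * signal + 0.7 * signal[read_idx]   # original + detuned copy
```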

For example, take a listen to this guitar, first clean and then with chorus applied:

KiloHearts Chorus

Phaser

A phaser is basically a filter that creates a series of peaks and valleys across the frequency spectrum, which move around over time. It’s something like combining a bunch of notch filters at different frequencies that, to add more complexity, also vary over time. This variation is often controlled by an LFO.
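Here’s a quick sketch of that idea: a chain of first-order all-pass filters whose center frequency is swept by an LFO, mixed 50/50 with the dry signal so the phase differences carve out the moving notches. The sweep range and rate are arbitrary, and the per-sample loop is slow; it’s only meant to show the structure.

```python
import numpy as np

def phaser(signal, sr, rate_hz=0.3, min_hz=300.0, max_hz=3000.0, stages=4):
    """Sweep a chain of all-pass filters with an LFO, then mix with the dry signal."""
    n = np.arange(len(signal))
    lfo = 0.5 * (1 + np.sin(2 * np.pi * rate_hz * n / sr))
    fc = min_hz + (max_hz - min_hz) * lfo                 # swept center frequency
    a = (np.tan(np.pi * fc / sr) - 1) / (np.tan(np.pi * fc / sr) + 1)

    wet = signal.astype(float)
    for _ in range(stages):
        out = np.zeros_like(wet)
        x_prev = y_prev = 0.0
        for i in range(len(wet)):                         # first-order all-pass filter
            out[i] = a[i] * wet[i] + x_prev - a[i] * y_prev
            x_prev, y_prev = wet[i], out[i]
        wet = out
    return 0.5 * (signal + wet)                           # dry + wet creates the notches
```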

This effect is often used for giving real/organic instruments a synthetic feel, as if they were actually synthesized.

Phaser frequency spectrum chart

Those upside-down peaks are the valleys that move through time and give the phaser its characteristic sound. 

Let’s take the same example as before, followed by the same chord affected by a phaser:

Stereo Imaging

Stereo imaging, stereo separation, or stereo widening is a huge topic that deserves a dedicated article.

In a few words, stereo imaging plugins control the difference between the left and right channels: the more difference there is between them, the more stereo-separated or “wider” we say the sound is. This width can be heard through headphones or a pair of speakers, but not through, for example, a monophonic phone speaker.

On stereo systems, stereo width always sounds cool, but we’ve got to be careful: taken to the extreme, it may cause phase correlation problems.
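A common way to implement a width control is mid/side processing: split the signal into what both channels share (mid) and what makes them differ (side), then simply scale the side part. A minimal sketch (the width values are arbitrary):

```python
def set_width(left, right, width=1.5):
    """Scale the difference between the channels (mid/side processing).

    `left` and `right` are NumPy arrays; width = 0 collapses the sound to
    mono, 1 leaves it untouched, and values above 1 exaggerate the
    left/right differences.
    """
    mid = 0.5 * (left + right)      # what both channels share
    side = 0.5 * (left - right)     # what makes them different
    return mid + width * side, mid - width * side   # new left, new right
```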

This is an example of a monophonic sound vs. the same sound with stereo imaging applied (you must listen to these samples with headphones or external speakers, otherwise you won’t notice the difference):

Multiband Stereo Imaging

Just like multiband compression and distortion, you can apply different amounts of stereo imaging to the different bands of the frequency spectrum.

This is particularly useful because, as a general rule, the higher the frequency, the more stereo separation it can take. It’s also useful for making the low frequencies monophonic, since phase correlation problems often happen down there, and a mono low end helps make the bass more consistent and solid.
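The “mono low end” trick from the previous paragraph is easy to picture in code: split at a crossover frequency, collapse everything below it to mono, and leave the rest as it is. A sketch reusing the `set_width` helper from the stereo imaging sketch (the 120 Hz crossover is just an example):

```python
from scipy.signal import butter, sosfilt

def mono_low_end(left, right, sr, crossover_hz=120.0):
    """Make everything below the crossover mono, keep the highs untouched."""
    lp = butter(4, crossover_hz, 'lowpass', fs=sr, output='sos')
    hp = butter(4, crossover_hz, 'highpass', fs=sr, output='sos')
    mono_low, _ = set_width(sosfilt(lp, left), sosfilt(lp, right), width=0.0)
    return mono_low + sosfilt(hp, left), mono_low + sosfilt(hp, right)
```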

Izotope Ozone 10

Izotope Ozone 8’s Stereo Imager. You can create up to four frequency bands and treat their stereo image individually.

Delay

Delay is a time-based effect that repeats the sound over and over, each repetition quieter than the last. It’s also called echo. It’s useful for filling up a mix and giving it more ambience and atmosphere.

The most common parameters in a delay plugin are the delay time itself and the feedback.

The delay time determines how quickly the echoes repeat, while the feedback determines how long they keep repeating (how long the “tail” will be).
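In code, a simple delay is just a loop that keeps adding quieter copies of the sound further along a buffer: the delay time sets the spacing between the echoes and the feedback sets how fast they fade out. A rough sketch (arbitrary values):

```python
import numpy as np

def feedback_delay(signal, sr, delay_ms=375.0, feedback=0.5, repeats=8):
    """Add decaying echoes: each repeat is `feedback` times quieter than the last."""
    delay_samples = int(delay_ms * 1e-3 * sr)
    out = np.concatenate([signal, np.zeros(delay_samples * repeats)])
    for i in range(1, repeats + 1):
        start = i * delay_samples                       # each echo starts one delay later
        out[start:start + len(signal)] += (feedback ** i) * signal
    return out
```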

This is an example of a plucked arpeggiated chord, first completely dry, and then with delay applied:

Reverb

Reverb gives a sense of space to the sound as if it was played in a specific type of room. It can be used either in small amounts (to give some sense of physical space) or in huge amounts (to create ambience and really big atmospheres).

The most common parameters of a reverb are the size (which determines the size of the room being emulated, from a huge cathedral down to a small chamber) and the decay (which determines how long the reverb “tail” lasts, measured in seconds).
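One toy way to fake a reverb in code is to convolve the sound with a burst of noise that dies out exponentially: the length of that burst plays the role of the decay parameter, and longer, denser bursts feel like bigger rooms. Real reverb plugins are far more sophisticated, but this captures the idea (arbitrary values):

```python
import numpy as np

def toy_reverb(signal, sr, decay_s=2.0, mix=0.3):
    """Convolve the signal with an exponentially decaying noise 'room response'."""
    t = np.arange(int(decay_s * sr)) / sr
    impulse_response = np.random.randn(len(t)) * np.exp(-3.0 * t / decay_s)
    wet = np.convolve(signal, impulse_response)[:len(signal)]
    wet = wet / (np.max(np.abs(wet)) + 1e-12)           # keep the wet level sane
    return (1 - mix) * signal + mix * wet               # built-in dry/wet mix
```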

This is one of the cases in which it’s often better to use the effect outside the synthesizer (I mean, in the effect chain of your DAW’s mixer track), because that way you’ll get a much cleaner sound. Still, it’s sometimes useful to have it included in the synthesizer.

This is the same example as before, but with reverb applied:

So, these are the most common effects that you will most likely find during your music production journey.

These sound effects can be stacked one on top of each other, which is called an effect chain. It’s important to know that the order in which you load these will have a huge impact on the resulting sound.

As I said, this article is not intended to give an in-depth explanation of each of them.

I encourage you to read our dedicated articles or to look for additional information on Google or YouTube about the effects that you are the most interested in.

Finally, in case you haven’t already, we suggest taking a look at our article covering the basic concepts about synthesis and sound design.