Assignment 5/28-Intro to music production

Algorithmic and Convolution Reverb

I’m Will Servant and I’m an old fart who started playing music back when drums were animal skins stretched over hollowed out logs, struck by wooly mammoth bones. I am enjoying this course immensely and have learned a great deal so far. As I do not currently have a DAW to use I am presenting today’s lesson by way of text. I hope you are able to follow the lesson and comprehend it easily. If you cannot, please let me know how I can do a better job. Thanks for your help.

When it comes to reverb, let me begin by saying we should all be thankful for the digital reverb plugin. Analog methods of adding reverb could be cumbersome, and parameters such as delay time and pre-delay could not be easily controlled.

There are two basic types of reverb plugins that give the producer the tools to create a reverberation effect that accurately represents a real space. They both use math to create reverb effects, but do so in a different manner. In describing these two types of reverb I’d like to go into a bit more depth than did the video lectures. I hope you can bear with me.

Algorithmic reverb uses mathematical formulas to build the reverb effect from scratch and apply it to the signal. It gives the producer a lot of flexibility: the functions that make up reverb can each be controlled manually to create a highly customized reverb environment. A convolution reverb instead starts from a recorded impulse response of a real physical space and mathematically combines (convolves) it with the signal, applying the total reverberation character of that space to the track.
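For anyone who wants to see what "applying the impulse response" actually means, here is a minimal Python sketch, not how any particular plugin is implemented, assuming you have a mono dry recording and a mono impulse response at the same sample rate (the file names and mix amounts are just placeholders):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Placeholder file names: a mono dry track and a mono impulse response
# recorded in a real space, both at the same sample rate.
rate, dry = wavfile.read("dry_vocal.wav")
_, impulse = wavfile.read("concert_hall_ir.wav")

# Work in floating point so none of the intermediate math clips.
dry = dry.astype(np.float64)
impulse = impulse.astype(np.float64)

# The convolution itself: every sample of the dry signal launches its own
# scaled copy of the room's response, and all of those copies add up into
# the reverb tail heard in the wet signal.
wet = fftconvolve(dry, impulse)

# Normalize, blend some dry signal back in, and write the result out.
wet /= np.max(np.abs(wet))
dry = np.pad(dry / np.max(np.abs(dry)), (0, len(wet) - len(dry)))
mix = 0.7 * dry + 0.3 * wet
wavfile.write("vocal_with_hall.wav", rate, (mix * 32767).astype(np.int16))
```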

One analogy used to describe the difference between the two types of reverb plugins is that algorithmic reverb is similar to a synthesizer: it uses math to control each component separately, combining them to create the reverb effect. Convolution reverb is like a sampler: it uses recordings of real spaces to provide separate, complete reverb environments.

An analogy I like to use is that of making soup. The algorithmic way of making soup uses the various separate ingredients, i.e. a homemade soup. Using this method you are in control of the parameters, the ingredients, their relative amounts, the temperature and timing of the cooking. You come up with a meal that is uniquely yours. However, you need to have knowledge of how the soup is supposed to taste to avoid serving one up that nobody likes. You can also create and use recipes, which are like algorithmic reverb presets, where all the parameter settings are in memory for you to easily apply. The downside here is that these presets/recipes are only approximations of the type of soup you want. To really be creative you must have knowledge of how to cook, in order to get the meal you want.

A convolution soup is when you go to the store and buy ready-made soup. There are a wide variety of soups you can buy. They are all different and the ingredients have been put together already, from a standard recipe that somebody else created. You choose the one you want to eat. Any one type of any brand of soup will be universally the same in flavor, texture and consistency. In this circumstance you need to know what kind and/or brand of soup you or your guests might like. They may like a school gym instead of a concert hall or night club, as it were. You may also want to know which soup goes best with the beverage you are serving, i.e. which reverb sounds best on which track. The downside here is that you don't have a lot of control over what comes out of the can. To use a tired and overused bromide, it is what it is.

To use an algorithmic reverb you need to know a little about controlling the functions that comprise reverb. Most algorithmic reverb plugins have two distinct control sections, early reflections and diffuse reverb, which model how reverberation works in real spaces. Early reflections are the first handful of delays caused by the sound bouncing off nearby surfaces. Diffuse reverb is the dense wash that follows as those reflections scatter farther into the space. The early reflection controls typically include pre-delay (the length of time between the original sound and the beginning of the reverb), room shape and/or size, and placement in the stereo soundscape. The diffuse reverb controls typically include delay time, high-frequency EQ, the density of the reverb, and, once again, when the effect starts and how wide it spreads across the stereo field.
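To make those two sections concrete, here is a toy Python sketch of an algorithmic reverb, a rough model rather than how any commercial plugin works: a pre-delay, a few early reflection taps, and a single feedback comb filter standing in for the diffuse tail. All the function names, tap times and gains are made-up placeholders:

```python
import numpy as np

def toy_algorithmic_reverb(dry, rate, pre_delay_ms=20.0,
                           early_taps=((0.012, 0.6), (0.023, 0.45), (0.041, 0.3)),
                           tail_delay_ms=50.0, feedback=0.6, wet=0.3):
    """Toy algorithmic reverb: pre-delay, a few early reflections, and one
    feedback comb filter standing in for the diffuse tail."""
    dry = np.asarray(dry, dtype=np.float64)
    out = dry.copy()
    pre = int(rate * pre_delay_ms / 1000.0)

    # Early reflections: discrete delayed, attenuated copies of the source,
    # as if the sound had bounced off a few nearby surfaces.
    for delay_s, gain in early_taps:
        d = pre + int(rate * delay_s)
        if d < len(dry):
            out[d:] += wet * gain * dry[:len(dry) - d]

    # Diffuse tail: a feedback comb filter keeps recirculating the sound,
    # each pass around the loop a little quieter than the last.
    tail = np.zeros_like(dry)
    d = pre + int(rate * tail_delay_ms / 1000.0)
    for n in range(len(dry)):
        echo = tail[n - d] if n >= d else 0.0
        tail[n] = dry[n] + feedback * echo
    out += wet * 0.5 * (tail - dry)   # keep only the recirculated energy

    return out / np.max(np.abs(out))
```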

Using convolution reverb is essentially a matter of listening to the reverbs of any number of spaces as they are applied to the track, and then choosing the one you feel works best for your purposes. It is obviously a little simpler to use than algorithmic reverb, but it has fewer permutations and is therefore less flexible. It can also be overwhelming trying to audition the sound of many different spaces, especially since the reverb from a particular space can sound different in the full mix than it does on a soloed track.

In summary, both types of digital reverb processors have advantages and limitations, and both are equally valuable to the producer in creating the atmosphere and sound environment they want the listener to experience. Both are very useful tools in finalizing a mix. But, as in most aspects of producing, our greatest tool is our ears, and listening is the best way to find which kind of reverb is best for your project. Through trial and error you can find that "just right" balance between the chicken and the vegetables, but remember to use your spices sparingly, so that your guests barely know the reverb is there.

Assignment 5/21 Intro to music production

Probably the most recognizable aspect of music is dynamics: the changes in amplitude that we perceive subjectively as changes in loudness and measure objectively as changes in volume. Manipulating dynamics is one of the most versatile and important tools the producer uses in post production to shape the sound, both of individual instruments and of the mix as a whole.

Dynamic processors change certain parameters of dynamics, under certain rules, to alter volume at any gain stage in the signal path, for any recorded track or combination of tracks. Perhaps the simplest dynamic processor is the producer themselves. The producer can change the volume of a track by applying amplitude automation, or even change levels physically on the fly, such as by riding faders to balance the dynamics of a track. The human dynamic processor uses a two-stage process to change dynamics in real time: first, they analyze the dynamics of the track and decide where the volume should be raised or lowered; second, they move the faders based on that analysis. This two-step process carries over to dynamic processing done by outboard gear and software plugins.

All hardware and software dynamic processors have a side chain or key section, which analyzes the input signal, and a volume fader section, which changes the volume over time. This is because all dynamic processors are, at heart, some type of automated volume control. And because they all do the same job, they all use the same parameters and the same means of changing those parameters. The four essential parameters of dynamic processing are threshold, ratio, attack, and release.
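For the curious, here is a rough Python sketch of the side chain half of that design: an envelope follower that tracks how loud the input is from moment to moment, so the gain section has something to react to. This is only a conceptual model, not how any particular plugin is built; the function name, the one-pole smoothing, and the default times are all illustrative, and the signal is assumed to be a mono array of floating-point samples.

```python
import numpy as np

def envelope_follower(signal, rate, attack_ms=10.0, release_ms=100.0):
    """Side chain half of a dynamic processor: track the input's level over
    time, rising quickly (attack) and falling back slowly (release)."""
    # One-pole smoothing coefficients derived from the attack/release times.
    attack_coef = np.exp(-1.0 / (rate * attack_ms / 1000.0))
    release_coef = np.exp(-1.0 / (rate * release_ms / 1000.0))

    env = np.zeros(len(signal))
    level = 0.0
    for n, x in enumerate(np.abs(signal)):
        coef = attack_coef if x > level else release_coef
        level = coef * level + (1.0 - coef) * x   # glide toward the new level
        env[n] = level
    return env
```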

Threshold represents the amplitude level at which the processor begins working. Changing the threshold can influence the sound a great deal, as different settings vary the results of using the same processor. Ratio is the amount of processing applied to the signal once the volume reaches and surpasses the threshold. It is expressed as the ratio of input to output: a 4:1 ratio, for example, means that for every four decibels the input rises above the threshold, the output rises by only one decibel. Attack is how fast the processor begins to work once the threshold is crossed. Changing this parameter influences how the beginning of a sound is treated. Instruments often begin with a transient, a rapid spike in amplitude; a snare drum is a good example of an instrument that produces a lot of transients. Changing the attack changes how much of the transient we hear and influences how smooth or punchy the sound is. Release is the opposite of attack: it sets how long the processor keeps working after the signal falls back below the threshold, before the effect is "released". This influences how choppy or how gradually the processed sound finishes.
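To tie the four parameters together, here is a hedged sketch of a simple compressor built on the envelope follower above. The threshold, ratio and time values are placeholders, and real compressors differ in many details, but the roles of the parameters are the same:

```python
import numpy as np

def compress(signal, rate, threshold_db=-20.0, ratio=4.0,
             attack_ms=10.0, release_ms=100.0):
    """Simple compressor: above the threshold, `ratio` dB of input movement
    produces only 1 dB of output movement."""
    env = envelope_follower(signal, rate, attack_ms, release_ms)
    env_db = 20.0 * np.log10(np.maximum(env, 1e-9))      # detected level in dB

    # How far the detected level sits above the threshold (zero if below it).
    overshoot_db = np.maximum(env_db - threshold_db, 0.0)

    # At 4:1, 4 dB of overshoot should come out as 1 dB, i.e. 3 dB of reduction.
    gain_reduction_db = overshoot_db * (1.0 - 1.0 / ratio)

    gain = 10.0 ** (-gain_reduction_db / 20.0)
    return signal * gain
```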

The four major types of dynamic processors are compressors, expanders, noise gates, and limiters. All four act on the same four parameters but operate under different rules. A compressor reduces the dynamic range by turning down the loud sounds; the rule for compression is: as the input gets louder above the threshold, the gain is turned down. Shrinking the dynamic range this way means the loudest parts are now softer, which allows the producer to raise the gain of the entire track (makeup gain) so that it sits louder in the mix without distorting, effectively making the soft parts louder as well. There are many different applications of compressors: on individual instruments, on entire mixes, and for special effects. The four parameters interact with one another, creating an almost limitless number of possible results. This vast range of possibilities makes the compressor widely used but difficult to master; even the best producers continue to learn more about compression over time.
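As a back-of-the-envelope illustration (all the numbers here are made up), this is the arithmetic behind "compress, then raise the whole track":

```python
# Made-up numbers: a vocal peaking at -4 dBFS with quiet phrases around -30 dBFS,
# run through a compressor set to a -20 dBFS threshold and a 4:1 ratio.
peak, quiet = -4.0, -30.0
threshold, ratio = -20.0, 4.0

new_peak = threshold + (peak - threshold) / ratio    # only the peak is reduced: -16 dBFS
print(peak - quiet, "dB of dynamic range before")    # 26 dB
print(new_peak - quiet, "dB of dynamic range after") # 14 dB

# Add 12 dB of makeup gain: the peaks are back at -4 dBFS, but the quiet phrases
# now sit at -18 dBFS, so the whole track sounds louder and the peaks still never clip.
makeup = peak - new_peak
print(quiet + makeup, "dBFS for the quiet phrases after makeup gain")
```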

Expanders are the opposite of compressors in that they increase the dynamic range, either by making the loud parts louder or the soft parts softer. Expansion is not used as much as compression, but it can be very useful in situations such as mixing a heavily compressed recording, to give back some of the volume differences of the original performance. For example, an orchestra recorded with compression to reduce the chance of distortion could have some of its dynamic range restored in the mixdown through expansion. The rule for expansion is: as the input gets louder, the gain is turned up, so differences in level are exaggerated rather than smoothed out.
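Reduced to a few lines (again with placeholder numbers and a hypothetical function name), an expander simply stretches each level's distance from the threshold:

```python
def expand_level_db(level_db, threshold_db=-20.0, ratio=2.0):
    """Expander rule: each level's distance from the threshold is stretched,
    so loud parts come out louder and quiet parts come out quieter."""
    return threshold_db + (level_db - threshold_db) * ratio

# Placeholder numbers: a squashed track whose levels only span -24 to -16 dBFS.
for level_db in (-24.0, -20.0, -16.0):
    print(level_db, "->", expand_level_db(level_db))   # -28, -20, -12: range doubled
```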

Limiters and noise gates use compression-style processing in special ways and function in somewhat opposite directions. Noise gates allow only the sounds above a certain volume to pass through the "gate", cutting off everything under the threshold. Gates are useful for removing unwanted sounds from the mix, such as foot tapping, finger movements, squeaks from chairs, etc. Limiters, on the other hand, prevent the signal from rising much above the threshold; they are essentially compressors that operate with ratios of 10:1 or higher. They are traditionally used to keep loud peaks from distorting, but in modern usage they also act as loudness maximizers: heavy limiting allows the overall gain to be increased a great deal, making the whole track sound louder while assuring the producer the signal will not distort.
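Here are those two rules side by side as a hedged sketch working on levels in decibels; the threshold, ceiling, and ratio values are only examples, and the function names are made up:

```python
import numpy as np

def gate_level_db(level_db, threshold_db=-50.0):
    """Gate rule: anything below the threshold is cut off entirely."""
    return level_db if level_db >= threshold_db else -np.inf   # silence

def limit_level_db(level_db, ceiling_db=-1.0, ratio=20.0):
    """Limiter rule: a compressor with a very high ratio, so the output can
    barely rise above the ceiling no matter how hot the input gets."""
    if level_db <= ceiling_db:
        return level_db
    return ceiling_db + (level_db - ceiling_db) / ratio

print(gate_level_db(-60.0))   # -inf: a chair squeak below the threshold vanishes
print(limit_level_db(6.0))    # about -0.65: a 6 dBFS peak is held just over the ceiling
```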

Dynamic processors have a multitude of uses in the studio in both the tracking and post production stages. Knowing the basics of how each processor works and what it does gives you a good head start to the fine art of applying dynamic processing to change the emotional and spatial presentation for the listener. You will be improving your processing skills forever, as you gain experience as a producer. Enjoy.

Intro to Music Production Assignment 5/14

One of the major advantages of recording in the digital realm over the analog realm is that signal processing is done by software, much of which is included within your DAW, rather than with expensive outboard gear. Digital signal processing, or DSP, is cheaper, easier to use, and has continued to become more accurate, more sophisticated and more innovative. In fact, DSP is so easy to use, and there are so many third-party processors available for your DAW, that the big danger for the producer is no longer scraping together the basics but using too much signal processing.

Today we will be talking about some simple but important knowledge regarding the classification of the effects that processors apply to the signal. These effects are the same in both the digital and analog realms, because they relate to the basic building blocks of sound and are not exclusive to either domain.

Effects can be divided into three major categories, corresponding to the three major physical characteristics of sound. First there are the dynamic effects. Dynamic effects control amplitude, which is the strength of the compression and rarefaction of sound waves as they move through the air. To the listener these effects affect the volume of the sound. Dynamic effects include compressors, limiters, expanders, and noise gates. We will talk about each of these effects and their applications later in the course.

Second, we have delay effects, which control the propagation of sound waves, that is, how sound waves move through time and space. To the listener, delay effects make certain parts of a sound appear three dimensional or appear to happen at a different time than the rest of the sound. Delay effects include reverbs, delays, phasers, flangers, and choruses. Once again, we will examine these effects individually later.

Finally we have filter effects, which control the timbre. Timbre is the relative balance among the amplitudes of the different frequencies that make up a sound, and that balance produces the quality we often call tone in the ear of the listener. Filters can amplify or attenuate particular frequencies to produce the vast variety of tones we distinguish in our minds, apart from pitch or volume, that influence how we perceive sound. Filter effects include high-pass filters, low-pass filters, band-pass filters, parametric equalization, and graphic equalization.
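As a small illustration of filtering (a sketch, not how any particular EQ plugin works), here is a high-pass filter built with SciPy that attenuates everything below a chosen cutoff, such as low-end rumble under a vocal. The cutoff, the filter order, and the demo signal are all made up:

```python
import numpy as np
from scipy.signal import butter, lfilter

def high_pass(signal, rate, cutoff_hz=100.0, order=2):
    """High-pass filter: attenuate everything below the cutoff frequency,
    for example to clear low-end rumble out from under a vocal."""
    b, a = butter(order, cutoff_hz / (rate / 2.0), btype="highpass")
    return lfilter(b, a, signal)

# Demo on a made-up signal: 50 Hz rumble plus a 440 Hz tone. After filtering,
# the rumble is heavily attenuated while the tone passes nearly untouched.
rate = 44100
t = np.arange(rate) / rate
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
cleaned = high_pass(signal, rate)
```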

Each of these effect types is a major tool available to the producer to shape and fine-tune the quality of the sounds that comprise the final mix. Knowing which effects relate to which qualities of sound tells us which effects processors to use and when, where to place them in the signal path, and which tracks to group together to apply a particular effect to all of them at once. The application of each individual effect is an important piece of knowledge for the producer.

Introduction to Music Production Assignment 5/7/15

Creating efficiently compiled tracks in a DAW using multiple audio recordings

One frustrating aspect of recording any audio track is capturing a performance that is perfect except for several notes, or a measure here and there. It's very difficult, even for the best player or vocalist, to perform perfectly every time, and sometimes hardly any of the time. Something as minor as a note held too long or a slight wavering in pitch can spoil an entire performance and can cause the performer to become angry, lose confidence, scatter their focus or just plain get tired of the process. This makes it hard to capture the quality track you desire in a small number of takes. The more takes a performer is required to track, the more fatigue builds, and the performance can deteriorate rapidly from one take to the next, until it becomes nearly impossible to replicate anything close to the performance required for a good recording.

In analog recording, "mistakes" in an otherwise good track were "fixed" by a process called punching in. This technique, physically re-recording over the "bad" portion of the original track, was usually used as a last resort by the producer, trying to save a nearly perfect track in the face of a diminishing likelihood of capturing a complete take as good as the one already on tape. The resultant "patched" track was rarely, if ever, totally smooth at the punch-in and punch-out points, and was one of those compromises the analog producer had to accept for the sake of the project's time and budget.

With the advent of digital recording and editing techniques, getting a perfect track became much easier. Through a process called comping, or compiled tracking, any number of takes of a performance can be recorded and kept separately inside the project. Once you are satisfied you have enough performances recorded to compile one perfect track, you can begin to choose which parts of which takes you like best. Then, when you know what you want to keep, you cut the regions where these top performances are, isolating them into smaller regions. Having cut these new regions, you create a completely new track and move the preferred segments of the performance onto it. This can be done by dragging, by cut and paste, or by using a key command, depending on your DAW and/or your preferred method.

Once you have all your pieces of the performance together on one track, ideally you have your perfect performance compiled. But there is still one issue to address. Like the Frankenstein monster, the piecing together of takes has left some ugly seams where the regions meet at the point of the cut. Where the cut was made, the waveform may not be right at zero, so on playback the signal jumps instantaneously from one value to another at the seam, creating an audible click.

To combat this problem we use a technique called fading. We fade out and fade in the regions at the edges of the cuts to "smooth out" the seam and eliminate the audible click. On some DAWs this smoothing can be done without the need for a true fade, by automatically making the cut where the waveform crosses zero. Other DAWs must make a fade over a selected, normally quite short, portion of the regions surrounding the cut. The zoom feature can help us find the right place to begin and end the fade. When regions overlap, a similar technique, called crossfading, is used: one overlapping region is faded out while the other is faded in, making an almost seamless transition.
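For the curious, here is a rough sketch of what a crossfade does to the audio data itself. The fade length and function name are just illustrative, the two regions are assumed to be mono floating-point arrays longer than the overlap, and DAWs offer several fade shapes beyond the equal-power curve used here:

```python
import numpy as np

def crossfade_regions(region_a, region_b, rate, fade_ms=10.0):
    """Join two takes with a short equal-power crossfade so the waveform never
    jumps discontinuously (and therefore never clicks) at the seam."""
    n = int(rate * fade_ms / 1000.0)
    angle = np.linspace(0.0, np.pi / 2.0, n)
    fade_out = np.cos(angle)                      # 1 -> 0 over the overlap
    fade_in = np.sin(angle)                       # 0 -> 1 over the overlap

    return np.concatenate([
        region_a[:-n],                                      # untouched part of take A
        region_a[-n:] * fade_out + region_b[:n] * fade_in,  # the blended seam
        region_b[n:],                                       # untouched part of take B
    ])
```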

Comping tracks together and fading the seams to eliminate any resulting clicks is an important editing skill, and one worth mastering if we want to capture the best performance possible. This is one area where editing in digital audio rather than analog gives us a distinct advantage: we are able to keep our perfect-except-for-that-one-measure track and "fix it", so that we don't have to throw out the baby with the bath water. Just remember that comping a track can make you sound like a better player than you are, and this can create problems playing the parts in a live concert setting. To my mind, when used sparingly, it can greatly enhance a recording. But, like any tool, try not to use it too much.