Had it up to HERE

I usually wait longer than a week to chime in on major events, so I can get a reading of how the wind is blowing and respond in that very arrogant, all-knowing way I am prone to. The event in Charleston, SC, though, has blown me away, and all of my above-it-all superiority has melted away in the fire of anger and disgust. For me this is the last straw.

This is not about Christianity. This is not about gun control. This is not about mental illness. This is not about race. This is not about isolated “lone wolves” abandoned by their society. This is not about crime as anomaly. This is not about terrorism. This is not about meting out justice. This is not about the law or government. This is not about partisanship. This is not about the Confederate flag. This is not about the death penalty. It is all of these things and none of them.

This IS about systemic violence used as a bridge to cross the gulfs created by divisions in our society; divisions created through any number of social ills; social ills created by deeply ingrained ideas of privilege and class structure; social ills created by contending norms of race and wealth and status and political ideology.

This violence is not only that of the physical. It is that of the emotional. It is that of the mental. It is that of the spiritual. That said, it is physical violence, appearing as it does in the densest plane of existence, the physical plane, that is most apparent and observable to us. Therefore it is physical violence that we most relate to and respond to when grieving and mourning the descent of civility into the morass, into the pit, into disintegration. It is physical violence that shoves our weakness as a species into our collective face.

In this culture, the American culture, more than any other, violence is an accepted means of resolving conflict. In fact it is the primary means, the most revered, the most glorified means. Let me say that again. Violence is the preferred means of resolving conflict in this our America. Daddies teach their boys that to “be a man” one must learn how to fight, that the best way to settle differences with the other boys is a haymaker to the jaw. Government is made up primarily of those very boys, not far removed from the grade school playgrounds where they learned and perfected using violence as a tool to get their way. They tell us the best defense is a good offense. They tell us might makes right. They are like the husband who thinks he is strong because he can beat up his wife.

We spend an ungodly amount of money on machines of violence, so much more than on assuaging social ills and solving the many other problems that afflict us. We can read the words alright, but cannot seem to actually beat our swords into plowshares. Most of our great spectacles, professional sports, reinforce the message of violence, either overtly or covertly. We continually endorse this ideal of violent conflict resolution through the glorification of violence in all media, and in our blatant acceptance of its value.

The constant assault on our civilized sensibilities, at the expense of our mortal souls, and the resulting continuous and senseless destruction of those we love, this is the visible result of consciously or unconsciously applied physical violence. It is the part of the iceberg we can see. But, for me, it is the other forms of violence, the hidden violence of emotion and mentality, that cut society the deepest. Families slice each other up with focused, hurtful words. This too is violence. Businessmen step all over each other in the vicious battle we know as climbing the corporate ladder, the race to the top, rung by bloody rung. Political rivals, sporting rivals, romantic rivals are expected not simply to defeat their opponents but to kick their asses, to destroy them. We compete not to win but to annihilate. We do not call our rivals opponents but insist they are enemies.

We most readily use violence on ourselves. The fuel that propagates violence is hate. Hate is not the opposite of love as many may say. Hate originates within. It is the self loathing all of us experience somehow, somewhere, sometime, in that place we won’t let anybody see, that gives birth to hatred. Hatred is learned and we can only first experience it through hating something we ourselves are or do, something about our own selves that disgusts and mortifies us, something that holds us back from shining the light of our true, loving selves out into the world. Only then will we see those things in the “others” and hate them too. We begin to see anything that frightens us, or threatens us, perceived or real, and hate the “others” for it.

We use this hate of self to perpetrate violence on ourselves in myriad ways, some of them so subtle as to be nearly invisible and unreachable. These internal wars are the basis for the psychological, spiritual and/or intellectual violence that is so deadly to us and our culture, because of its ability to hide in places we can’t reach, like a virus in our bodies, waiting for that moment of weakness when it can emerge and strike swiftly and with blinding force.

As it is in the microculture of our own consciousness so it is in the macroculture of our relationship to the world. We cannot possibly be the decrepit creatures we see when we look inside. There must be some reason we fail. It must be that other, whoever that other might be. What the world teaches us is disgusting is in the other. We will assign any disgusting failure we want to the other, as long as it makes us feel better, as long as it stops the pain for just a few moments. Hatred and violence is the morphine of painful and failing lives. If we cannot shine our light then nobody can, especially the other, in whom we see ourselves mirrored so clearly. But we mustn’t let anyone know how alike we are. We must destroy the other before anyone can find out.

We need to look deep inside ourselves to find the buried vault of our hatred. We have to remove the multiple locks that bind the vault, one by one, regardless how difficult and wrenching. We must then take what we find there and search deeper yet, to find where it came from, from what decrepit fountain it poured forth. We must dive into that fountain of filth, swimming through the putrid bile of our own, hidden self hate to the source, the pump that forces the hate into our hearts. It is primordial.

It may be true, as many say, that we are violent by our nature, that it will never change, that it’s in our DNA, that it’s useless to try. But is that any good reason to give up, to stop trying, to throw up our hands and say it’s bigger than us, we can’t win? When has anything ever been bigger than a human heart full of love? If we truly believe that love conquers all then this is the time to prove it. This is the time to break the chain of violence. But it will take men and women and children of profound love and unyielding courage, in action, the action of both warming the feet of the frightened and holding to the fire the feet of those who are self righteous and only selectively human.

I speak to myself when I say we need to DO more and TALK less.

Americans believe in faith, even if it is the faith that no faith exists.

I have faith we can bury hatred and its weapon, violence, under a mountain of love.

Join me.

Intro to Music Production Assignment 6/4/15

Usage of the five most important synthesis modules

The five primary synthesis modules are oscillators, filters, amplifiers, envelopes, and LFOs. Each module changes a particular element of sound, and combined they comprise one complex whole. Today we will look at how we use these modules to create and modulate sound.

The first module is the oscillator. An oscillator creates sound electronically instead of mechanically. It does this by creating geometric waveforms. The main waveforms generated by oscillators are sine, square, sawtooth, triangle and noise. They are named based on the shape of the wave. Each waveform has different characteristics that produce certain types of sound. A sine wave produces a tone at a single frequency. A sawtooth wave includes a set of upper partials, or harmonics, creating a full, bright sound. A square wave produces only half of the harmonics, creating a hollow sound. A triangle wave is essentially a filtered square wave and a noise waveform is energy evenly spread over the entire frequency spectrum, creating simple white noise.
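If you like to tinker, here is a minimal sketch in Python of what these waveforms look like as numbers. The sample rate, pitch, and one-second duration are just illustrative values I picked, not anything prescribed by a particular synth.

```python
# Minimal numpy sketch of the classic oscillator waveforms.
# Sample rate, pitch and duration are illustrative values.
import numpy as np

sr = 44100                               # samples per second
freq = 440.0                             # oscillator pitch in Hz
t = np.arange(sr) / sr                   # one second of time values
phase = 2 * np.pi * freq * t

sine = np.sin(phase)                                  # one frequency, no harmonics
square = np.sign(np.sin(phase))                       # odd harmonics only: hollow
saw = 2.0 * (freq * t - np.floor(0.5 + freq * t))     # all harmonics: full and bright
triangle = 2.0 * np.abs(saw) - 1.0                    # odd harmonics, rolling off quickly
noise = np.random.uniform(-1.0, 1.0, len(t))          # energy spread across the spectrum
```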

As we have said, each module modulates a specific part of the sound. In the oscillator, pitch is modulated. Because the pitch is modulated through changes in voltage another name for the oscillator module is a VCO, or voltage controlled oscillator. The other two modules concerned with the creation of the sound, the filter and the amplifier, are also controlled by changes in voltage and are called a VCF, or voltage controlled filter, and a VCA, or voltage controlled amplifier.
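To make the “voltage controlled” idea concrete, here is a tiny sketch that assumes the common analog convention of one volt per octave. The base frequency and the voltages I print are just examples.

```python
# Sketch of the "voltage controlled" idea, assuming the common analog
# convention of one volt per octave. Base pitch and voltages are examples.
def vco_frequency(control_voltage, base_freq=261.63):
    # Each additional volt doubles the frequency, raising the pitch one octave.
    return base_freq * (2.0 ** control_voltage)

print(vco_frequency(0.0))   # 261.63 Hz (roughly middle C)
print(vco_frequency(1.0))   # 523.26 Hz, one octave up
print(vco_frequency(2.0))   # 1046.52 Hz, two octaves up
```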

Next comes the filter module. The purpose of this module is similar to that of the EQ section of a mixer, removing or emphasizing certain frequencies and/or harmonics. However in a synthesizer the filtering changes over time. The main filter used in a synthesizer is the low pass filter. The waveforms generated by the oscillator are harsh, almost obnoxious. The low pass filter cuts out most of the overly bright high frequencies, which helps those waveforms sound more musical. The filter module can also use other types of filters, such as a band pass filter, to modulate other frequencies.

The filter is normally modulated by changing its cutoff frequency over time. A filtered oscillator is a common phenomenon in the real world. The human voice is a filtered oscillator. The vocal cords are the oscillator and the mouth is the filter. Synthesizer filters tend to be resonant filters. Filters are built from short delays, and when feedback is applied to those delays it can create resonance at certain frequencies. When the resonance level is raised it emphasizes the cutoff frequency and makes the harmonics jump out at you as the filter sweeps through the frequencies. Increased resonance is best used when you want to hear the filter itself.
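Here is a minimal sketch of the simplest possible low-pass filter, a single delay element with feedback. It has no resonance control of its own; resonant synth filters add more delay stages and stronger feedback, but the basic idea is the same. The cutoff and sample rate below are example values.

```python
import numpy as np

def one_pole_lowpass(x, cutoff_hz, sr=44100):
    """The simplest low-pass filter: one delay element with feedback."""
    # Feedback coefficient derived from the cutoff frequency (standard one-pole design).
    a = np.exp(-2.0 * np.pi * cutoff_hz / sr)
    y = np.zeros_like(x)
    prev = 0.0
    for n in range(len(x)):
        # Each output blends the new input with the delayed, fed-back output.
        prev = (1.0 - a) * x[n] + a * prev
        y[n] = prev
    return y

# Example: tame a harsh sawtooth with a fixed 1 kHz cutoff.
# mellow_saw = one_pole_lowpass(saw, cutoff_hz=1000.0)
```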

The amplifier module controls volume. A synthesizer’s amplifier, as previously said, is voltage controlled and designed to change volume very quickly. The amplifier is modulated by the envelope, which is a set path that the sound takes each time the key is pressed and released. This path is defined by four controls: attack time, decay time, sustain level, and release time. Changing these parameters influences the shape of the note. The attack time determines how fast the note goes from zero to full value. Decay time is how long the volume takes to go from full value to the sustain level. The sustain level determines at what volume the sound stays until the key is released. From that point until the sound reaches zero volume is the release time.

As you might imagine we can create many different envelope shapes, which greatly influence how notes sound. Different instruments have different shaped notes, and to accurately emulate them the amplitude envelope must match that of the instrument. For example an organ note goes on and off like a switch, and thus has a very short attack and release time with no decay and a high sustain. A plucked violin, a percussive sound, will have a short attack and decay with no sustain. In this case the decay time defines the end of the note, regardless of when the key is released. The amplifier envelope has a great deal to do with creating a note whose sound distinguishes itself from other notes of the same pitch and tone.
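Here is a quick sketch of how those four controls become an actual envelope. The linear segments and the organ/pluck settings below are my own illustrative choices, not settings from any particular instrument.

```python
import numpy as np

def adsr_envelope(attack, decay, sustain_level, release, hold_time, sr=44100):
    """Build a simple linear ADSR amplitude envelope.

    attack, decay, release and hold_time are in seconds; sustain_level is 0..1.
    hold_time stands in for how long the key stays held after attack and decay.
    """
    a = np.linspace(0.0, 1.0, int(sr * attack), endpoint=False)           # zero to full value
    d = np.linspace(1.0, sustain_level, int(sr * decay), endpoint=False)  # full value to sustain
    s = np.full(int(sr * hold_time), sustain_level)                       # held at the sustain level
    r = np.linspace(sustain_level, 0.0, int(sr * release))                # sustain down to zero
    return np.concatenate([a, d, s, r])

# Organ-like: near-instant on and off, full sustain.
organ = adsr_envelope(0.005, 0.0, 1.0, 0.01, hold_time=1.0)
# Plucked/percussive: fast attack, short decay, no sustain.
pluck = adsr_envelope(0.002, 0.3, 0.0, 0.05, hold_time=0.0)
# Multiply an envelope against an oscillator's output to shape the note.
```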

The final module is the LFO, or low frequency oscillator. The LFO is strictly a modulation module, because by low frequency we mean the signal generated is below the threshold of human hearing, roughly 20 Hz. The output of the oscillator is therefore not heard directly and only controls another parameter of the sound. Most often the LFO controls the VCO. It works cyclically, moving the pitch of the VCO up and down, over and over. This makes the LFO good for creating a vibrato, where the cyclic output of the LFO controls the frequency of the VCO output, making the pitch waver. The LFO’s own controls set the amount, shape and rate of the modulation. Using different waveforms it can also create linear modulations, trills and other pitch variations. In a simple synthesizer the LFO output is often hardwired to the VCO input. But in a more complex synthesizer it can control more than one module to get a more natural vibrato that includes changes in amplitude from the amplifier and timbre from the filter.
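Here is a small sketch of an LFO creating vibrato. The LFO rate, depth and center pitch are chosen arbitrarily for illustration.

```python
import numpy as np

sr = 44100
t = np.arange(sr * 2) / sr                 # two seconds of time values

# LFO: a 5 Hz sine wave, well below the audible range, used only as a control signal.
lfo_rate = 5.0                             # vibrato speed in Hz
lfo_depth = 6.0                            # how far the pitch wavers, in Hz
lfo = np.sin(2 * np.pi * lfo_rate * t)

# The LFO output modulates the VCO frequency around a 440 Hz center pitch.
center_freq = 440.0
inst_freq = center_freq + lfo_depth * lfo

# Accumulate the instantaneous frequency into a phase, then render the tone.
vco_phase = 2 * np.pi * np.cumsum(inst_freq) / sr
vibrato_tone = np.sin(vco_phase)
```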

One final thing to remember about synthesis is that we always need to be aware of the source of modulation, its destination and amount. This will help us keep balance and clarity in molding the sound we desire. Thanks for letting this old dude explain things from my point of view. I hope you have learned as much from this section of the course as I have.

Assignment 5/28-Intro to music production

Algorithmic and Convolution Reverb

I’m Will Servant and I’m an old fart who started playing music back when drums were animal skins stretched over hollowed out logs, struck by wooly mammoth bones. I am enjoying this course immensely and have learned a great deal so far. As I do not currently have a DAW to use I am presenting today’s lesson by way of text. I hope you are able to follow the lesson and comprehend it easily. If you cannot, please let me know how I can do a better job. Thanks for your help.

When it comes to reverb let me begin by saying we should all be thankful for the digital reverb plugin. Analog methods of adding reverb could be cumbersome, and parameters such as delay time and pre-delay could not be easily controlled.

There are two basic types of reverb plugins that give the producer the tools to create a reverberation effect that accurately represents a real space. They both use math to create reverb effects, but do so in a different manner. In describing these two types of reverb I’d like to go into a bit more depth than did the video lectures. I hope you can bear with me.

Algorithmic reverb uses mathematical formulas to apply the different parameters of reverb to the signal. An algorithmic reverb allows the producer a lot of flexibility: they can manually control the functions that make up reverb to create a highly customized reverb environment. A convolution reverb instead takes a recording of the impulse response of a real physical space and mathematically convolves it with the signal, applying the total reverberation effect of that space to the track.
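If you want to see how little machinery convolution reverb really needs, here is a sketch. The file names are placeholders, and loading audio with the soundfile library plus scipy’s fftconvolve is just one convenient way to do it, not the method any particular plugin uses.

```python
import numpy as np
import soundfile as sf                      # one convenient way to load audio files
from scipy.signal import fftconvolve

# "dry.wav" and "hall_ir.wav" are placeholder names: a mono track and a mono
# impulse response recorded in a real space.
dry, sr = sf.read("dry.wav")
ir, _ = sf.read("hall_ir.wav")

wet = fftconvolve(dry, ir)                  # the space's entire response applied to the track
wet /= np.max(np.abs(wet))                  # normalize so the result does not clip

mix = 0.7 * dry + 0.3 * wet[:len(dry)]      # simple dry/wet blend
```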

One analogy used to describe the difference between the two types of reverb plugins is that algorithmic reverb is similar to a synthesizer. It uses math to control each component separately combining them to create the reverb effect. Convolution reverb is like a sampler. It uses recordings of real spaces to create separate, complete reverb environments.

An analogy I like to use is that of making soup. The algorithmic way of making soup uses the various separate ingredients, i.e. a homemade soup. Using this method you are in control of the parameters, the ingredients, their relative amounts, the temperature and timing of the cooking. You come up with a meal that is uniquely yours. However, you need to have knowledge of how the soup is supposed to taste to avoid serving one up that nobody likes. You can also create and use recipes, which are like algorithmic reverb presets, where all the parameter settings are in memory for you to easily apply. The downside here is that these presets/recipes are only approximations of the type of soup you want. To really be creative you must have knowledge of how to cook, in order to get the meal you want.

A convolution soup is when you go to the store and buy readymade soup. There are a wide variety of soups you can buy. They are all different and the ingredients have been put together already, from a standard recipe that somebody else created. You choose the one you want to eat. Any one type of any brand of soup will be universally the same in flavor, texture and consistency. In this circumstance you need to know what kind and/or brand of soup you or your guests might like. They may like a school gym instead of a concert hall or night club, as it were. You may also want to know which soup goes best with the beverage you are serving, i.e. which reverb sounds best on which track. The downside here is that you don’t have a lot of control over what comes out of the can. To use a tired and overused bromide, it is what it is.

To use an algorithmic reverb you need to know a little about controlling the functions that comprise reverb. Most algorithmic reverb plugins have two distinct control sections, early reflections and diffuse reverb, which model how reverberation works in real spaces. Early reflections are the numerous discrete delays caused by the sound reflecting off nearby surfaces. Diffuse reverb is those delays diffused, or carried farther out into the space. Elements of early reflection are pre-delay, which is the length of time between the original sound and the beginning of the reverb effect, room shape and/or size, and use of the stereo soundscape. Elements of diffuse reverb are delay time, high frequency EQ, density of the reverb effect, and once again, when the effect starts and how it is spread across the stereo width.
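To make those two sections concrete, here is a toy sketch with a few early-reflection taps and a single feedback comb filter for the diffuse tail. Real algorithmic reverbs use many more delay lines and smarter diffusion; the delay times and gains below are arbitrary.

```python
import numpy as np

def toy_algorithmic_reverb(x, sr=44100, predelay=0.02, decay=0.5, wet=0.3):
    """A toy algorithmic reverb: a few early-reflection taps plus one feedback
    comb filter for the diffuse tail. Delay times and gains are arbitrary."""
    out = np.copy(x)

    # Early reflections: a handful of discrete echoes arriving shortly after the dry sound.
    for delay_s, gain in [(0.011, 0.5), (0.019, 0.4), (0.027, 0.3)]:
        d = int(sr * (predelay + delay_s))
        out[d:] += wet * gain * x[:-d]

    # Diffuse reverb: a feedback comb filter smears the energy out over time.
    d = int(sr * (predelay + 0.045))
    tail = np.copy(x)
    for n in range(d, len(x)):
        tail[n] += decay * tail[n - d]
    out += wet * 0.5 * (tail - x)            # add only the delayed repeats, not another dry copy

    return out
```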

Using convolution reverb is essentially a function of listening to the reverb from any number of spaces, as they are applied to the track, and then choosing the one you feel works best for your purposes. It’s obviously a little simpler to use than algorithmic reverb but has fewer permutations and is therefore less flexible. It can also be overwhelming trying to listen to the sound of many different spaces, especially when the reverb from a particular space can sound different in the full mix than when heard on a soloed track.

In summary, both types of digital reverb processors have advantages and limitations, and both are equally valuable to the producer in creating the atmosphere and sound environment they want the listener to experience. Both are very useful tools in finalizing a mix. But, as in most aspects of producing, our greatest tool is our ears, and listening is the best way to find which kind of reverb is best for your project. Through trial and error you can find that “just right” balance between the chicken and the vegetables, but remember to use your spices sparingly, so that your guests barely know the reverb is there.

Assignment 5/21 Intro to music production

Probably the most recognizable aspect of music is dynamics, the changes in amplitude that we hear as changes in loudness (subjective, perceived changes) and measure as changes in volume (objective, measured changes). Manipulating dynamics is one of the most versatile and important tools the producer uses in post production to shape the sound, both of individual instruments and of the mix as a whole.

Dynamic processors change certain parameters of dynamics, under certain rules, to alter volume, at any gain stage in the signal path, for any recorded track or combination of tracks. Perhaps the simplest dynamic processor is the producer himself. The producer can change the volume of the sound on a track through amplitude automation, or even physically change levels on the fly, such as riding faders to balance the dynamics of a track. The human dynamic processor uses a two stage process to change dynamics in real time. First, they analyze the dynamics of the track and determine where the volume should be raised or lowered. Second, they manipulate the faders based on that analysis. This two step process carries over to dynamic processing done by outboard gear and software plugins.

All hardware and software dynamic processors have a side chain, or key section, which does the analysis of the input signal, and a volume fader section, which changes the volume over time. This is because all dynamic processors are acting as some type of volume control. And because they all work on the same thing, they all use the same parameters and the same means of changing those parameters. The four essential parameters of dynamic processing are threshold, ratio, attack, and release.

Threshold represents the amplitude level at which the processor begins working. Changing the threshold can influence the sound a great deal, as different settings vary the results of using the same processor. Ratio is the amount of processing applied to the signal once it is triggered by the volume reaching and surpassing the threshold. It is expressed as the ratio of input change to output change above the threshold. For example a 4:1 ratio means that for every 4 dB the input rises above the threshold, the output rises only 1 dB. Attack is how fast the processor begins to work. Changing this parameter influences how the beginning of a signal sounds. Instruments often begin with a transient, a rapid change in amplitude. A snare drum is a good example of an instrument that makes a lot of transients. Changing the attack changes how much of the transient we can hear and influences how smooth or punchy the sound is. Release is the opposite of attack. It changes how long the end of the processed sound lasts before it is “released”. This will influence how abruptly or smoothly the processed sound finishes.
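The ratio math is easier to see as a few lines of code. This is only the static “gain computer” part, with an example threshold and ratio.

```python
def compressed_level(input_db, threshold_db=-20.0, ratio=4.0):
    """Static compressor math: above the threshold, every `ratio` dB of extra
    input produces only 1 dB of extra output. Values here are just examples."""
    if input_db <= threshold_db:
        return input_db                      # below the threshold: untouched
    return threshold_db + (input_db - threshold_db) / ratio

print(compressed_level(-20.0))   # -20.0  at the threshold nothing happens
print(compressed_level(-12.0))   # -18.0  8 dB over the threshold becomes 2 dB over
print(compressed_level(0.0))     # -15.0  20 dB over becomes 5 dB over
```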

The four major types of dynamic processors are compressors, expanders, noise gates and limiters. All four act on the same four parameters, each operating under different rules. A compressor reduces the dynamic range by either reducing the loud sounds or raising the soft sounds. The rule for compression is: as the input gets louder, the output gets softer relative to it. This effect shrinks the dynamic range so that the loud parts are softer and the soft parts louder. This allows the producer to raise the gain of the entire track so that it is louder in the mix without distorting. There are many different applications of compressors to different instruments, to entire tracks and for different effects. The manipulations of the four parameters are interactive, creating an almost limitless number of possible effects. This vast opportunity for change makes the compressor widely used but difficult to master. Even the best producers continue to learn more about compression over time.
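And here is a toy feed-forward compressor that puts the side chain and the volume fader together, with attack and release smoothing. The parameter values are illustrative, and real compressors are considerably more refined.

```python
import numpy as np

def simple_compressor(x, sr=44100, threshold_db=-20.0, ratio=4.0,
                      attack=0.005, release=0.100):
    """A toy feed-forward compressor: a side-chain level detector with attack and
    release smoothing drives a gain fader. Parameter values are illustrative."""
    a_att = np.exp(-1.0 / (sr * attack))     # smoothing coefficient while the level rises
    a_rel = np.exp(-1.0 / (sr * release))    # smoothing coefficient while the level falls

    env = 0.0
    out = np.zeros_like(x)
    for n in range(len(x)):
        level = abs(x[n])
        # Side chain: follow the level quickly on the way up, slowly on the way down.
        coeff = a_att if level > env else a_rel
        env = coeff * env + (1.0 - coeff) * level

        # Gain computer: how many dB of reduction does this level call for?
        level_db = 20.0 * np.log10(max(env, 1e-9))
        over_db = max(level_db - threshold_db, 0.0)
        gain_db = -over_db * (1.0 - 1.0 / ratio)

        # Volume fader: apply the computed gain reduction to the signal.
        out[n] = x[n] * (10.0 ** (gain_db / 20.0))
    return out
```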

Expanders are the opposite of compressors in that they increase the dynamic range, either by making the loud parts louder or the soft parts softer. Expansion is not used as much as compression but can be very useful in a situation such as mixing a heavily compressed recorded track to give back some of the differences in the volume of the original track. For example, an orchestra recorded with compression to reduce the chance for distortion could have some of the dynamic range returned in the mixdown through expansion. The rule for expansion is as the input gets louder the output gets louder.

Limiters and noise gates use compression in special ways and function in somewhat opposite ways. Noise gates allow only the sounds above a certain volume to pass through the “gate” by cutting off all sound under the threshold. Gates are useful in removing unwanted sounds from the mix, such as foot tapping, finger movements, squeaks from chairs, etc. Limiters, in contrast, stop the signal from rising much above the threshold. They are essentially compressors that operate with ratios over 10:1. They are traditionally used to prevent loud sounds from distorting, but in modern usage they act as loudness maximizers, as heavy limiting can allow the signal gain to be increased a great deal, making the whole track apparently louder, while assuring the producer the signal will not distort.
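For completeness, here are crude sketches of the gate and limiter rules. Real gates and limiters smooth their gain changes with attack, hold, release and sometimes look-ahead, so treat these as illustrations of the rules only.

```python
import numpy as np

def toy_noise_gate(x, threshold=0.02):
    """Crude gate: silence any sample whose level falls below the threshold.
    Real gates track a smoothed level and use attack/hold/release times."""
    return np.where(np.abs(x) >= threshold, x, 0.0)

def toy_limiter(x, ceiling=0.9):
    """Crude limiter: scale down any sample that would exceed the ceiling,
    like a compressor with an effectively infinite ratio and instant attack.
    Real limiters smooth the gain change and often look ahead."""
    gain = np.minimum(1.0, ceiling / np.maximum(np.abs(x), 1e-9))
    return x * gain
```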

Dynamic processors have a multitude of uses in the studio in both the tracking and post production stages. Knowing the basics of how each processor works and what it does gives you a good head start to the fine art of applying dynamic processing to change the emotional and spatial presentation for the listener. You will be improving your processing skills forever, as you gain experience as a producer. Enjoy.

Intro to Music Production Assignment 5/14

One of the major advantages of recording in the digital realm over the analog realm is the fact that signal processing is done by software, much of which is included within your DAW, rather than with expensive outboard gear. Digital signal processing, or DSP, is cheaper, easier to use, and has continued to become more accurate, more sophisticated and more innovative. In fact, DSP is so easy to use, and there are so many different types of third party processors available for your DAW, that the big danger for the producer is no longer scraping together the basics but using too much signal processing.

Today we will be talking about some simple but important knowledge regarding the classifications of the effects that processors apply to the signal. These effects are the same in both the digital and analog realms, because these effects relate to the basic building blocks of sound and are not exclusive to either domain.

Effects can be divided into three major categories that correspond to the three major physical characteristics of sound. First there are the dynamic effects. Dynamic effects control amplitude, which is the strength of the compression and rarefaction of sound waves as they move through the air. To the listener these effects change the volume of the sound. Dynamic effects include compressors, limiters, expanders, and noise gates. We will talk about each of these effects and their applications later in the course.

Second, we have delay effects, which control the quality of the propagation of sound waves. Propagation is the measure of sound waves as they move through time and space. To the listener, delay effects make certain parts of sounds appear to be three dimensional or to happen at a different time than the rest of the sound. Delay effects include reverbs, delays, phasers, flangers, and choruses. Once again, we will examine these effects individually later.

Finally we have filter effects, which control the timbre. Timbre is the relative balance of amplitude among the frequencies within a sound. This balance produces the quality we often call tone in the ear of the listener. Filters are able to amplify or attenuate particular frequencies to produce the vast variety of tones we distinguish in our minds, apart from pitch or volume, that influence how we perceive sound. Filter effects include high pass filters, low pass filters, band pass filters, parametric equalization and graphic equalization.

Each of these effect categories is a major tool available to the producer to shape and fine tune the quality of the sounds that comprise the final mix. Knowing which effects relate to which qualities of sound can tell us which effects processors to use when, where to place them in the signal path, and which tracks to group together to apply a particular effect to all of them at once. The application of each individual effect is an important piece of knowledge for the producer.