Thursday, January 31, 2013

Studio Monitors – Positioning Setup

Vid courtesy of The DSP Project


Saturday, January 26, 2013

Hmmm




Get Better Mixes By Going For The Big Wins


Posted in: Mixing, Tips | by: Graham

Want some mixing advice? Don’t spend too much time focused on the little details. Sure, all the fine-tuning is important, but you don’t want to overspend your time and energy there. Instead, pour the majority of your mixing brain power into these three core areas and you’ll get big results. Trust me.
Build A Killer Static Mix



Full Post Here:


Wednesday, May 16, 2012

Zen and the Art of Strong Stereo Imaging | Audio Issues

This is a guest post from mastering engineer Barry Gardner who operates SafeandSound online mastering
From time to time I hear a mix that has dubious stereo imaging. This can affect both acoustic and electronic mixes.
For acoustic mixes it is often the mic technique that creates problematic stereo images. For electronic mixes, there are a variety of reasons why bad stereo imaging occurs.
By dubious I mean the stereo image does not have the traits of a professional mix-down. It may be too narrow, with many monophonic sources, or it may sound too wide, with possible phase problems, e.g. not being mono compatible. This can be due to overuse of stereo width enhancers, or the mix may suffer from a blanket application of effects across multiple tracks.

Make Sure It Works in Mono

When you mix your track, it is important to sum it to mono and make sure it does not sound excessively different.
It should maintain a similar tonal balance in mono, with some sources even sounding slightly louder. If any sources have a serious phase issue, they will tend to lose bass or drop significantly in level when summed to mono. At worst, they’ll vanish from the mix entirely. So make it a habit to check your mix in mono as it builds.
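To make that habit concrete, here is a minimal sketch of such a check in Python with NumPy, assuming the mix is held as a float array of shape (samples, 2); the function name and the -6 dB rule of thumb are illustrative, not from the original post.

```python
import numpy as np

def mono_drop_db(stereo, eps=1e-12):
    """Estimate the level change (dB) when a stereo mix is summed to mono.

    stereo: float array of shape (num_samples, 2).
    Around 0 dB means the mix survives the fold-down; values far below
    0 dB (say, under -6 dB) hint at phase cancellation between channels.
    """
    left, right = stereo[:, 0], stereo[:, 1]
    mono = 0.5 * (left + right)                     # simple mono fold-down
    rms = lambda x: np.sqrt(np.mean(x ** 2)) + eps  # root-mean-square level
    channel_level = 0.5 * (rms(left) + rms(right))  # reference: average channel
    return 20.0 * np.log10(rms(mono) / channel_level)
```

A fully out-of-phase source drives this figure toward minus infinity, which is exactly the vanishing-in-mono failure described above.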
In some instances there may be a single stereo source that is out of phase between the channels and goes unnoticed. We all want wide, punchy-sounding mixes, and this can be a challenge for the beginning engineer.
After all, there are many technical aspects to learn when you’re first starting out. One common issue I have found is the application of a single effect across multiple mix tracks; reverb is the most common stereo-enhancing effect in people’s mixes. I would like to take the stereo image aspect of mixing back to the starting point and look at sound selection (drum hits, samples, synths, vocals, effect sweeps and the other elements that make up your music).
In many instances people tend to start their track by picking sources that they like the sound of. There’s nothing inherently wrong with this, it’s what we all do. However, it is worth introducing another layer of selectivity when you choose your sound sources.

Stereo for the Electronic Musician

For electronic musicians it is important to understand whether your source samples and sounds are monophonic or stereo. If they are mono, they will have exactly the same information in the left and right channels; if they are stereo, they will have a sense of space.
If you are unsure, try summing some sources to mono in your DAW and see if the stereo image changes. If there is no change, the source is mono; if it loses some depth and space, it is highly likely to be stereo.
The reason I suggest this is so that from the outset you will be building an appropriate stereo perspective into your music. Sounds that are commonly stereo (the technically correct term being pseudo-stereo) include synth patches (pads, leads and some basses), synthetic snare drums and sweeper effects. Sounds that are more commonly mono include kick drums and instrumental samples. There is no hard and fast rule, so use the mono-ing technique above to find out whether a source is mono or stereo. Doing this results in fewer phase problems, as you will be avoiding the pseudo-stereo-creating techniques described below.
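One way to automate that mono-or-stereo test, sketched here under the same NumPy assumptions as above: if the side signal (left minus right) carries essentially no energy relative to the mid signal, the two channels are duplicates and the source is effectively mono. The -60 dB threshold is an illustrative choice.

```python
import numpy as np

def is_effectively_mono(stereo, threshold_db=-60.0, eps=1e-12):
    """True if a 2-channel signal carries the same information in L and R."""
    mid = stereo[:, 0] + stereo[:, 1]    # what mono summing keeps
    side = stereo[:, 0] - stereo[:, 1]   # what mono summing discards
    side_vs_mid_db = 10.0 * np.log10((np.mean(side ** 2) + eps) /
                                     (np.mean(mid ** 2) + eps))
    return side_vs_mid_db < threshold_db
```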

Avoid the Unnatural

One of the most unpleasant techniques some people use to artificially enhance stereo imaging is to put a short stereo reverb on all the drums, synths and the bass line, all of which come from mono source samples. This produces a slight sense of extra depth. However, it also produces an unnatural and unpleasant global coloration to all the sources, lending the mix a somewhat “cheap”, subtly metallic sound. So from the outset, pay attention to your choice of sounds when you are building the track.
If you want to create a pseudo-stereo image for a specific mono source, you can use a few different techniques. In fact, adding a little reverb is perfectly OK, but limit it to one sound source and don’t apply the same reverb to every single source you have.
  • You may wish to double up the mono source on two channels, pan them hard left and right, and delay one side by a few milliseconds, always double-checking mono compatibility by mono summing or on a correlation meter (see the sketch after this list).
  • You can add a subtle stereo based delay to a sound which can widen the sound (often a subtle ping pong with hard left panned delays can do the trick).
  • Another technique is to double a mono source on two channels panned hard left and right and apply a separate digital graphic EQ plug-in to each. Create opposing boosts and cuts so the two sides don’t sound the same: at any given frequency band, where the left channel gets a boost the right gets a cut, through all the available bands.
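As promised, here is a sketch of the first technique in the list (the few-milliseconds delay trick), again in Python with NumPy and assuming the mono source is a 1-D float array; the 15 ms default is just an example value.

```python
import numpy as np

def haas_widen(mono, sample_rate, delay_ms=15.0):
    """Duplicate a mono source onto two channels and delay one side slightly.

    Returns an array of shape (len(mono), 2): left is dry, right is delayed.
    Always re-check the result in mono -- the delay comb-filters the sum.
    """
    delay_samples = int(sample_rate * delay_ms / 1000.0)
    left = mono
    right = np.concatenate([np.zeros(delay_samples), mono])[:len(mono)]
    return np.stack([left, right], axis=1)
```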
Stereo imaging enhancers rely on stereo information already present in a source. By all means use them sparingly to assist width creation, but be wary of over-use, since mono compatibility may fail. All of these enhancements can be used, with care and in moderation, on actual stereo sources to give a deeper and wider mix sound. Also, do not be afraid to leave a mono source strictly mono, as it all helps fill the stereo image in a natural way.

Know Your Sources

As well as watching for sources that are very narrow, it is worth being vigilant about overly wide sources.
For example, many factory synth patches are created to sound big, wide and lush. In some instances this is overdone, and when summed to mono they can sound very different. In such instances, knowing how to program your favorite synthesizer comes in handy.
When these techniques are applied with care and respect to mono compatibility, they should produce a fuller, stable, mono compatible and more euphonic stereo image for your mixes.
None of these pseudo stereo image enhancing techniques replace good source selection but they can help with adding some subtle and extra width to a mix-down that is otherwise lacking stereo imaging.
It is highly recommended that all experiments are checked for mono compatibility, either by mono summing your stereo bus or by checking on a freeware phase scope such as Flux Stereo Tool or Voxengo SPAN. Selecting from a wide palette of sound sources helps bring a natural depth and separation to your mix-down.
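For readers curious what those phase scopes actually measure, this is a sketch of the underlying correlation figure, under the same NumPy conventions as the earlier snippets: +1 means the channels are identical (mono), 0 means they are unrelated, and negative values warn of the phase problems discussed above.

```python
import numpy as np

def stereo_correlation(stereo, eps=1e-12):
    """Phase-correlation reading in [-1, +1] for a (samples, 2) array."""
    left, right = stereo[:, 0], stereo[:, 1]
    numerator = np.mean(left * right)
    denominator = np.sqrt(np.mean(left ** 2) * np.mean(right ** 2)) + eps
    return numerator / denominator
```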

Friday, May 4, 2012

How Can You Tell If Uploading Your Cover Song To YouTube Is Infringing? You Can't | Techdirt

How Can You Tell If Uploading Your Cover Song To YouTube Is Infringing? You Can't
from the broken-systems dept
If there is such a thing as a functioning copyright system, one of its tenets should be that it is quite easy to know if what you're doing is infringement. Of course, as we've discovered over and over again, people infringe unknowingly all the time -- and it's not just because of negligence or ignorance. Often it's because it's simply impossible to figure it out. Take, for example, the quite common practice of uploading a cover of a song to YouTube. This happens all the time. Lots of people record themselves singing popular songs and put them on YouTube. According to Andy Baio, about 12,000 such cover songs are uploaded each day. Justin Bieber became Justin Bieber because of some YouTube videos of him singing someone else's songs.

 But is that breaking the law?

Andy Baio dug into that question and discovered that it's almost impossible to determine. Now, if you're merely recording a cover song for release, there are compulsory/mechanical license fees you can pay. This is why you see cover songs on albums all the time: they're not done with "permission," but rather because someone paid the compulsory rate set by the government. The problem, however, is when you add video to the mix. Once you're talking about a video with music, a second license has to be secured: the sync license. And there are no compulsory rates with sync licenses -- meaning that the copyright holder can (and often does) demand exorbitant fees, if they even respond to your request at all.

Now, as Baio notes, Google did sign a deal with the National Music Publishers Association to allow publishers to join a program where they get some money in exchange for allowing their songs to be played by others on YouTube. But no one knows whose publishing rights are actually covered by that agreement, meaning that it's effectively useless.

The end result? It's likely that a rather large number of the cover song videos uploaded each day are infringing -- potentially opening up the uploaders to huge statutory fines for violating copyright law. This is a clear sign of where the law is broken. The law clearly wasn't meant for these kinds of situations, and it's easily fixable. As Baio points out, here's an easy reform to copyright law that would decriminalize a very common behavior:

 The real question: Why is it illegal in the first place?
    Cover songs on YouTube are, almost universally, non-commercial in nature. They’re created by fans, mostly amateur musicians, with no negative impact on the market value of the original work. (If anything, it increases demand by acting as a free promotional vehicle for the track.)

   The best solution is the hardest one: To reform copyright law to legalize the distribution of free, non-commercial cover songs.
   Copyright law was intended to foster creativity by making it safe for creators to exclusively capitalize on their work for a limited period of time. Cover songs on YouTube don’t threaten that ability, and criminalizing them may actually prevent new works by chilling talent that could go on to do great things.

   Seems like a simple enough thing to fix... which is why it's unlikely to actually happen.


Wednesday, May 2, 2012

Bobby Owsinski's Big Picture Production Blog: 6 Tips For Balancing The Bass And Drum Mix

6 Tips For Balancing The Bass And Drum Mix

Perhaps the most difficult task of a mixing engineer is balancing the bass and drums (especially the bass and kick). Nothing can make or break a mix faster than the way these instruments work together. It’s not uncommon for a mixer to spend hours on this balance (both level and frequency) because if the relationship isn’t correct, then the song will just never sound big and punchy.

So how do you get this mysterious balance?

In order to have the impact and punch that most modern mixes exhibit, you have to make a space in your mix for both of these instruments so they won't fight each other and turn into a muddy mess. While simply EQing your bass high and your kick low (or the other way around) might work at its simplest, it's best to have a more in-depth strategy, so consider the following:

1) EQ the kick drum between 60 and 120 Hz, as this will allow it to be heard on smaller speakers. For more attack and beater click, boost between 1 and 4 kHz. You may also want to dip some of the boxiness between 200 and 500 Hz. EQing in the 30 to 60 Hz range will produce a kick that you can feel, but it may also sound thin on smaller speakers and probably won't translate well to a variety of speaker systems. Most 22" kick drums are centered somewhere around 80 Hz anyway.

2) Bring up the bass with the kick. The kick and bass should occupy slightly different frequency spaces. The kick will usually be in the 60 to 80 Hz range, whereas the bass will emphasize higher frequencies, anywhere from 80 to 250 Hz (although sometimes the two are reversed, depending upon the song). Shelve out any unnecessary bass frequencies (below 30 Hz on the kick and below 50 Hz on the bass, although the frequency for both may be as high as 60 Hz according to the style of the song and your taste) so they're not boomy or muddy. There should be a driving, foundational quality to the combination of the two together.
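A minimal sketch of that low-end clean-up, in Python with SciPy; the post says "shelve out", and a high-pass filter is the most common way to do it, which is what this sketch assumes. The cutoffs mirror the 30 Hz and 50 Hz figures above; the sample rate and variable names are illustrative.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def highpass(signal, sample_rate, cutoff_hz, order=4):
    """Roll off everything below cutoff_hz with a Butterworth high-pass."""
    sos = butter(order, cutoff_hz, btype="highpass",
                 fs=sample_rate, output="sos")
    return sosfilt(sos, signal)

# Illustrative use, following the cutoffs suggested in tip 2:
# kick = highpass(kick, 44100, 30.0)   # clear out below 30 Hz on the kick
# bass = highpass(bass, 44100, 50.0)   # clear out below 50 Hz on the bass
```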

A common mistake is to emphasize the kick with either too much level or EQ while not featuring enough of the bass guitar. This gives you the illusion that your mix is bottom-light, because what you're doing is shortening the duration of the low-frequency envelope in your mix. Since the kick tends to be more transient than the bass guitar, this gives you the impression that the low-frequency content of your mix is inconsistent. For Pop music, it is best to have the kick provide the percussive nature of the bottom while the bass fills out the sustain and musical parts.

3) Make sure that the snare is strong, otherwise the song will lose its drive when the other instruments are added in. This usually calls for at least some compression, especially if the snare hits are inconsistent throughout the song. You may need a small EQ boost at 1 kHz for attack, 120 to 240 Hz for fullness, and 10 kHz for snap. As you bring in the other drums and cymbals, you might want to dip a little at 1 kHz on these to make room for the snare. Also make sure that the toms aren't too boomy (if so, shelve out the frequencies below 60 Hz).
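To show what that compression is doing, here is a bare-bones feed-forward compressor sketch in the same NumPy style; all parameter values are illustrative defaults, not settings from the post.

```python
import numpy as np

def compress(signal, sample_rate, threshold_db=-18.0, ratio=4.0,
             attack_ms=5.0, release_ms=80.0):
    """Even out inconsistent hits by turning down peaks above a threshold."""
    attack = np.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    release = np.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    envelope, out = 0.0, np.empty_like(signal)
    for i, x in enumerate(signal):
        level = abs(x)
        coeff = attack if level > envelope else release
        envelope = coeff * envelope + (1.0 - coeff) * level  # level follower
        level_db = 20.0 * np.log10(envelope + 1e-12)
        overshoot = max(0.0, level_db - threshold_db)
        gain_db = -overshoot * (1.0 - 1.0 / ratio)  # reduce gain above threshold
        out[i] = x * 10.0 ** (gain_db / 20.0)
    return out
```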

4) If you're having trouble with the mix because it's sounding cloudy and muddy on the bottom end, mute both the kick drum and bass to determine what else might be in the way in the low end. You might not realize that there are some frequencies in the mix that aren't really musically necessary. With piano or guitar, you're mainly looking for the mids and top end to cut through, while the low end is just getting in the way, so it's best to clear some of that out with a high-pass filter. When soloed, the instrument might sound too thin, but with the rest of the mix the low end will now sound much better, and you won't miss the low end from the other instruments. Now the mix sounds louder, clearer, and fuller. Be careful not to cut too much from the other instruments, as you might lose the warmth of the mix.

5) For Dance music, be aware of kick drum to bass melody dissonance. The bass line over the huge sound systems in today's clubs is very important and needs to work very well with the kick drum. But if your kick is centered around an A note and the bass line is tuned to A#, it's going to clash. Tune your kick samples to the bass lines (or vice versa) where needed.
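Here is a sketch of how one might check that tuning programmatically with NumPy: it estimates the kick's fundamental as the strongest spectral peak in the low band and maps it to the nearest equal-tempered note name. The 20-200 Hz search band is an assumption.

```python
import numpy as np

def kick_fundamental_hz(kick, sample_rate):
    """Strongest FFT peak between 20 and 200 Hz, taken as the fundamental."""
    spectrum = np.abs(np.fft.rfft(kick * np.hanning(len(kick))))
    freqs = np.fft.rfftfreq(len(kick), 1.0 / sample_rate)
    band = (freqs > 20.0) & (freqs < 200.0)
    return freqs[band][np.argmax(spectrum[band])]

def nearest_note(freq_hz):
    """Map a frequency to the nearest equal-tempered note name."""
    names = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]
    midi = int(round(69 + 12 * np.log2(freq_hz / 440.0)))
    return names[(midi - 69) % 12]

# If nearest_note(kick_fundamental_hz(kick, 44100)) says "A" while the
# bass line sits on A#, retune one of them.
```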

6) If you feel that you don't have enough bass or kick, boost the level, not the EQ. This is a mistake that everyone makes when they're first getting their mixing chops together. Most bass drums and bass guitars have plenty of low end and don't need much more, so be sure that their level together and with the rest of the mix is correct before you go adding EQ. Even then, a little goes a long way.

While these aren't the only mix tips that can help with the bass and drum relationship during your mix (you can check out either The Audio Mixing Bootcamp or The Mixing Engineer's Handbook for more), they're a great place to start. Remember, go easy on the EQ, as a little goes a long way.


The Three Inglorious Gangsters of EQ | Audio Issues

Written by Björgvin Benediktsson

Say hello to my little friend!

Or rather, say hello to my three little gangsters that do your dirty EQ work for you.
1. The Thug

The thug is like Joe Pesci from Casino. He’s the hired hand that does all the dirty work for the family. He doesn’t hesitate to get rid of you any way he can.

Use the thug when you need to cut unwanted frequencies from your mix. He’ll cut anything that’s causing you annoyance: snare rings, muddy bass or hissy guitars.

The thug gets rid of pests without making a mess. He likes it clean and untraceable. Like surgical EQ with a high Q. Just scoop in there and get rid of what’s annoying you.
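Here is a sketch of such a surgical cut, using SciPy and the standard RBJ cookbook peaking-EQ formulas; the 900 Hz snare-ring target and the gain and Q values are made-up examples.

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(signal, sample_rate, freq_hz, gain_db, q):
    """RBJ-cookbook peaking EQ; a negative gain_db gives a surgical cut."""
    a_lin = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * freq_hz / sample_rate
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return lfilter(b / a[0], a / a[0], signal)

# The thug at work: a deep, narrow cut on a ringing snare overtone.
# snare = peaking_eq(snare, 44100, 900.0, gain_db=-9.0, q=8.0)
```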
2. The Godfather

The godfather is like Al Capone. Everybody knows he’s the boss, but the cops can’t prove it. He uses legal businesses as a front for his criminal enterprise. They all know he’s dirty, but they can’t pin it on him.

Think about the godfather when you mask frequencies. Masking is when you boost a higher frequency to hide a problematic frequency below it. Say you have a really nasal vocal around 1 kHz, but you can’t cut it without making it sound unnatural. By boosting 3 kHz you mask that nasal sound, covering it up with a more flattering frequency.

Sometimes you need to hide the problematic frequencies. Mask them and no one will be the wiser.
3. The Undercover Cop

Think about Tim Roth as Mr. Orange in Reservoir Dogs. When things start getting real ugly, everything’s gotta go. When you get a bunch of low-lives together in a room, there’s gonna be a stand-off and that’s never gonna end well.

Because sometimes you gotta get rid of everything. If you have problems with your low-end, you need to grab that EQ and filter everything out. Make sure that the only things left are the instruments that belong there in the first place.

The undercover cop gets rid of the criminals in the most dangerous way possible: by infiltrating their midst. The same goes for your EQ’ing. Use the filter carefully. Get rid of the scum, but don’t hurt the frequencies around them.



Maybe I’ve been watching too many gangster movies between mixing sessions, but these are the three characters that continually resurface.

Similarly, these are the three things to always keep in mind when you’re using EQ. Know when to cut, filter and boost, and EQ’ing will be easy for you.

For a great guide on knowing when to use each of these thugs….I mean things, check out Understanding EQ.




Monday, April 16, 2012

Headphones Mixing? Speakers Mixing? Both? By Roey Izhaki | Audio Undone

Headphones Mixing? Speakers Mixing? Both?
By Roey Izhaki

By Guest Blogger | Categories: Mixing Techniques

Headphones vs. speakers – the theory
When listening on headphones, both the left and right ears are fed exclusively with the corresponding channel. This means that the left-channel signal reaches the left ear only, and the right-channel signal reaches the right ear only.
With speakers, however, this is not the case. Sound from each speaker reaches the nearest ear first and the farther ear soon after. Effectively, each ear gets the signal from the nearest speaker blended with the slightly delayed signal from the other speaker. This results in the following:
• Sounds from one speaker can mask sounds from the other.
• Overall smearing of the image for any sounds not panned to the extremes. Most severe smearing happens with sounds panned center.
• Curvature of the sound image, as center-panned sounds appear deeper due to the late arrival of sound from the far speaker.
None of this happens with headphones, but stereo was conceived with speakers in mind, and for many decades now music has been made assuming playback through speakers. Our equipment, notably the design of our pan pots (but also that of stereo effects such as reverbs), assumes the same. Mixing engineers mix using and for speakers. But how do these mixes translate onto headphones?
The key difference between listening to music through speakers and headphones has to do with the way our brain localizes sounds. How this happens is based on the findings of Helmut Haas and is implemented through Alan Blumlein’s invention of stereo. It is sufficient to say that a sound from one speaker will completely mask a similar sound from the opposite speaker if the former is approximately 15 dB louder. Practically, if the signal sent to one speaker is roughly 15 dB softer than a similar sound sent to the opposite speaker, the sound will appear to be coming entirely from the louder speaker. But with headphones no such masking occurs, as the sound of each channel doesn’t arrive at the opposite ear. To make a sound appear as if it is coming entirely from one ear, roughly 60 dB of attenuation is required on a similar sound for the other ear.
In the way pan pots are designed, when one pans from the center to the left, one should expect the location of the sound to correspond to the pot movement. This does not happen with headphones, where the sound seems to shift only slightly off the center first, before quickly ‘escaping’ to the extreme just before the pan pot reaches its extreme position. It is hard to place sounds anywhere between the slightly off-center position and the very extreme (in headphones). In fact, in the way standard pan pots work, it is next to impossible. Positioning instruments on the sound stage is much easier when listening through speakers. Applications such as Logic offer ‘binaural’ pan pots, which tackle this exact problem and can achieve much better localization using headphones; but the penalty is that they do so by altering the frequency content of the signal sent to each ear, thereby altering the overall frequency spectrum of the instrument. Also, these types of ‘binaural’ mixes do not translate well on speakers. In addition to all that, the sound stage created by speakers is limited to the typical 60° between the listener and the two speakers. With headphones, the sound stage spans 180°.
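For reference, here is a sketch of the standard constant-power pan pot the text is describing, in Python with NumPy; it is the speaker-oriented design whose positions bunch up toward the center on headphones.

```python
import numpy as np

def constant_power_pan(mono, position):
    """Pan a mono signal: -1.0 = hard left, 0.0 = center, +1.0 = hard right.

    Center places the source 3 dB down in each channel so perceived
    loudness stays even as the pot sweeps across a pair of speakers.
    """
    theta = (position + 1.0) * np.pi / 4.0   # map [-1, 1] onto [0, pi/2]
    return np.stack([np.cos(theta) * mono, np.sin(theta) * mono], axis=1)
```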
Mixing engineers work hard to create sound stages in mixes using speakers. When these mixes are played through headphones, these sound stages appear completely distorted. While this does not seem to bother most listeners, most serious music buffs insist that listening to music via speakers is far more pleasing, largely due to the lack of spatial sense when using headphones.
The dominance of speaker mixes was never questioned until recently, when portable MP3 players and their integration with cellular phones became so widespread. It is a valid question to ask why we still mix using (and for) speakers when so many people nowadays listen via headphones. There is an unexploited opportunity here for record labels to produce ‘speaker’ and ‘headphone’ versions. This would make sense not only from a mixing point of view but also from mastering, consumer and label revenue points of view.
The advantages
Some recording and mixing engineers take their reference headphones to studios they are not familiar with. Headphones provide a solid reference, and their sound only alters when different headphone amplifiers are used. As previously explained, the room plays a dominant part in what we hear with a speaker setup – the sound headphones produce is not affected by room acoustics or modes.
This is very important for rooms with flawed acoustics, such as many bedrooms and project studios. In such rooms, a good pair of headphones, together with a good headphone amp, can be a real aid. Having room modes out of play means that the range between 80–500 Hz can be more evenly reproduced, although studio monitors still have an advantage reproducing very low frequencies compared to most headphones. Other acoustic issues simply don’t exist when using headphones, for example the comb filtering caused by early reflections, the masking between the left and right speakers and even the directivity of the tweeters. It can be generalized that, as far as frequency reproduction is concerned, the results we get using good headphones are more accurate than those generated by speakers in a flawed room.
For many compression, gating and dynamic-range tasks, speakers do not provide a clear advantage over headphones; as long as the processing does not affect the perceived depth of the instrument, headphones can be useful.
The disadvantages
While headphones can be great when dealing with frequencies and useful when treating certain dynamic aspects in a mix, there are also a few disadvantages in using them, and they are almost useless for some mixing tasks.
As discussed, the spatial image created by headphones is greatly distorted, and conventional tools make it very hard to craft appropriate sound stages on headphones. Any sound stage decisions we make, whether left/right or front/back, are better made using speakers. As depth is often generated using reverbs, delays or other time-based effects, the configuration of these effects benefits from using speakers.
Earlier it was stated that ideal mixing rooms aim to have the reverb within them decaying in around 500 ms and that anechoic chambers, where no reverb exists, are far from ideal. As we lose the room response when using headphones, we also lose the reverb, so it is as if we are mixing in a space similar to an anechoic chamber. Most people find this less pleasant. The lack of room reverb, together with the close proximity of the headphones’ diaphragms to our eardrums, means that ear fatigue is more likely to occur. At loud levels, headphones are also more likely to cause pain and damage to the eardrum more rapidly than speakers.
The above is an excerpt from Roey Izhaki’s book Mixing Audio, 2nd edition. Roey Izhaki has been involved with mixing since 1992. He is an academic lecturer in the field of audio engineering and gives mixing seminars across Europe at various schools and exhibitions. He is currently lecturing in the Audio Engineering department at SAE Institute, London.



Wednesday, March 28, 2012

How to Minimize Noise In Your Mixes


Saturday, March 17, 2012

The 30 best VST plug-in effects in the world today | MusicRadar.com

