Wednesday, April 17, 2013

SHG Radio Show, Episode 161

Welcome to this week's edition of Subterranean Homesick Grooves™, a weekly electronica-based radio show originally presented on CHMA FM 106.9 at Mount Allison University in Atlantic Canada (and since expanded to other terrestrial radio stations), and also distributed as a global podcast through iTunes and numerous other sites. The show is normally programmed and mixed by Jonathan Clark (as DJ Bolivia), although some weeks feature guest mixes by other Canadian DJs. The show encompasses many sub-genres within the realm of electronic dance music, but the main focus is definitely on tech-house and techno, with a small amount of progressive, trance, and minimal. Liner notes for this episode (SHG 161) can be seen below.

For information in Spanish, go here.

By the way, if you're looking for DJ mixes in styles other than progressive/tech-house, check out www.djbolivia.ca/mixes.html. That page has a number of mainstream/top40 dance mixes (the "Workout Mix" series), as well as some deep house, drum and bass, and other styles.




Here's our Podcast Feed to paste into iTunes or any other podcatcher:
http://feeds.feedburner.com/shg

Older episodes of the show are not directly available from our main servers anymore, to conserve space for more recent episodes. However, all older episodes have been posted individually on SoundCloud, and also in archives of 25 episodes apiece (convenient for bulk downloading) from DJ Bolivia's Public Dropbox folder. That Dropbox link also has folders for individual tracks and remixes, project files and stem collections for producers who want to make their own remixes, videos, and other material. You don't even need to have a Dropbox account to download files from it.


Here’s a link so you can listen to the show or download it from SoundCloud:




Up, up, and away! As I write this, I'm now out on Canada's west coast again. I'll be continuing to publish Subterranean Homesick Grooves on a regular weekly basis, but unfortunately, I won't have time to produce any more tutorial or DJ'ing videos for a couple months.


Here are Track Listings for episode 161:

01. Bass Monta, "Darling" (Oxytek Hot Beats Remix).
02. Mafu Nakyfu, "Bx8A" (Original Mix).
03. Candela & Glamsta, "In De Guetto" (DJ Chus Club Remix).
04. Gaga, "Culture" (Original Mix).
05. Hollen, "Electrocution" (Original Mix).
06. Eduy, "Memories Of Nedim Nex" (Original Mix).
07. Roberto Capuano, "Vertigo" (Original Mix).
08. Patrick M, "Bowie" (Original Mix).
09. AJ Lora, "Shahrukh Khan" (Original Mix).
10. Mihalis Safras, "Lula" (Stefano Noferini Remix).
11. Tony Lenz, "Atomic Bomb" (Original Mix).
12. Danny Garlick, "Exit 21" (Original Mix).






Here are links to the personal websites, MySpace pages, or [usually] SoundCloud pages of a few of the original artists and remixers/producers listed above:



Bass Monta (France)
Oxytek (France)
Mafu Nakyfu (Slovenia)
DJ Chus (Spain)
Hollen (Italy)
Eduy (Bosnia & Herzegovina)
Roberto Capuano (Italy)
Patrick M (United States)
AJ Lora (Spain)
Mihalis Safras (Greece)
Stefano Noferini (Italy)
Tony Lenz (Spain)
Danny Garlick (Spain)


Subterranean Homesick Grooves is a weekly specialty EDM show with a core audience of about 1,500 listeners per week through podcasting and direct downloads, another hundred or so through SoundCloud, and an unknown number through terrestrial FM broadcast. If you're a radio station programming director and would like to add Subterranean Homesick Grooves to your regular programming lineup, contact djbolivia@gmail.com for details. We currently release SHG as an advance download to a number of stations globally each week (at no charge), and we welcome inquiries from additional outlets.

Go to the Mix Downloads page on the main DJ Bolivia website if you'd like to check out a number of our older shows, or visit our SoundCloud page for individual tracks and remixes. And if you're interested in learning more about DJ'ing or music production, check out Jonathan Clark's extensive and very popular series of YouTube tutorials. There's a full & organized index of all the videos at:
djbolivia.ca/videos.html

We also have a file containing complete track listings from all of DJ Bolivia's radio shows, studio mixes, and live sets. The PDF version can be viewed from within your browser by clicking directly. Both the PDF and the Excel versions can be downloaded by right-clicking and choosing the "save link as" option:

View as PDF file: http://www.djbolivia.ca/complete_track_history_djbolivia.pdf
Download Excel file: http://www.djbolivia.ca/complete_track_history_djbolivia.xlsx









Follow Jonathan Clark on other sites:
        Twitter: twitter.com/djbolivia
        SoundCloud: soundcloud.com/djbolivia
        YouTube: youtube.com/djbolivia
        Facebook: facebook.com/djbolivia
        Main Site: www.djbolivia.ca
        About.Me: about.me/djbolivia
        Music Blog: djbolivia.blogspot.ca




Tuesday, April 16, 2013

Audio Recording tutorial #07: Basic MIDI Recording


I just put part seven of my Audio Recording tutorial series online (with some additional study notes further down in this post). This series is more related to home studio work than to DJ'ing, although I'm still covering the very basics of audio engineering and production work. The series will eventually expand to about thirty videos on simple recording and audio engineering, covering everything from the basics of recording instruments and vocals, to the use of MIDI, to the theory of sound and audio, and eventually a number of advanced editing and recording techniques.




Audio Recording Tutorial #07: Basic MIDI Recording

In this video, we start exploring basic MIDI recording. I start off with a very brief overview of MIDI, then move into a practical, hands-on tutorial where I play a song on an electronic piano keyboard and record it into Pro Tools. I then do a couple of very basic edits, so you understand how note data can be edited.





If you want to download the audio files that I was using in this video, to better hear the audio (or experiment with it) in your own home studio setup, here’s a link to a zipped folder containing the relevant files. Remember that this is TINY compared to the download files for previous videos. MIDI data takes up almost no space. This file is only 27 kilobytes, compared to the audio files for tutorials two through five which were about a thousand times larger:

www.djbolivia.ca/tutorials/audiorecording07.rar



Links about MIDI:




I also have quite a few other tutorial videos relating to DJ'ing, audio editing software, and studio equipment. I've got an organized list of those videos in the index of my "videos" page on my main website. If you're interested in any of those topics, you should bookmark this page right now:

www.djbolivia.ca/videos.html


Thanks for your interest in this series, and thanks for sharing this post or links to any of the videos.













Understanding Decibel Measurement Systems

Decibel-based logarithmic measurement systems are confusing. Not long ago, I wrote a post and produced an associated video to teach people about sample sizes, sample frequency, binary, and how it all relates to sound. That was part six of my Basic Audio Recording tutorial series on YouTube. I also have another post and video (numerically, the next in the series after this) that will talk about the Nyquist theorem, anti-aliasing, quantization noise, Fletcher-Munson curves, and dithering. Today, I have a post and video to delve more deeply into various decibel systems. This post is directly related to video #08 in my Audio Recording tutorial series, which is embedded below.




Although watching the video is the best way to learn about this topic, because of my illustrations on the whiteboard, I've also put a copy of the audio portion of that tutorial video on SoundCloud, for people who would like to download it to listen to in vehicles, while travelling, etc. Here's the audio-only version:




What Are Decibels?

The decibel is the unit used to measure the intensity of a sound. The human ear is incredibly sensitive: it can hear everything from a light wind rustling through distant trees to a loud jet engine, and it needs to process that entire range appropriately. The decibel system is a logarithmic system, which suits sound levels that vary exponentially. Incidentally, decibels are also used for many other quantities best expressed on logarithmic scales, such as power and voltage levels.

The first need for a decibel system came about many years ago, when telephone companies were trying to measure signal losses and gains across their lines. They came up with a unit of measurement that they named the Bel, in recognition of Alexander Graham Bell's work with early telephones. The decibel is one tenth of a Bel, and is abbreviated dB.

Decibels measure a change in power. Power is the rate of change of energy in a system over time, and is best measured on a logarithmic scale. The difference in power between the quietest sound a human can hear (the threshold of hearing) and the loudest sound before reaching the threshold of pain is about one trillion times, or 10 to the twelfth power! That's a huge difference in scale.

Logarithmic scales are very interesting. Decibel systems are designed so that linear changes in the measurement units (i.e., decibels) reflect exponential changes in power levels. Adding to the complexity, the ear perceives different power levels on a different logarithmic scale than decibels (perhaps closer to log base 2 than log base 10), which produces some very strange mathematical relationships. For example, consider these:

2x power = +3 dB = "slightly louder"
10x power = +10 dB = "about twice as loud"
100x power = +20 dB = "about four times as loud"
1000x power = +30 dB = "about eight times as loud"
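Those relationships come straight from the definition of the decibel. Here's a quick illustrative sketch in Python (the function name is my own, not from any particular library) showing how the power ratios above map to dB values:

```python
import math

def power_ratio_to_db(ratio):
    # Decibels compare two power levels: dB = 10 * log10(P2 / P1)
    return 10 * math.log10(ratio)

print(power_ratio_to_db(2))     # 2x power  -> about +3 dB
print(power_ratio_to_db(10))    # 10x power -> +10 dB
print(power_ratio_to_db(100))   # 100x power -> +20 dB
print(power_ratio_to_db(1000))  # 1000x power -> +30 dB
```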

There are both similarities and differences in the ways that we perceive sound and light. Both are measured by our senses. However, light is electromagnetic radiation and needs no medium, whereas sound propagates through a medium in a wave-like pattern via the oscillation of adjoining molecules. As a result, light travels at an essentially constant speed, whereas the speed of sound depends on the medium it passes through (it travels faster through water or steel than through air, for example).

Another interesting tidbit: the range of power levels we can perceive is very different for light and sound. With sound, we can hear across roughly twelve orders of magnitude of power, as mentioned above. With light, the difference is only about three orders of magnitude. If you took the dimmest possible light that our eyes can see and increased its power by only one thousand times, it would approach the threshold of pain and risk retina damage.

It is also interesting that the power levels involved in light are much higher than in sound. The very loudest sound that we can hear before approaching the threshold of pain carries only about one watt of acoustic power. If you were able to instantly turn the power output of a 100-watt light bulb into pure sound energy, it would almost certainly deafen you, and possibly cause serious injury to other parts of your body.

In the embedded video (above), I cover the basics of a number of different decibel-based systems. For example, the following are all somewhat related to sound:

dB PWL - decibels, power level
dB SIL - decibels, sound intensity level
dB SPL - decibels, sound pressure level
dBFS - decibels, full scale
dBv or dBV - two voltage-referenced decibel systems, which use different reference levels
dBu - another voltage-referenced system
dBW - a power measurement system






The Digital System for Decibels at Full Scale (dBFS)

Many decibel systems appear to run from 0 dB upwards. However, this can be misleading, since decibels express a ratio, not an absolute quantity. So 0 dB in any system doesn't mean "nothing"; it means that you're at the reference level, whatever that happens to be in that particular system. Negative decibel measurements are possible in every system: you just need a quantity or level that is lower than the reference level.

In digital audio editors, the decibel levels are especially confusing. The dBFS scale puts the reference level at the top, i.e., the highest value. Any signal stronger than the reference level becomes a form of digital distortion. All other signals are measured in negative decibels, going down towards the noise floor.

Because of the special relationship between voltage and power, whereby power is proportional to the square of the voltage, a 6 dB increase in a voltage-based system means a doubling of the level, and a 6 dB decrease means the level is cut in half. In a 16-bit system, a signal can only be cut in half sixteen times, losing 6 dB with each halving, so the noise floor sits at about -96 dB. In contrast, a 24-bit system has eight extra bits, so its noise floor is eight 6 dB "levels" lower, around -144 dB. The lower noise floor of a 24-bit system gives better potential dynamic range and a more desirable signal-to-noise ratio. Of course, even that gets complicated, because you must differentiate between instrumentation noise and physical/external noise.
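To make that arithmetic concrete, here's a small illustrative Python sketch (the helper names are my own) showing how a voltage-style level maps to dBFS, and how the "6 dB per bit" rule gives the approximate noise floors just described:

```python
import math

def dbfs(sample, full_scale=1.0):
    # Level of a sample relative to full scale; 0 dBFS is the top of the scale
    return 20 * math.log10(abs(sample) / full_scale)

def approx_noise_floor(bits):
    # Each bit halves the smallest representable level: about -6.02 dB per bit
    return -20 * math.log10(2) * bits

print(dbfs(1.0))               # 0.0 dBFS: right at the reference level
print(dbfs(0.5))               # halving a signal drops it about 6 dB
print(approx_noise_floor(16))  # roughly -96 dBFS for 16-bit audio
print(approx_noise_floor(24))  # roughly -144 dBFS for 24-bit audio
```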

Does louder translate to "better" or "worse," and why? Well, humans usually perceive louder sounds as better sounding. I don't know why; maybe it's evolutionary, or an inherent biological preference: our brains may simply prefer sounds that are easier to hear. Some audio engineers use this characteristic to their advantage, for better or for worse. When producing music, an audio engineer will often try to increase the average volume level of a song through compression, to make it sound "better" than the songs played around it. Unfortunately, the race to over-compress music has cost a lot of modern music its dynamic range.


Parting Words

Obviously, I’ve covered these subjects in a fairly superficial manner. The embedded video covers all of these things in much more detail, so hopefully it will give you a lot of additional insight. Now you know the general theory behind these subjects, and why they're important to audio engineers. If you want to do further research on your own, I’ll put some links below. Be forewarned: the physics and mathematics behind logarithmic systems can be pretty intense!



Links to other articles about Decibel systems:




If you’ve read all the way through this, you obviously want to learn more about audio recording and music production work. I don’t have a ton of written tutorials like this online, but I do have quite a few detailed YouTube videos that you might enjoy. I've got an organized list of those videos in the index of my "videos" page on my main website. If you're interested in any of those topics, you should bookmark this page right now:

www.djbolivia.ca/videos.html


Thanks for your interest in this series, and thanks for sharing this post or links to any of the videos.









Follow Jonathan Clark on other sites:
        Twitter: twitter.com/djbolivia
        SoundCloud: soundcloud.com/djbolivia
        YouTube: youtube.com/djbolivia
        Facebook: facebook.com/djbolivia
        Main Site: www.djbolivia.ca
        About.Me: about.me/djbolivia
        Music Blog: djbolivia.blogspot.ca
        MixCloud: mixcloud.com/djbolivia
        DropBox: djbolivia.ca/dropbox





If you enjoy my tutorials, and want to make a small donation to help purchase additional video equipment to use in future tutorials, here's my Bitcoin wallet address: 19VhVFnw76Vor86SDoN2CSLcarQeZZqysE



Nyquist, Anti-Aliasing, Quantization Noise, and Dithering

If you want to produce better music, you should understand the Nyquist theorem, anti-aliasing, and dither. Not long ago, I wrote a post and produced an associated video to teach people about sample sizes, sample frequency, binary, and how it all relates to sound. That was part six of my Basic Audio Recording tutorial series on YouTube. Today, I have a follow-up post and video that delve more deeply into a couple of related topics. This post is directly related to another video in my Audio Recording tutorial series (#09), which is embedded below.

As an overview, this post is going to cover topics including the Nyquist-Shannon Sampling Theorem, Fletcher-Munson curves/charts, what aliasing is and how anti-aliasing is used to eliminate it, what quantization noise is, and finally, how dithering can be used in various ways, such as increasing sampling accuracy over a broad range of samples, or masking problems in audio. If you want to watch the video first, here it is:




Although watching the video is the best way to learn about this topic, because of my illustrations on the whiteboard, I've also put a copy of the audio portion of that tutorial video on SoundCloud, for people who would like to download it to listen to in vehicles, while travelling, etc. Here's the audio-only version:




Nyquist-Shannon Sampling Theorem

So why are CDs sampled at 44.1 kHz? If film/video is often shown at between 24 and 30 frames per second, why is audio sampled at more than a thousand times that rate? Why not sample at something like one thousand times per second, or a nice round number like 10,000 Hz? Well, first of all, in movies you aren’t sampling a frequency; each frame is the equivalent of a photograph. Completely different situations. But as for the 44,100 Hz, we first need to understand the bare essentials of the Nyquist Theorem, which I only touched on very briefly in Audio Tutorial #06.

The Nyquist-Shannon Theorem was named first and foremost after a scientist (Harry Nyquist) who published research in 1928 about pulse samples, although that research wasn’t exactly about the theorem that later bore his name. In fact, quite a few different scientists contributed to the subject, and sometimes it’s just called “The Sampling Theorem.” Personally, I’m glad that Claude Shannon got his name attached: Shannon showed how Boolean algebra could be used to design digital circuits, and later founded information theory, some of the most important applied mathematics of the 20th century. Without that work, we would not have computers. Look him up.

The Nyquist Theorem essentially states that to capture an audio signal (record a sound) accurately, your sample rate must be at least double the highest frequency in the signal. Let me break this down. We’re talking about a situation where a real-life (analogue) sound needs to be converted into a digital representation (sampled). Essentially, the more frequently a sound is sampled, the more accurate the results will be: the digital waveform that is created will be closer to whatever the real waveform originally was. So Nyquist basically established the bare minimum: take your highest frequency and double it, and that gives you a sample rate sufficient for an accurate capture.

Let me also define a term right now that is important. Whatever sample rate you pick, the “Nyquist frequency” is half that rate. So for CD audio, the Nyquist frequency is 22.05 kHz. For DVD-V, which is sampled at 48 kHz, the Nyquist frequency is 24 kHz.
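In code form, the two rules of thumb are trivially simple. This is just an illustration (the helper names are mine, not a standard API):

```python
def nyquist_frequency(sample_rate_hz):
    # Half the sample rate: the highest frequency that can be captured
    return sample_rate_hz / 2

def minimum_sample_rate(highest_frequency_hz):
    # Nyquist: sample at least twice the highest frequency in the signal
    return 2 * highest_frequency_hz

print(nyquist_frequency(44_100))    # 22050.0 Hz for CD audio
print(nyquist_frequency(48_000))    # 24000.0 Hz for DVD-V
print(minimum_sample_rate(20_000))  # 40000 Hz to cover human hearing
```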

Now of course, the math to back this up is complex, but I don’t want to get bogged down in higher mathematics. Think of it this way: If you don’t take enough samples, you’ll get an inaccurate representation of the original audio signal. I’ve talked about that in the accompanying video. But when you take at least two samples for every oscillation, your representation starts to become fairly accurate. Of course, even higher sample rates would be better and more accurate, but “double the highest frequency” is the bare minimum. And you don't want to go too high above the bare minimum, because that starts to consume excessive computer resources with decreasing incremental gains.

Now, think back to what is considered to be the usual range for human hearing: 20 Hz to 20,000 Hz. Since the majority of people can’t hear anything above 20 kHz, when an audio engineer is doing final mastering on a song, he/she will probably put a filter on the track to try to eliminate frequencies above 20 kHz. Why bother keeping them, if nobody can hear them? So that means that once the mastering is done, the highest frequency is supposed to be around 20 kHz. Use Nyquist, and you’ll see that double that number is 40 kHz, which should be our minimum effective sample rate to hear an accurate representation of the audio.

But wait, 40 kHz is not the same as 44.1 kHz! Well, you have to understand that high-cut filters don’t work perfectly at an exact frequency; it’s more of a roll-off. So if you’re trying to cut everything above 20 kHz, you’ll still have a bit of material at 21 kHz and 22 kHz coming through, although it’ll be quite diminished. Some sources say that when the people writing the CD standard were trying to come up with a number, they picked 22.05 kHz as the highest frequency that really mattered, and double that is 44.1 kHz. That became the new standard, even though it was a somewhat arbitrary number. Mind you, other sources say it relates to the fact that video tape was originally used for digital mastering of CDs, and they give a highly technical (and plausible) derivation from video standards. And some other sources point out, perhaps just for fun, that 44,100 is the product of the squares of the first four primes (2^2 x 3^2 x 5^2 x 7^2).
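That last bit of trivia is easy to verify:

```python
# Squaring the first four primes (2, 3, 5, 7) and multiplying the results
# lands exactly on the CD sample rate.
product = (2 ** 2) * (3 ** 2) * (5 ** 2) * (7 ** 2)
print(product)  # 44100
```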

Whatever the actual reasoning, the main thing is that people generally can’t hear frequencies above 20 kHz, so the Nyquist Theorem says audio needs to be recorded at a sample rate of at least 40 kHz. For some reason the slightly more conservative figure of 44.1 kHz was picked for CDs, and it remains the standard to this day.


Fletcher-Munson Curves

A Fletcher-Munson curve is used to represent ranges of "equivalent loudness" at various frequencies. This is a fairly subjective measure, since a person has to estimate the perceived volume of a sound, but tests of large samples of the population have given some fairly detailed results over time. Essentially if you pick a line on the graph, and follow it, you'll be able to see what volume for any particular frequency is required to be "equivalent" in perceived volume to a different frequency at a different actual volume. Here's a chart:






Aliasing and Anti-Aliasing

If an engineer didn’t filter out frequencies above 20 kHz, what would happen? Well, the simple answer is that those frequencies would “still be there” even though we couldn’t hear them. The problem is that these inaudible frequencies would still get sampled, and any frequency higher than half the sample rate doesn’t get sampled accurately: the equipment doing the sampling perceives a different waveform than the one it’s actually looking at.

There is actually a mathematical way to predict the “fake” frequency that the A->D converter perceives: it is the sample rate minus the true frequency. So if you had audio at 34.1 kHz going through a converter sampling at 44.1 kHz, the converter thinks it is hearing a waveform with a frequency of 44.1 - 34.1 kHz, or 10 kHz. So you get artifacts at 10 kHz in your audio. The 10 kHz frequency is thus called the “alias” of the original frequency, its false identity. To further complicate matters, consider that every sound has harmonics. A tone at 10 kHz produces harmonics at 30 kHz (among other frequencies), so you also have to consider the effects of aliasing from those harmonics.
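That folding rule can be sketched in a few lines of Python (an illustration only; the helper name is mine). For tones between the Nyquist frequency and the sample rate, the converter reports the sample rate minus the true frequency:

```python
def alias_frequency(signal_hz, sample_rate_hz):
    # Below the Nyquist frequency, the tone is captured at its true pitch
    if signal_hz <= sample_rate_hz / 2:
        return signal_hz
    # Between Nyquist and the sample rate, it folds down to (rate - signal)
    return sample_rate_hz - signal_hz

print(alias_frequency(10_000, 44_100))  # 10000 Hz: safely below Nyquist
print(alias_frequency(34_100, 44_100))  # 10000 Hz: an inaudible tone
                                        # aliases into the audible range
```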

Anti-aliasing is very simple. It is the name for the process whereby the high frequencies are filtered out so they don’t create aliases. I referred to this already in the previous section: anti-aliasing is basically just the application of a high-cut filter to eliminate the high frequencies that aren’t needed, so they don’t create aliases (artifacts and distortion) in the good, audible part of the frequency spectrum. By the way, anti-aliasing is also used extensively in graphics, and one of the links at the bottom of this post has some good information re. the graphical applications of anti-aliasing.


Quantization Noise

When you take a sample of an instantaneous signal level (i.e., analogue-to-digital conversion, or ADC), the difference between the recorded or stored value of the measurement and the true value of the signal is called quantization noise. Basically, this error is caused by rounding or truncation of data during the sampling of the signal. It can also arise during signal processing and data communication. In other words, quantization noise is the accumulation of minor accuracy errors during any of these processes. Luckily, if quantization noise becomes a problem in your audio, it might be possible to mitigate it with the use of dither.
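You can see quantization error directly by rounding a sample value to a given bit depth. Here's a small illustrative sketch (assuming samples normalized to the range -1.0 to 1.0; the function name is mine):

```python
def quantize(sample, bits):
    # Round to the nearest level representable at this bit depth
    levels = 2 ** (bits - 1)  # e.g. 32768 steps per polarity at 16-bit
    return round(sample * levels) / levels

x = 0.123456789
for bits in (8, 16, 24):
    # The difference between the stored value and the true value is the
    # quantization noise; it shrinks as the bit depth grows.
    print(bits, x - quantize(x, bits))
```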


Dithering

When calculations are performed on audio data, certain patterns arise. That’s because the calculations are deterministic: the results are the same no matter how many times you repeat them. Through a complicated process, these calculations can produce audio artifacts in consistent parts of the frequency spectrum that the human ear can slightly notice. Reducing bit depth, for example from 24-bit to 16-bit, can cause those same unwanted patterns. We want to get rid of those patterns, to make the audio sound smoother. And as noted above, we can also have problems with quantization noise that occurs during the sampling process.

Dithering is a process by which a tiny bit of random “noise” is added during processing, and it has the effect of “smoothing out” anomalies. A real-world attempt at an analogy? Let’s say that you’ve got a pool of water that is perfectly still except for the fact that there is a bag of golf balls hanging over it, and a golf ball drops out of the bag into the water once every three seconds. That disturbance, where the golf balls keep hitting, is very obvious. However, if in addition to the golf ball, there are tons of small pebbles landing all over the surface randomly, the disturbance of the golf ball is a lot less obvious. The other small bits of noise help “drown out” the obvious disturbance. I guess that a more realistic analogy would be on a golf course. If you shank a ball into a water trap on a calm day, it’s easy to see it land in the water. But if there is rain disturbing the surface of the water, it’s a lot harder to notice the golf ball hitting. Think of the obvious disturbance of the golf ball as being analogous to the audio artifact that we need to mask, and the constant disturbances from the rain as being our noise for dithering.

The availability of excellent dithering algorithms on most systems today, combined with 24-bit recording capabilities (which give a digital system an extremely low noise floor), means that you don’t really have to worry about recording signals at a fairly low level and then dealing with low-resolution quantization noise or systemic noise. So when you’re recording a multi-track project, you don’t have to push every single track up to around -5 to -3 dBFS for best results. You can probably record everything down around -12 to -10 dBFS, giving yourself lots of headroom to work with during mixing, without running into noise problems.

If you’ve done your project at one bit depth and want to convert the final result (e.g., converting a 24-bit session to a 16-bit track destined for CD), you take that final version of your song and convert it. There will usually be an option in your audio editor asking if you want to apply dither during the conversion. There are also lots of complicated options and algorithms that can be applied, with respect to dither types and noise-shaping; that’s beyond the level of discussion we want to get into today. Just go with the defaults if you’re not sure what to pick. If things sound funny after the conversion, try again with a different algorithm.
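As a rough sketch of the underlying idea (not any editor's actual algorithm), one common approach is TPDF dither: add a tiny burst of triangular-distribution noise, about one least-significant-bit in peak amplitude, just before rounding to the target bit depth:

```python
import random

def dither_and_quantize(sample, bits):
    levels = 2 ** (bits - 1)
    lsb = 1.0 / levels  # size of one quantization step
    # The sum of two uniform randoms gives a triangular (TPDF) distribution
    noise = (random.random() - random.random()) * lsb
    # Rounding after adding noise randomizes the quantization error, so
    # repeating patterns don't build up at audible frequencies
    return round((sample + noise) * levels) / levels

print(dither_and_quantize(0.3, 16))  # within a couple of LSBs of 0.3
```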


Parting Words

Obviously, I’ve covered these subjects in a fairly superficial manner. Baby steps. Hopefully, if you watched the video, it gave you a lot of additional insight. Now you know the general theory behind these subjects that are important to audio engineers. If you want to do further research on your own, I’ll put some links below. Be forewarned: the physics and mathematics behind these topics can be pretty intense, especially dithering algorithms!



Nyquist-Shannon:


Anti-Aliasing:


Quantization Noise & Dither:


24-bit versus 16-bit sampling:




If you’ve read all the way through this, you obviously want to learn more about audio recording and music production work. I don’t have a ton of written tutorials like this online, but I do have quite a few detailed YouTube videos that you might enjoy. I've got an organized list of those videos in the index of my "videos" page on my main website. If you're interested in any of those topics, you should bookmark this page right now:

www.djbolivia.ca/videos.html


Thanks for your interest in this series, and thanks for sharing this post or links to any of the videos.

















Saturday, April 13, 2013

Basic Mathematics of Sound: Sample Rate, Sample Size, and Binary

When I first sat down to write this post, my intent was to teach some of the people who follow me on YouTube what sample sizes and rates are all about. You may have seen reference to sample rates before: CD’s at 16/44.1. High quality studio sessions at 24/96. I figured that I could type up a few paragraphs, record a short accompanying video, and be done in under an hour.




But then I started to think about what I’d have to explain if I explained sample rates: for starters, how frequency is measured, what is considered the normal range for human hearing, and how binary works. And then I started to realize that I should probably touch on the Nyquist Theorem, which directly affects the minimum sample rates required to make a recording sound good. If I got into Nyquist, it seemed that overlooking a quick explanation of aliasing and quantization noise would be criminal. And if I was going to mention anti-aliasing techniques, it would be a shame to skip over a basic explanation of dithering.

So this is going to be a story that touches as lightly as possible on some of the mathematics of sound and recording, and I promise that I will try to explain it in the most simple, common-sense, layman's terms possible. I don’t want your eyes to glaze over and have you navigate to the latest episode of Breaking Bad, where the science seems more applicable to everyday life. Therefore, if you’re a professional audio engineer reading through this, and one of my explanations makes you start sweating and stuttering and your heart begins to palpitate, remember that I’m trying to make these explanations accessible to a wide audience of people who don’t have advanced degrees in audio engineering. I’m going to explain things in ways that make simple sense to me. If you see an outright mistake, sure, go ahead and email me. But realize that sometimes I’m just trying to keep things simple. I’m sort of implying the spherical cow.

Before you go further in reading the rest of this post, here’s a link to an associated tutorial video that I put together to accompany this post:




Although watching the video is the best way to learn about this topic, because of my illustrations on the whiteboard, I've also put a copy of the audio portion of that tutorial video on SoundCloud, for people who would like to download it to listen to in vehicles, while travelling, etc. Here's the audio-only version:




Sample Rates

Alright, let’s get started. You’ve probably heard lots of things about sampling. First of all, you need to understand that I’m talking about sample rates and frequency, which relate to the way that a computer converts an analogue signal (a real-world sound) to a digital representation. The word “sampling” is also used in the music industry in reference to recording a short section of audio, perhaps from another record or song, and pasting copies or altered copies of that into a new song. I’m not referring to that kind of sampling.

When “digitizing” an audio source, the way that a computer works is that it takes a measurement of the audio many times per second, and then just plays these samples back in order very quickly. Each individual slice is called a sample of the audio. The number of times per second that the audio is sampled is called the “sample rate.”

Basically, anything that is expressed in "occurrences during a period of time" is a frequency. The unit of frequency is named after the German physicist Heinrich Hertz, a pioneer in the study of electromagnetic waves. Any time people refer to frequency, they refer to something that happens over and over again at a regular interval, whether it is a cyclical thing (rotation, oscillations, or waves) or a periodic thing (counts of an event). The number of occurrences per second is the frequency, and the unit it is expressed in is called the Hertz (Hz). The "period" of something, ie. the time between occurrences, is the reciprocal of the frequency.

So when something is recorded at 800 Hz, that means that a sample measurement of the sound is recorded eight hundred times a second. That seems like a lot, eh? It’s not. In today’s world of audio engineering, a typical sample rate is much faster than that. All CD’s have been standardized as having sample frequencies of 44,100 Hz, or 44.1 kHz. That’s why the default sample frequency for a lot of music is at 44.1 kHz, because it’s been conformed for CD distribution.
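To make the idea of "measuring the audio many times per second" concrete, here's a minimal sketch in Python (my own illustration; the function name `sample_sine` is mine, not from any audio library) that samples a 440 Hz sine wave at a chosen sample rate:

```python
import math

def sample_sine(freq_hz, sample_rate_hz, duration_s):
    """Measure a sine wave of the given frequency, taking
    `sample_rate_hz` measurements per second. Each returned
    value is one sample."""
    n_samples = int(sample_rate_hz * duration_s)
    samples = []
    for n in range(n_samples):
        t = n / sample_rate_hz  # time of this sample, in seconds
        # Note: the period of the wave is the reciprocal of freq_hz.
        samples.append(math.sin(2 * math.pi * freq_hz * t))
    return samples

# One second of a 440 Hz tone at CD sample rate yields 44,100 samples.
cd_samples = sample_sine(440, 44_100, 1.0)
print(len(cd_samples))  # 44100
```

Sampling the same one-second tone at 96 kHz would instead produce 96,000 samples, which is exactly why higher sample rates need more storage.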

Having a higher sample frequency gives you a better true representation of what was happening in the underlying waveform. Let’s try to use a really simple example. Let’s say that you’re in a concert hall listening to a singer. The singer’s volume, as he/she sings, is jumping up and down a lot, from very quiet to very loud and back. If you take a “sample” once per minute, you don’t have a very good idea of how loud the singer is over the time that he/she is singing. You have no idea whether the sound is louder or softer in the other fifty-nine seconds between your samples, or maybe both, jumping up and down. But if you increase your sample rate so you can take a sample once per second, you’ve got a better idea of how much the singer is changing their volume over time.

That was a coarse example. Increasing your sample frequency means that your digital interpretation of the audio is more accurate. To get a really accurate representation in today’s world, computers sample audio at a stunning 44,100 times per second. And that’s just for CD’s. If you can sample faster, your digital sound is going to be even better (more similar to the original). DVD’s are recorded at a slightly higher sample rate than CD’s, at 48 kHz. And in today’s recording studios, sampling audio twice as fast again is quite common, at rates of 96 kHz. Of course, taking twice as many measurements (96 thousand per second instead of 48 thousand per second) means that you’re going to require twice as much storage space on your computer, and more accurate equipment, which is why many studios don’t go with rates that are higher than 96 kHz.

So now that you understand what sample frequency is, what does the bit depth mean? The simple answer is “the resolution or accuracy of each individual sample.” But in order to understand that better, I’m going to talk a bit about binary numbers. I promise, this next section about binary is the only section where I have to get fairly mathematical.




Binary Notation

How does binary work? Binary is a numbering system, and it’s the simplest one possible: base two. There are only two digits in this numbering system, 0’s and 1’s. We’re used to base 10, which has ten different digits. Base two should be a lot easier with only two digits to think about. And base two is also easy to deal with when you’re thinking about computers and electrical engineering. Computers can’t “think” because they aren’t sentient brains. But numbers can be represented by “simulating” the 1’s and 0’s of binary with two different power states, power-on and power-off.

In binary, a single digit is called a “bit.” Bit is basically the base-two equivalent of “digit” in the base-ten system that we’re used to.

In binary, a numerical value is called a “word.” Word is basically the base-two equivalent of “number” in base-ten.

In base ten, we don’t really use the phrase “number length” to talk about how many digits are in a number. But in base two, we use the phrase “word length”. Computers have to deal with electrical connections that are much simpler than the human brain, so we have to keep things simple and consistent. When computers communicate, instead of a stream of single bits, they can sometimes deal with full words, ie. a group of bits communicated simultaneously. Think of it like a highway with multiple lanes, with individual cars as bits. Because there are multiple lanes, several bits can pass a certain point at the same time. Computers are analogous because a full “word” of bits can often be communicated as a single entity. The word length refers to how many bits that is.

In the early days, computers were simple and could only understand short binary words. By the 1980’s, the Commodore 64 and the early Apple computers were talking with 8-bit word lengths. Soon after, PC’s with MS-DOS came out that talked in 16-bit words. In the past few years, PC’s have grown up from 32-bit operating systems to 64-bit.

In the audio world, a sixteen-bit word length allows for a lot of different numbers. The number of different values possible in binary is two raised to the power of the word length. If you have four-bit words, you have sixteen different choices (2^4). If you have eight-bit words, you have 256 different choices (2^8). If you have sixteen-bit words, you have 65,536 choices (2^16). If you have 24-bit words, you have TONS of choices – 16,777,216 (2^24), to be exact.
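You can sanity-check those counts with a couple of lines of Python (the helper name `values_per_word` is mine, just for illustration):

```python
def values_per_word(bits):
    """A binary word of `bits` bits can represent 2**bits distinct values."""
    return 2 ** bits

# 4-bit: 16, 8-bit: 256, 16-bit: 65,536, 24-bit: 16,777,216
for bits in (4, 8, 16, 24):
    print(bits, values_per_word(bits))
```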

Ok, enough math. What does this mean? Well, having more choices means higher resolution. What if you could measure the volume of a sound that could vary from complete silence (zero decibels) to the volume of a loud jet engine (128 dB)? And what if your scale for measuring is digital? With an analogue measurement, such as recording on magnetic tape, you can measure the exact volume. But if you have to have a digital representation, you only have certain numeric choices. If you’re limited to 4-bit sample size/resolution, then remember that 4 bits only gives you sixteen possibilities. So you have to go with some pretty rough measurements. Anything from 0 to 8 dB might have to be represented in your sample as “0”, from 8 to 16 dB as “1”, from 16 to 24 dB as “2” and so on. But there’s a lot of variation between say 8 and 16 dB. That’s not very accurate if you later see that your sample was written down as “1” and you have no idea whether the real sound was at 8dB or 16dB, or anything in between.

But what if you can increase your sample size, ie. the number of choices? If you can measure the sound with a 16-bit sample size, you have 65,536 different possible levels to choose from. That gives you a lot more choices in the scale from silence up to 128dB. You might be looking at a scale like this:
       0 = 0.000 dB
       1 = 0.002 dB
       2 = 0.004 dB
       3 = 0.006 dB

And all the way up to:

       65,534 = 127.998 dB
       65,535 = 128.000 dB

Obviously, by having more bits, you can capture/communicate more information at a higher resolution, which gives you a better representation of what the volume was in the original sound. Going from 16-bit sample size to 24-bit sample size obviously means that you can measure things with an even better resolution. By the way, note that I'm talking in generalizations here so far. If you're an experienced audio engineer, you'll know that digital audio in a DAW is treated a bit differently in that the higher sample size actually means a lower noise floor, but we'll get into that in tutorials 8 and 9. For now, let's keep things simple.
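If you want to play with those step sizes yourself, here's a rough sketch (the helper `db_step` is my own, and remember this mirrors the simplified 0-to-128 dB scale used above, not how real converters map amplitude):

```python
def db_step(bits, full_scale_db=128.0):
    """Size of one quantization step on a 0..full_scale_db scale
    divided into 2**bits - 1 equal intervals."""
    return full_scale_db / (2 ** bits - 1)

# 4-bit: 16 levels, so steps of roughly 8.5 dB (the text rounds this
# to the 8 dB buckets in the example above) -- very coarse.
print(db_step(4))

# 16-bit: 65,536 levels, so steps of about 0.002 dB -- very fine.
print(db_step(16))
```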

If you want a rough example of a real world analogy, think about the resolution of the camera in your cell phone. If you’ve got a 3 megapixel camera in one phone and a 13 megapixel camera in a second phone, the 13mp is obviously going to give you a better picture, right? That’s because it’s a higher resolution. You’ll get a more accurate representation of what you’re trying to record (photograph) because there are more bits used to store the information.

CD standard resolution is 16-bit. That should be the minimum sample size that you want to work with in a music production or recording environment. Anything less sounds noticeably imperfect even to untrained ears. But we have the technology to do better. If you see a sound card that is referred to as 24/96, it means that the sample size is 24-bits, and the frequency with which those samples are taken is 96,000 times per second. If you have the choice, try to work with 24-bit equipment, and make sure your computer software has your “project settings” at 24-bit instead of a lower number. The only drawback is that 24-bit recording takes up more space on your storage device.

Before I move on, let me just say something about a different type of binary. Different type? Well, in all of the above, I’m assuming that you’re using what’s called a “fixed point” notation. But there is also something called a “floating point” notation, so you’ll see things like “32-bit floating.” In such a system, the last eight bits may not be used specifically to increase resolution, but might instead be used to increase dynamic range significantly. I won’t bother trying to explain the significand/mantissa or the rest of the theory. You’ll find all kinds of discussion and debate about this on the internet, but I think the simple answer is that 32-bit floating isn’t necessarily much better than 24-bit fixed, and 32-bit takes up 33% more space. Check out this link for more: http://www.bores.com/courses/intro/chips/6_precis.htm
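Here's one way to see the trade-off for yourself: a 32-bit float spends some of its bits on the exponent (dynamic range), leaving roughly 24 bits of precision. Python's standard struct module can round-trip a number through a genuine 32-bit IEEE 754 float to demonstrate (a sketch for curiosity, not an audio tool; `to_float32` is my own helper name):

```python
import struct

def to_float32(x):
    """Round-trip a Python float through a real 32-bit IEEE 754 float."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

# 2**24 = 16,777,216 is the last point where every integer is exact:
print(to_float32(16_777_216.0))  # 16777216.0 (stored exactly)
print(to_float32(16_777_217.0))  # 16777216.0 (rounded: ~24 bits of precision)
```

So a 32-bit float carries about the same precision as 24-bit fixed, which is the gist of the "isn't necessarily much better" point above.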

For now, I’d suggest that you shouldn’t select 32-bit at the start of a project because your newly recorded files will be 33% larger without any improvement whatsoever in fidelity. It makes more sense to switch a session's resolution to 32-bit float later, when bouncing mixes or performing complex signal and effects processing.


Sample Rates as applied to Sound

So I started out to explain the difference between sample frequency (times per second that samples are taken) and sample depth (number of bits of data per sample). And it turned into a three thousand word essay. Can I give you anything more practical to wrap things up? I’ll try:

First, be aware that if you are saving audio files, a single STEREO audio file at 16-bit resolution and sample rate of 44.1kHz will take up approximately ten megabytes of disk space for each minute of audio. Memorize that. Once you know that, you can calculate potential storage requirements for all variations of sample size, rate, number of tracks, and project length.

Example:

Let’s say you’re recording a vocal (single mono track), an acoustic guitar (single mono track), and a piano (feeding a stereo signal to your DAW). All told, you have a total of four tracks. Mono signals count as a single track, and stereo signals count as two. Four mono tracks are equal to two stereo tracks. So based on what you’ve memorized of 10 megs per minute of stereo audio at CD quality (16/44.1), you’ll need double the storage space for your project, because you have the equivalent of two stereo tracks. So budget for 20 mb per minute of audio.

Let’s say that you’re making a recording that will be exactly eight minutes long. Multiply your 20megs by 8, and you’ll need 160megs of storage.

But wait, let’s say that a studio engineer comes in and says that he wants you to change from 16-bit to 24-bit sample sizes. Your requirement just grew by 50%, so now you need 240megs of storage instead of 160.

Then, let’s say that he also adds that the project will be for DVD with no CD equivalent, so you need to change from 44.1 kHz sampling to 48 kHz. Roughly, add 10% to your numbers, so your 240megs becomes 264megs.

Then finally, the engineer changes his mind yet again and decides to jump it up from 48 kHz to 96, just because he’s going to be working with a lot of digital effects and he wants the highest project quality possible. So double it again, and your storage requirements go from 264 to 528megs.

That kind of stuff is handy to know when you’re calculating space requirements for a project. However, to be honest, if I’m budgeting for storage space for a project, I’ll double what my calculations show me, just to be safe. So I’d want to have a full gigabyte of storage available for the example above. Things always get out of control and take up more room than you anticipate.
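The whole chain above can be checked with a few lines of arithmetic (a sketch in Python; the function name `storage_mib` is mine). The rough "10 megs per stereo minute" rule plus the doubling and percentage shortcuts land very close to the exact figure:

```python
def storage_mib(channels, bit_depth, sample_rate_hz, minutes):
    """Uncompressed PCM storage in binary megabytes (MiB):
    channels x bytes-per-sample x samples-per-second x seconds."""
    bytes_total = channels * (bit_depth // 8) * sample_rate_hz * minutes * 60
    return bytes_total / (1024 * 1024)

# The rule of thumb: one stereo minute at 16/44.1 is about 10 megs.
print(storage_mib(channels=2, bit_depth=16, sample_rate_hz=44_100, minutes=1))

# The example project: four mono tracks (= 4 channels), 8 minutes, 24/96.
# This comes out around 527 MiB -- right beside the 528 estimated above.
print(storage_mib(channels=4, bit_depth=24, sample_rate_hz=96_000, minutes=8))
```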


Oh yes, and what do I recommend/use for sample rates? I often just use 16/44.1 for projects. Face it, CD standard has been great quality for a couple of decades. How can you go wrong? Unless the project is very important, using 16/44.1 saves disk space, and saves a bit of time because I don’t have to down-sample my final track at the end for compatibility with CD players. For most of my work, CD quality is just fine. However, I'll sometimes use 24/44.1 for projects. That's an odd setting, which you'll rarely see, but I'll explain why I use that in tutorials 8 and 9. You'll also see most studios use 24/96 for their projects. The advantage of 24/96 is that when you save it as an archive, if you need to go back to it ten years from now, computers will probably have advanced so much that it may even be possible to edit a project of that complexity on a cell phone.


Alright, that’s enough for today. I’ll save the Nyquist Theorem, Quantization Noise, Anti-Aliasing, and Dithering for future tutorials. Thanks for reading. I hope you now understand a lot more about the basic mathematics of audio.


If you’ve read all the way through this, you obviously want to learn more about audio recording and music production work. I don’t have a ton of written tutorials like this online, but I do have quite a few detailed YouTube videos that you might enjoy. I've got an organized list of those videos in the index of my "videos" page on my main website. If you're interested in any of those topics, you should bookmark this page right now:

www.djbolivia.ca/videos.html


Thanks for your interest in this series, and thanks for sharing this post or links to any of the videos.









Follow Jonathan Clark on other sites:
        Twitter: twitter.com/djbolivia
        SoundCloud: soundcloud.com/djbolivia
        YouTube: youtube.com/djbolivia
        Facebook: facebook.com/djbolivia
        Main Site: www.djbolivia.ca
        About.Me: about.me/djbolivia
        Music Blog: djbolivia.blogspot.ca
        MixCloud: mixcloud.com/djbolivia
        DropBox: djbolivia.ca/dropbox




Thursday, April 11, 2013

SHG Radio Show, Episode 160

Here’s a link so you can listen to the show or download it from SoundCloud:



This week's show is a bit of a unique one. I dusted off the turntables and played a bunch of tracks that are about ten years old, on average. I also recorded the production of the show on video, so you can check that out at the bottom of the page. There are two versions available: the first is the one that most people will want to listen to, because it's the audio that the audience normally hears. The second video is designed for a limited audience: beginning DJ's who want to understand beat-mixing better. The audio in that second video incorporates what I hear as a DJ when I'm mixing. It's not very enjoyable to listen to for relaxation, but it's useful for study.


Here are Track Listings for episode 160:

01. DJ Nukem feat Jamie Wong-Li, "Secrets" (Deep Mix).
02. Peroxide, "Metropolis."
03. Nick K, "Space Dough."
04. Seragaki, "Ryukyu Underground" (Quivver Remix).
05. Tony Thomas & Flow, "Bugged" (Cheech Mix).
06. Midnight Star, "Freakazoid" (Adam Nathan & J-Groove Mix).
07. Sphere, "Barrier" (Original Nu Breed Mix).
08. Chris Fortier, "Despegue" (Dub).
09. Yunus vs Subsky, "Erotic Sumo."
10. Element N, "On A Mission" (Freeloader Mix).






Here are links to either personal websites, MySpace pages, or [usually] the SoundCloud pages for a few of the original artists and remixers/producers listed above. Not many this week, since these are pretty old tracks:



DJ Nukem (Switzerland)
Quivver (United States)
Tony Thomas & Flow (United Kingdom)
Adam Nathan (Canada)
Chris Fortier (United States)
Subsky (Turkey)


Subterranean Homesick Grooves is a weekly specialty EDM music show with a core audience of about 1,500 listeners per week through podcasting and direct downloads, another hundred or so listeners through SoundCloud, and an unknown number of listeners through terrestrial FM broadcast. If you're a radio station programming director and would like to add Subterranean Homesick Grooves to your regular programming lineup, contact djbolivia@gmail.com for details. We currently release SHG as an advance download to a number of stations globally on a weekly basis (at no charge), and we welcome inquiries from additional outlets.

Go to the Mix Downloads page on the main DJ Bolivia website if you'd like to check out a number of our older shows, or visit our SoundCloud page for individual tracks and remixes. And if you're interested in learning more about DJ'ing or music production, check out Jonathan Clark's extensive and very popular series of YouTube tutorials. There's a full & organized index of all the videos at:
djbolivia.ca/videos.html

We also have a file containing complete track listings from all of DJ Bolivia's radio shows, studio mixes, and live sets. The PDF version can be viewed from within your browser by clicking directly. Both the PDF and the Excel versions can be downloaded by right-clicking and choosing the "save link as" option:

View as PDF file: http://www.djbolivia.ca/complete_track_history_djbolivia.pdf
Download Excel file: http://www.djbolivia.ca/complete_track_history_djbolivia.xlsx













This show was actually recorded on video, and there are two versions available on YouTube. The one that most people will be interested in is this video, which features an audio feed exactly the same as the publicly released radio show episode:





Saturday, April 6, 2013

Audio Recording Tutorials #03 to #05 - Layered Multi-Track Recording


Videos #03 through #05 of my Audio Recording tutorial series are now online (and I have some additional study notes further down in this post). These three particular videos explain how to go about making a multi-track recording when you must record the tracks one after another in layers, rather than being able to perform everything simultaneously. My overall Audio Recording tutorial series is related more to home studio work than it is to DJ'ing, although I'm still covering the very basics of audio engineering and production work. The series is eventually going to expand into about thirty different videos about simple recording and audio engineering, everything from the basics of recording instruments and vocals, to the use of MIDI, to the theory of sound and audio, and eventually a number of advanced editing and recording techniques.





Audio Recording Tutorial #03: Layered Multitracking part 1

In this video, we use Adobe's Audition software to record the tracks that we're going to be working with on this project. I recorded a Neil Young song (After The Gold Rush) with four parts: piano, strings, bass, and acoustic guitar. This video describes the process of setting up the session, setting up individual tracks and arming them, recording the audio, and making sure the project was ready for editing.








Audio Recording Tutorial #04: Layered Multitracking part 2

In this video, I started to explain basic editing tasks such as using the razor/slice tool to cut a track up into clips, making changes to track volumes and panning, and adding volume and panning automation to individual clips. I also talked about Signal-To-Noise Ratios, the use of subtractive EQ'ing to give your instruments more space in a mix, and archiving.








Audio Recording Tutorial #05: Layered Multitracking part 3

We finished editing the individual tracks, I talked about snapping and zero crossings and cross-fade techniques, and then we bounced the edited tracks, did some EQ'ing, added reverb, and adjusted panning and volumes again. Finally, we bounced all the tracks to a single audio file, did some additional reverb and hard limiting/amplification work on it, and saved the final result to disk.








The Final Product: the song that was recorded

This is a very short video, just over three minutes long. It's the final edited copy of the song that I recorded, "After The Gold Rush." This song was originally written by Neil Young, and was the title track to his third album, released in 1970.








If you want to download the audio files that I was using in these videos, to better hear the audio (or experiment with it) in your own home studio setup, here’s a link to the two zipped folders containing the relevant files:

www.djbolivia.ca/tutorials/audiorecording03.rar

www.djbolivia.ca/tutorials/audiorecording04and05.rar


Once you've watched the videos above, I'd recommend that you spend some time learning a bit more about a few of the things that I covered in them:


Computer Technology: SSD's vs HDD's:


Fundamentals and Harmonics:


Zero Crossings & Snapping:




I also have quite a few other tutorial videos relating to DJ'ing, audio editing software, and studio equipment. I've got an organized list of those videos in the index of my "videos" page on my main website. If you're interested in any of those topics, you should bookmark this page right now:

www.djbolivia.ca/videos.html


Thanks for your interest in this series, and thanks for sharing this post or links to any of the videos.













Thursday, April 4, 2013

SHG Radio Show, Episode 159

Here’s a link so you can listen to the show or download it from SoundCloud:





Here are Track Listings for episode 159:

01. Tim Xavier feat George Rontiris, "Space Jockey" (Original Mix).
02. Luca Morris, "Senorita Fly" (Original Mix).
03. Fred Hush & Noseda, "A Joke That Kills" (Green Velvet Remix).
04. Julian Garces, "Next Step" (Original Mix).
05. Yann Solo & Karl Jefferson, "Want To See" (Original Mix).
06. PanPot, "Kepler" (Julian Jeweil Remix).
07. Hoxton Whores, "Stand Myself" (Kevin Andrews Mix).
08. Pirupa & Hollen, "El Cambio Politico" (Original Mix).
09. Steve Self, "Bease Knees" (Vortex Remix).
10. Caballero, "Let's Groove" (Original Mix).
11. Roger Shah, Sian, Kosheen, "Hide U" (Jerome Robins Vocal Mix).
12. Kadoc, "The Night Train" (Amo & Navas Rework).






Here are links to either personal websites, MySpace pages, or [usually] the SoundCloud pages for a few of the original artists and remixers/producers listed above:



Tim Xavier (Germany)
Luca Morris (Italy)
Fred Hush (Belgium)
Noseda (Belgium)
Green Velvet (United States)
Julian Garces (United States)
Karl Jefferson (France)
PanPot (Germany)
Julian Jeweil (France)
Hoxton Whores (Britain)
Kevin Andrews (Britain)
Pirupa & Hollen (Italy)
Steve Self (Greece)
Jerome Robins (Canada)
Amo & Navas (Spain)















By the way, I've been working on a number of tutorial videos lately about various aspects of DJ'ing and music production. Here's one that I completed recently:





Monday, April 1, 2013

Audio Recording Tutorial #02 - Basic Multi-Track Recording


I just put part two of my Audio Recording tutorial series online (and I have some additional study notes further down in this post). This series is more related to home studio work than it is to DJ'ing, although I'm still covering the very basics of audio engineering and production work. This series is eventually going to expand into about thirty different videos about simple recording and audio engineering, everything from the basics of recording instruments and vocals, to the use of MIDI, to the theory of sound and audio, and eventually a number of advanced editing and recording techniques.





Audio Recording Tutorial #02: Basic Multi-Track Recording

In this video, we start exploring multi-track recording in a single pass. I talk about external soundcards, which usually connect to your computer via USB or FireWire, and which give you better-quality audio signals flowing in and out of the computer. I talk about types of audio signals & cords, and the plugs that you'll commonly encounter (ie. XLR, 1/4", RCA/phono). I discuss basic information about microphones, including dynamic and condenser mikes, which are the most common types. Working with Adobe Audition, I record a simple piano performance using a dynamic and a condenser microphone, and then do some basic editing to make the track sound better. The song I played was an instrumental cover of "Wasted Time," written by Glenn Frey and Don Henley of the Eagles.





If you want to download the audio files that I was using in this video, to better hear the audio (or experiment with it) in your own home studio setup, here’s a link to a zipped folder containing the relevant files:

www.djbolivia.ca/tutorials/audiorecording02.rar





Audio Recording Tutorial #01: Basic Recording

This was the first video in the series, which you might want to watch before #02. It deals with very simple audio recording using the microphone on an HD video camera, and using a portable audio recording device.  It covers some of the basic types of processing that a studio engineer would put on an audio track (particularly on vocals), including reverb, delay, chorus, and EQ'ing, although it only goes into any detail on the last of those four topics.  I predominantly use Audacity to illustrate some basic concepts, with just a bit of Audition and VLC to help out with some other tasks.  Essentially, I recorded a song (a cover of Pearl Jam's "Elderly Woman" on acoustic guitar and vocals), extracted the audio from the recording devices, and then did some very simple processing in order to come up with a better-quality audio file.
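Of the four effects mentioned above, delay is the easiest one to understand outside of a plug-in: you add a quieter copy of the signal back onto itself, offset by a fixed amount of time, and let each repeat feed the next. This toy Python sketch (the delay time and feedback amount are arbitrary values, not settings from the video) shows the idea on a single "click":

```python
def delay_effect(samples, rate, delay_ms=250, feedback=0.5):
    """Add decaying echoes to a list of float samples.

    Each output sample gets `feedback` times the already-processed
    sample from `delay_ms` earlier added to it, so every repeat of
    the echo comes back a little quieter than the last.
    """
    d = int(rate * delay_ms / 1000)      # delay length in samples
    out = list(samples)
    for i in range(d, len(out)):
        out[i] += feedback * out[i - d]
    return out

# Demo on an impulse: one full-scale "click" at time zero
rate = 44100
click = [0.0] * rate
click[0] = 1.0
echoed = delay_effect(click, rate, delay_ms=250, feedback=0.5)

d = int(rate * 0.25)
print(echoed[0], echoed[d], echoed[2 * d])   # 1.0 0.5 0.25
```

Reverb is conceptually the same trick taken to an extreme: thousands of overlapping delays at irregular times, which is why it's usually left to dedicated plug-ins rather than written by hand.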








Once you've watched the two videos above, I'd recommend that you spend some time learning a bit more about a few of the things that I covered in them:


Microphones:


Phantom Power:


Signal Cords:


USB Condenser Microphones:


Mono versus Stereo:


Background Noise:




I also have quite a few other tutorial videos relating to DJ'ing, audio editing software, and studio equipment. I've got an organized list of those videos in the index of my "videos" page on my main website. If you're interested in any of those topics, you should bookmark this page right now:

www.djbolivia.ca/videos.html


Thanks for your interest in this series, and thanks for sharing this post or links to any of the videos.








