<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>sound theory Archives - The OC Recording Company Blog</title>
	<atom:link href="https://ocrecording.com/blog/tag/sound-theory/feed/" rel="self" type="application/rss+xml" />
	<link>https://ocrecording.com/blog/tag/sound-theory/</link>
	<description>Recording Studio, Record Label, Audio School and Music Publisher in Orange County, California founded by Asaf Fulks in 2005</description>
	<lastBuildDate>Sun, 12 Apr 2026 19:46:47 +0000</lastBuildDate>
	<language>en</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Sound Theory for Engineers: The Physics Behind Every Knob You Turn</title>
		<link>https://ocrecording.com/blog/sound-theory-for-engineers/</link>
					<comments>https://ocrecording.com/blog/sound-theory-for-engineers/#respond</comments>
		
		<dc:creator><![CDATA[Asaf Fulks]]></dc:creator>
		<pubDate>Sun, 12 Apr 2026 19:45:10 +0000</pubDate>
				<category><![CDATA[In The Studio]]></category>
		<category><![CDATA[Asaf Fulks]]></category>
		<category><![CDATA[audio engineer]]></category>
		<category><![CDATA[decibels]]></category>
		<category><![CDATA[Fletcher-Munson curves]]></category>
		<category><![CDATA[Frequency spectrum]]></category>
		<category><![CDATA[sound theory]]></category>
		<category><![CDATA[The OC Recording Company]]></category>
		<guid isPermaLink="false">https://ocrecording.com/blog/?p=1131</guid>

					<description><![CDATA[<p>&#8220;If you want to find the secrets of the universe, think in terms of energy, frequency and vibration.&#8221; — Nikola Tesla This is the chapter that separates engineers&#8230;</p>
<p>The post <a href="https://ocrecording.com/blog/sound-theory-for-engineers/">Sound Theory for Engineers: The Physics Behind Every Knob You Turn</a> appeared first on <a href="https://ocrecording.com/blog">The OC Recording Company Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p><em>&#8220;If you want to find the secrets of the universe, think in terms of energy, frequency and vibration.&#8221; — Nikola Tesla</em></p>



<p>This is the chapter that separates engineers who push buttons from engineers who understand what the buttons do. When you boost 3 kHz on a vocal, you are making a decision about frequency. When you move a microphone six inches closer and the bass gets fuller, you are hearing the proximity effect. When you notice that your mix sounds different at low volume than at high volume, you are experiencing the Fletcher-Munson curves in action. The physics is not separate from the music. It is the music.</p>



<h2 class="wp-block-heading">What Sound Actually Is</h2>



<p>Clap your hands. What just happened? Your palms collided, the impact pushed the surrounding air molecules together, and that disturbance rippled outward in every direction until it reached your eardrums. Sound is not a thing you can hold — it is an event. A disturbance rippling outward from a source, carrying energy from one place to another.</p>



<p>Every sound starts with something vibrating: a speaker cone, a vocal cord, a drum head, a guitar string. That vibration pushes air molecules together (compression) and pulls them apart (rarefaction), and this push-pull pattern travels outward like rings on a pond. If those pressure variations repeat between 20 and 20,000 times per second (20 Hz to 20 kHz) and are strong enough to cross your threshold of hearing, your brain interprets them as sound.</p>
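<p>As a rough sketch (not from the post itself), the compression/rarefaction cycle can be modeled as a pure sine tone: positive sample values stand in for compression, negative values for rarefaction, relative to ambient pressure. The function name, sample rate, and 440 Hz example are illustrative choices, not anything the chapter specifies.</p>

```python
import math

def pressure_wave(freq_hz, amplitude=1.0, sample_rate=48000):
    """One second of a pure tone. Positive values model compression,
    negative values model rarefaction, relative to ambient pressure."""
    return [amplitude * math.sin(2 * math.pi * freq_hz * i / sample_rate)
            for i in range(sample_rate)]

# A 440 Hz tone completes 440 compression/rarefaction cycles per second,
# comfortably inside the audible 20 Hz - 20 kHz window.
tone = pressure_wave(440)
```

<p>Real sounds are sums of many such components at different frequencies and amplitudes, which is exactly why the frequency spectrum below is worth memorizing.</p>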



<p>Here is the part that matters for engineering: energy cannot be created or destroyed, only converted. When a vocalist records a take, breath becomes vibration in the vocal cords (mechanical energy), which becomes pressure waves in the air (acoustic energy), which a microphone converts to voltage in a cable (electrical energy), which a converter encodes as data on a hard drive (digital data). Your entire job as an engineer is managing these conversions. The better you understand them, the better your recordings sound.</p>



<h2 class="wp-block-heading">The Frequency Spectrum: Your Road Map</h2>



<p>Every sound occupies a range of the frequency spectrum. Knowing what lives where is fundamental to making EQ, mic selection, and arrangement decisions. Here is a practical breakdown of the ranges you will work with every day:</p>



<p><strong>20–40 Hz</strong> is rumble. You feel it more than hear it. Essential for subwoofers, often needs to be removed from instruments that have no business down there.</p>



<p><strong>40–80 Hz</strong> is sub-bass. This is where kick drums and bass guitars live at their lowest. Difficult to reproduce on small speakers, which is why your mix sounds different in the car.</p>



<p><strong>80–200 Hz</strong> is warmth. Upper bass frequencies that add body to a sound. Too much here and your mix sounds boomy and muddy. On smaller speakers, this is the lowest range you will actually hear.</p>



<p><strong>200–750 Hz</strong> is where muddiness accumulates. When your mix sounds cloudy and undefined, this is usually the culprit. Cutting here on individual tracks clears up the mix without losing warmth.</p>



<p><strong>750 Hz–1.5 kHz</strong> is the telephone range. Essential for intelligibility, but too much energy here makes things sound cheap and honky.</p>



<p><strong>1.5–5 kHz</strong> is the presence range. This determines how &#8220;in your face&#8221; a vocal or instrument sounds. The human ear is most sensitive around 3–4 kHz, which is why boosting here makes things cut through a mix — but too much causes ear fatigue fast.</p>



<p><strong>5–20 kHz</strong> is air, brilliance, and sparkle. Sibilance lives here, as do the shimmer of cymbals and the breathiness of a vocal. Boosting adds clarity and openness; too much creates harshness.</p>
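<p>The breakdown above maps cleanly onto a small lookup table. The helper below is hypothetical (the names and the "mud zone" label are mine), but the boundaries are exactly the ranges listed in this post.</p>

```python
# Band boundaries taken from the breakdown above; names are informal labels.
BANDS = [
    (20, 40, "rumble"),
    (40, 80, "sub-bass"),
    (80, 200, "warmth"),
    (200, 750, "mud zone"),
    (750, 1500, "telephone range"),
    (1500, 5000, "presence"),
    (5000, 20000, "air / brilliance"),
]

def band_of(freq_hz):
    """Return the informal band name for a frequency in Hz."""
    for lo, hi, name in BANDS:
        if lo <= freq_hz < hi:
            return name
    return "outside the audible range"

print(band_of(3500))  # presence -- where the ear is most sensitive
```

<p>Tables like this are no substitute for ear training, but they give you a starting guess for where to reach with an EQ before you start sweeping.</p>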



<h2 class="wp-block-heading">Decibels: Why We Think in Logarithms</h2>



<p>The decibel is a logarithmic unit, and that matters because human hearing is logarithmic. We do not perceive loudness in a straight line — we perceive it in ratios. A 3 dB increase represents a doubling of acoustic power, but it takes roughly a 10 dB increase for something to sound twice as loud to your ears.</p>



<p>This has direct practical consequences. When a client says &#8220;turn the vocal up a little,&#8221; they usually mean about 1–2 dB — not 6 dB, which would be a dramatic change. When you are gain staging your signal chain, the difference between −12 dBFS and −6 dBFS on your input meter is 6 dB — a fourfold increase in power. Understanding the scale prevents you from making moves that are too large or too small.</p>
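<p>The arithmetic behind those claims is two one-line formulas. This sketch just evaluates the standard power-ratio definition of the decibel to confirm the numbers in the last two paragraphs; the function names are mine.</p>

```python
import math

def db_from_power_ratio(p2, p1):
    """Decibels for a power ratio: 10 * log10(P2 / P1)."""
    return 10 * math.log10(p2 / p1)

def power_ratio_from_db(db):
    """Invert: how many times more power a dB change represents."""
    return 10 ** (db / 10)

print(round(db_from_power_ratio(2, 1), 2))  # 3.01 -- doubling power is ~3 dB
print(round(power_ratio_from_db(6), 2))     # 3.98 -- +6 dB is ~4x the power
print(power_ratio_from_db(10))              # 10.0 -- +10 dB is 10x the power,
                                            #         yet sounds ~twice as loud
```

<p>That last line is the whole point: a tenfold jump in power only doubles perceived loudness, which is why small fader moves go further than beginners expect.</p>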



<h2 class="wp-block-heading">Fletcher-Munson: Why Your Mix Changes at Different Volumes</h2>



<p>The Fletcher-Munson curves (technically the ISO 226 equal-loudness contours) describe a phenomenon you have probably noticed: your mix sounds bass-heavy at high volumes and thin at low volumes. This is because human hearing is not flat — we are less sensitive to low and high frequencies at lower listening levels.</p>



<p>The practical takeaway: always check your mix at multiple listening levels. If you mix exclusively at high volume, you will underestimate the bass and overestimate the treble, and the mix will sound thin on earbuds. If you mix only at low volume, you will overbake the lows. Professional mixing engineers calibrate their monitors to a reference level (typically 85 dB SPL for film, lower for music) and check at both louder and softer levels before committing.</p>



<h2 class="wp-block-heading">Phase: The Invisible Mix Killer</h2>



<p>When two signals are in phase, their peaks and troughs align, reinforcing each other. When they are 180 degrees out of phase, their peaks meet troughs and they cancel. Complete cancellation is rare in practice, but partial phase issues are everywhere — and they are one of the most common reasons mixes sound thin, hollow, or lifeless.</p>



<p>The most frequent cause is multiple microphones on the same source at different distances. If you have a close mic and a room mic on a guitar amp, the sound arrives at each mic at a different time, creating a phase offset. The result: certain frequencies cancel, and the combined sound is weaker than either mic alone. The 3:1 rule — placing microphones at least three times as far apart as they are from the source — minimizes this, but always check by flipping the polarity of one mic and listening for what sounds fuller.</p>
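<p>You can see reinforcement and cancellation directly by summing two equal sine waves at different phase offsets. This is a simplified sketch (a single pure tone at an arbitrary 100 Hz, not a real two-mic recording), but the peaks it reports are the textbook values.</p>

```python
import math

def summed_peak(phase_deg, freq=100.0, sr=48000, n=1000):
    """Peak level of the sum of two equal sine waves offset by phase_deg."""
    phase = math.radians(phase_deg)
    return max(abs(math.sin(2 * math.pi * freq * i / sr) +
                   math.sin(2 * math.pi * freq * i / sr + phase))
               for i in range(n))

print(round(summed_peak(0), 2))    # 2.0  -- in phase: full reinforcement
print(round(summed_peak(180), 2))  # 0.0  -- 180 degrees out: total cancellation
print(round(summed_peak(90), 2))   # 1.41 -- partial offset: weaker than either
                                   #         expectation of a clean doubling
```

<p>With real sources the offset varies by frequency, so instead of total silence you get comb filtering: some frequencies cancel, others reinforce, and the combined tracks sound thinner than either mic alone. That is what you are listening for when you flip polarity.</p>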



<h2 class="wp-block-heading">Protect Your Hearing</h2>



<p>Your ears are your most important tools, and they do not regenerate. Exposure to levels above 85 dB SPL for extended periods causes permanent hearing loss. A live concert hits 100–110 dB SPL. Headphones at full volume can exceed 100 dB. Once the damage is done, it is done.</p>



<p>Invest in high-fidelity earplugs that attenuate evenly across the spectrum — not the foam ones from the hardware store that cut all the highs. Take breaks during long sessions. Monitor at reasonable levels. The engineer with the longest career is not the one with the best ears at 25 — it is the one who still has good ears at 55.</p>



<h2 class="wp-block-heading">Why This Matters</h2>



<p>Every concept in this post has a direct application behind the console. The frequency spectrum tells you where to reach with an EQ. Decibels tell you how far to push it. Fletcher-Munson tells you to check your work at different volumes. Phase tells you why two mics can sound worse than one. And hearing conservation tells you how to keep doing this for the next thirty years.</p>



<p>The engineers who understand the physics make faster decisions, catch problems earlier, and deliver better recordings. The ones who skip it spend their careers guessing.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><em>This post is adapted from Chapter 2 of <a href="https://ocrecording.com/blog/book/">In the Studio: Audio Engineering &amp; Music Production Techniques</a> by Asaf Fulks — a 468-page textbook covering the complete recording, mixing, mastering, and business workflow. Coming soon from The Forum Press.</em></p>
<p>The post <a href="https://ocrecording.com/blog/sound-theory-for-engineers/">Sound Theory for Engineers: The Physics Behind Every Knob You Turn</a> appeared first on <a href="https://ocrecording.com/blog">The OC Recording Company Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://ocrecording.com/blog/sound-theory-for-engineers/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
