Legacy of Adam (2019-)

Title: Legacy of Adam

Animation series created and directed by Roger Ghilemoen. Journey through time with The Legacy of Adam. Discover the captivating stories of biblical characters and events, from the dawn of Creation to the ultimate Redemption. The series has been dubbed into Swahili, African English, Somali and Norwegian. Norwegian voice actors include Stig Henrik Hoff, Line Verndal, Trond Høvik, Lisa Børud, Svein Tindberg, Maria Engås Halsne and Sondrey Mulongo Nystrøm.

We were engaged during the second season to help the already established audio team raise the bar and take the audio production further, both by providing finalised material and by producing educational video lectures to train the crew. To prepare for dubbing into other languages, we also wrote best-practice instructions to guide the overseas crews and secure consistent, cohesive results.

Director and Writer: Roger Ghilemoen

The Multichannel Downmix Integrity Challenge: EQ, Phase Coherence, and the 7.1.4 → 2.0 Collapse

The goal of an immersive mix in a high-channel count format like Dolby Atmos (e.g., 7.1.4) is to create a captivating sonic world. However, the true test of engineering skill lies in ensuring that this dense, spatial soundfield translates seamlessly when mathematically reduced, or “folded”, to 5.1 and finally to stereo (2.0).

When individual channel processing (EQ, dynamics) is applied without care, the downmix becomes a minefield of phase cancellation, comb filtering, and loudness errors that can leave a pristine mix sounding unpredictable and degraded on a consumer-grade system.

The Downmix Algorithm: A Summation with Consequences

A downmix is not a simple average; it is a weighted summation of all channels into a reduced channel count, governed by a downmix matrix defined in the metadata. For a standard stereo downmix, the Left channel (L_mix), for instance, is calculated by summing Left (L), Center (C), Surround Left (Ls), Rear Surround Left (Lrs), and a portion of the Height channels (Ltf, Ltr).

The channel fold-down is critical, and often (but not always) follows a standard setting:

L_mix = L + (0.707 × C) + (0.707 × Ls) + … where 0.707 is the −3 dB amplitude coefficient

This is the mathematical root of the problem: anytime two signals are summed, even with ideal amplitude coefficients, if their phase relationship is not identical, you introduce comb filtering, which results in frequency-dependent amplitude cancellation.
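
To make the weighted summation concrete, here is a minimal sketch in Python/NumPy. The −3 dB coefficients and the reduced 5.1 channel set are illustrative assumptions for the example; a real fold-down takes its coefficients from the downmix metadata of the delivery spec.

```python
import numpy as np

def fold_down_to_stereo(ch, c=0.7071):
    """Fold a 5.1 channel dict into L/R with illustrative -3 dB coefficients."""
    l_mix = ch['L'] + c * ch['C'] + c * ch['Ls']
    r_mix = ch['R'] + c * ch['C'] + c * ch['Rs']
    return l_mix, r_mix

# Demonstrate the phase problem: C carries the same 1 kHz tone as L,
# but rotated 45 degrees, so the fold-down sums to less than expected.
fs = 48000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000 * t)
ch = {
    'L': tone, 'R': tone,
    'C': np.sin(2 * np.pi * 1000 * t + np.pi / 4),  # phase-rotated copy
    'Ls': np.zeros(fs), 'Rs': np.zeros(fs),
}
l_mix, _ = fold_down_to_stereo(ch)
ideal = tone + 0.7071 * tone                         # perfectly in-phase sum
print(f"RMS with 45 deg rotation:  {np.sqrt(np.mean(l_mix**2)):.3f}")
print(f"RMS if perfectly in phase: {np.sqrt(np.mean(ideal**2)):.3f}")
```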


Multichannel EQ Strategies and Phase Rotation

The decision to apply EQ and dynamics globally (all channels linked to the same settings) or individually is a choice between safety and creative risk.

1. Linked Processing: The Safe Approach

When EQ or dynamics are applied identically to all channels (L, C, R, Ls, Rs, etc.), the relative phase relationship between those channels is maintained (although the actual sonic content still differs between the channels). This is the simplest way to preserve phase coherence, but it severely limits creative control. For example, if the entire soundfield requires a slight high-shelf boost, a linked EQ preserves the relative phase, ensuring that when the Ls and L channels sum in the stereo fold-down, the boosted frequencies cancel no more than the differences already present in the original signals.

2. Individual Channel Processing: The Phase Risk

The risk occurs when a channel is EQ’d differently from its neighbours, such as applying a high-pass filter (HPF) to the dialogue-only Center (C) channel while leaving the Left (L) and Right (R) channels untouched, or when rolling off the high frequencies in the surrounds to improve, or creatively alter, the spatial imaging.

Standard digital EQs are minimum-phase filters. While powerful, these filters inherently introduce phase rotation: a phase shift that is non-linear (dependent on frequency) and concentrated around the corner frequency (ω_c).

  • If a 150 Hz HPF leaves the C channel with roughly 45° of phase rotation around 300 Hz while the L and R channels have 0° shift, then when the C channel is summed into L_mix, that region will partially cancel against the identical, unshifted frequencies in L and R (see the sketch after this list).
  • Consequence: The boosted or “clarified” dialogue on the C channel may become thinner and recessed in the 2.0 downmix. The creative decision can be undermined by the phase shift.
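
To sanity-check an interaction like this, compute the filter’s phase response and the resulting summation loss directly. A minimal sketch, assuming SciPy and a 2nd-order Butterworth high-pass as a stand-in for the channel EQ:

```python
import numpy as np
from scipy.signal import butter, freqz

fs = 48000
b, a = butter(2, 150, btype='highpass', fs=fs)  # minimum-phase 150 Hz HPF

# Phase rotation of the filtered (C) channel at a few spot frequencies:
freqs = np.array([150.0, 300.0, 1000.0, 2000.0])
_, h = freqz(b, a, worN=freqs, fs=fs)
phase_deg = np.angle(h, deg=True)

# Loss when summing a unit-level filtered copy against an unfiltered one,
# relative to the perfectly coherent sum: 20*log10(|1 + e^{j*phi}| / 2)
loss_db = 20 * np.log10(np.abs(1 + h / np.abs(h)) / 2)

for f, p, l in zip(freqs, phase_deg, loss_db):
    print(f"{f:6.0f} Hz: phase {p:6.1f} deg, summation loss {l:6.2f} dB")
```

At the 150 Hz cutoff the two copies are 90° apart, costing about 3 dB against a coherent sum; by 2 kHz the rotation, and with it the loss, has become negligible, which is exactly why the damage is frequency-dependent rather than a simple level drop.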

The Physics of Filtering: Pre-ringing vs. Phase Rotation

For the professional engineer, the choice of filter type is paramount, directly determining whether the downmix integrity is compromised by time-domain issues (pre-ringing) or frequency-domain issues (phase rotation).

Minimum Phase Filters (Standard EQs)

Most analogue-emulating and standard digital EQs (built on classic filter responses such as Butterworth and Bessel) are minimum phase. They are efficient and fast, but they introduce phase rotation.

  • Trade-Off: Excellent transient response (no pre-echo).
  • Downmix Risk: High, due to frequency-dependent phase differences between channels that are summed.

Linear Phase Filters

These specialised filters (often used in mastering) introduce a uniform, fixed time delay (latency) across the entire frequency spectrum. This eliminates frequency-dependent phase rotation: the phase shift is proportional to frequency (φ(ω) = −ωτ), which is equivalent to a constant group delay (see the sketch after the trade-off list below).

  • Trade-Off: Pre-ringing: To achieve a linear phase response, the filter must “look ahead” in time. This non-causal processing manifests as pre-ringing, a subtle pre-echo artefact before sharp transients. While often masked, it can become audible in highly dynamic material.
  • Downmix Benefit: Low downmix risk. Because all frequencies are shifted by the same time, the relative phase difference between two signals remains constant, minimising cancellation upon summation.
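
For contrast, a hedged sketch of the linear-phase alternative, again assuming SciPy: a symmetric FIR high-pass whose group delay is a constant (numtaps − 1) / 2 samples at every frequency, so relative phase between channels survives summation, at the price of latency and potential pre-ringing.

```python
import numpy as np
from scipy.signal import firwin, group_delay

fs = 48000
numtaps = 513                                        # odd -> Type I linear phase
taps = firwin(numtaps, 150, pass_zero=False, fs=fs)  # linear-phase 150 Hz HPF

w, gd = group_delay((taps, [1.0]), fs=fs)            # group delay in samples
mid = len(gd) // 2                                   # a point well inside the passband
print(f"group delay: {gd[mid]:.1f} samples at all frequencies "
      f"(expected {(numtaps - 1) / 2:.1f}, i.e. "
      f"{(numtaps - 1) / 2 / fs * 1000:.2f} ms of latency)")
```

The constant delay is the cost: roughly 5.3 ms at 48 kHz with these settings, which the DAW’s delay compensation must absorb so the filtered channel stays aligned with the rest of the mix.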

The Downmix Conclusion: Monitoring Lo/Ro and 2.0

The only way to manage these risks is to continuously monitor the stereo fold-down (2.0 Lo/Ro) while mixing in 7.1.4, and to make informed, careful decisions.

The final integrity of the stereo downmix is contingent on three factors:

  1. Strict adherence to Loudness Standards: The downmix must still meet the target LUFS/LKFS without excessive limiting.
  2. Strategic EQ Choice: Using Linear Phase EQs on critical individual channels (like C) that fold heavily into L/R may prevent cancellation, provided the resulting latency and pre-ringing are acceptable. It is not a matter of one-setting-fixes-all, but weighing pros and cons while monitoring in a good listening environment.
  3. Phase Metering: Routinely checking the phase relationship between the L/R channels in the 2.0 downmix using a phase scope (vectorscope) to immediately spot frequency-specific energy loss (a minimal correlation-meter sketch follows this list).
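
The figure a phase scope reports can also be computed offline. A minimal correlation-meter sketch in Python/NumPy (+1 is mono-compatible, 0 uncorrelated, −1 fully out of phase; the array names are our own):

```python
import numpy as np

def stereo_correlation(left, right, eps=1e-12):
    """Phase-correlation figure between the 2.0 downmix channels."""
    num = np.mean(left * right)
    den = np.sqrt(np.mean(left**2) * np.mean(right**2)) + eps
    return num / den

fs = 48000
t = np.arange(fs) / fs
l = np.sin(2 * np.pi * 2000 * t)
print(stereo_correlation(l, l))    # in phase: ~ +1.0
print(stereo_correlation(l, -l))   # inverted: ~ -1.0
print(stereo_correlation(l, np.sin(2 * np.pi * 2000 * t + np.pi / 2)))  # ~ 0.0
```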

Mastering the downmix is a sophisticated exercise in managing mathematical coefficients, filter physics, and the time-frequency domain simultaneously. It is the defining skill that ensures the highest quality results across all distribution platforms.

The Silent Killer of Content: Why Bad Audio Is Costing Your Brand

You’ve spent hours crafting the perfect message, your visuals are stunning, and the script is a masterpiece. You hit publish, expecting the likes and shares to roll in. But the engagement is low, and your message isn’t landing. What went wrong?

The answer is often the unseen enemy of great content: bad audio.

While we can forgive a shaky camera or a slightly imperfect graphic, our brains are hardwired to reject poor sound. It’s a subconscious cue that screams “unprofessional” and “low quality.” Here’s how a lack of attention to audio details can single-handedly diminish your returns and erode your brand’s credibility.

The Problem Isn’t Just Annoyance—It’s a Loss of Trust

Consider these common audio mistakes and the immediate impact they have on your audience:

1. Unwanted Noise and Harsh Tones

Have you ever tried to listen to a podcast filled with a constant hissing noise, a distracting hum from an air conditioner, or a voice that sounds harsh, tinny, or “boxy”? These are not just minor irritations; they are physical discomforts for the listener. Your audience subconsciously translates this discomfort into a negative feeling about your brand. If the content is physically unpleasant to consume, the listener will not stick around to hear your message, no matter how valuable it is.

2. The Disorienting Technical Glitch

Nothing breaks immersion faster than uneven loudness. A quiet voice is suddenly drowned out by blaring background music, forcing the listener to frantically adjust the volume. This constant fumbling with the controls is a direct interruption of their engagement.

Similarly, messed-up stereo imaging, like a narrator’s voice coming only from the right speaker, is disorienting and shows a fundamental lack of technical care. It’s a clear signal that the production was rushed, and the details were overlooked. These glitches don’t just feel like a mistake; they feel like disrespect for the viewer’s time and attention.

3. The Perception of Carelessness

When content is too quiet, it implies a lack of confidence, as if the creator is afraid to be heard. When it’s too loud, it feels aggressive and poorly controlled. Both extremes, along with the other issues mentioned, create a perception of carelessness.

This is where the real damage to your brand occurs. In a competitive market, professionalism is a non-negotiable. If you can’t be trusted to deliver a polished and technically sound piece of content, what does that say about your commitment to quality in your products or services?

Bad Audio Leads to Diminished Returns

The direct consequences of poor audio are tangible and measurable:

  • Lower Engagement: Listeners click away faster. They are less likely to finish a video or a podcast, which hurts your view duration and audience retention metrics.
  • Reduced Shareability: Will people share content that is difficult to listen to? Bad audio becomes a barrier to word-of-mouth marketing and organic growth.
  • Weakened Brand Authority: High-quality audio makes a message feel more authoritative and credible. Poor audio undermines this, making your brand seem less expert and trustworthy.

The resources you’ve invested in visual production, marketing, and distribution are essentially wasted if your audience can’t stand to listen to your content.

A Question of Trust

This all leads to one critical question: Can a service provider who doesn’t care about the details be trusted with your money?

In today’s digital landscape, attention to detail is a currency. A brand that invests in professional audio production, ensuring clean, crisp, and balanced sound, is sending a powerful message. It says, “We care about our craft. We respect our audience. And we stand behind the quality of our work.” This is the kind of brand that earns trust, fosters loyalty, and ultimately sees a return on its investment.

Don’t let bad audio be the silent killer of your brand’s credibility.

Directors, Producers, Editors: Dare to Be Bold: Why Your Score Needs to Fight for Its Feelings

You’ve poured your soul into the visual narrative. You’ve meticulously shaped performances, designed compelling sound effects, and crafted impactful dialogue. Then comes the music: that transcendent layer meant to elevate emotion and immerse your audience. You listen to the score on its own, and it’s magnificent: powerful, nuanced, and deeply moving.

But then, it happens.

The score is added to the picture, blended with dialogue and sound design, and suddenly, something gets lost. The punch softens. The presence recedes. That vibrant, emotional track, once so potent on its own, seems to fade into the background, struggling to connect with the audience as actively as you’d hoped.

At solidskillsy. in Kristiansand, Norway, we understand this phenomenon intimately. It’s not a flaw in the score itself, but a universal challenge of sound and picture integration. And we’re here to propose a solution: to truly move your audience, your score needs to dare to be bold (and you need to let it be so), composed and mixed to survive the fierce competition of the complete sonic and visual canvas as an equal agent.

The Softening Effect: Why Great Scores Can Lose Their Way

When a musical score is combined with visuals, dialogue, and sound effects, the audience’s brain is processing a tremendous amount of information simultaneously. This creates a kind of “perceptual competition”:

  • Dialogue Masking: As we’ve discussed before, dialogue occupies crucial mid-range frequencies, often directly competing with musical elements.
  • Sound Design Immersion: Realistic or stylised sound effects demand attention, building an acoustic space that can absorb or distract from musical nuance.
  • Visual Dominance: The moving image itself is a powerful draw, often making the audience less consciously aware of subtle musical cues.
  • Cognitive Load: The sheer volume of incoming sensory data can lead to the brain “down-prioritising” elements that don’t immediately jump out.

What sounds perfect in isolation can, therefore, become a beautiful but passive layer in the final mix. Of course, that may be exactly what the narrative needs, but when it should claim space, let it do so.

The Solution: Cultivate Boldness for Active Emotional Response

To ensure your score actively cuts through this competition and fulfils its potent role in emotional storytelling, it needs to be designed from the ground up for impact within context:

  1. Exaggerate Every Gesture: This doesn’t mean “make it louder,” but “make it clearer.” Each emotional arc, thematic statement, and rhythmic pulse within the score needs to be slightly exaggerated, its emotional intention pronounced enough to register even amidst the sensory richness of the film. Subtle gestures can get lost; bold ones resonate.
  2. Cultivate Punchy Transients: The initial “attack” of instruments (the snap of a snare, the pluck of a string, the start of a horn blast) is crucial for presence and rhythmic drive. By emphasising these transients appropriately, the music maintains its energy and clarity, allowing it to “grab hold” without needing to be excessively loud or bombastic.
  3. Craft Airy Highs: Carefully managed high frequencies can add sparkle, clarity, and a sense of “air” that helps the score breathe and cut through the mix without becoming harsh. These bright, clear elements can provide definition and emotional lift, even in dense scenes.
  4. Ensure a Clean Low End/Sub: A powerful, defined bass foundation and sub-frequency content provide emotional weight and impact. When meticulously crafted to be clean and controlled (so as not to compete with sound-design LFE elements), this low end delivers visceral power without muddying dialogue or creating unwanted rumble. It’s the silent force that grounds the emotional experience.

By consciously pushing these elements in a balanced and strategic manner, the score becomes an active participant in stimulating the audience’s emotional responses, rather than merely a background accompaniment, making the music more rhythmically defined, tonally vibrant, and dynamically impactful.

Our Appeal to You: Trust the Whole

Dear Directors, Producers, and Editors, we understand the immense pressure to perfect every element. Our appeal to you is this:

  • Dare to be bold with your score’s integration. Allow your composer and mixing engineer the creative license to make the music contribute to the narrative in its own right, rather than simply underscoring it.
  • Resist judging the score in isolation. The raw score file, while a beautiful piece of art, is only one ingredient. Its true power is unleashed when it’s precisely woven into the full tapestry of visuals, dialogue, and sound design. Judge its impact in context.
  • Embrace the symbiotic relationship. When music, dialogue, and sound design are meticulously balanced, each contributing its part with conviction in a dialogue rather than existing in parallel, the result is a deep immersion and an emotional resonance that moves your audience profoundly.

At solidskillsy., we are dedicated to ensuring your score not only sounds incredible but performs its critical emotional role actively and powerfully within the complete picture. Let’s create an auditory experience that truly activates your audience.

Ready to make your score sing with unparalleled impact? Let’s discuss your next project in Kristiansand.

The Sonic Signature: Building Character Through Musical Gestures, Air, Transients & Sub-Bass

In the realm of visual storytelling, a truly resonant original score doesn’t just provide background music; it becomes an integral character itself, breathing life, emotion, and profound identity into the narrative. As we explored in “The Unseen Heartbeat: What Makes an Oscar-Winning Original Score Resonate,” such scores possess profound emotional resonance, distinctive thematic identity, seamless narrative integration, and bold originality. But how do composers craft this deep connection, building the very essence of a character through sound?

At solidskillsy. in Kristiansand, Norway, we believe in the power of orchestrating musical gestures to forge these intricate sonic identities. A masterful score defines characters not just through memorable melodies, but through a nuanced manipulation of gestures, air, transients, and sub-bass, creating a unique auditory blueprint for each personality.

Beyond Melody: The Architecture of a Character’s Sonic Soul

A compelling character theme is rarely just a hummable tune. It’s an entire sonic signature: a unique blend of texture, rhythm, and harmonic language that speaks volumes about their inner world, their struggles, and their triumphs. This is where the intentional application of specific musical features becomes key:

  1. Orchestrating Gestures: The Character’s Expressive Language
    • The Concept: As discussed in our previous post, a musical gesture is an intentional statement, a melodic contour, a rhythmic pattern, a dynamic shift, or a unique timbre. For character building, these gestures become a character’s inherent sonic “rhetoric.”
    • Application: Consider a character’s relentless determination, often conveyed by a driving, insistent rhythmic pulse in the low strings or percussion. Or, conversely, an expansive, yearning swell from the brass, portraying a character’s emotional depth or longing. A character’s vulnerability might be expressed through a sparse, sustained single note gesture from a unique solo instrument, while their power might be conveyed by a massive, descending chord gesture. These distinct gestures contribute directly to the score’s distinctive thematic identity.
  2. Airy Highs: The Character’s Ethereal Qualities, Vulnerabilities, or Grandeur
    • The Concept: “Airy highs” provide sparkle, perceived spaciousness, and often a sense of purity, mystery, or emotional fragility.
    • Application: Composers frequently use high-register strings (often with evocative reverb), ethereal synth pads, or soaring vocal lines to represent spiritual, contemplative, or vast emotional landscapes tied to a character’s journey or inner state. This “air” can evoke a character’s isolation, their hope, or the sheer scale of their aspirations, adding emotional resonance. It gives sonic space for characters to breathe and for their internal thoughts to float.
  3. Punchy Transients: The Character’s Resolve, Action, or Internal Conflict
    • The Concept: “Punchy transients” refer to the sharp, defined attacks of sounds that provide clarity, drive, and impact.
    • Application: For characters of action or strong will, as an obvious example, scores often employ powerful, percussive elements (whether orchestral drums or custom-designed hits), sharp brass stabs, or aggressive string pizzicatos. These crisp attacks embody a character’s determination, their physical force, or the sudden, jarring nature of their internal conflict, reinforcing their thematic identity with undeniable energy.
  4. Clean Low End/Sub: The Character’s Weight, Power, or Unseen Threat
    • The Concept: A “clean low end/sub” provides powerful, defined bass and sub-bass frequencies without muddiness, delivering visceral impact.
    • Application: Deep, resonant, and often textured sub-bass elements are frequently used to establish a character’s immense presence, their underlying power, or the foundational dread they inspire. This visceral low end creates a physical connection, a profound emotional resonance that bypasses conscious thought and embeds the character’s weight directly into the audience’s being. It’s the grounding force, whether menacing or majestic.

The Unseen Heartbeat of Character: Blending for Narrative Impact

Just as “The Unseen Heartbeat” emphasised, success lies in narrative integration and subtlety. When these sonic elements, that is, expressive gestures, airy highs, punchy transients, and resonant subs, are meticulously woven together, they create a character’s sonic identity that is both distinctive and seamlessly integrated into the story. It’s a form of originality and innovation in character development.

This is the power of a score that is not merely heard but felt, shaping the audience’s perception of a character in a way that visuals or dialogue alone cannot. It’s an investment in the very soul of your film, ensuring that your high-budget rhetoric translates into truly unforgettable performances.

At solidskillsy., we are not just musicians; we are sonic storytellers. We specialise in crafting bespoke scores that delve deep into character, translating their essence into powerful musical gestures that resonate with unparalleled impact and immersion.

Ready to give your characters a truly unforgettable sonic signature? Let’s discuss your vision.

The Unseen Architect: Mastering Ambience and Room Tone for Believable Worlds

In the grand tapestry of film, television, and video games, much attention is rightly given to dynamic dialogue, soaring musical scores, and impactful sound effects. Yet often the most profoundly influential, and least consciously noticed, element is the ambience: the pervasive, subtle background sound that defines a space, builds atmosphere, and grounds your audience in a believable world. Along with its quiet cousin, room tone, ambience acts as the unseen architect, shaping the very foundation of your project’s acoustic space.

At solidskillsy. in Kristiansand, Norway, our sound design philosophy recognises that true immersion isn’t just about what you hear, but what you feel. We meticulously sculpt these foundational sonic layers, transforming mere background noise into a powerful narrative tool that underpins your project’s premium quality and high-budget rhetoric.

Beyond Background Noise: Why Ambience Matters

Ambience is far more than just “filler.” It is the sonic fingerprint of an environment, communicating vital information that visuals alone cannot convey:

  • Defines Physical Space: Is it a vast, echoing cave or a cramped, intimate attic? A bustling city street or a desolate, windswept plain? Ambience immediately tells the audience about the size, materials, and nature of the location.
  • Establishes Atmosphere & Mood: A subtle hum can convey tension, distant birdsong can evoke peace, and the cacophony of a marketplace can create a sense of vibrant life or overwhelming chaos. Ambience directly manipulates emotion.
  • Grounds the Narrative: By creating a consistent, believable sonic environment, ambience helps to suspend disbelief, making characters and events feel more real and impactful within their surroundings.
  • Psychological Impact: From the unsettling silence of a deserted hallway to the comforting drone of a familiar room, ambience evokes psychological states, such as comfort, unease, isolation, and wonder.

The Craft of the Unseen: Techniques for Mastering Ambience

Mastering ambience is a nuanced art, requiring meticulous attention to detail:

  1. Layering for Complexity: Rarely is a single sound enough. Professional ambience is built by carefully layering multiple elements: a base bed (e.g., city distant), specific elements (e.g., closer traffic, pedestrian chatter/walla), and occasional punctuating sounds (e.g., a car horn, a distant siren). Each layer contributes to a richer, more believable tapestry.
  2. Dynamic Shaping: Real-world environments are not static. Ambience must breathe and evolve with the narrative. This involves subtle volume automation (ducking for dialogue, swelling for intensity), panning/placement, and even gentle compression or expansion to make the environment feel alive and responsive (a minimal layering-and-ducking sketch follows this list).
  3. Spatialisation: Using panning, depth cues (e.g., judicious reverb and pre-delay), and in immersive formats like Dolby Atmos, object-based placement, sound designers accurately position ambient elements. This creates a convincing three-dimensional acoustic space, making a distant car sound truly distant, or a rattling vent appear overhead.
  4. EQ & Filtering: Just like with music, ambience needs to be sculpted with EQ. This ensures it doesn’t mask dialogue or musical elements, and it can enhance realism (e.g., filtering an exterior sound to simulate it being heard from indoors).
  5. “Sweetening” with Spot FX: Adding specific, intermittent sound effects (e.g., a creaking floorboard, a distant dog bark, a clock chime) that punctuate the broader ambience, adding micro-details that contribute immensely to realism and immersion.
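
To illustrate points 1 and 2 in miniature, here is a toy Python/NumPy sketch of layering plus a crude dialogue duck. The gains and noise stand-ins are invented for the example; in a real session this lives in faders and automation, not code.

```python
import numpy as np

fs = 48000
n = fs * 10                                # ten seconds of ambience
t = np.arange(n) / fs

bed = np.random.randn(n) * 0.05            # base bed ('city distant' stand-in)
traffic = np.random.randn(n) * 0.08        # closer element, a touch louder

spot = np.zeros(n)                         # one punctuating spot FX at 6.0 s
horn = 0.3 * np.sin(2 * np.pi * 440 * t[: fs // 4])  # 250 ms 'car horn' stand-in
spot[6 * fs : 6 * fs + fs // 4] = horn

duck = np.ones(n)                          # dip everything 6 dB under dialogue
duck[2 * fs : 5 * fs] = 10 ** (-6 / 20)    # dialogue region: 2-5 s
# (a real ducker ramps in and out; a hard step like this would be audible)

ambience = (bed + traffic + spot) * duck   # the layered, shaped result
```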

The “Silence” That Speaks Volumes: Room Tone’s Crucial Role

Often confused with ambience, room tone is the unique sonic fingerprint of a specific interior space when all intended sound sources are quiet. It’s the subtle hum of electricity, the distant rumble of the building, the air conditioning, or just the sound of “nothing” in that particular acoustic environment.

  • Continuity Across Edits: Room tone is critical for seamless dialogue editing. Without it, edits between takes or scenes would result in jarring silences, pulling the audience out of the experience.
  • Prevents “Dead Air”: Its subtle, consistent presence prevents the mix from feeling artificial, lifeless, or like it’s floating in a vacuum. It provides the natural “bed” upon which dialogue and other elements rest.

Seamless Integration: Supporting Dialogue and Music

Mastered ambience and room tone work in concert with all other sonic elements:

  • They provide a natural and believable backdrop for dialogue, making voices sound grounded and connected to their environment.
  • They can complement the musical score, by filling out the sonic spectrum without competing, or by subtly shifting in mood to mirror the emotional shifts in the music.

At solidskillsy., we painstakingly craft every layer of your project’s sonic environment. From the imperceptible hum of room tone to the grand sweep of a dynamic atmosphere, we build worlds that are heard, felt, and believed, aiming to deliver the highest calibre of audio post-production from our studio in Kristiansand.

Ready to make your project’s world truly come alive through sound? Let’s discuss the unseen details.

Deconstructing the Immersive Canvas: Strategic Object-Based Mixing Workflows for Dolby Atmos & Beyond

The soundscape of modern media is no longer confined to stereo or even traditional surround. With the advent of technologies like Dolby Atmos, we’ve entered the era of immersive audio, where sound is sculpted in a three-dimensional space, enveloping the listener. For content creators aiming for true immersion and high-budget rhetorics, understanding object-based mixing workflows is no longer optional; it’s transformative.

At solidskillsy. in Kristiansand, Norway, we don’t just mix sound; we engineer 3D acoustic spaces, guiding you through the strategic decisions that unlock the full potential of next-generation audio experiences.

Beyond Channels: The Power of Objects

Traditional mixing (stereo, 5.1, 7.1) is channel-based. You mix to a fixed number of speakers. Object-based audio works differently:

  • Audio Objects: Individual sounds (e.g., a specific character’s dialogue, a single explosion, a bird flying overhead) are treated as “objects.” Each object has associated metadata that describes its position (X, Y, Z coordinates), size, and movement over time (a toy sketch of this model follows this list).
  • Beds: These are traditional channel-based mixes (e.g., 7.1.2 for Atmos) that provide the foundational, often ambient or musical, elements that don’t need discrete positioning. They ensure a fallback layer if objects can’t be rendered.
  • The Renderer: The magic happens in the Dolby Atmos Renderer (or similar immersive rendering engines). It takes the combination of beds and objects and, using the metadata, calculates in real time how to play those sounds back on any speaker configuration, from a full-blown cinema to a home theatre, soundbar, or even headphones (via binaural rendering).
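
As a mental model only (the Dolby toolchain’s actual interfaces are proprietary and far richer), an object is essentially audio plus positional metadata that a renderer interprets per playback layout. A hypothetical Python sketch:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class AudioObject:
    """Illustrative stand-in for an immersive audio object."""
    name: str
    samples: np.ndarray    # mono audio
    x: float = 0.0         # -1 (left)  .. +1 (right)
    y: float = 0.0         # -1 (rear)  .. +1 (front)
    z: float = 0.0         #  0 (ear level) .. +1 (overhead)

def render_to_stereo(obj: AudioObject) -> np.ndarray:
    """Naive constant-power pan on x alone; a real renderer uses the full
    position/size/movement metadata and targets any speaker layout."""
    angle = (obj.x + 1) * np.pi / 4          # map -1..+1 onto 0..pi/2
    gains = np.array([np.cos(angle), np.sin(angle)])
    return np.outer(gains, obj.samples)      # shape (2, n)

bird = AudioObject("bird_flyover", np.random.randn(48000) * 0.1, x=0.5, z=0.8)
stereo = render_to_stereo(bird)              # z is simply discarded in 2.0
```

The point of the sketch is the unit of thought: position metadata travels with the sound, and each playback system makes of it what it can, including discarding the height information a stereo fold-down cannot represent.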

Strategic Workflow Decisions for Immersive Mixing:

The power of object-based mixing lies in strategic decision-making:

  1. Bed vs. Object: The Foundational Choice:
    • Beds for Foundation & Ambience: Music, sustained atmospheric sounds, or background ambiences often work best as beds. They provide a stable, consistent foundation across all playback systems.
    • Objects for Discreteness & Movement: Sounds that need to be pinpointed in space, move dynamically, or have a distinct presence (e.g., a specific prop sound effect, a character’s voice from a particular direction, a vehicle passing by, a rain drop hitting the roof) are ideal candidates for objects.
  2. Thinking in 3D: Elevating Your Narrative:
    • Height Channels: Objects allow for precise placement and movement in the height dimension, crucial for creating truly immersive environments (e.g., a helicopter overhead, rain on a roof, sounds from an upstairs window).
    • Depth & Distance: Careful object placement and distance attenuation (how sound changes with distance) can create a profound sense of depth and scale within the acoustic space.
    • Focus & Direction: Guiding the audience’s attention by placing key sonic elements precisely in relation to the visual frame.
  3. Mixing for Scalability & Downmixes:
    • A critical aspect of object-based mixing is that you are creating one master file (the Atmos master file or ADM BWF) from which all other formats (5.1, stereo, binaural) can be derived automatically by the renderer.
    • This requires careful attention during the mix to ensure that these automatic downmixes translate well, preventing phase issues or elements disappearing. The engineer must constantly monitor how the mix sounds in various speaker configurations.

Deconstructing the immersive canvas requires not just technical skill, but a holistic understanding of how sound can tell a story, evoke emotion, and draw the audience deeper into your content without confusing them or drawing their attention away from the narrative on screen. It’s about designing an acoustic space that transcends traditional boundaries.

At solidskillsy., our state-of-the-art immersive mixing suite in Kristiansand, Norway, is designed for object-based workflows. We love crafting captivating 3D audio experiences that define premium quality and elevate your project’s rhetoric to new spatial dimensions.

Ready to sculpt your sound in three dimensions? Let’s discuss your immersive audio project.


The Silent Art of Support: Composing & Orchestrating Complex Cues for Intelligible Dialogue

In the powerful tapestry of film, television, and video games, music elevates emotion, sets pace, and defines atmosphere. Yet, there’s an invisible line composers and orchestrators must never cross: obscuring the dialogue. A brilliant score that buries critical lines is a missed opportunity, frustrating audiences and undermining the very narrative it’s meant to support.

At solidskillsy. in Kristiansand, Norway, we believe in the harmonious coexistence of all sonic elements. Our approach to bespoke composition and orchestration for picture is rooted in a deep understanding of how to craft even the most intricate musical cues, ensuring dialogue remains front and centre, pristine and intelligible.

The Unseen Conflict: Why Music Can Mask Dialogue

Dialogue primarily occupies the crucial mid-range frequency spectrum (roughly 500 Hz to 4 kHz), a space also highly populated by many musical instruments. When musical elements share this space, two main forms of masking occur:

  1. Frequency Masking: When frequencies in the music directly compete with the frequencies of the human voice.
  2. Dynamic Masking: When the sheer loudness or density of the music overwhelms the dialogue.

The art lies not in silencing the music, but in making it a supportive, rather than competitive, force.

Strategic Composition & Orchestration for Dialogue Clarity

For busy, complex cues, ensuring dialogue intelligibility requires a multi-faceted approach, woven into the very fabric of the composition:

  1. Frequency Management: Creating Dialogue Space
    • Targeted EQ in the Score: Proactive equalisation is paramount. When dialogue is present, composers should mentally (or even explicitly, in their mix template) consider subtle cuts in the 1 kHz–4 kHz range for instruments like strings (especially violins, violas), dense synth pads, sustained brass, or busy percussion. Think of it as carving out an “acoustic window” for the voice.
    • Avoid Fundamental Conflicts: Be mindful of instruments whose fundamental pitches fall directly within the dialogue range. For example, a sustained clarinet note in its mid-range might conflict more than a bass clarinet or a flute in its upper register.
    • Harmonic Richness, Not Clutter: While harmonic excitation can add perceived loudness and richness, it needs to be carefully managed in dialogue passages to avoid building up competing frequencies.
  2. Dynamic & Density Control: Less Is More (When It Matters)
    • Orchestral Thinning: For crucial dialogue, resist the urge to use a full orchestra. Instead, thin out the orchestration. Focus on textures, sparse chords, or singular melodic lines that complement, rather than dominate.
    • Micro-Dynamics: Music can swell around dialogue, providing emotional punctuation, but can subtly dip during the most critical lines. Think of it as a gentle breath, a soft rise and fall that supports the natural rhythm of speech.
    • Pacing & Phrasing: Use musical phrasing to create moments of relative quietude for dialogue. Build intensity before a key line, then pull back to allow it to land, reserving full orchestral power for non-dialogue moments or climactic reveals.
  3. Instrument Choice & Register: The Right Tool for the Job
    • Avoid Dialogue-Range Instruments: During dialogue, be cautious with instruments that naturally sit in the primary vocal range or have a lot of sustain/resonance there.
    • Utilise Low & High Registers: Often, music can provide emotional depth by operating in the lower (low strings, low brass, deep synths) or higher (high winds, high strings, ethereal pads) registers, leaving the crucial mid-range open for voices.
    • Percussion & Rhythmic Elements: Percussion can drive energy without necessarily masking dialogue, especially if its transient attacks are sharp and its decay is short.
  4. Mix Perspective & Depth: Pushing Music Back
    • Reverb & Delay: Strategic use of reverb and delay can subtly push musical elements further back in the perceived acoustic space, creating a clearer foreground for dialogue. Too much reverb on dialogue itself can, of course, blur it.
    • Volume Automation: This is the most direct tool. Meticulous volume automation of individual instrument groups or the entire music bus (often automated by the re-recording mixer in post-production) is essential. Composers should think in terms of “stems” to allow the mixer precise control (a minimal sidechain-ducking sketch follows this list).
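
The volume-automation idea above is, at heart, a sidechain ducker: follow the dialogue’s envelope and dip the music bus a few dB while speech is present. A rough Python/NumPy sketch under those assumptions (all parameter values are illustrative):

```python
import numpy as np

def duck_music(music, dialogue, fs, dip_db=-4.0, attack_ms=50, release_ms=400):
    """Dip the music bus while dialogue is active (one-pole envelope follower)."""
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000))
    env = np.zeros_like(dialogue)
    level = 0.0
    for i, x in enumerate(np.abs(dialogue)):    # rectified dialogue
        coeff = a_att if x > level else a_rel
        level = coeff * level + (1 - coeff) * x
        env[i] = level
    speech = np.clip(env / (env.max() + 1e-12), 0.0, 1.0)
    gain = 10 ** ((dip_db * speech) / 20)       # 0 dB when silent, dip_db at peak
    return music * gain
```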

The Collaborative Imperative: Composer as Storyteller

Ultimately, composing for dialogue intelligibility is a collaborative art. The composer is part of a larger storytelling team.

  • Early Communication: Discuss dialogue density and importance with the director and sound designer early in the process.
  • Awareness of Mix Stages: Understand that the re-recording mixer is the final arbiter of dialogue levels. Delivering musical stems that allow flexibility (e.g., separate stems for melodic lines, pads, percussion, bass) is incredibly helpful.
  • Listen Critically: When reviewing rough cuts or pre-mixes, listen specifically for dialogue clarity alongside musical impact.

At solidskillsy., we understand the delicate balance required to create emotionally resonant scores that flawlessly integrate with dialogue. From initial concept to final mix, we prioritise the clarity of your story’s voice, ensuring your high-budget rhetoric is always heard, from our studio in Kristiansand.

Ready to compose a score that supports, rather than overwhelms, your narrative? Let’s discuss your next project.

The Art of Silence: Advanced Noise Reduction & Restoration for Pristine Audio

In the world of audio, silence can be as impactful as sound. But unwanted noise – hums, hisses, clicks, traffic, room tone – can compromise that silence, detracting from the sonic purity and professionalism of your content. While basic noise gates offer a crude solution, true audio restoration is an art that requires a deep understanding of signal processing to surgically remove noise without introducing distracting artifacts.

At solidskillsy. in Kristiansand, Norway, we specialise in transforming challenging source material into pristine audio. Our expertise in advanced noise reduction and restoration techniques can salvage otherwise unusable recordings, preserving the rhetoric and texture of your original vision.

True noise reduction aims to remove noise within the audible signal, or to smoothly reduce it, maintaining the naturalness of the sound.

The Arsenal of Advanced Restoration Techniques:

  1. Spectral Noise Reduction: This is the most common and powerful method.
    • How it works: The software “learns” the fingerprint of the unwanted noise (e.g., a constant hum or hiss) during a silent passage. It then intelligently identifies and removes that specific noise profile from the entire audio, even when the desired signal is present.
    • Applications: Removing broadband noise (hiss, static), constant hums (50/60 Hz), fan noise, air conditioning rumble.
    • Art of Balance: The key is finding the sweet spot between sufficient noise reduction and avoiding “musical noise” (gargling or swishy artefacts) or thinning out the desired signal (a bare-bones sketch of the underlying idea follows this list).
  2. De-Clickers & De-Cracklers:
    • How it works: Algorithms identify short, impulsive noises (clicks, pops, crackles from vinyl, digital glitches) based on their waveform characteristics and interpolate to smoothly fill the gap.
    • Applications: Cleaning up old recordings, repairing digital dropouts, and removing microphone bumps.
  3. De-Essers (Advanced):
    • How it works: While often a mixing tool, advanced de-essers function as a form of dynamic spectral noise reduction, targeting harsh sibilance (‘s’ and ‘sh’ sounds) in vocals or dialogue without dulling the overall sound.
    • Applications: Improving vocal clarity, taming harsh cymbals or bright guitars.
  4. De-Reverb & De-Bleed:
    • How it works: These are more sophisticated tools that attempt to reduce unwanted room ambience (reverb) or microphone bleed (e.g., drums bleeding into a vocal mic) by analysing the spectral and temporal characteristics of the interfering sound.
    • Applications: Cleaning up dialogue recorded in live rooms, making individual instruments more isolated for mixing, and fixing recordings with excessive early reflections.
  5. Forensic Audio Techniques: For extreme cases, highly specialised tools and manual spectral editing (visualising the spectrogram and painting out unwanted sounds) can be used to remove specific, complex noises like phone rings, sirens, or dog barks from dialogue.
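
As an illustration of the core idea behind technique 1, here is a bare-bones spectral-subtraction sketch using SciPy’s STFT. Commercial restoration tools are far more sophisticated, with psychoacoustic smoothing precisely to avoid the “musical noise” described above:

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_subtract(audio, noise_clip, fs, reduction=1.0, floor=0.05):
    """Learn a noise magnitude profile from a 'silent' passage and subtract it."""
    nper = 2048
    # 1) Noise fingerprint: mean magnitude per frequency bin
    _, _, N = stft(noise_clip, fs=fs, nperseg=nper)
    profile = np.mean(np.abs(N), axis=1, keepdims=True)
    # 2) Subtract it across the whole programme, with a spectral floor
    _, _, S = stft(audio, fs=fs, nperseg=nper)
    mag, phase = np.abs(S), np.angle(S)
    cleaned = np.maximum(mag - reduction * profile, floor * mag)
    # 3) Resynthesise with the original phase
    _, out = istft(cleaned * np.exp(1j * phase), fs=fs, nperseg=nper)
    return out[: len(audio)]
```

The `floor` argument is the crude version of the “art of balance”: pushing `reduction` up and `floor` down removes more noise but invites exactly the gargling artefacts and thinning the text above warns about.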

The Engineer’s Touch: Maintaining Naturalness

The true skill in noise reduction isn’t just about making the noise disappear; it’s about making the process disappear. A poorly restored track sounds artificial and lifeless. A masterfully restored track retains its natural dynamics, its original acoustic space (if desired), and its emotional authenticity, only now free from distraction. And perhaps that last statement is the true key: while not every artefact can be removed entirely, we can make our best effort to render them far less distracting.

At solidskillsy., we combine cutting-edge software with a sensitive ear and years of experience. We understand the delicate balance of removing noise while preserving the integrity and sonic identity of your original recording. From salvaging challenging field recordings to perfecting dialogue for high-budget rhetorics, our advanced restoration services deliver unparalleled sonic purity.

Have a challenging recording that needs a professional touch? Let’s discuss if and how we can restore its brilliance.

The Loudness Paradox: Calibrating for Impact Across Streaming, Home Entertainment & Cinema

In today’s fragmented media landscape, your meticulously crafted audio might be consumed on a vast array of devices: from a high-end cinema sound system to a pair of earbuds, or even a tiny smartphone speaker. The challenge for the modern audio engineer is immense: how do you ensure your project’s sonic identity and rhetoric translate effectively without sacrificing impact or causing listener fatigue? The answer lies in mastering the complex interplay between loudness targets (LUFS) and speaker calibration standards.

At solidskillsy. in Kristiansand, Norway, we navigate this intricate loudness paradox with precision. We understand that a single master is rarely sufficient; true premium quality often demands strategic optimisation for each specific delivery platform.

Understanding LUFS: The Language of Perceived Loudness

Gone are the days of solely relying on peak meters. LUFS (Loudness Units relative to Full Scale) has become the industry standard for measuring integrated loudness, an algorithmically derived metric that more accurately reflects how humans perceive loudness over time (a measurement sketch follows the list below).

  • Streaming Platforms (e.g., Spotify and YouTube): Typically target an integrated loudness of around -14 LUFS. They then apply loudness normalisation, sometimes turning up quieter tracks and often turning down louder ones to hit this target. This aims to create a consistent listening experience for the consumer.
  • Broadcast & Home Entertainment (e.g., EBU R128 for European broadcast and Netflix’s own standards): Often target integrated loudness around -23 LUFS (EBU R128) or -27 LUFS dialogue-gated (Netflix, etc.). This allows for significantly more dynamic range than streaming music, crucial for dialogue clarity and impact in film and television.
  • Cinema: This is where it gets less standardised. While there are recommendations, there isn’t a universally mandated LUFS target. It is more about mixing in a correctly calibrated audio system and space. This lack of a strict standard has historically contributed to the “loudness war” in cinema, where mixes (especially trailers) often push dynamic range to the extreme for maximum impact, sometimes leading to fatiguing and excessively loud experiences.
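
Measuring and conforming to these targets is scriptable. A minimal sketch assuming the open-source pyloudnorm and soundfile Python packages and a hypothetical file name; always verify against the platform’s own QC tools:

```python
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("final_mix.wav")          # hypothetical deliverable

meter = pyln.Meter(rate)                       # ITU-R BS.1770 meter
loudness = meter.integrated_loudness(data)     # integrated loudness in LUFS
print(f"integrated loudness: {loudness:.1f} LUFS")

# Conform to a -14 LUFS streaming target. This is gain only: no limiting
# is applied, so True Peak must be checked separately before delivery.
normalized = pyln.normalize.loudness(data, loudness, -14.0)
sf.write("final_mix_-14LUFS.wav", normalized, rate)
```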

The Cornerstone: Speaker Calibration Standards (SPL)

LUFS targets dictate how loud the content should be. Speaker calibration standards (measured in SPL – Sound Pressure Level) dictate how loud your playback system should be when playing content at a specific reference level. This is where the translation puzzle truly begins:

  • 79 dB SPL (for Broadcast & Home Entertainment): For mixing rooms and mastering studios producing content for television and home video, the international standard dictates that pink noise at -20 dBFS per channel should produce 79 dB SPL (C-weighted, slow) at the listening position. This calibration level is directly correlated with the -23 LUFS/-24 LUFS targets, ensuring that dynamic content is heard at its intended reference volume in a typical living room.
  • 85 dB SPL (for Cinema): For film post-production and cinema mixing, the SMPTE standard recommends that pink noise at -20 dBFS per channel should produce 85 dB SPL (C-weighted, slow) at the listening position. This is a significantly louder reference point, designed to provide the necessary headroom and dynamic impact for the large scale of a cinema environment.
  • Music Production & Streaming: The Missing Link: Crucially, there is no universally agreed-upon speaker calibration standard for music production or for consumer streaming playback. While many mastering engineers use internal calibration methods, there isn’t a universal target SPL that correlates with the -14 LUFS streaming targets. This absence is a significant factor in the loudness paradox. There are, however, theories about how rooms of different volumes handle sound pressure and how the produced SPL shapes our perception of highs and lows, but that is a topic for a future post (the basic calibration arithmetic is sketched below).
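
The calibration relationship itself is simple arithmetic: once a room is aligned so that −20 dBFS pink noise reads the reference SPL, any per-channel dBFS level maps to a predictable SPL. A tiny, idealised sketch (it ignores weighting, multi-channel summation, and room response):

```python
def expected_spl(level_dbfs, cal_spl, ref_dbfs=-20.0):
    """SPL a per-channel signal should produce in a calibrated room."""
    return cal_spl + (level_dbfs - ref_dbfs)

# The same -31 dBFS dialogue stem lands 6 dB apart in the two room types:
print(expected_spl(-31, cal_spl=79))   # broadcast/home room: 68 dB SPL
print(expected_spl(-31, cal_spl=85))   # cinema room:         74 dB SPL
```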

The Loudness Paradox in Action:

This disconnect between LUFS targets and calibration standards creates significant challenges:

  1. A Streaming Mix in a Cinema: Painful Impact. Imagine a music track perfectly mastered to -14 LUFS for streaming, intended to be heard through headphones or a home stereo. If this track is then played through a cinema’s system calibrated to 85 dB SPL, the result can be excruciatingly loud and dynamically crushed. The content, optimised for a quieter, more compressed average, is suddenly amplified to a level far beyond its design, becoming harsh and fatiguing.
  2. A Cinema Mix on a Smartphone: Inaudible Dialogue. Conversely, consider a film mix meticulously crafted for cinema, with a wide dynamic range and dialogue mixed to shine at 85 dB SPL. If this mix is then normalised to -14 LUFS and played on a smartphone speaker, the dialogue and quieter moments might become hard to hear, and loud dynamics may be unpleasantly distorted. The dynamics, intended for a vast and controlled environment, are too broad for a tiny, uncalibrated device, leading to a frustrating listening experience.

The loudness war within cinema exacerbated this, pushing dynamic peaks to the absolute limit on the assumption of maximum playback volume. While that might impress in specific moments, it often leads to listener fatigue and poor translation across varied cinemas. Audience complaints then push cinemas to turn the volume down, which complicates matters further: the room now deviates from the theatre calibration standard, and dialogue often ends up too quiet to follow.

solidskillsy.’s Solution: Intelligent Mastering for Every Destination

Navigating this loudness paradox requires expertise. At solidskillsy., our approach involves:

  • Platform-Specific Mastering: We can, according to agreement, deliver multiple, optimised masters tailored to the unique LUFS targets and True Peak requirements of streaming services, broadcast platforms, and cinematic distribution.
  • Calibrated Environments: Our studio in Kristiansand is meticulously calibrated to Home Entertainment industry standards (79 dB SPL), and we can book a local cinema to deliver for that format as well, depending on the project type. This provides a predictable and reliable environment for making critical loudness and dynamic decisions.
  • Comprehensive Translation Checks: We rigorously test our mixes on selected consumer devices and reference systems, ensuring that your audio’s impact and clarity are preserved across the entire spectrum of playback possibilities.

Understanding the relationship between LUFS, SPL calibration, and the diverse playback landscape is fundamental to achieving truly professional audio. It’s about ensuring your high-budget rhetorics sound as intended, reaching every ear with precision and power.

Ready to ensure your audio sounds pro? Let’s discuss your project’s unique distribution needs.