Tonmeister

Beyond Digital Purity

Harmonics, Streamers, and a Mature View of Fidelity

44 years of cable design from the Netherlands


"As long as we are concerned with the realistic reproduction of sound, the original sound must stand as the criterion by which the reproduction is judged!"

For decades, high-end audio chased a singular ideal: purity. Lower distortion, flatter response, greater resolution. The goal was always a perfectly transparent conduit, a "straight wire with gain." In digital circles, that ideal settled squarely on the DAC, the supposedly neutral translator that should neither add nor subtract, only turn data into voltage.

That assumption is not being discarded; it is being refined. The conversation today is more nuanced. Some distortion, it turns out, may not be a flaw but a conscious variable. The modern DAC is judged not only by how little it changes the signal, but by how intelligently it manages the inevitable collisions of mathematics, electronics, and perception. Transparency remains the goal, but it is now better understood.

The Primacy of the Acoustic Space

Before a signal ever leaves your DAC, its fate has already been shaped by your room. The listening environment, not the front end, determines how real a system will sound. Modest rigs in well-treated spaces routinely outshine cost-no-object setups left at the mercy of bare walls and reflective ceilings. Reflections, standing waves, and resonances dominate what reaches the listener, often masking the subtle virtues of excellent electronics.

The first and most important investment in fidelity is the room itself. Speaker placement, absorption, and diffusion matter more than brand names or specifications. This foundation defines everything that follows. (A separate companion piece will explore acoustic fundamentals in detail.)

Equalization: Hierarchy and Discipline

When acoustic compromises remain, equalization can be useful, but only when applied in the correct order. The room comes first. Only after physical treatments and placement have been optimized should corrective tools enter the picture. Even then, the process should begin with high-quality, transparent hardware.

Digital signal processing should be the last resort, used only when truly needed and reserved primarily for low-frequency issues where physical solutions become impractical. The guiding principle is simple: equalization should reduce, not boost. Its purpose is to calm what the room exaggerates, not to inflate what the system lacks.

This restraint preserves dynamics and tonal balance, avoiding the tense, overworked character that excessive boosting creates. Properly applied, EQ does not impose a sound; it reveals what was already there.

Harmonics as Structure, Not Decoration

To understand the shift in digital design philosophy, we must revisit harmonics, not as embellishment, but as the structural fabric of music itself. Every musical tone consists of a fundamental frequency accompanied by harmonics, integer multiples that define body, color, and character. The ear does not hear these elements separately but perceives them as a unified event.

A 440 Hz tone is never just a single frequency. Its second harmonic at 880 Hz reinforces pitch through octave symmetry. Its third at 1320 Hz introduces tension through the perfect fifth. These relationships form the mathematical basis of musical consonance.
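The arithmetic behind the 440 Hz example is simple enough to sketch:

```python
# Harmonic series of A4 (440 Hz): integer multiples of the fundamental.
fundamental = 440.0
harmonics = [n * fundamental for n in range(1, 6)]
print(harmonics)  # [440.0, 880.0, 1320.0, 1760.0, 2200.0]

# Musical intervals emerge from simple ratios between partials:
octave = harmonics[1] / harmonics[0]  # 2.0 -> the octave
fifth = harmonics[2] / harmonics[1]   # 1.5 -> the perfect fifth
```

The whole-number ratios (2:1, 3:2) are exactly the consonances the text describes.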

Human hearing instinctively recognizes this order. Low-order, harmonically related distortions integrate easily into the sound, adding density and warmth. High-order, unrelated distortions do the opposite. They draw attention to themselves and cause fatigue. This is why small amounts of second- or third-order distortion can sound natural, even pleasing, while far smaller quantities of high-order distortion feel intrusive.

Harmonics in Digital Systems: Intent Matters

In the analog world of tubes, tape, and vinyl, harmonic coloration is inherent. In digital systems, linearity is the default. When harmonics appear, they are the result of deliberate design decisions, such as biased output stages or DSP algorithms intended to emulate analog behavior.

The distinction is crucial. Analog coloration is unavoidable; digital coloration is chosen.

Applied gently, typically between 0.1 and 1 percent total harmonic distortion, this shaping acts as tonal seasoning rather than degradation. It alters the balance between fundamentals and overtones without restoring lost information. The process is interpretive, not documentary. It curates presentation rather than correcting errors.
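As a hedged sketch of how such shaping might work, a gentle second-order nonlinearity (y = x + k·x², a simplified stand-in for a biased output stage, not any particular product's circuit) adds a second harmonic at amplitude k/2 for a full-scale sine:

```python
import numpy as np

fs, f, n = 48000, 480.0, 4800  # 48 exact cycles -> leakage-free FFT bins
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f * t)  # clean fundamental

k = 0.01                       # gentle second-order nonlinearity
y = x + k * x**2               # sin^2 term -> DC offset + 2nd harmonic at k/2

spec = np.abs(np.fft.rfft(y)) / (n / 2)
fund = spec[48]                # bin of the 480 Hz fundamental
h2 = spec[96]                  # bin of the 960 Hz second harmonic
print(round(h2 / fund, 4))     # 0.005, i.e. 0.5 % -- inside the quoted range
```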

Sonic Memory and the Analog Reference

Our attraction to harmonic flavor is rooted partly in collective sonic memory. For generations, music was experienced through analog chains. Vinyl introduced groove resonance and surface noise. Tape contributed hiss and compression. Tubes added even-order harmonics and elastic dynamics.

None of these characteristics existed in the original performance, yet decades of exposure embedded them into our internal reference for musical authenticity. When a modern DAC reproduces similar structures, it is not repairing a digital deficiency. It is invoking familiarity. That distinction matters.

Analog and Digital: Accuracy Versus Aesthetics

Analog recording is continuous but nonlinear. Noise, distortion, and drift are inseparable from its nature. Digital, when properly implemented, is linear within defined limits. It offers immense dynamic range, pitch stability, and freedom from generational loss.

When the objective is documentation, capturing an event as accurately as possible, digital still holds the advantage. Analog's appeal lies not in what it preserves, but in what it contributes.

Early digital sometimes struggled with low-level subtlety, but modern high-resolution formats have largely closed that gap. The underlying truth remains unchanged. Analog color is unavoidable; digital color is optional.

This same principle applies to reconstruction filters. Linear-phase, minimum-phase, and apodizing designs each balance phase accuracy, transient behavior, and temporal clarity differently. These are interpretive choices, not defects. They shape how attack, decay, and spatial information are perceived.

Streamers as Transports: Separating Signal from Story

Few components provoke more debate than streamers. When used strictly as bit-perfect transports, delivering data via USB, S/PDIF, or AES/EBU, their sonic influence is minimal.

Jitter, once a legitimate concern, has largely been solved. Modern DACs buffer incoming data and reclock internally, isolating conversion from source timing errors. Asynchronous USB places clock control entirely within the DAC. Even S/PDIF, when competently implemented, reduces jitter to inaudible levels.

Residual differences usually stem from electrical noise, such as power leakage or RF interference entering analog stages through shared grounds. Well-designed streamers address this with galvanic isolation, low-noise power supplies, and careful shielding. Most contemporary DACs include these protections internally, rendering them effectively immune.

Once bit-perfect transmission is achieved, meaningful sonic differences between well-engineered streamers vanish. Controlled listening tests consistently show that modest transports perform on par with expensive ones. A digital transport is infrastructure, not a tone control.

The equation changes when the streamer incorporates its own DAC. In that case, converter topology, analog stages, power supply design, and filtering define the sound. The component is no longer a conduit but an interpreter.

The Systemic Pursuit of Realism

The pursuit of realism follows a clear hierarchy. Begin with the acoustic foundation. Build a coherent, well-matched system. Only then should equalization or harmonic enhancement be considered, and only as finishing touches.

These tools cannot compensate for structural weaknesses. The weakest link, most often the room, will always dominate the listening experience.

Conclusion: Choice, Not Dogma

This is where a mature perspective emerges.

Does a well-designed system require added harmonic color? For accuracy, no. For enjoyment, that remains a personal choice.

Do bit-perfect streamers possess a sonic signature? Not when they perform correctly.

The true progress in modern digital audio lies not in novel distortions or exotic filters, but in the freedom of choice and the discipline to apply it wisely. Understand the room first. Build system synergy second. Only then consider creative interpretation.

Audio maturity is the balance between knowledge and preference, between purity and pleasure. Fidelity still matters, but the deepest connection to music arises from the entire chain: the recording, the equipment, the room, and ultimately, the listener.

Questions about Digital Audio

Does a USB streamer or music server affect sound quality?

In a correctly implemented digital chain, a USB streamer or music server is not a tone control. When operating in bit-perfect mode, the data arriving at the DAC is identical, regardless of the source. The decisive work is done inside the DAC, where incoming data is buffered and re-clocked, decoupling conversion from upstream timing variations.

Differences between sources are rarely due to the data itself. Electrical noise traveling along ground connections or power rails can enter sensitive analog stages and subtly influence the output; the more resolving the system, the more audible this becomes. Well-engineered systems address this with galvanic isolation, clean power, and careful circuit layout. When these factors are controlled, any differences between streamers are negligible.

What is jitter and is it still a problem in modern DACs?

Jitter refers to tiny variations in the timing of digital sample playback. In early or poorly designed DACs, this could translate into audible distortion or smearing of low-level detail.

Modern DACs handle jitter internally: incoming data is buffered and then re-clocked using a precise local oscillator. This asynchronous reconstruction isolates the conversion from upstream timing errors, making jitter at the source practically irrelevant. Only in poorly implemented systems does jitter remain noticeable.

Why does analog audio sound 'warm' compared to digital?

Analog audio imparts low-order harmonic distortion, primarily second and third order, which the ear perceives as tonal richness and density. Tape adds gentle compression and saturation, smoothing transients and rounding peaks. Vinyl playback introduces groove resonance, subtle tracing distortion, and surface texture.

These are not corrections to the signal, but consistent alterations. Over decades, these characteristics have become culturally associated with musical authenticity and "warmth." Digital audio strives for transparency, linear and low-noise, without added harmonic color. Its neutrality may be perceived as clean or lean, but it is closer to the original electrical signal.

Is analog or digital audio more accurate?

Digital audio has the clear advantage for documentation and preservation: extremely low noise, stable pitch, consistent frequency response, and no generational loss. Every playback reproduces the captured signal with high fidelity.

Analog introduces inherent deviations: tape hiss, compression, vinyl surface noise, and mechanical limitations. While often pleasing, these effects are departures from the original recording rather than faithful reproduction. Accuracy is straightforward: digital preserves, analog colors. Choosing between them depends on whether the priority is neutrality or a preferred sonic signature.

What is the difference between linear-phase, minimum-phase, and apodizing DAC filters?

Reconstruction filters determine how a DAC converts discrete samples into a continuous waveform, affecting both phase and time-domain behavior.

Linear-phase filters preserve phase relationships across the spectrum, maintaining waveform symmetry. They introduce pre-ringing before transients. Minimum-phase filters shift ringing to after the transient, avoiding pre-ringing, but with phase deviations across frequencies. Apodizing filters remove pre-ringing artifacts already embedded in the recording, altering the temporal signature of the signal.

These choices are interpretive, not corrective. Linear-phase is closest to the original signal, while the others reflect deliberate shaping. The key decision is whether the goal is strict fidelity or a subjective presentation preference.
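The pre-ringing trade-off can be demonstrated with plain NumPy: a symmetric (hence linear-phase) truncated-sinc lowpass responds to a step before the step arrives. The tap count and cutoff below are arbitrary illustration values; a minimum-phase design would shift this ripple to after the edge instead:

```python
import numpy as np

# Symmetric truncated-sinc lowpass, cutoff at 0.2 x Nyquist (illustrative).
taps, fc = 101, 0.2
m = np.arange(taps) - (taps - 1) / 2
h = np.sinc(fc * m) * fc
h /= h.sum()                     # unity DC gain

assert np.allclose(h, h[::-1])   # symmetric taps -> linear phase

# Feed it a step: ripple appears BEFORE the edge (pre-ringing).
step = np.concatenate([np.zeros(200), np.ones(200)])
y = np.convolve(step, h)
edge = 200 + (taps - 1) // 2     # step edge position after group delay
pre = y[edge - 40:edge - 5]      # samples just before the edge
print(np.max(np.abs(pre)) > 0.01)  # True: visible pre-ringing
```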

Does the quality of a digital cable (USB, S/PDIF, AES/EBU) affect sound?

Digital cables do not alter the data when operating bit-perfectly. Differences in sound arise from the electrical environment around the data. Poor shielding or impedance mismatches can allow high-frequency noise to reach sensitive analog stages.

Well-constructed cables and appropriate connectors maintain the correct characteristic impedance, 75 Ω for S/PDIF, 110 Ω for AES/EBU, 90 Ω for USB, and provide effective shielding. The goal is neutrality, preventing degradation without adding or subtracting anything. In a properly implemented system, the cable should be transparent.

What total harmonic distortion level is acceptable in digital audio?

Technically, the lower the THD, the better. Modern DACs often achieve figures well below 0.001%, far beneath audibility.

Some designers deliberately introduce low-order harmonic distortion, typically 0.1–1%, to evoke the "warmth" associated with analog systems. This is an aesthetic choice, not a technical necessity. Above roughly 1%, distortion becomes audible degradation rather than character. Neutral systems strive to minimize it.
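THD itself is straightforward to measure from an FFT: the RMS sum of the harmonic amplitudes divided by the fundamental. A self-contained sketch with deliberately injected harmonics:

```python
import numpy as np

fs, f0, n = 48000, 480.0, 4800   # 48 exact cycles per analysis window
t = np.arange(n) / fs
# Fundamental plus deliberate 2nd (0.4 %) and 3rd (0.3 %) harmonics.
x = (np.sin(2 * np.pi * f0 * t)
     + 0.004 * np.sin(2 * np.pi * 2 * f0 * t)
     + 0.003 * np.sin(2 * np.pi * 3 * f0 * t))

spec = np.abs(np.fft.rfft(x)) / (n / 2)
k = int(f0 * n / fs)             # fundamental bin (48)
fund = spec[k]
harm = spec[2 * k:10 * k:k]      # bins of harmonics 2..9
thd = np.sqrt(np.sum(harm**2)) / fund
print(f"THD = {thd * 100:.2f} %")  # THD = 0.50 %
```

Since sqrt(0.004² + 0.003²) = 0.005, the measured 0.5 % matches the injected levels exactly.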

Should the room or the equipment be the first priority in a high-end audio system?

The room is always the first priority. Acoustic behavior, including reflections, modal resonances, and low-frequency decay, dominates what reaches the listener. It affects tonal balance, transient clarity, and spatial imaging far more than any component.

No upgrade in DAC, amplifier, or loudspeaker can compensate for untreated acoustic problems. A modest system in a well-treated room often outperforms an expensive system in a poor environment. Proper acoustic treatment establishes a neutral foundation, allowing the equipment to perform transparently.

Does DSP improve audio playback?

Digital signal processing can be useful for correcting room-induced anomalies, especially in the low frequencies. However, DSP is a tool, not a substitute for good design. It cannot repair fundamental issues in the listening environment or poorly implemented analog stages.

The philosophy is neutrality: the goal is to reproduce the recording faithfully, without artificial enhancement. DSP should be applied only when it resolves measurable issues transparently. Overuse risks masking subtle musical details or introducing artifacts. Physical room treatment and analog integrity remain the foundation of accurate playback.

Does higher sample rate or bit depth improve sound quality?

Higher sample rates extend theoretical bandwidth and simplify reconstruction filters. For human hearing, 44.1 kHz or 48 kHz already capture the audible spectrum. Any perceived improvements are usually due to filter behavior or mastering choices, not the extra bandwidth.

Bit depth affects dynamic range. 16 bit already exceeds the noise floor of most listening environments. 24 bit is beneficial in production for headroom and precision, but in playback, audible differences are minimal if the system is gain-structured and noise is controlled.
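The bit-depth figures follow from the standard ideal-quantizer formula, 6.02·N + 1.76 dB (the 144 dB figure sometimes quoted for 24-bit uses the rougher 6 dB-per-bit shorthand):

```python
# Theoretical dynamic range of an ideal N-bit quantizer: 6.02*N + 1.76 dB.
def dynamic_range_db(bits: int) -> float:
    return 6.02 * bits + 1.76

print(round(dynamic_range_db(16), 1))  # -> 98.1  (beyond most rooms' noise floor)
print(round(dynamic_range_db(24), 1))  # -> 146.2 (production headroom)
```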

Does upsampling improve sound quality?

Upsampling increases the digital sample rate before conversion. It does not create new information; it interpolates existing samples to simplify the DAC's reconstruction filter.

When implemented carefully, upsampling can reduce aliasing and allow gentler filter slopes, improving transient reproduction and minimizing subtle high-frequency artifacts. The audible effect depends entirely on the quality of the algorithms and the DAC's handling of the data. Poor upsampling can introduce artifacts, while high-quality implementations are essentially transparent.

As with all digital processes, the benefit is determined by execution, not the process itself. Neutral systems treat upsampling as a technical aid, preserving fidelity without introducing tonal coloration.
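A minimal sketch of 2x upsampling with SciPy's polyphase resampler illustrates both points above: the sample count doubles, but the spectral content is unchanged (the tone frequency and rates are illustrative):

```python
import numpy as np
from scipy.signal import resample_poly

fs = 44100
t = np.arange(4410) / fs          # 0.1 s at 44.1 kHz
x = np.sin(2 * np.pi * 1000 * t)  # 1 kHz tone, exactly 100 cycles

# 2x upsampling: interpolation only -- no new information is created.
y = resample_poly(x, up=2, down=1)
print(len(y) == 2 * len(x))       # True: twice as many samples...

# ...but the same content: the spectral peak is still at 1 kHz.
peak_bin = int(np.argmax(np.abs(np.fft.rfft(y))))
freq = peak_bin * (2 * fs) / len(y)
print(freq)                       # 1000.0
```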

What is the difference between lossy and lossless audio, and where does MQA fit in?

Lossy compression (such as MP3 or AAC) reduces file size by discarding audio information. Lossless compression (such as FLAC or ALAC) preserves the original audio data exactly, allowing perfect reconstruction.

MQA (Master Quality Authenticated) is a lossy format that compresses high-resolution audio while claiming to correct time-domain errors. In practice, MQA's core is a filtering approach similar to apodizing filters found in conventional DACs. The same improvements in transient clarity can be achieved with well-designed standard filters.

With today's inexpensive storage and fast broadband, the space-saving benefit of lossy formats is negligible. Any lossy compression introduces a variable that adds no value to playback quality. For listeners who value certainty and archival integrity, there is no reason to accept it.

DSD versus PCM: Which is better?

DSD (Direct Stream Digital) and PCM (Pulse Code Modulation) are two different encoding methods. DSD uses a 1-bit stream at very high sample rates, while PCM uses multi-bit samples at lower rates. In modern playback, neither is inherently superior when competently implemented.

The sonic differences listeners report usually stem from reconstruction filters and noise-shaping behavior inherent to each format. A well-designed DAC can make both sound essentially indistinguishable. PCM's long-standing ubiquity gives it a slight edge in flexibility and simplicity.

Does volume control method affect sound quality — digital or analog?

The volume control stage is a critical point in any audio chain. Digital volume control scales the samples before conversion, effectively reducing bit depth. With modern 24-bit DACs offering roughly 144 dB of dynamic range, even a 40 dB reduction still preserves over 100 dB — beyond most playback systems. However, greater reductions can compromise depth and presence.
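The arithmetic can be made explicit. Using the exact 6.02·N + 1.76 dB quantizer formula rather than the 6 dB-per-bit shorthand, a 40 dB digital attenuation on a 24-bit path costs about 6.6 bits of resolution and still leaves roughly 106 dB:

```python
# Each 6.02 dB of digital attenuation discards one bit of resolution.
def remaining_dynamic_range(bits: int, attenuation_db: float) -> float:
    return 6.02 * bits + 1.76 - attenuation_db

def bits_lost(attenuation_db: float) -> float:
    return attenuation_db / 6.02

print(round(remaining_dynamic_range(24, 40), 1))  # -> 106.2
print(round(bits_lost(40), 1))                    # -> 6.6
```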

A high-quality analog volume control preserves full bit depth but can introduce coloration through resistive losses, channel imbalance, or contact distortion.

Significant gains in transparency can be achieved by maintaining full digital resolution to the output stage and allowing level adjustment in a dedicated analog domain. Lipinski Sound is developing a hybrid passive/active volume solution designed to preserve signal purity while offering precision control.

What is the role of asynchronous USB in digital audio?

Asynchronous USB allows the DAC, not the computer, to control timing. The DAC's internal high-precision clock requests data as needed, isolating audio conversion from the computer's variable timing and electrical noise.

This design effectively removes source-induced jitter and makes the choice of computer or cable largely irrelevant from a timing perspective. Asynchronous USB has become the standard for high-performance DACs.

What is galvanic isolation, and why does it matter?

Galvanic isolation physically separates electrical connections between circuits using transformers or optical couplers, blocking any direct current path. In digital audio, it prevents high-frequency noise from computers or streamers from reaching sensitive analog sections of the DAC.

When properly implemented, galvanic isolation can make source or cable differences inaudible. It is a key engineering feature in neutral systems that let the DAC operate independently of upstream electrical conditions.

Do software players and OS settings affect sound quality?

Software players and operating system settings affect sound only if they alter the bitstream sent to the DAC. The goal is bit-perfect playback — delivering audio data unchanged. Many default settings apply sample-rate conversion or normalization that reduces fidelity.

When configured properly using exclusive mode (ASIO, WASAPI, or Core Audio), players become indistinguishable. Claims of differences between bit-perfect players are unsubstantiated — identical output means identical sound.

What is a word clock, and is external clocking beneficial?

In digital audio, the word clock governs timing — ensuring samples are converted and received in perfect step. Jitter, or slight timing deviations, can blur transients or soften spatial cues. However, modern asynchronous DACs achieve extremely low jitter internally, far below audibility.

External word clocks still have legitimate roles in professional studios with multi-device rigs requiring tight synchronization. In a single-DAC home setup, they rarely offer measurable or audible improvement unless the system suffers from poor clock management.

The best contemporary converters deliver precision internally. Before investing in an outboard clock, it is worth asking whether your system actually needs one.
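The claim that modern clocks sit far below audibility follows from the standard jitter-limited SNR rule of thumb for a full-scale sine, SNR ≈ 20·log10(1/(2π·f·t_j)); the figures below are illustrative:

```python
import math

def jitter_limited_snr_db(freq_hz: float, jitter_rms_s: float) -> float:
    """Best achievable SNR for a full-scale sine given RMS clock jitter."""
    return 20 * math.log10(1 / (2 * math.pi * freq_hz * jitter_rms_s))

# A 20 kHz tone sampled with 100 ps RMS jitter still achieves ~98 dB SNR.
print(round(jitter_limited_snr_db(20_000, 100e-12)))  # -> 98
```

Since jitter sensitivity scales with signal frequency, lower frequencies are affected even less.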

Does up-converting to DSD improve sound quality?

Up-converting PCM to DSD converts the stream to a high-rate DSD signal before playback. Advocates say this shifts noise above the audible band and allows gentler analog filtering.

When done with high-quality algorithms, the result can sound indistinguishable from a well-implemented PCM chain. The DSD character often comes from noise-shaping and filter design, not recovery of new detail. It can tune system character, but cannot restore what was not in the original recording.

What is the audible impact of power supplies in digital components?

Every subsystem in a digital component depends on a low-impedance, noise-free voltage environment. Noise can manifest as jitter in clock circuits, contamination of DAC reference voltages, or residual switching artifacts in the analog output.

Poorly executed switch-mode supplies inject high-frequency noise that standard filtering may not fully remove. The audible result can be blurred spatial information, thinner harmonic texture, or slight hardness. A clean, well-regulated supply preserves timing integrity and dynamic subtlety across the audio band.

How do I know if my digital system is bit-perfect?

A bit-perfect system passes the original audio data unchanged to the DAC. You can verify this with test content such as HDCD-encoded files, which an HDCD-capable DAC flags only when the data arrives unaltered, or dedicated bit-perfect test signals in tools like REW.

Operating systems and players often interfere by resampling or processing. Exclusive mode output (ASIO, WASAPI Exclusive, or properly configured Core Audio) bypasses this. Once confirmed, any remaining differences between sources or cables are electrical, not digital.
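One hedged way to script such a check is to hash the decoded PCM payload (not the container bytes) at both ends of the chain; the buffers below are hypothetical stand-ins for a source file and a loopback capture:

```python
import hashlib

def pcm_fingerprint(samples: bytes) -> str:
    """SHA-256 of raw PCM payload; identical data -> identical digest."""
    return hashlib.sha256(samples).hexdigest()

# Hypothetical buffers standing in for 'file on disk' vs 'data at the DAC'.
source = bytes(range(256)) * 4
captured_ok = bytes(source)       # untouched, bit-perfect chain
captured_dsp = bytearray(source)
captured_dsp[100] ^= 0x01         # a single-LSB change from hidden processing

print(pcm_fingerprint(source) == pcm_fingerprint(captured_ok))          # True
print(pcm_fingerprint(source) == pcm_fingerprint(bytes(captured_dsp)))  # False
```

Even a one-bit change anywhere in the stream produces a different digest, which is exactly the sensitivity a bit-perfect check needs.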

Do different USB or network cables affect digital sound?

In bit-perfect playback, cables cannot change the data itself. However, they can influence noise coupling and grounding, potentially affecting analog stages in sensitive DACs. Proper galvanic isolation and shielding minimize these effects.

Cable impedance is also important — USB requires 90 Ω differential, Ethernet requires 100 Ω. Deviation can cause signal reflections, increase noise, or compromise clock stability. Well-constructed cables maintain consistent impedance, ensuring the digital signal arrives cleanly.

Is higher sample rate always better?

Not necessarily. Higher sample rates allow gentler filters and can simplify certain DSP operations, but they also increase file size and processing load. Beyond 96 kHz, audible benefits are rare, and poorly designed conversions between rates can cause more harm than good. The recording and mastering quality matter far more.

Can network streamers sound different if all output is digital?

Theoretically no, since the digital data is identical. But in practice, noise coupling through shared grounds or un-isolated interfaces can influence the DAC. Streamers with robust clocking, isolation, and low-noise designs lower this risk. Again, differences are electrical, not digital.