
Article : Audio Engineering : Why the loudness war is officially over in 2020

Throughout our years of working with independent talent from around the world, one thing has become very apparent: artists have an inherent preference for hearing their music clearly and noticeably; in a word, 'loudly'. For a long time, people debated the practical ideal for loudness, and different producers and engineers had different (sometimes bizarre) formulas for making their music as loud as possible.

It was in that spirit (and after a very poorly mixed and mastered set of tracks we recently had to fix from scratch for a client) that we wrote this article, to support the progress of artists, engineers, studios and home studios worldwide. Better music out there should, in theory, be great for us all, so whilst sharing this information undoubtedly hands industry rivals the same knowledge that currently gives us an edge, honestly, we don't care. We are tired of terrible-sounding music just as much as you are.

There is a debate which ties together audio engineering, music production and even aspects of physics, as professionals try to develop a practical hypothesis for what works out to be the most practical level of loudness whilst still maintaining a healthy amount of dynamic range (the difference between the quietest and loudest parts of a song).
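One common proxy for dynamic range is the crest factor: the ratio of a signal's peak level to its RMS (average) level. The sketch below is purely our own illustration of that idea, not any platform's or meter manufacturer's method:

```python
import math

def crest_factor_db(samples):
    """Peak-to-RMS ratio in dB, a rough proxy for dynamic range.

    samples: floats in [-1.0, 1.0], where 1.0 is digital full scale.
    A heavily limited 'loud' master has a low crest factor; a punchy,
    dynamic mix has a high one.
    """
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak / rms)
```

A full-scale square wave (no dynamics at all) measures 0 dB and a sine wave about 3 dB; real music typically sits far higher, and every pass of heavy compression pushes that number down.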

RMS Meters

Traditionally, engineers used what is known as an RMS meter to determine a track's relative overall loudness (different from the standard peak meter used to track general level), and until recently there was no technical right or wrong to how loud people made songs. It is worth knowing, though, that from the 60s all the way through to the 2000s, music progressively got louder and louder, with loudness peaking over the last 15 years.
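The RMS reading itself is just the root of the mean squared sample value, expressed in dB relative to digital full scale. A quick sketch of the maths (an illustration of the principle, not any specific meter's ballistics):

```python
import math

def rms_dbfs(samples):
    """RMS level of a block of samples in dBFS.

    samples: floats in [-1.0, 1.0], where 1.0 is digital full scale.
    Real RMS meters average over a sliding time window; for simplicity
    this takes a single block.
    """
    mean_square = sum(s * s for s in samples) / len(samples)
    if mean_square == 0:
        return float("-inf")  # digital silence
    return 10 * math.log10(mean_square)
```

A full-scale sine wave reads about -3 dBFS RMS even though its peaks touch 0 dBFS, which is exactly why an RMS meter tracks perceived loudness better than a peak meter does.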

Big Tech

It was at this point that the tech companies, who are relatively new to the music industry in the context of the industry's history, agreed to create new standards for volume and loudness control. There was simply too dramatic a difference between the tracks being uploaded to their platforms from different artists, studios and mix/mastering engineers, each adhering to their own preferences on loudness. Sometimes overall loudness was pushed to excess, leading to digital clipping; sometimes volume was far too low to be watched or listened to comfortably, leading to dramatic jumps when switching between content.

User Experience : Built-in Compression

The platforms recognised this as a loss of consistent quality in "user experience" for people consuming content on their services, so a resolution was sought: a universal standard for digital platforms that would stop below-standard or excessively loud (or quiet) engineering from disrupting the quality of their platforms. In practical terms, their solution was to no longer punish themselves with a poor user experience and dwindling numbers, and instead to punish (figuratively speaking) the creatives who overstep the boundary of "loudness", by compressing their music to fit the platform's loudness standard (which naturally reduces the natural dynamic range of the music).

LUFS Meter : The New Digital Standard

In the case of music and all other digital content uploaded to streaming platforms such as YouTube, Spotify, Apple and Deezer, new guidelines about overall loudness have been set out over the last three years, but it has not necessarily been explained what the consequences are for engineers who do not wish to adhere to the new norm.

Each platform has its own loudness standard, but we can safely say that it ranges between roughly -16 LUFS and -13 LUFS (Loudness Units relative to Full Scale) depending on the platform. This is actually great for disciplined engineers who watch their meters throughout their mix to ensure they hit both their desired dynamic range and their desired overall loudness, but it becomes a nightmare for undisciplined engineers. Allow us to explain.
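The normalisation step itself is simple arithmetic: the platform measures your track's integrated loudness and applies whatever gain is needed to reach its target. A minimal sketch, where the target values are commonly cited approximations we have assumed for illustration, not official figures, and platforms do adjust them over time:

```python
# Commonly cited playback targets in LUFS; assumed for illustration only.
PLATFORM_TARGETS_LUFS = {
    "youtube": -14.0,
    "spotify": -14.0,
    "apple_music": -16.0,
}

def normalisation_gain_db(track_lufs, platform):
    """Static gain (in dB) a platform would apply to hit its target.

    A negative result means your track gets turned down and, as
    described above, risks additional processing of its transients.
    """
    return PLATFORM_TARGETS_LUFS[platform] - track_lufs
```

So a -10 LUFS "loudness war" master uploaded to a -14 LUFS platform gets pulled down by 4 dB, while a -16 LUFS master gets lifted by 2 dB; all that extra limiting bought nothing but damage.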

Built-in Compression - A Solution Provided by the Tech Platforms

A different effect is produced depending on whether your track is too loud or too quiet, but from a technical perspective you are safest if your music sits 1-2 LU "lower" than the mark rather than above it, because of the effect produced in each case. Allow A-Grade to explain. If the track is lower: once you have mixed and mastered your song, it is inevitable that the optimum level will vary by 1-3 LU between the digital platforms, and this creates a window for error. If the volume is "lower" in these circumstances, sitting at -17 to -15 LUFS for example, the compressors built into YouTube will instantly boost the track by (give or take) 1-2 LU to bring it up to the platform's standard.

Whilst naturally ALL compression impacts a song, in this case the effect on the transients would be minimal, and the overall dynamic range of the track would perhaps be altered by 0.2-1 LU as the song is made louder to fill that gap. In terms of changes to your song when it is converted for these digital platforms, you can safely assume that as long as built-in compression is present and exceeds the usual norms of compression (1.5-2 dB of gain reduction), some form of subtle change to your original song is inevitable.

In this case it will just be a subtle layer of saturated compression, which could even "add" to your song, making it feel a little more "together" depending on how loose your compression was to begin with. Now here is the difference, and the core purpose of this article: if, by chance, your track is LOUDER than the desired LU level of the platform in question, once again the platform (YouTube in particular) will step in to do its job of bringing your content back down to the level it has set as the digital standard.

The problem here, which any experienced audio engineer should be able to notice, is that the moment you begin to compress a loud piece of audio, the transients (the tips/peak points of loudness in the song) are immediately killed, or to be more accurate, made blunt. This may matter less on a rock song, but on any song where the drums or a vocal are "prevalent" (e.g. hip-hop, EDM, Afrobeats, dancehall, reggae, techno etc.), the transients of the song will be crushed beyond recognition, particularly if more than 2.5 LU of reduction is needed to hit the target.
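To see why the transients suffer, consider the crudest possible version of this processing: a hard limiter that simply clamps anything above a ceiling. The platforms' real chains are undisclosed (and certainly more sophisticated, with look-ahead, attack and release behaviour), so treat this purely as an illustration of the principle:

```python
def hard_limit(samples, ceiling):
    """Clamp samples to +/- ceiling.

    The body of the signal passes through untouched, but every peak
    above the ceiling is flattened to the same height: the transients
    are made 'blunt'.
    """
    return [max(-ceiling, min(ceiling, s)) for s in samples]
```

Run a snare hit through it and the 0.95 peak that gave the drum its crack comes out at the ceiling, indistinguishable from every other clipped peak, which is exactly the "crushed" sound described above.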

In Conclusion:

Whilst YouTube and the others have not given any clear, definitive explanation of precisely how their compression techniques work, from a forensic point of view an engineer can reconstruct (or reverse engineer) much of what such processing would require, and this is why we are certain that BOOSTING volume (overall loudness) past their digital standards will dramatically impact the quality of the music you upload.

For artists: keep this in mind whenever you next book studio time. It is a tricky and technical topic, but far too often we receive stems from other studios in which the engineers have not been made aware of these changes and are still mixing music to be "as loud as possible", unaware that it will simply be punished when you release your music.

For engineers: we know many enjoy the nostalgic idea of mixing by ear, and it is great to be a passionate engineer; just be aware that as consumers change which platforms they consume their music on, new standards will apply for how things are done. Don't fall victim to a lack of education and start losing clients to studios and engineers who are taking the time to get up to speed with CURRENT digital quality standards.

Much love, and thanks for reading.



©2020 BY A-GRADE

54 Alexandra Road

Enfield EN3 7EH

Greater London, UK