By: Harrison Nelson
December 3, 2021
Relative to the full history of music, the recording and playback process is new. While many engineers have worked on new ways of recording and producing music, musicians today are left with a dichotomy: analog versus digital.
The first recording of music dates back to the 1850s in France. Since then, many different styles of recording have been implemented, from Thomas Edison’s phonograph to tape recorders. In the 1940s, the German army was looking to advance tape recording technology to make it clearer and easier to use. The unintended discovery was that this style of recording sound was perfect for music.
The tapes used to capture the sound were plentiful, meaning that the use of multiple takes was easy and inexpensive. Before this new recording system, all the performers had to play close together into a cone, which then recorded the sound using a needle and wax disk. The disk then had to go through a long process before vinyl records could be made from it. Tape was not only clearer and easier to use, but it also allowed songs to be longer.
The greatest advancement in tape technology came with the multi-track recorder. A few years after tape recorders were introduced, multi-track machines gave users the ability to capture multiple parts without all of the performers playing at the same time. This meant that the rhythm section of a band could record on one track while the lead instruments played on another, and finally, the singer could have their own track. This allowed songs to be tested and manipulated more easily.
The process of manipulating tapes and recorders cannot all be covered here, but the amazing effects can be heard from artists like The Beatles or Pink Floyd. But the sound was all real. The players had to perform well and put their soul into their work.
However, analog is only one side of the aforementioned dichotomy, leaving digital recording as the other option. Unlike analog recording processes, digital recording arrived well into the life of tape. Around the late 70s, Sony released the first digital recorder onto the market. At this time, the main difference between analog and digital recording processes was how the music was stored.
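The core of that storage difference can be sketched in a few lines: a digital recorder measures the incoming sound wave thousands of times per second and stores each measurement as a number. The snippet below is a minimal, hypothetical illustration of that sampling idea (the 440 Hz tone and the tiny eight-sample slice are made up for the example), not the circuitry of any real recorder.

```python
import math

# CD-quality audio stores 44,100 measurements (samples) per second.
SAMPLE_RATE = 44_100

def sample_tone(freq_hz, n_samples):
    """Measure a pure sine tone n_samples times, returning the list of
    numbers a digital recorder would store in place of the analog wave."""
    return [math.sin(2 * math.pi * freq_hz * n / SAMPLE_RATE)
            for n in range(n_samples)]

# Eight stored samples of a 440 Hz tone (the note A above middle C):
samples = sample_tone(440, 8)
print(len(samples))  # 8 numbers stored instead of a groove or magnetic pattern
```

Once music is a list of numbers rather than a groove on a disk or a magnetic pattern on tape, a computer can copy, rearrange, or rewrite it freely, which is exactly what makes the editing software discussed next possible.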
This process of storing music digitally was one step closer to digital editing software. While tapes can be layered, slowed down, played in reverse or cut, there was not much else that could be done to edit taped recordings other than mixing. Mixing can add and subtract elements like bass and treble as well as control volume. However, if musicians want to edit more than just the basics, digital editing software is capable of doing a lot more. With endless manipulation possibilities, it’s hard to imagine some music without these effects.
Digital Audio Workstations were conceived in the late 1970s. However, the mass-market development of DAWs did not occur until the late 1980s, leading to Pro Tools. These programs replaced the large, expensive mixing desks that are common in recording studios, condensing all those functions into a computer. And that’s only scratching the surface of what DAWs could eventually do.
Within DAWs, you can now record virtual instruments from a library within the app. This means any user of that specific DAW can use the exact same sounding instrument as someone else. Not only that, but features such as quantization allow the editor to take any recorded music and shift the timing so that it is perfectly on the beat. This is in total contrast to live musicians who will inevitably play out of time and never be perfect. But often, good music isn’t good because it’s perfect.
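The quantization described above boils down to a simple rule: take each recorded note's start time and snap it to the nearest line of a timing grid. The sketch below is a hypothetical, simplified version of that idea (the drum-hit times and the grid spacing are invented for illustration); real DAWs add options like partial-strength quantization, but the snapping step looks like this.

```python
def quantize(times, grid=0.25):
    """Snap each note start time (in beats) to the nearest multiple
    of `grid` (0.25 = a sixteenth-note grid in 4/4 time)."""
    return [round(t / grid) * grid for t in times]

# A drummer's hits, each slightly ahead of or behind the beat:
recorded = [0.02, 0.98, 2.07, 3.01]
print(quantize(recorded))  # -> [0.0, 1.0, 2.0, 3.0]
```

After quantization, every hit lands exactly on the grid, which is precisely the machine-perfect timing no live drummer produces.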
Self-expression is incorporated into every piece of art—especially music. When editing music, it is unfortunate to see processes that strip away this self-expression. For example, a band can record a song with lyrics, guitar riffs, bass lines and drums. If the drummer’s timing is a quarter second off the beat in a particular part of the song, editing software can take that recording and correct it. The same goes for guitar. Maybe the player bent a note a little too much. In that case, a simple lowering of that “incorrect” note makes it flawless.
Perfection means different things to different people. Some may agree to take that pesky note down a notch. To others, perfection is hearing unique art by their favorite artist. They can hear the self-expression and soul behind the playing. Sometimes that means what they are also hearing are mistakes.
A Google search of “mistakes in popular songs” will bring up a pretty long list of songs. For example, in “Hey Jude” by The Beatles, John Lennon famously swears after messing up the lyrics. Because the song was recorded on tape, that mistake could not be removed without scrapping an otherwise perfect take and starting over. This imperfection is what makes humans human, and it’s why perfection should not always be sought after in music.
There is a moral dilemma when it comes to editing a piece of music. It feels like retouching the Mona Lisa or smoothing out a Van Gogh painting. The little imperfections of music make up what music is. The more music is made perfect, the less authentic it becomes—and the more it becomes a demonstration of technology.
Harrison Nelson is a fourth year undergraduate student with a major in professional and public writing and a minor in entrepreneurship and innovation. He has been playing guitar for twelve years and enjoys classic cars.