From digitally enhancing beats to creating completely new songs, machine learning is well and truly transforming the music industry. A lot of artists these days are using ML to enhance their songs and add elements to their albums that were unthinkable before.
Researchers at the University of Michigan are also using machine learning to leave their own imprint on the digital era of music. They are changing the way we understand, create and interact with music.
Four research teams that apply machine learning and deep learning tools and techniques to the study of music theory, performance, social media-based music making, and the connection between words and music will receive expert support and funding under the Data Science for Music Challenge Initiative through the Michigan Institute for Data Science (MIDAS).
The major focus of these projects will be using ML techniques to automate the musical accompaniment of text and to enable data-driven analysis of music performance. Each project will receive $75,000 over one year. Below are the projects:
The researchers who are selected for this project will be tasked with developing a platform for crowdsourced music making and performance. They will need to use data mining techniques to discover patterns in audience engagement.
This is probably the most fascinating project of the lot. Researchers will attempt to develop and analyse digitized performances of Bach’s Trio Sonatas. They will be required to produce algorithms that study the music’s structure from the perspective of data science. The end goal is to understand what makes performances artistically excellent, as well as to identify the common mistakes performers make.
The aim of this project is to develop a data science framework that connects music with language. The researchers will need to develop tools that produce musical interpretations of texts, based on their emotion and content. As the name suggests, the end goal is to create a tool that can transform any piece of text into music.
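To make the idea concrete, here is a deliberately minimal sketch of how a text-to-music pipeline might work, assuming a toy hand-made sentiment lexicon and a simple word-to-note mapping (the actual project will use far richer NLP and music-generation models; all names and mappings below are illustrative assumptions):

```python
# Toy text-to-music sketch: score the emotional tone of a text with a
# tiny hand-made lexicon, then render each word as a note drawn from a
# major scale (positive tone) or a natural-minor scale (negative tone).

POSITIVE = {"love", "joy", "bright", "hope", "happy"}
NEGATIVE = {"dark", "loss", "sad", "cold", "alone"}

MAJOR = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets from the tonic
MINOR = [0, 2, 3, 5, 7, 8, 10]

def text_to_melody(text, tonic=60):
    """Return a list of MIDI note numbers, one per word of `text`."""
    words = text.lower().split()
    # Net emotional tone: +1 per positive word, -1 per negative word.
    tone = sum((w in POSITIVE) - (w in NEGATIVE) for w in words)
    scale = MAJOR if tone >= 0 else MINOR
    # Map each word's length onto a scale degree -- a crude stand-in
    # for real content analysis.
    return [tonic + scale[len(w) % len(scale)] for w in words]

print(text_to_melody("love and joy"))   # major-scale notes
print(text_to_melody("dark cold loss")) # minor-scale notes
```

The point of the sketch is the shape of the problem, not the method: some model maps text to an emotional state, and that state conditions which musical material gets generated.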
This project aims to combine computational analysis and music theory. This will be done to compare music across six cultures, including Indian songs, in order to identify common ground in how music is generated and structured in different cultures.
You can read more about the MIDAS challenge here.
This goes to show how far machine learning has penetrated the music industry, and how far it still has to go. These projects are just the tip of the iceberg, with the potential to start a revolution. If they succeed, they will broaden and deepen the horizons of the digital music world.
The results can be applied to other interactive settings as well, including developing new educational tools. What are the use cases you can think of for these projects? Let us know in the comments section below!
Suggested use case: record text and poetry readings with music generated from the text.