The advent of ChatGPT in recent months has highlighted how powerful and influential AI technology can be, and in particular its potential to be used by students to cheat on essays or exams. The same question can be asked of Google's new music AI, MusicLM. Could music students studying composition use this technology to fake assignments? Here is how MusicLM could affect universities in the future.
What Is MusicLM?
Like ChatGPT, MusicLM is driven by text input: you type a description of the music you would like to hear, and it generates an audio clip to match. Alongside the model, Google released a companion dataset of over 5,000 music-text pairs. MusicLM offers several modes: melody conditioning, which generates a piece based on a whistled or hummed tune; story mode, which accepts a sequence of text prompts; long generation, which can produce a clip five minutes in length; and even a mode that generates music based on image captions.
How Will MusicLM Affect Students Studying Composition?
Given the ease of MusicLM's text-prompt interface, it is not inconceivable that a music composition student could use it to produce a passable piece for an assignment without composing it themselves. All it would take is copying and pasting the brief into the text box.
However, this would only work where a student is required to submit nothing but an audio file. In a curriculum that involves workshopping or editing music based on critique, the AI would be far less useful. Instead, it could perhaps serve as a classroom tool, generating a base track that students are challenged to edit, subvert, or remix in a variety of ways. This would let them develop those higher-level skills without having to build a piece from the ground up.
Still, whether its effects prove positive or negative, the impacts of MusicLM lie decidedly in the future. The technology has not yet been released to the public, with its creators citing ethical concerns that it could generate music too similar to the material it was trained on. For now, all we can do is speculate.