With tens of thousands of new songs emerging daily, streaming services and radio stations face a colossal challenge: deciding which tunes are destined for the hit charts.
They’ve employed a variety of tactics to make these predictions, but reliable results have remained elusive. Whether by enlisting human reviewers or by deploying artificial intelligence, they average only about a 50% hit rate on new songs. A new technique, however, could change that.
A team of US researchers has utilized an advanced machine learning technique, coupled with brain responses, to identify potential hit songs with an astonishing 97% accuracy.
This breakthrough approach to forecasting music trends relies on ‘neuroforecasting,’ a methodology that uses brain activity data to anticipate widespread trends.
“By applying machine learning to neurophysiologic data, we could almost perfectly identify hit songs,” shared Paul Zak, a professor at Claremont Graduate University and senior author of the study, which was published in Frontiers in Artificial Intelligence.
He continued, “That the neural activity of 33 people can predict if millions of others listened to new songs is quite amazing. Nothing close to this accuracy has ever been shown before.”
The study entailed outfitting participants with off-the-shelf sensors and having them listen to a set of 24 new songs. Researchers gathered data on the participants’ stated preferences and demographic information, and, most importantly, recorded their neurophysiological responses to the music.
“The brain signals we’ve collected reflect activity of a brain network associated with mood and energy levels,” Zak explained.
This insight was pivotal in predicting how successful a song might be in the market, and it even allowed researchers to estimate the number of streams a song might accumulate.
Once the data collection phase was complete, the team applied different statistical methods to evaluate the predictive accuracy of the neurophysiological variables, which allowed a direct comparison of the models. To enhance predictive accuracy further, they trained a machine learning model, testing various algorithms to find the one that best predicted which new songs would become hits.
The findings were compelling. A linear statistical model pinpointed hit songs with a 69% success rate, but when machine learning was applied to the same data, the success rate soared to 97%. Even when applied to neural responses from just the first minute of a new song, accuracy remained high at 82%.
“This means that streaming services can readily identify new songs that are likely to be hits for people’s playlists more efficiently, making the streaming services’ jobs easier and delighting listeners,” Zak said.
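The study’s exact algorithms and data are not described here, but the gap between a linear model and a nonlinear machine learning model trained on the same data can be sketched with synthetic stand-ins for neural signals (all variable names, values, and model choices below are illustrative assumptions, not details from the study):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical stand-ins for two neurophysiologic signals per listener
X = rng.uniform(-1, 1, size=(600, 2))
# "Hit" depends on an interaction of the two signals, a pattern a
# linear decision boundary cannot capture
y = ((X[:, 0] * X[:, 1]) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

linear = LogisticRegression().fit(X_tr, y_tr)
forest = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

print(f"linear model accuracy:  {linear.score(X_te, y_te):.2f}")
print(f"nonlinear ML accuracy:  {forest.score(X_te, y_te):.2f}")
```

On data like this, the linear model hovers near chance while the nonlinear model scores far higher, mirroring the kind of jump the researchers report when moving from a linear statistical model to machine learning.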
In envisioning the future, Zak suggested that emerging wearable neuroscience technologies could streamline entertainment options for audiences, tailoring new songs and other offerings to individual neurophysiology. This approach could reduce the overwhelming abundance of choices to a manageable two or three, enabling quicker and more satisfying decisions for listeners.
While these results are promising, the researchers caution that there were limitations to the study. For instance, the small number of new songs analyzed and a lack of diversity in study participants’ ethnicity and age groups could skew results.
Despite these limitations, Zak and his team remain optimistic about the potential applications of this technique.
“Our key contribution is the methodology. It is likely that this approach can be used to predict hits for many other kinds of entertainment too, including movies and TV shows,” Zak concluded.
In essence, this groundbreaking ‘neuroforecasting’ technique could redefine the entertainment industry’s approach to identifying the next big hit.
Machine learning is a subset of artificial intelligence (AI) that focuses on giving machines the ability to learn from and make decisions or predictions based on data. It allows computers to improve their performance over time without being explicitly programmed to do so.
There are three main types of machine learning:
Supervised learning: In this approach, a machine learns the relationship between given inputs (features) and outputs (labels or targets). For instance, a simple supervised learning task could be predicting the price of a house given features like the number of rooms, the location, and the age of the house. During the training phase, the model learns from a labeled dataset, which is a dataset that has both input data and the corresponding correct output.
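The house-price example above can be sketched with scikit-learn’s LinearRegression; the features and prices below are made-up numbers chosen only to illustrate the labeled input-to-output mapping:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy labeled dataset: [rooms, age in years] -> price (illustrative only)
X = np.array([[2, 30], [3, 10], [3, 25], [4, 5], [5, 15], [4, 40]])
y = np.array([150_000, 230_000, 200_000, 310_000, 340_000, 240_000])

# Training: the model learns the relationship between features and labels
model = LinearRegression().fit(X, y)

# Predict the price of an unseen 4-room, 20-year-old house
predicted = model.predict(np.array([[4, 20]]))[0]
print(f"predicted price: ${predicted:,.0f}")
```

The model never sees the new house during training; it generalizes from the labeled examples, which is the defining trait of supervised learning.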
Unsupervised learning: Unlike supervised learning, unsupervised learning algorithms operate on datasets without labels or targets. The model learns the underlying patterns or structures in the data without any prior knowledge of what these might look like. Common tasks for unsupervised learning include clustering (grouping similar data points together) and dimensionality reduction (reducing the number of variables in a dataset while maintaining its structure).
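Clustering, the first task mentioned, can be sketched with scikit-learn’s KMeans on synthetic unlabeled data; the two “blobs” below stand in for any natural grouping the algorithm must discover on its own:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Unlabeled data: two well-separated groups of points
blob_a = rng.normal(loc=[0, 0], scale=0.5, size=(50, 2))
blob_b = rng.normal(loc=[5, 5], scale=0.5, size=(50, 2))
X = np.vstack([blob_a, blob_b])

# No labels are given; KMeans infers the grouping from structure alone
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_[:5], kmeans.labels_[-5:])
```

Points from the same blob receive the same cluster label even though the algorithm was never told which group any point belongs to.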
Reinforcement learning: In this type of machine learning, an agent learns how to behave in an environment by performing certain actions and receiving rewards or penalties in return. The agent’s goal is to learn a policy, which is a strategy for choosing actions that maximize its total future reward.
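A minimal reinforcement learning sketch is a multi-armed bandit: the environment offers three actions with hidden reward probabilities (the numbers below are invented), and an epsilon-greedy agent learns which action to prefer purely from the rewards it receives:

```python
import random

random.seed(0)

# Hidden reward probability of each action (unknown to the agent)
true_reward_prob = [0.2, 0.5, 0.8]
estimates = [0.0, 0.0, 0.0]  # agent's learned value of each action
counts = [0, 0, 0]
epsilon = 0.1                # fraction of steps spent exploring

for step in range(2000):
    if random.random() < epsilon:
        action = random.randrange(3)               # explore a random action
    else:
        action = estimates.index(max(estimates))   # exploit best-known action
    reward = 1 if random.random() < true_reward_prob[action] else 0
    counts[action] += 1
    # Update the running average reward for the chosen action
    estimates[action] += (reward - estimates[action]) / counts[action]

print("learned values:", [round(v, 2) for v in estimates])
print("preferred action:", estimates.index(max(estimates)))
```

After enough trial and error, the agent’s policy settles on the action with the highest hidden reward probability, without ever being told the probabilities.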
Apart from these, there are other types of machine learning methods like semi-supervised learning (a combination of supervised and unsupervised learning) and active learning (where the model can request labels for certain instances to improve its performance).
Machine learning has a wide range of applications including, but not limited to, image and speech recognition, medical diagnosis, stock market trading, natural language processing, and self-driving cars.
A feature is an individual measurable property or characteristic of a phenomenon being observed.
A machine learning algorithm is a set of rules and statistical techniques used to learn patterns from data and draw significant information from it.
A model is the output of a machine learning algorithm trained on a dataset. It represents what was learned by a machine learning algorithm.
Training is the process of adjusting a model’s parameters to improve its predictions by minimizing the difference between the predicted and the actual output (this difference is called the error or the loss).
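This adjust-to-minimize-loss loop can be sketched with plain gradient descent on a one-parameter linear model; the data and learning rate below are illustrative:

```python
import numpy as np

# Data generated from y = 3x; training should recover w close to 3
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 3.0 * x

w = 0.0    # the model's single parameter, starts untrained
lr = 0.01  # learning rate: how big each adjustment step is

for epoch in range(200):
    pred = w * x
    error = pred - y                # difference between prediction and truth
    loss = (error ** 2).mean()      # mean squared error (the loss)
    grad = 2 * (error * x).mean()   # gradient of the loss w.r.t. w
    w -= lr * grad                  # adjust the parameter to reduce the loss

print(f"learned w = {w:.3f}, final loss = {loss:.6f}")
```

Each pass nudges the parameter in the direction that shrinks the loss, which is all “training” means at this level of abstraction.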
Overfitting happens when a model learns the detail and noise in the training data to the extent that it negatively impacts the performance of the model on new data. Underfitting occurs when a model is too simple to learn the underlying structure of the data.
Machine learning is an exciting and rapidly evolving field. However, it does come with challenges and considerations, particularly in the areas of data privacy, security, and ethics. For instance, bias in training data can lead to unfair or discriminatory predictions.