
Spotify promotes online safety for at-risk young audience


Spotify has joined the Tech Coalition, an industry group that works to promote and preserve online safety. The audio streaming company says that creating a safer online environment for young people is its top priority.

Spotify will help connect vulnerable users to mental health resources

Marcelina Slota, Spotify’s Head of Platform Integrity, has stated that the company remains committed to creating a safe online environment for its listeners. The streaming service added that it intends to make it easier for its young audience and their parents to understand the digital world.

Spotify said it will put processes in place to help users navigate the digital world safely. Joining the Tech Coalition is part of that commitment.

Beyond its work with the Tech Coalition, Spotify has indicated it will monitor content consumption patterns on its platform to spot potential issues. The company will help connect vulnerable users with appropriate mental health resources.

Specifically, Spotify will watch for users who repeatedly search for content related to suicide, self-harm, and eating disorders. When it spots such a pattern, the platform will suggest that the user reach out to mental health organizations, including the US National Suicide Prevention Lifeline, The Trevor Project, and National Eating Disorders Services, among others.
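Spotify has not described how this pattern detection works. The sketch below is a minimal, hypothetical illustration of one way such a check could be built: counting a user's searches that match sensitive topics inside a rolling time window and surfacing support resources once a threshold is crossed. The keyword list, window length, threshold, and function names are all invented for illustration.

```python
import time
from collections import defaultdict, deque

# Hypothetical values -- Spotify's actual criteria are not public.
SENSITIVE_KEYWORDS = ("suicide", "self-harm", "eating disorder")
RESOURCES = [
    "US National Suicide Prevention Lifeline",
    "The Trevor Project",
    "National Eating Disorders Services",
]
WINDOW_SECONDS = 7 * 24 * 3600   # look back one week (assumed)
THRESHOLD = 3                    # sensitive searches before intervening (assumed)

# user_id -> timestamps of that user's sensitive searches
_search_log: dict[str, deque] = defaultdict(deque)

def record_search(user_id: str, query: str, now: float | None = None) -> list[str]:
    """Return support resources to surface if a pattern is detected, else []."""
    now = time.time() if now is None else now
    if any(keyword in query.lower() for keyword in SENSITIVE_KEYWORDS):
        log = _search_log[user_id]
        log.append(now)
        # Drop timestamps that have aged out of the rolling window.
        while log and now - log[0] > WINDOW_SECONDS:
            log.popleft()
        if len(log) >= THRESHOLD:
            return RESOURCES
    return []
```

For example, a third call such as `record_search("user-1", "songs about self-harm")` within the window would return the resource list, which the client could then display alongside or instead of ordinary search results.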

Spotify to add staff and deploy machine learning for online safety

Spotify has previously made clear that it has a zero-tolerance policy toward exploitative content. The company already bans illegal and abusive behavior, particularly behavior targeting young people.

Moving forward, Spotify says it will add dedicated teams to review and promptly remove potentially violating or explicit content. Additionally, the company is deploying machine learning (ML) to spot signs of potential trouble quickly.

The company may use ML algorithms to hunt for patterns in user reports, detecting policy violations and potentially illegal content. Typically, automated systems monitor, detect, and flag a user or piece of content; human reviewers then examine the flagged material and take the necessary action, such as censoring the content, issuing a warning, or blocking the user.
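The flag-then-review flow described above is a common moderation architecture. The following is a minimal sketch of that flow under assumed names and thresholds; Spotify's actual pipeline is not public, and the `classify` function here is a keyword-matching placeholder standing in for a trained model.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    NONE = auto()
    CENSOR = auto()
    WARN_USER = auto()
    BLOCK_USER = auto()

@dataclass
class Flag:
    content_id: str
    score: float   # model's estimate that the content violates policy
    reason: str

REVIEW_THRESHOLD = 0.7  # assumed: scores above this go to human reviewers

def classify(content_id: str, text: str) -> Flag:
    """Stand-in for an ML classifier scoring content against policy."""
    # A real system would run a trained model here; this is a placeholder.
    score = 0.9 if "banned-term" in text else 0.1
    return Flag(content_id, score, "keyword match (placeholder model)")

def triage(flag: Flag, review_queue: list[Flag]) -> None:
    """Automated step: enqueue only likely violations for human review."""
    if flag.score >= REVIEW_THRESHOLD:
        review_queue.append(flag)

def human_review(flag: Flag, is_violation: bool, repeat_offender: bool) -> Action:
    """Final decision rests with a human reviewer, as described above."""
    if not is_violation:
        return Action.NONE
    return Action.BLOCK_USER if repeat_offender else Action.CENSOR
```

The key design point this illustrates is the division of labor: automation filters the volume of reports down to likely violations, while consequential actions against content or accounts remain human decisions.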