Spotify Is Making It Easier to Find Misinformation on Its Podcasts

Mitchell Clark writing for The Verge:

Spotify is acquiring Kinzen, a startup that specializes in using machine learning to analyze content and report potentially harmful statements to human moderators. In a press release, Spotify says the acquisition is meant to help it deliver “a safe, enjoyable experience on our platform around the world.”

Spotify has already been working with Kinzen, claiming that it’s been partnered with the company since 2020 and that the startup’s tech has been “critical to enhancing our approach to platform safety.” According to Kinzen’s site, its tech is capable of analyzing audio content in several languages, and it uses data from the internet and human experts to figure out if certain claims are harmful. (It even claims to be able to spot dog whistles, seemingly innocuous phrases that actually refer to something with a darker meaning.)

It’s interesting that software exists that not only spots misinformation but also finds dog whistles (a term I didn’t know until today). I also can’t help but think Kinzen will be adding to its database of misinformation forever just to keep it as up to date as it can be.

According to its website, Kinzen uses “a blend of human expertise and machine learning to provide early-warning of the spread of harmful content in multiple languages.”
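To make that “human plus machine” idea concrete, here’s a rough sketch in Python of how a pipeline like that might be wired up. To be clear, this is purely my illustration: the phrase list, the toy scoring function, and the threshold are made-up placeholders, not anything from Kinzen’s actual system.

```python
from dataclasses import dataclass

@dataclass
class Flag:
    claim: str
    score: float                # toy estimate that the claim is harmful, 0.0-1.0
    matched_phrase: str | None  # known dog-whistle phrase, if one was found

# Hypothetical phrase list that human experts would curate and keep updated.
DOG_WHISTLES = {"globalist agenda", "pure bloods"}  # placeholder entries

def score_claim(claim: str) -> float:
    """Stand-in for a trained classifier; a real system would use an ML model."""
    return 0.9 if any(p in claim.lower() for p in DOG_WHISTLES) else 0.1

def review_transcript(sentences: list[str], threshold: float = 0.5) -> list[Flag]:
    """Flag sentences for human moderators rather than removing them outright."""
    flags: list[Flag] = []
    for sentence in sentences:
        matched = next((p for p in DOG_WHISTLES if p in sentence.lower()), None)
        score = score_claim(sentence)
        if score >= threshold:
            flags.append(Flag(sentence, score, matched))
    return flags

transcript = [
    "Welcome back to the show.",
    "They say it's all part of the globalist agenda.",
]
for flag in review_transcript(transcript):
    print(f"Send to a human moderator: {flag.claim!r} (score={flag.score})")
```

The design choice worth noticing, and the one Kinzen seems to have made too, is that the machine only flags; per The Verge’s description, a human moderator makes the final call.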

My issue here is that I’m not sure how effective flagging, or even removing, misinformation on Spotify will actually be. One of the platform’s biggest sources is Joe Rogan, who has come under fire for multiple instances of misinformation and dog whistling. Since all he got was a virtual slap on the wrist for his antics before, I doubt Spotify will turn up the heat on its cash cow.

Twitter and Facebook have added misinformation notices since Covid-19 began, but I’m honestly not sure how well they have deterred people from believing the lies and deceit they see on their timelines. In fact, I think the labels may have caused anti-vaxxers and QAnon followers to flock to the flagged information.

The option of doing nothing, which is what Substack does, has become the go-to example of what not to do as a platform. While that hands-off approach has made Substack millions, it has also created the ongoing problem of harmful information being shared as fact.

There isn’t an easy answer for dealing with misinformation on media platforms, but I am interested to see what comes of this acquisition (if anything).