Instagram is expanding its security measures by developing artificial intelligence capable of detecting bullying in videos and comments. The company's new A.I. will scan users' posts, live videos, and comments, flagging potentially abusive content and forwarding it to human moderators for review. These moderators will then determine whether the content is hateful and should be removed from the site.
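The flow described above, in which an automated model scores content and only the items it flags reach human moderators, can be sketched roughly like this. Everything here is hypothetical for illustration (class names, the threshold value, the scores); Instagram has not published details of its actual system:

```python
# Illustrative sketch of a flag-and-review pipeline (NOT Instagram's real system):
# a classifier score decides whether a comment is queued for human moderation.
from dataclasses import dataclass, field
from typing import List

REVIEW_THRESHOLD = 0.7  # hypothetical cutoff for sending content to humans


@dataclass
class ModerationQueue:
    pending: List[str] = field(default_factory=list)

    def submit(self, text: str, score: float) -> bool:
        """Queue text for human review when the model's score exceeds the threshold.

        Returns True if the item was flagged for review, False otherwise.
        """
        if score >= REVIEW_THRESHOLD:
            self.pending.append(text)
            return True   # held for human moderators
        return False      # published without review


queue = ModerationQueue()
queue.submit("you're the worst", 0.92)  # flagged for review
queue.submit("nice photo!", 0.03)       # passes straight through
print(len(queue.pending))               # → 1
```

The key design point is that the model never removes content on its own; it only narrows the stream that human reviewers must look at, which is how the article describes the division of labor.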
The Verge reports that the new security feature will roll out in October, just in time for National Bullying Prevention Month. This marks Instagram head Adam Mosseri's first product announcement since the company's former leaders, Kevin Systrom and Mike Krieger, abruptly left last month.
Instagram is one of the few social media sites that has kept its reputation largely untarnished, even though it's owned by Facebook. Using A.I. to filter out bots and abusive comments can only help the company in the long run, especially when other social media sites are plagued by negativity and harassment from online strangers.
While A.I. is notoriously poor at telling the difference between a genuinely hateful comment and human subtext like sarcasm or in-jokes, Instagram promises to develop the technology into something that works and improves the app's overall user experience.
In the coming months, Instagram also plans to launch new filters, including a "kindness" camera effect that aims to spread positive messages and boost user engagement. The filter makes hearts appear over your face and overlays messages of kindness in a variety of languages. It sounds a little cheesy but is ultimately harmless, and it's sweet that Instagram is trying to do something nice instead of just offering up a fresh batch of dog filters.