Could artificial intelligence solve cyber bullying?

The AFL has revealed that it's trialling artificial intelligence as a way to protect players from online abuse. This is a welcome move given that a recent study from the Australian Institute of Criminology found that more than a quarter of young people had been victims of cyber abuse in the past six months alone.

Using AI to mitigate harmful messages will have a positive impact, but a concern that keeps coming up in my interviews is that people fear a loss of freedom. They want the opportunity to use social media to give feedback and express their thoughts, which may not always be positive and supportive. They worry that critical feedback will be culled, leaving only sanitised comments online. And it's a good point, too.

Content moderation is about striking a balance between the freedom to share ideas and ensuring that online platforms are positive and empowering places. Getting this right will be the difficult part of using AI in this way.

It also depends on the standards we use to decide which posts should be culled. What one football club deems acceptable may differ from what a workplace using AI to stop staff cyberbullying might.

The core issue is what we see as acceptable content and what we don't. Where is that fine line?