How AI can fight discrimination online - Episode 354 - by cryptohunt.it
Welcome to the Cryptohunt Jam, where you learn – in just a minute or two a day – what is happening in crypto and other game-changing ideas. As always: In plain English.
Yesterday, we talked about how AI has put schools and universities in full panic mode, since artificial intelligence is being used to write essays and take exams…in place of the student who is supposed to be tested on what they've learned.
While doing someone's homework is not a great use of AI…what about a positive use for this groundbreaking technology?
One of our picks: Content moderation.
Rather than having a team of people reading through pages of flagged posts, what if a platform deployed an AI to analyze the language used in those posts?
In theory, this could be an insanely good use of the technology. Why?
AI models, like GPT-3, are master analyzers of language. They have to be. Otherwise, they wouldn't be able to produce those fantastic responses to the simplest of questions.
If someone is harassing another user with blatantly discriminatory language…an AI can be trained to remove the post and (if needed) the person who posted it.
Here's the catch. AI isn't great at moderating yet. It's full of unfair biases and mistakes that could get the wrong person kicked off a social media platform.
We're not ones to shy away from a civilized, well-reasoned debate, and we'd hate to see someone who is respectfully voicing their opinion get unfairly locked out of their profile.
That said, with more training, you could see fewer of those toxic arguments filling your timeline, thanks to the hard work of an AI.
We'll see you back here tomorrow. As always, at 11am CET, 2am PST. Thanks for listening!
This podcast is produced by Cryptohunt.it, the easiest place to learn all about Web3.
--- Send in a voice message: https://podcasters.spotify.com/pod/show/cryptohunt/message