Reddit will now use AI models to combat harassment (APK Teardown)


long story short

  • A teardown by Android Authority reveals that Reddit will use artificial intelligence models to detect harassment.
  • The model was trained on content that had previously been flagged as violating Reddit’s terms.

Over the past year or so, we’ve seen large language models (LLMs) used for a variety of functions, from text/image generation to virtual assistants and more. Now, with Reddit, it looks like we can add one more use case to the list.

APK teardowns help predict features that may appear on a service in the future, based on work-in-progress code. However, such predicted features may never make it to a public release.

A teardown of version 2024.10.0 of the Reddit app for Android revealed that Reddit is now using an LLM to detect harassment on the platform. You can view the relevant strings below.

code

<string name="hcf_answer_how_model_trained">The harassment model is an large language model (LLM) that is trained on content that our enforcement teams have found to be violating. Moderator actions are also an input in how the model is trained.</string>
<string name="hcf_faq_how_model_trained">How is the harassment model trained?</string>

Reddit also updated its support page a week ago to mention using artificial intelligence models as part of its harassment filter.

“This filter is powered by a large language model (LLM) trained on moderation actions and content removed by Reddit’s internal tools and enforcement teams,” reads an excerpt from the page.
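Reddit hasn't published any implementation details beyond the lines above, but the described setup — a classifier trained on past enforcement and moderator actions, used as a filter — usually means scoring each comment and routing anything above a confidence threshold to a review queue rather than removing it outright. The sketch below is purely illustrative: the function names, the keyword heuristic standing in for the real LLM, and the threshold value are all assumptions, not Reddit's code.

```python
def harassment_score(text: str) -> float:
    """Stand-in for the real LLM classifier: returns an estimated
    probability that the text is harassment. Here we fake it with a
    tiny keyword heuristic purely for demonstration."""
    hostile = {"idiot", "loser", "shut up"}  # hypothetical examples
    hits = sum(1 for phrase in hostile if phrase in text.lower())
    return min(1.0, hits / 2)

def filter_comment(text: str, threshold: float = 0.5) -> str:
    """Route a comment the way a moderation filter typically would:
    hold high-scoring content for human review instead of deleting it."""
    if harassment_score(text) >= threshold:
        return "needs_review"
    return "published"

print(filter_comment("Great write-up, thanks!"))  # published
print(filter_comment("shut up, you idiot"))       # needs_review
```

In a real deployment, moderator decisions on the held items would feed back into training, which matches the string's note that "moderator actions are also an input in how the model is trained."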

Regardless, moderators appear to have another tool to combat objectionable content on Reddit. But will it actually flag content accurately? We'll just have to wait and see.
