# toxicity-classification
Here are 59 public repositories matching this topic.
Trained models & code to predict toxic comments on all 3 Jigsaw Toxic Comment Challenges. Built using ⚡ Pytorch Lightning and 🤗 Transformers. For access to our API, please email us at contact@unitary.ai.
Updated Jul 29, 2025 · Python
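Assuming the entry above refers to the published detoxify package, a minimal usage sketch might look like the following; the `"original"` model name and the `"toxicity"` score key follow the package's documented interface, but exact label keys can vary between model variants.

```python
# A minimal sketch, assuming the entry above is the published detoxify package
# (pip install detoxify). Exact score keys can differ between model variants.
from detoxify import Detoxify

# Load the model trained on the original Jigsaw Toxic Comment Classification data.
model = Detoxify("original")

# predict() accepts a string or a list of strings and returns a dict mapping
# each toxicity label to a score (or a list of scores, one per input).
scores = model.predict(["have a lovely day", "you are an idiot"])
print(scores["toxicity"])
```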
A web app to identify toxic comments in a YouTube channel and delete them.
Updated Dec 24, 2023 · Jupyter Notebook
This repository contains the code for the paper: "DeToxy: A Large-Scale Multimodal Dataset for Toxicity Classification in Spoken Utterances"
Updated Oct 13, 2022 · Jupyter Notebook
AntiToxicBot is a bot that detects toxic users in a chat using data science and machine learning. The bot warns admins about toxic users, and admins can also allow the bot to ban them.
Updated Apr 10, 2024 · Jupyter Notebook
An AI-powered platform to help you resolve doubts instantly, make learning easier, and achieve academic success.
Updated Nov 1, 2024 · TypeScript
A supervised-learning-based tool to identify toxic code review comments.
Updated Sep 15, 2025 · Python
NLP deep learning model for multilingual toxicity detection in text 📚
Updated Aug 10, 2020 · Jupyter Notebook
A module for predicting toxic messages in Russian and English.
Updated Jun 6, 2023 · Python
Toxformer is an attempt to use transformers to predict the toxicity of molecules from their molecular structure, based on the T3DB database.
Updated Dec 11, 2024 · Jupyter Notebook
Offensive Language Identification Dataset for Brazilian Portuguese.
Updated Mar 13, 2023 · Jupyter Notebook
Classifying users on social media, using text embeddings from OpenAI and others
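A hedged sketch of the approach the entry above describes: embed each user's text with the OpenAI embeddings API and fit an ordinary classifier on top. The embedding model name and the toy labels are illustrative assumptions, not the repository's actual pipeline.

```python
# A hedged sketch of the described approach: OpenAI text embeddings plus a
# plain logistic-regression classifier. The toy labels and the embedding model
# name are illustrative assumptions, not the repository's actual setup.
from openai import OpenAI
from sklearn.linear_model import LogisticRegression

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def embed(texts):
    # Batch-embed a list of strings; returns one vector per input text.
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [item.embedding for item in resp.data]


# Hypothetical training data: 1 = toxic user text, 0 = non-toxic.
train_texts = ["I hate you", "have a great day", "you are useless", "thanks for the help"]
train_labels = [1, 0, 1, 0]

clf = LogisticRegression(max_iter=1000)
clf.fit(embed(train_texts), train_labels)

# Probability that a new piece of text is toxic.
print(clf.predict_proba(embed(["go away, nobody likes you"]))[0][1])
```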
A simple multi-language HTTP server for text toxicity detection.
Updated Jul 10, 2025 · Python
An AI to Scan for Toxic Tweets
Updated Aug 18, 2024 · Python
Fast text toxicity classification model
Updated May 31, 2024 · Python
Using Language Models to Identify and Classify Toxicity Inside In-Game Chat
Updated Jun 27, 2023 · Python
A trained deep learning model to predict different levels of toxicity in comments, such as threats, obscenity, insults, and identity-based hate.
Updated Oct 7, 2022 · Jupyter Notebook
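The multi-label setup this kind of model targets (one binary output per toxicity type) can be sketched with a simple scikit-learn baseline; the tiny dataset and the reduced label set below are illustrative assumptions, not the repository's training data or architecture.

```python
# A minimal multi-label baseline for the setup described above: one binary
# output per toxicity type. The tiny dataset and the reduced label set are
# illustrative assumptions, not the repository's data or model.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline

labels = ["toxic", "threat", "insult"]

texts = [
    "I will hurt you",          # toxic + threat
    "what a lovely afternoon",  # clean
    "you absolute fool",        # toxic + insult
    "have a nice weekend",      # clean
]
# One row per text, one column per label (multi-label targets).
y = np.array([
    [1, 1, 0],
    [0, 0, 0],
    [1, 0, 1],
    [0, 0, 0],
])

# One-vs-rest logistic regression over TF-IDF features: each label gets its own
# binary classifier, so a single comment can carry several labels at once.
clf = make_pipeline(TfidfVectorizer(), OneVsRestClassifier(LogisticRegression()))
clf.fit(texts, y)

print(dict(zip(labels, clf.predict(["you fool, I will hurt you"])[0])))
```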
Gold Standard for Toxicity and Incivility Project
A simple proof-of-concept hate speech (toxic comment) detector API server; a core dependency of nostr-filter-relay.
Updated Jul 8, 2024 · Jupyter Notebook