This repository contains a functional AIMMS example model for Bias in AI. It demonstrates how to connect AIMMS to a Python machine learning service and expose potential bias in AI-driven toxicity classification.
The combination of machine learning and everyday applications is at the heart of modern tech advancements. But beneath this brilliance lies a complex issue: bias within these algorithms.
This example illustrates bias by creating an AIMMS front-end to an existing Python application based on Kaggle's AI Ethics course. The application:
- Accepts a user-entered comment and evaluates its toxicity.
- Reads in a training dataset of comments with toxicity labels.
- Passes both the training data and the new comment to a Python service.
- Returns whether the comment is considered toxic — revealing that the underlying model may treat identical concepts differently depending on the demographic group referenced.
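The flow above hinges on AIMMS handing both the training data and the new comment to a Python service and getting a toxicity verdict back. The sketch below is only an illustration of that interface, not the actual service from the article: the route, port, JSON field names, and the placeholder `is_toxic` logic are all assumptions.

```python
# Minimal sketch of the kind of HTTP endpoint an AIMMS model could POST to.
# The payload shape ({"comment": ..., "training_data": [...]}) and the
# keyword-lookup "classifier" are illustrative assumptions only; the real
# service described in the article runs a trained ML model instead.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def is_toxic(comment, training_data):
    # Placeholder logic: flag the comment if it shares any word with a
    # comment that was labeled toxic in the training data.
    toxic_words = {
        word
        for row in training_data if row["toxic"]
        for word in row["text"].lower().split()
    }
    return any(word in toxic_words for word in comment.lower().split())

class ToxicityHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        body = json.dumps(
            {"toxic": is_toxic(payload["comment"], payload["training_data"])}
        ).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the console quiet

# To run the service (this call blocks):
# HTTPServer(("localhost", 8000), ToxicityHandler).serve_forever()
```

On the AIMMS side, the model would issue an HTTP POST with the comment and training rows as JSON and read the `toxic` flag from the response.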
Note: Bias can be observed in practice by entering words like `black` (marked toxic) versus `white` (marked not toxic).
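The effect described in the note can be reproduced with a deliberately simple stand-in for the real classifier. The sketch below is not the model from the Kaggle course: it is a toy word-frequency scorer trained on a small, hypothetical, skewed corpus, showing how a group label can become a toxicity signal purely through co-occurrence in the training data.

```python
from collections import Counter

def train_word_scores(comments, labels):
    """Count how often each word appears in toxic vs. non-toxic comments
    and return a Laplace-smoothed per-word toxicity ratio."""
    toxic, clean = Counter(), Counter()
    for text, label in zip(comments, labels):
        (toxic if label else clean).update(text.lower().split())
    vocab = set(toxic) | set(clean)
    return {w: (toxic[w] + 1) / (clean[w] + 1) for w in vocab}

def classify(comment, scores, threshold=1.0):
    """Call a comment 'toxic' if its average word score exceeds the threshold."""
    words = comment.lower().split()
    if not words:
        return False
    avg = sum(scores.get(w, 1.0) for w in words) / len(words)
    return avg > threshold

# Hypothetical training data mimicking a biased corpus: one group name
# co-occurs with toxic labels far more often than the other.
comments = [
    "i love my black friends", "black people are awful",
    "black players ruined it", "white people are great",
    "i love my white friends", "white players are great",
]
labels = [0, 1, 1, 0, 0, 0]

scores = train_word_scores(comments, labels)
print(classify("black", scores))  # True: 'black' became a toxicity signal
print(classify("white", scores))  # False
```

Neither word is toxic in itself; the skewed label distribution in the training data is what the model learns, which is exactly the kind of bias the WebUI front-end lets you probe interactively.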
This is also a relevant concern for Decision Support applications: basing decisions on data that is not representative of your market leads to poor outcomes.
To get the most out of this model, we highly recommend reading our detailed step-by-step guide on the AIMMS How-To website:
👉 Read the Full Article: Bias in AI
- AIMMS: You will need AIMMS installed to run the model. An AIMMS Community License is sufficient. Download the Free Academic Edition here if you are a student.
- Python 3.11: Required to run the Python toxicity classification service. PyCharm is recommended but not required.
- WebUI: This model is optimized for the AIMMS WebUI for a modern, browser-based experience.
- Download the Release: Go to the Releases page and download the `.zip` file from the latest version.
- Open the Project: Launch the `.aimms` file.
- Start the Python Service: Follow the article instructions to start the Python backend locally or on AIMMS Cloud.
- Explore Bias: Use the WebUI to import the dataset, enter comments, and observe the toxicity classification results.
This example is maintained by the AIMMS User Support Team.
- Found an issue? Open an issue.
- Questions? Reach out via the AIMMS Community.