aimms/bias-in-ai

Bias in AI


This repository contains a functional AIMMS example model for Bias in AI. It demonstrates how to connect AIMMS to a Python machine learning service and expose potential bias in AI-driven toxicity classification.

🎯 Business Problem

Machine learning now powers many everyday applications, but beneath that success lies a difficult problem: bias within the algorithms themselves.

This example illustrates bias by creating an AIMMS front-end to an existing Python application based on Kaggle's AI Ethics course. The application:

  • Accepts a user-entered comment and evaluates its toxicity.
  • Reads in a training dataset of comments with toxicity labels.
  • Passes both the training data and the new comment to a Python service.
  • Returns whether the comment is considered toxic — revealing that the underlying model may treat identical concepts differently depending on the demographic group referenced.

Note: Bias can be observed in practice by entering words like black (marked toxic) versus white (marked not toxic).
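The effect the example exposes can be sketched in plain Python. The snippet below trains a tiny bag-of-words scorer on a deliberately skewed toy dataset; the training comments, the log-odds scoring, and the zero threshold are all illustrative assumptions, not the actual Kaggle dataset or the model used by the real service:

```python
from collections import Counter
import math

# Hypothetical toy training data (label 1 = toxic). The skew -- "black"
# appearing mostly in toxic comments -- mimics the dataset imbalance
# that produces the bias described above.
train = [
    ("you are a terrible person", 1),
    ("black people are awful", 1),
    ("what a stupid black idea", 1),
    ("have a nice day", 0),
    ("white roses are lovely", 0),
    ("the white car looks great", 0),
]

toxic_counts, clean_counts = Counter(), Counter()
for text, label in train:
    (toxic_counts if label else clean_counts).update(text.split())

def toxicity_score(comment):
    """Sum of per-word log-odds (toxic vs. clean) with add-one smoothing."""
    n_tox = sum(toxic_counts.values())
    n_cln = sum(clean_counts.values())
    score = 0.0
    for w in comment.lower().split():
        p_tox = (toxic_counts[w] + 1) / (n_tox + 2)
        p_cln = (clean_counts[w] + 1) / (n_cln + 2)
        score += math.log(p_tox / p_cln)
    return score

# Identical sentences, different demographic term:
print(toxicity_score("i have a black friend"))  # positive (toxic-leaning)
print(toxicity_score("i have a white friend"))  # negative
```

The two sentences differ only in the demographic term, yet the model scores them on opposite sides of the threshold, because the training data, not the sentence meaning, drives the prediction.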

This is also a relevant concern for Decision Support applications: basing decisions on data that is not representative of your market leads to poor outcomes.

📖 How to Use This Example

To get the most out of this model, we highly recommend reading our detailed step-by-step guide on the AIMMS How-To website:

👉 Read the Full Article: Bias in AI

Prerequisites

  • AIMMS: You will need AIMMS installed to run the model. An AIMMS Community License is sufficient; students can download the Free Academic Edition from the AIMMS website.
  • Python 3.11: Required to run the Python toxicity classification service. PyCharm is recommended but not required.
  • WebUI: This model is optimized for the AIMMS WebUI for a modern, browser-based experience.

🚀 Getting Started

  1. Download the Release: Go to the Releases page and download the .zip file of the latest release.
  2. Open the Project: Extract the archive and launch the .aimms project file in AIMMS.
  3. Start the Python Service: Follow the article instructions to start the Python backend locally or on AIMMS Cloud.
  4. Explore Bias: Use the WebUI to import the dataset, enter comments, and observe the toxicity classification results.
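For step 3, the article describes how to run the actual service; a minimal stand-in shows the shape of such a backend. Everything here is an assumption for illustration: the port, the request payload, and the placeholder keyword rule that substitutes for the trained classifier:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

class ToxicityHandler(BaseHTTPRequestHandler):
    """Minimal stand-in for the Python toxicity service."""

    def do_POST(self):
        length = int(self.headers["Content-Length"])
        comment = json.loads(self.rfile.read(length))["comment"]
        # Placeholder rule standing in for the trained classifier.
        body = json.dumps({"toxic": "stupid" in comment.lower()}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request console logging

def start_server(port):
    """Run the stand-in service on a background thread."""
    server = HTTPServer(("127.0.0.1", port), ToxicityHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def classify(comment, port):
    """POST a comment to the service and return its toxicity verdict."""
    req = Request(f"http://127.0.0.1:{port}/classify",
                  data=json.dumps({"comment": comment}).encode(),
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.loads(resp.read())["toxic"]

server = start_server(8765)
print(classify("what a stupid idea", 8765))  # True
server.shutdown()
```

The AIMMS front-end plays the role of `classify` here, sending the user's comment over HTTP and displaying the returned verdict in the WebUI.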

🤝 Support & Feedback

This example is maintained by the AIMMS User Support Team. We optimize the way you build optimization.

