```bash
# Create Python environment and install MLCube Docker runner
virtualenv -p python3 ./env && source ./env/bin/activate && pip install mlcube-docker

# Fetch the MLCube examples from GitHub and check out the BraTS branch
git clone https://github.com/mlcommons/mlcube_examples && cd ./mlcube_examples
git fetch origin pull/39/head:feature/brats && git checkout feature/brats

# Create a Python environment for the BraTS metrics MLCube
cd ./brats/metrics/mlcube
virtualenv -p python3 env && source ./env/bin/activate

# Install the MLCube Singularity runner with Docker image support
git clone https://github.com/mlcommons/mlcube && cd ./mlcube
git fetch origin pull/223/head:feature/singularity_with_docker_images && git checkout feature/singularity_with_docker_images
pip install semver spython && pip install ./mlcube
pip install --no-deps --force-reinstall ./runners/mlcube_singularity
```

To run locally, clone this repo:
```bash
git clone https://github.com/mlcommons/medperf.git
```

Go to the server folder:

```bash
cd server
```

Install all dependencies:

```bash
pip install -r requirements.txt
```

Create a `.env` file with your environment settings. A sample `.env.example` is provided at the root; copy it to `.env` and modify it with your environment variables:

```bash
cp .env.example .env
```
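For illustration only, a `.env` file holds simple `KEY=value` pairs. The variable names below are hypothetical Django-style settings, not the project's actual keys; copy the real ones from `.env.example`:

```shell
# Hypothetical illustration of a .env file's shape.
# Use the keys from .env.example, not these.
DEBUG=True
SECRET_KEY=replace-me
ALLOWED_HOSTS=127.0.0.1,localhost
```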
Create the database tables from the existing models:

```bash
python manage.py migrate
```

Start the server:

```bash
python manage.py runserver
```

The API server runs at http://127.0.0.1:8000/ by default. You can view and experiment with the Medperf API at http://127.0.0.1:8000/swagger.
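Once the server is up, you can also script against the API. A minimal sketch using only the standard library, which builds (but does not send) a request; the `/swagger` path comes from above, while the `benchmarks/` endpoint and `Token` auth header are assumptions, so check the swagger UI for the real endpoints and scheme:

```python
from urllib.parse import urljoin
from urllib.request import Request

# Default address of the local Medperf API server.
BASE = "http://127.0.0.1:8000/"

def api_request(path, token=None):
    """Build (but do not send) a GET request for a server endpoint."""
    req = Request(urljoin(BASE, path))
    if token:
        # Hypothetical auth header; verify the real scheme in the swagger UI.
        req.add_header("Authorization", f"Token {token}")
    return req

print(api_request("swagger").full_url)       # http://127.0.0.1:8000/swagger
print(api_request("benchmarks/").full_url)   # assumed endpoint, for illustration
```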
The Medperf CLI is a command-line interface that provides tools for preparing datasets and executing benchmarks on them.

To install, clone this repo (skip this step if you already did):

```bash
git clone https://github.com/mlcommons/medperf.git
```

Go to the cli folder:

```bash
cd cli
```

Install using pip:

```bash
pip install -e .
```

The MedPerf CLI provides the following commands:
`login`: Authenticates the CLI with the medperf backend server.

```bash
medperf login
```

`dataset ls`: Lists all datasets registered by the user.

```bash
medperf dataset ls
```

`dataset create`: Prepares a raw dataset for a specific benchmark.

```bash
medperf dataset create -b <BENCHMARK_UID> -d <DATA_PATH> -l <LABELS_PATH>
```

`dataset submit`: Submits a prepared local dataset to the platform.

```bash
medperf dataset submit -d <DATASET_UID>
```

`dataset associate`: Associates a prepared dataset with a specific benchmark.

```bash
medperf associate -b <BENCHMARK_UID> -d <DATASET_UID>
```

`run`: Alias for `result create`. Runs a specific model from a benchmark with a specified prepared dataset.

```bash
medperf run -b <BENCHMARK_UID> -d <DATASET_UID> -m <MODEL_UID>
```

`result ls`: Displays all results created by the user.

```bash
medperf result ls
```

`result create`: Runs a specific model from a benchmark with a specified prepared dataset.

```bash
medperf result create -b <BENCHMARK_UID> -d <DATASET_UID> -m <MODEL_UID>
```

`result submit`: Submits already obtained results to the platform.

```bash
medperf result submit -b <BENCHMARK_UID> -d <DATASET_UID> -m <MODEL_UID>
```

`mlcube ls`: Lists all mlcubes created by the user; lists all mlcubes if `--all` is passed.

```bash
medperf mlcube ls [--all]
```

`mlcube submit`: Submits a new mlcube to the platform.

```bash
medperf mlcube submit
```

`mlcube associate`: Associates an MLCube with a benchmark.

```bash
medperf mlcube associate -b <BENCHMARK_UID> -m <MODEL_UID>
```

The CLI runs MLCubes behind the scenes. These cubes require a container engine such as Docker, so the engine must be running before executing commands like `prepare` and `execute`.
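The commands above can be chained into a scripted benchmark workflow. A minimal sketch in Python that only composes the argv lists (the UIDs are placeholders, and the actual subprocess call is left commented out because it requires the CLI installed and a running container engine):

```python
import shlex

def medperf_cmd(*args):
    """Return a medperf invocation as an argv list, suitable for subprocess.run."""
    return ["medperf", *map(str, args)]

# Placeholder UIDs for illustration; use your real ones.
benchmark_uid, dataset_uid, model_uid = 1, 2, 3

steps = [
    medperf_cmd("login"),
    medperf_cmd("run", "-b", benchmark_uid, "-d", dataset_uid, "-m", model_uid),
    medperf_cmd("result", "submit", "-b", benchmark_uid, "-d", dataset_uid, "-m", model_uid),
]
for step in steps:
    print(shlex.join(step))
    # subprocess.run(step, check=True)  # needs the CLI and a container engine
```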