Commit 857caa2

change name to ParEval (#24)
* change name to ParEval
1 parent: 921ebc1 · commit: 857caa2

4 files changed: 21 additions & 13 deletions

LICENSE

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-Copyright (c) 2022-2024, Parallel Software and Systems Group, University of
+Copyright (c) 2023-2024, Parallel Software and Systems Group, University of
 Maryland.
 
 Permission is hereby granted, free of charge, to any person obtaining a

README.md

Lines changed: 18 additions & 10 deletions
@@ -1,24 +1,28 @@
-# PCGBench
+# ParEval
 
-This repo contains the Parallel Code Generation Benchmark (PCGBench) for
+[![arXiv](https://img.shields.io/badge/arXiv-2401.12554-b31b1b.svg)](https://arxiv.org/abs/2401.12554) [![GitHub license](https://badgen.net/github/license/parallelcodefoundry/ParEval)](https://github.com/parallelcodefoundry/ParEval/blob/develop/LICENSE)
+
+
+This repo contains the Parallel Code Evaluation (ParEval) Benchmark for
 evaluating the ability of Large Language Models to write parallel code. See the
-[PCGBench Leaderboard](https://pssg.cs.umd.edu/blog/2024/pareval/) for
-up-to-date results on different LLMs.
+[ParEval Leaderboard](https://pssg.cs.umd.edu/blog/2024/pareval/) for
+up-to-date results on different LLMs.
+
 
 ## Overview
 
 The organization of the repo is as follows.
 
-- `prompts/` -- the prompts in PCGBench alongside some utility scripts
+- `prompts/` -- the prompts in ParEval alongside some utility scripts
 - `generate/` -- scripts for generating LLM outputs
 - `drivers/` -- scripts to evaluate LLM outputs
 - `analysis/` -- scripts to analyze driver results and compute metrics
 - `tpl/` -- git submodule dependencies
 
 Each subdirectory has further documentation on its contents. The general
-workflow is to use `generate/generate.py` to get LLM outputs, run
+workflow is to use `generate/generate.py` to generate LLM outputs, run
 `drivers/run-all.py` to evaluate outputs, and `analysis/metrics.py` to
-postprocess the results.
+post-process the results.
 
 ## Setup and Installation
 

@@ -30,7 +34,7 @@ and AMD GPUs alongside their respective software stacks.
 First, clone the repo.
 
 ```sh
-git clone --recurse-submodules https://github.com/pssg-int/llms-for-hpc.git
+git clone --recurse-submodules https://github.com/parallelcodefoundry/ParEval.git
 ```
 
 Next, you need to build Kokkos (if you want to include it in testing).

@@ -56,7 +60,7 @@ outputs.
 pip install -r requirements.txt
 ```
 
-## Citing PCGBench
+## Citing ParEval
 
 ```
 @misc{nichols2024large,

@@ -68,4 +72,8 @@ pip install -r requirements.txt
 archivePrefix={arXiv},
 primaryClass={cs.DC}
 }
-```
+```
+
+## License
+
+ParEval is distributed under the terms of the [MIT license](/LICENSE).
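Read together, the scripts renamed and referenced in this diff make up ParEval's end-to-end pipeline. The sketch below only assembles commands that appear in the README above; it prints the sequence rather than running it, and since the diff shows no invocation flags, none are guessed here.

```shell
# Print the ParEval pipeline assembled from the README shown in this diff.
# Script paths come from the diff; the step ordering follows the README's
# description (generate, then drivers, then analysis).
cat <<'EOF'
git clone --recurse-submodules https://github.com/parallelcodefoundry/ParEval.git
cd ParEval
pip install -r requirements.txt
python generate/generate.py   # 1. generate LLM outputs
python drivers/run-all.py     # 2. evaluate the generated outputs
python analysis/metrics.py    # 3. post-process driver results into metrics
EOF
```

Each subdirectory's README documents the actual options for its script.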

generate/README.md

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 # Generate
 
-This subdirectory contains scripts for generating the LLM outputs for PCGBench.
+This subdirectory contains scripts for generating the LLM outputs for ParEval.
 The main script is `generate.py`. It can be run as follows.
 
 ```sh

prompts/README.md

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 # Prompts
 
-This directory contains the PCGBench prompts.
+This directory contains the ParEval prompts.
 The prompts for the generation task are contained in `generation-prompts.json`
 and the prompts for the translation task are in `translation-prompts.json`.
 