feat(cudf): Add GPU "and" and "or" #16913

Open
simoneves wants to merge 18 commits into facebookincubator:main from simoneves:simoneves/cudf_logical_function

Conversation

Collaborator

@simoneves simoneves commented Mar 25, 2026

This PR implements GPU operators for boolean "and" and "or" in Function Mode.

It was split out of Decimal Part 2, as it has nothing to do with the rest of the decimal implementation, but is required for all-GPU processing of TPC-DS.

Per @jinchengchenghh, be aware that unlike the Velox CPU implementation, logical operations with multiple column inputs cannot short-circuit once a terminating case (e.g. AND false or OR true) is found, so performance with a large number of such inputs may not be optimal. I don't know how much that's worth worrying about in practice. Note that such a short-circuit IS implemented for literal inputs (i.e. <any number of columns> AND false or <any number of columns> OR true will not evaluate the columns at all).

Note that in most cases, such expressions are handled by AST, especially after the fix in #17236, but a Function Mode fallback is still required as a backup.
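The null handling and literal short-circuit described above can be sketched in plain C++ (a minimal illustration with made-up helper names, not the actual cuDF implementation): SQL AND/OR follows three-valued logic, so a literal false (for AND) or true (for OR) fixes the result regardless of the column inputs.

```cpp
#include <cassert>
#include <optional>
#include <vector>

using TriBool = std::optional<bool>;  // nullopt models SQL NULL

// SQL three-valued AND: any false dominates, then NULL, then true.
TriBool sqlAnd(TriBool a, TriBool b) {
  if ((a && !*a) || (b && !*b)) return false;
  if (!a || !b) return std::nullopt;
  return true;
}

// SQL three-valued OR: any true dominates, then NULL, then false.
TriBool sqlOr(TriBool a, TriBool b) {
  if ((a && *a) || (b && *b)) return true;
  if (!a || !b) return std::nullopt;
  return false;
}

// Literal short-circuit: when any literal conjunct is false, the whole AND
// is constant false and no column input needs to be evaluated at all.
bool andShortCircuitsOnLiterals(const std::vector<bool>& literals) {
  for (bool lit : literals) {
    if (!lit) return true;
  }
  return false;
}
```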


netlify Bot commented Mar 25, 2026

Deploy Preview for meta-velox ready!

Name Link
🔨 Latest commit e42e791
🔍 Latest deploy log https://app.netlify.com/projects/meta-velox/deploys/69eb9ddda87e330008df54aa
😎 Deploy Preview https://deploy-preview-16913--meta-velox.netlify.app

@simoneves simoneves marked this pull request as draft March 25, 2026 01:52
@meta-cla meta-cla Bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Mar 25, 2026
@simoneves simoneves force-pushed the simoneves/cudf_logical_function branch 2 times, most recently from 5a7edb8 to 9c3473c Compare March 30, 2026 16:59
@github-actions

github-actions Bot commented Mar 30, 2026

Build Impact Analysis

Full build recommended. Files outside the dependency graph changed:

  • velox/experimental/cudf/expression/ExpressionEvaluator.cpp
  • velox/experimental/cudf/tests/DecimalExpressionTest.cpp
  • velox/experimental/cudf/tests/FilterProjectTest.cpp

These directories are not fully covered by the dependency graph. A full build is the safest option.

cmake --build _build/release

Fast path • Graph from main@8475135005b9ad1d137f0e828770e8f191b47c75

patdevinwilson added a commit to patdevinwilson/velox that referenced this pull request Mar 30, 2026
… PR facebookincubator#16913 from @simoneves)

Cherry-picked PR facebookincubator#16913: LogicalFunction for GPU-side 'and'/'or' evaluation,
split out from Decimal Part 2.

Also fixes API compatibility issues from cherry-picked PRs:
- veloxToCudfTypeId → veloxToCudfDataType (removed double-wrapping)
- toCudfTable missing mr parameter in CudfEnforceSingleRow

Made-with: Cursor
patdevinwilson added a commit to patdevinwilson/velox that referenced this pull request Mar 30, 2026
…38/Q9

- Fix cudf::numeric_scalar calls to pass stream/mr explicitly (banned
  default stream/mr in this build config)
- Fix veloxToCudfTypeId → veloxToCudfDataType API rename
- Fix toCudfTable missing mr parameter
- Add debug logging for boolean marker null counts and NLJ build/probe

Cherry-picked PRs integrated:
- PR facebookincubator#16522 count(*)/count(NULL) from @mattgara
- PR facebookincubator#16920 CudfEnforceSingleRow from @perlitz
- PR facebookincubator#16913 LogicalFunction and/or from @simoneves

Made-with: Cursor
@simoneves simoneves force-pushed the simoneves/cudf_logical_function branch 2 times, most recently from ea6a443 to 54ec1c3 Compare April 15, 2026 02:12
@github-actions

github-actions Bot commented Apr 15, 2026

CI Failure Analysis

Auto-generated by the CI Failure Analysis workflow. This comment is updated in place each time CI fails on a new commit, so it always reflects the latest run — re-pushing or re-running CI will refresh the analysis below. Last updated 2026-04-24 17:07:21 UTC from workflow run 24901001595.

❌ Spark Aggregate Fuzzer — FUZZER Failure View logs

Fuzzer: Spark Aggregation Fuzzer (spark_aggregation_fuzzer_test)
Failed instance: 4 of 4 (seed=296072680)
Function under test: stddev_samp

Error: Velox and reference DB (Spark) results don't match for stddev_samp on DOUBLE input with a mask column. Velox returned 0 but Spark returned NaN.

Expected 1, got 1
1 extra rows, 1 missing rows
1 of extra rows:
	0

1 of missing rows:
	"NaN"

Plan: Aggregation[SINGLE a0 := stddev_samp(ROW["c0"]) mask: m0] -> a0:DOUBLE
  -- Project[expressions: (c0:DOUBLE, ROW["c0"]), (m0:BOOLEAN, ROW["m0"])]
    -- Values[160 rows in 4 vectors]

File: velox/exec/fuzzer/AggregationFuzzer.cpp:1066
Function: compareEquivalentPlanResults

The fuzzer instance aborted with SIGABRT after the VeloxRuntimeError was thrown.


Correlation with PR changes:

  • Not related. This PR (feat(cudf): Add GPU "and" and "or" #16913) only modifies files under velox/experimental/cudf/ (adding logical AND/OR support for the cuDF expression evaluator). The fuzzer failure is in the Spark stddev_samp aggregation function, which is entirely unrelated to the cuDF expression evaluation changes.
  • Files changed in PR: velox/experimental/cudf/expression/ExpressionEvaluator.cpp, velox/experimental/cudf/tests/DecimalExpressionTest.cpp, velox/experimental/cudf/tests/FilterProjectTest.cpp

Known issues:

  • #16327 — Scheduled Spark Aggregate Fuzzer failing tracks ongoing Spark Aggregate Fuzzer failures.
  • The Spark Aggregate Fuzzer is known to be flaky. On the main branch, the most recent "Fuzzer Jobs" runs show intermittent failures (e.g., run 24871557887 failed, while run 24893776415 succeeded), confirming this is a pre-existing/flaky failure.
  • The specific stddev_samp result mismatch (returning 0 instead of NaN) is a known behavioral difference when all masked input rows are filtered out — Spark returns NaN for sample standard deviation of an empty set, while Velox returns 0.
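The stddev_samp divergence in the last bullet comes down to how an empty (fully masked) input is handled: sample standard deviation is undefined for fewer than two values, and Spark surfaces NaN in that case. A minimal sketch of the Spark-style behavior (hypothetical helper for illustration, not Velox or Spark source):

```cpp
#include <cassert>
#include <cmath>
#include <limits>
#include <vector>

// Sample standard deviation needs at least two values; below that the
// statistic is undefined, so return NaN (Spark's behavior, which the
// fuzzer's reference result expects).
double stddevSampOrNaN(const std::vector<double>& values) {
  const auto n = values.size();
  if (n < 2) {
    return std::numeric_limits<double>::quiet_NaN();
  }
  double mean = 0.0;
  for (double v : values) {
    mean += v;
  }
  mean /= static_cast<double>(n);
  double ss = 0.0;
  for (double v : values) {
    ss += (v - mean) * (v - mean);
  }
  return std::sqrt(ss / static_cast<double>(n - 1));
}
```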

Reproduce locally:

./_build/debug/velox/functions/sparksql/fuzzer/spark_aggregation_fuzzer_test \
  --seed 296072680 \
  --duration_sec 300 \
  --minloglevel=0 \
  --stderrthreshold=2

Recommended fix:
No action needed for this PR. This is a pre-existing flaky fuzzer failure unrelated to the PR changes. The underlying issue is tracked in #16327.

@simoneves simoneves force-pushed the simoneves/cudf_logical_function branch from 54ec1c3 to 4ed41bb Compare April 15, 2026 06:16
@github-actions

CI Failure Analysis

⚠️ Build with GCC / Linux release with adapters — CLANG-TIDY Failure View logs

Note: The build and tests passed successfully. Only the clang-tidy step failed.

Error:

[1/25] Processing file CudfHiveDataSource.cpp
...
[7/25] Processing file DecimalAggregationKernels.h
183410 warnings and 6 errors generated when compiling for host.
Error while processing DecimalAggregationKernels.h
...
[25/25] Processing file CudfHiveConnectorTestBase.cpp
584122 warnings and 12 errors generated.

##[error] 'curand_mtgp32_kernel.h' file not found

__clang_cuda_runtime_wrapper.h:486:10: error: 'curand_mtgp32_kernel.h' file not found [clang-diagnostic-error]
  486 | #include "curand_mtgp32_kernel.h"
       |          ^~~~~~~~~~~~~~~~~~~~~~~~

Additional errors (CUDA/clang incompatibility):
  - error: GPU arch sm_35 is supported by CUDA versions between 7.0 and 11.8 (inclusive), but installation at /usr/local/cuda-12.9
  - error: unknown argument: '--generate-code=arch=compute_70,code=[compute_70,sm_70]'
  - error: unknown argument: '-Xcompiler=-fPIC'
  - error: unknown argument: '-ccbin'
  - error: unknown argument: '-forward-unknown-to-host-compiler'

All 25 files processed by clang-tidy are under velox/experimental/cudf/ — every file in this PR's changeset. 19 out of 25 files failed due to these CUDA header/flag incompatibilities.


Correlation with PR changes:

  • This failure is not caused by the PR's code changes. The PR adds GPU "and"/"or" logical operators under velox/experimental/cudf/, which is valid functionality.
  • The failure is an environment issue: clang-tidy 18.1.8 (Clang-based) cannot process CUDA files compiled with GCC/nvcc flags (e.g., -Xcompiler, -ccbin, --generate-code). It also lacks the curand_mtgp32_kernel.h header from the CUDA toolkit.
  • On main branch, the clang-tidy step is skipped because no cudf files are changed. Any PR that modifies velox/experimental/cudf/ files will hit this same failure.

Known issues:

Pre-existing / flaky:

  • The most recent main branch run (24432611972) skipped the clang-tidy step entirely (no cudf files changed). This confirms the failure is triggered specifically by touching cudf files, not by this PR's logic.

Recommended fix:

  • This is an infrastructure issue, not a code issue. The clang-tidy CI step needs to either:
    1. Exclude velox/experimental/cudf/ files (.cu, .cuh, and headers that transitively include CUDA headers), or
    2. Use a CUDA-aware clang-tidy configuration with proper CUDA toolkit paths.
  • The PR itself should not need changes to address this failure.

@simoneves simoneves force-pushed the simoneves/cudf_logical_function branch from 4ed41bb to ec4dc5f Compare April 15, 2026 17:38
@github-actions

CI Failure Analysis

⚠️ Build with GCC / Linux release with adapters — CLANG-TIDY Failure View logs

Note: The build and all tests passed successfully. The failure is only in the "Install and run clang-tidy" step (step 13).

clang-tidy errors:

clang-tidy 18.1.8 cannot process the CUDA (.cu/.h) files in velox/experimental/cudf/ because it doesn't support the CUDA-specific compiler flags and headers used by the NVCC toolchain.

__clang_cuda_runtime_wrapper.h:486:10: error: 'curand_mtgp32_kernel.h' file not found [clang-diagnostic-error]
error: GPU arch sm_35 is supported by CUDA versions between 7.0 and 11.8 (inclusive), but installation at /usr/local/cuda-12.9 is ...
error: unknown argument: '--generate-code=arch=compute_70,code=[compute_70,sm_70]' [clang-diagnostic-error]
error: unknown argument: '-Xcompiler=-fPIC' [clang-diagnostic-error]
error: unknown argument: '-ccbin' [clang-diagnostic-error]
error: unknown argument: '-forward-unknown-to-host-compiler' [clang-diagnostic-error]

All 25 files processed by clang-tidy are from the PR's cudf changes. 19 out of 25 files produced errors, all stemming from the same CUDA incompatibility issue.

Additionally, two bundled dependency .clang-tidy config files are incompatible with clang-tidy 18:

_build/release/_deps/rmm-src/cpp/.clang-tidy:16:1: error: unknown key 'AnalyzeTemporaryDtors'
_build/release/_deps/cudf-src/cpp/.clang-tidy:53:1: error: unknown key 'ExcludeHeaderFilterRegex'

Correlation with PR changes:

  • The PR modifies 30 files, all under velox/experimental/cudf/. The clang-tidy step runs only on files changed by the PR, so it processes exclusively cudf CUDA-related files, which are inherently incompatible with clang-tidy's Clang frontend.
  • This is not a code defect in the PR. The build and tests pass. The issue is that clang-tidy cannot analyze CUDA files with NVCC-specific flags.

Known issues:

  • #16094 — "run-clang-tidy.py fails in Linux release with adapters build" tracks clang-tidy failures in this CI job (opened Jan 2026, still open).
  • The same CI job also failed on main branch (run 24456746115) with a different clang-tidy error (CastExpr), confirming clang-tidy in this job is generally unstable.
  • Most recent main branch runs (2 of 3) passed, but those didn't touch cudf files so clang-tidy wasn't run on CUDA code.

Recommended fix:

  • The run-clang-tidy.py script or the CI workflow should exclude .cu files and files that include CUDA headers from clang-tidy analysis, since clang-tidy's Clang frontend does not support NVCC compiler flags. This is a CI infrastructure issue, not a PR code issue.
  • As a workaround, the CI step could filter out velox/experimental/cudf/ files from the $FILES list when running clang-tidy.
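The workaround in the last bullet could look roughly like this (a sketch only; the FILES variable name and the sample paths are assumptions about the CI script, not its actual contents):

```shell
# Filter cuDF paths out of the changed-file list before invoking clang-tidy.
FILES="velox/exec/Task.cpp
velox/experimental/cudf/expression/ExpressionEvaluator.cpp
velox/experimental/cudf/tests/FilterProjectTest.cpp"

# grep -v drops every path under velox/experimental/cudf/
TIDY_FILES=$(printf '%s\n' "$FILES" | grep -v '^velox/experimental/cudf/')
echo "$TIDY_FILES"
```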

Comment thread velox/experimental/cudf/connectors/hive/CudfHiveDataSource.cpp
@simoneves simoneves force-pushed the simoneves/cudf_logical_function branch from 8ab90fd to 1eb3d51 Compare April 17, 2026 20:12
// subclass(es).
virtual void convertSplit(std::shared_ptr<ConnectorSplit> split);

// Construct a RowTypePtr for the table column names and types.
Collaborator

Probably should go in the private: section, since it is only being used in next() of CudfHiveDataSource for now

Collaborator Author

It's actually only used in the constructor. What am I missing?

Regardless, agreed, and moved to private: section.

Comment on lines +305 to +317
const RowTypePtr CudfHiveDataSource::getTableRowType() {
  if (tableHandle_->dataColumns()) {
    std::vector<std::string> names;
    std::vector<TypePtr> types;
    for (const auto& name : readColumnNames_) {
      auto parsedType = tableHandle_->dataColumns()->findChild(name);
      names.emplace_back(name);
      types.push_back(parsedType);
    }
    return ROW(std::move(names), std::move(types));
  }
  return outputType_;
}
Collaborator

Do we need to reconstruct this every time, or can we reuse anything here?

Collaborator Author

The function seems pretty lightweight, but I made it cache the result of the first call anyway.
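The caching described in this reply can be sketched as follows (simplified stand-in types, not the actual Velox classes):

```cpp
#include <cassert>
#include <memory>
#include <string>

// Simplified stand-ins for the Velox types (illustration only).
struct RowType {
  std::string repr;
};
using RowTypePtr = std::shared_ptr<const RowType>;

struct DataSource {
  int buildCalls = 0;  // instrumentation so the caching is observable

  // Returns the table row type, constructing it on the first call only and
  // reusing the cached value afterwards.
  const RowTypePtr& tableRowType() {
    if (!cachedTableRowType_) {
      ++buildCalls;
      cachedTableRowType_ =
          std::make_shared<const RowType>(RowType{"ROW(c0, c1)"});
    }
    return cachedTableRowType_;
  }

 private:
  RowTypePtr cachedTableRowType_;
};
```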

@github-actions

CI Failure Analysis

⚠️ Build with GCC / Linux release with adapters — CLANG-TIDY Failure View logs

Build and tests passed (steps 8 and 12 succeeded). The failure is in step 14: "Install and run clang-tidy".

clang-tidy errors (all CUDA/clang incompatibility):

# Dependency .clang-tidy config errors:
_build/release/_deps/rmm-src/cpp/.clang-tidy:16:1: error: unknown key 'AnalyzeTemporaryDtors'
_build/release/_deps/cudf-src/cpp/.clang-tidy:53:1: error: unknown key 'ExcludeHeaderFilterRegex'

# CUDA compiler flag errors (clang-tidy doesn't understand nvcc flags):
error: GPU arch sm_35 is supported by CUDA versions between 7.0 and 11.8 (inclusive)
error: unknown argument: '--generate-code=arch=compute_70,code=[compute_70,sm_70]'
error: unknown argument: '-Xcompiler=-fPIC'
error: unknown argument: '-ccbin'
error: unknown argument: '-forward-unknown-to-host-compiler'

# Missing CUDA header:
__clang_cuda_runtime_wrapper.h:486:10: error: 'curand_mtgp32_kernel.h' file not found

All 26 files processed by clang-tidy are under velox/experimental/cudf/ and 20 of them failed with these same CUDA compatibility errors. The errors stem from clang-tidy (clang 18) being unable to parse CUDA-specific compiler flags and headers from the CUDA 12.9 toolchain.


Correlation with PR changes:

  • This PR modifies 31 files, all under velox/experimental/cudf/. The clang-tidy step runs only on changed files, and since every changed file is a cudf file, they all hit the clang-tidy/CUDA incompatibility.
  • The failure is not caused by the PR's code changes — it is a pre-existing infrastructure issue where clang-tidy cannot process cudf files due to CUDA toolchain incompatibilities.

Known issues:

Recommended fix:

@simoneves simoneves requested a review from mhaseeb123 April 17, 2026 21:58
@simoneves simoneves marked this pull request as ready for review April 17, 2026 21:59
@simoneves
Collaborator Author

Un-drafting for review, as this might now land before other pending stuff

@github-actions

CI Failure Analysis

❌ Memory Arbitration Fuzzer — FUZZER Failure View logs

Fuzzer: Memory Arbitration Fuzzer (instance 4 of 4)
Seed: 878309077

Error:

VeloxRuntimeError — Error Code: INVALID_STATE

Reason: injectedSpillFsFault: false, injectedTaskAbortRequest: false,
  error message: Failed to wait for task to complete after 5.00s,
  task: {Task test_cursor_713 (test_cursor_713) Running

Plan:
-- Aggregation[3][FINAL [g0, g1] a0 := count("a0")] -> g0:TINYINT, g1:TIMESTAMP, a0:BIGINT
  -- Aggregation[2][INTERMEDIATE [g0, g1] a0 := count("a0")]
    -- Aggregation[1][PARTIAL [g0, g1] a0 := count(1)]
      -- TableScan[0][table: hive_table]

Expression: injectedSpillFsFault || injectedTaskAbortRequest
File: velox/exec/fuzzer/MemoryArbitrationFuzzer.cpp
Line: 992

The task hung during execution (driver was in enqueued state) and the 5-second wait timeout expired. Since neither injectedSpillFsFault nor injectedTaskAbortRequest was set, the VELOX_CHECK on line 992 fired, aborting the process.

Instances 1, 2, and 3 passed. Only instance 4 (seed 878309077) failed.


Correlation with PR changes:

This failure is not related to the PR changes. PR #16913 modifies files exclusively under velox/experimental/cudf/ (CUDA/cuDF integration — hash aggregation, hash join, decimal kernels, expression evaluation, and interop utilities). The failing fuzzer runs the CPU-based Memory Arbitration Fuzzer (velox/exec/fuzzer/MemoryArbitrationFuzzer.cpp), which does not use or depend on any cuDF code.

Known issues:

This is a known, pre-existing flaky failure tracked by multiple open issues:

The most recent Fuzzer Jobs run on main (run 24546923694) also failed, confirming this is an intermittent issue not introduced by this PR.

Reproduce locally:

./velox_memory_arbitration_fuzzer \
    --seed 878309077 \
    --duration_sec 300 \
    --minloglevel=0

Note: This is an intermittent hang, so reproduction with the same seed is not guaranteed.

Recommended fix:

No action needed on this PR. The failure is a known intermittent issue in the Memory Arbitration Fuzzer infrastructure, unrelated to the cuDF changes in this PR.

Collaborator

@pramodsatya pramodsatya left a comment


Thanks for adding these functions @simoneves, I had one suggestion.

Comment thread velox/experimental/cudf/expression/ExpressionEvaluator.cpp Outdated
Comment thread velox/experimental/cudf/expression/ExpressionEvaluator.cpp Outdated
@simoneves simoneves requested a review from pramodsatya April 21, 2026 01:56
Comment thread velox/experimental/cudf/tests/FilterProjectTest.cpp Outdated
Comment thread velox/experimental/cudf/tests/FilterProjectTest.cpp
Comment thread velox/experimental/cudf/expression/ExpressionEvaluator.cpp
Comment thread velox/experimental/cudf/expression/ExpressionEvaluator.cpp Outdated
Comment thread velox/experimental/cudf/expression/ExpressionEvaluator.cpp Outdated
Comment thread velox/experimental/cudf/expression/ExpressionEvaluator.cpp
std::nullopt,
});
auto input = makeRowVector({c0, c1});
auto exprSet = compileExpression("c0 AND c1", rowType);
Collaborator

Please try

  // Use this directly if you don't want it to cast the returned vector.
  VectorPtr evaluate(
      const std::string& expression,
      const RowVectorPtr& data,
      const std::optional<SelectivityVector>& rows = std::nullopt) {
    auto typedExpr = makeTypedExpr(expression, asRowType(data->type()));

    return evaluate(typedExpr, data, rows);
  }

Collaborator Author

@jinchengchenghh Sorry, but I'm not sure what you mean. I see that what you pasted is one of the other variants of functions::test::FunctionBaseTest::evaluate() in FunctionBaseTest.h but I don't see how it helps here.

Collaborator Author

@jinchengchenghh please clarify this one, thx.

Collaborator

Usually, we execute

auto result = evaluate("c0 AND c1", input)

auto expected = (generated by yourself)

Since we can run the CPU tests to get the expected result, could you add a new function assertCpuAndGpuEqual in CudfFunctionBaseTest ?

Collaborator Author

@jinchengchenghh I get it now, thanks. Refactored as requested. There is still some duplicate code between each pair of tests (some more duplicated than others) but I think each one of them is short enough that any further refactoring would not make them any easier to read, so I'll stop here.
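The assertCpuAndGpuEqual idea from this thread boils down to evaluating the same expression through two backends and requiring identical results, nulls included. A minimal backend-agnostic sketch (plain C++ with hypothetical names, no cuDF dependency):

```cpp
#include <cassert>
#include <functional>
#include <optional>
#include <vector>

using TriBool = std::optional<bool>;
using Column = std::vector<TriBool>;
using BinaryEval = std::function<Column(const Column&, const Column&)>;

// Sample elementwise AND used as a stand-in backend; a missing input on
// either side yields a missing (null) output.
Column naiveAnd(const Column& x, const Column& y) {
  Column out(x.size());  // entries default to nullopt
  for (size_t i = 0; i < x.size(); ++i) {
    if (x[i].has_value() && y[i].has_value()) {
      out[i] = *x[i] && *y[i];
    }
  }
  return out;
}

// The pattern behind an assertCpuAndGpuEqual-style helper: run the same
// expression through both backends and compare. In the real test, one
// side would route through the cuDF evaluator instead of a CPU function.
bool cpuAndGpuEqual(const BinaryEval& cpu, const BinaryEval& gpu,
                    const Column& a, const Column& b) {
  return cpu(a, b) == gpu(a, b);
}
```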

Comment thread velox/experimental/cudf/tests/FilterProjectTest.cpp
Comment thread velox/experimental/cudf/tests/DecimalExpressionTest.cpp
Comment thread velox/experimental/cudf/expression/ExpressionEvaluator.cpp
Comment thread velox/experimental/cudf/expression/ExpressionEvaluator.cpp
@github-actions

CI Failure Analysis

⌛ Ubuntu debug with system dependencies — TEST Failure (Timeout) View logs

Failed test: velox_exec_test_group7 — timed out after 1800 seconds (30 minutes)

The test binary was killed by ctest's timeout. The same timeout occurred on the automatic retry. In both runs, the test that was executing when the timeout hit was:

ExchangeClientTest/ExchangeClientTest.lazyFetching/CompactRow

The preceding variant (lazyFetching/Presto) completed normally in ~1001ms, but the CompactRow variant hung indefinitely, suggesting a deadlock or infinite wait.

556 of 557 tests in the binary completed successfully. Test execution summary:

  • Total completed test time: ~507 seconds
  • Remaining budget before timeout: ~1293 seconds — all consumed by the hung test
  • The same hang reproduced identically on retry

Correlation with PR changes:

  • Not related. This PR (feat(cudf): Add GPU "and" and "or" #16913) only modifies files under velox/experimental/cudf/:
    • velox/experimental/cudf/expression/ExpressionEvaluator.cpp — adds LogicalFunction for AND/OR
    • velox/experimental/cudf/tests/DecimalExpressionTest.cpp — enables previously disabled tests
    • velox/experimental/cudf/tests/FilterProjectTest.cpp — adds new cuDF logical operation tests
  • The failing test ExchangeClientTest.lazyFetching/CompactRow is in velox/exec/tests/ExchangeClientTest.cpp, which is entirely unrelated to the cuDF expression evaluator changes.

Known issues:

  • No open issues found tracking this specific test failure.
  • This is a pre-existing/flaky failure. On the main branch, velox_exec_test_group7 passes consistently in recent runs (487s and 507s in the two most recent successful main runs). The 4 most recent main CI runs all succeeded. The hang in lazyFetching/CompactRow appears to be an intermittent deadlock that can affect any run.

Reproduce locally:

./_build/debug/velox/exec/tests/velox_exec_test_group7 --gtest_filter="ExchangeClientTest/ExchangeClientTest.lazyFetching/CompactRow"

Note: This is a flaky hang, so it may not reproduce on every run.

Recommended action:
No changes needed in this PR. This is a pre-existing flaky test (intermittent hang in ExchangeClientTest.lazyFetching/CompactRow). Consider re-running the CI workflow. If this test continues to hang intermittently, it may warrant a separate investigation/issue.

