
Add: batch_paged_attention device test for production-scale bfloat16 #154

Merged
ChaoWao merged 1 commit into main from test/paged-attention-taskring-ci on Mar 2, 2026
Conversation

@ChaoWao (Collaborator) commented on Mar 2, 2026

Summary

  • Port batch_paged_attention from examples to device tests
  • Switch data type from float16 to bfloat16 with tighter tolerance (1e-3)
  • Production tile sizes (128x128 / 64x128) with runtime dispatch
  • Production scale: batch=64, head_dim=128, context_len=8193
  • Variable sequence length test case (CaseVarSeq)
  • Chunked batch orchestration with IN_CORE_BATCH=16
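The chunked orchestration in the last bullet can be pictured as a fixed-size split of the batch dimension; a minimal plain-Python sketch (the names `batch_chunks`, `BATCH` are illustrative, not taken from the PR's orchestration code):

```python
# Hypothetical sketch of the chunked batch split: the full batch of 64
# is dispatched in chunks of IN_CORE_BATCH = 16, so the orchestrator
# issues four chunk iterations per pass.
BATCH = 64
IN_CORE_BATCH = 16

def batch_chunks(batch, in_core_batch):
    """Yield (start, size) pairs covering the batch in fixed-size chunks."""
    for start in range(0, batch, in_core_batch):
        yield start, min(in_core_batch, batch - start)

chunks = list(batch_chunks(BATCH, IN_CORE_BATCH))
# batch=64 with IN_CORE_BATCH=16 yields four full chunks of 16.
```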

Testing

  • Hardware test passed (a2a3, Case1: 131072/131072 elements matched)
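The "131072/131072 elements matched" style of report corresponds to a per-element tolerance check; a sketch using the standard library's `math.isclose` (the helper name `count_matches` is hypothetical, and note that `math.isclose` uses a symmetric relative tolerance, slightly different from NumPy's `atol + rtol*|golden|` rule):

```python
from math import isclose

RTOL = ATOL = 1e-3  # the tightened tolerance used by this test

def count_matches(actual, golden, rtol=RTOL, atol=ATOL):
    """Count how many elements of `actual` are within tolerance of
    `golden`, returning (matched, total) as in the test report."""
    ok = sum(isclose(a, g, rel_tol=rtol, abs_tol=atol)
             for a, g in zip(actual, golden))
    return ok, len(actual)
```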

@gemini-code-assist commented

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.

This pull request introduces a comprehensive set of device tests for the batch_paged_attention mechanism, focusing on production-grade bfloat16 precision and large-scale parameters. It ensures the robustness and accuracy of the attention computation under realistic conditions, including support for variable sequence lengths and efficient chunked batch processing on specialized hardware cores.

Highlights

  • Device Test Porting: The batch_paged_attention implementation has been ported from examples to dedicated device tests.
  • Data Type and Tolerance Update: The data type for attention calculations has been switched from float16 to bfloat16, accompanied by a tighter tolerance of 1e-3 for testing.
  • Production Scale Parameters: The tests now utilize production-scale parameters, including batch size 64, head dimension 128, context length 8193, and specific tile sizes (128x128 / 64x128) with runtime dispatch.
  • Variable Sequence Length Support: A new test case, CaseVarSeq, has been introduced to validate functionality with variable sequence lengths per batch.
  • Chunked Batch Orchestration: The orchestration now supports chunked batch processing with an IN_CORE_BATCH size of 16, optimizing resource utilization.
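The variable-sequence-length case can be pictured as a per-batch validity mask over KV positions; a minimal NumPy sketch of the idea (the helper name `varseq_mask` is illustrative, not the PR's code):

```python
import numpy as np

def varseq_mask(seq_lens, max_len):
    """Boolean (batch, max_len) mask: True where a KV position lies
    within that batch entry's sequence length. Positions at or beyond
    seq_lens[b] must be excluded from softmax and PV accumulation."""
    pos = np.arange(max_len)
    return pos[None, :] < np.asarray(seq_lens)[:, None]

mask = varseq_mask([2, 4], max_len=4)
# row 0 is valid for 2 positions, row 1 for all 4
```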


Changelog
  • tests/device_tests/tensormap_and_ringbuffer/batch_paged_attention/golden.py
    • Added a golden reference implementation for production-scale batch paged attention, supporting bfloat16, GQA, head tiling, and variable sequence lengths.
  • tests/device_tests/tensormap_and_ringbuffer/batch_paged_attention/kernels/aic/aic_hub.cpp
    • Added a hub kernel for AIC (AI Core) operations, serving as an entry point for AIC-specific kernel functions.
  • tests/device_tests/tensormap_and_ringbuffer/batch_paged_attention/kernels/aic/aic_pv_matmul.cpp
    • Added a batched PV (Probability-Value) Matmul kernel for AIC, supporting two tile configurations via runtime dispatch.
  • tests/device_tests/tensormap_and_ringbuffer/batch_paged_attention/kernels/aic/aic_qk_matmul.cpp
    • Added a batched QK (Query-Key) Matmul kernel for AIC, supporting two tile configurations via runtime dispatch.
  • tests/device_tests/tensormap_and_ringbuffer/batch_paged_attention/kernels/aiv/aiv_hub.cpp
    • Added a hub kernel for AIV (AI Vector) operations, serving as an entry point for AIV-specific kernel functions.
  • tests/device_tests/tensormap_and_ringbuffer/batch_paged_attention/kernels/aiv/aiv_online_update.cpp
    • Added a batched online softmax update and normalization kernel for AIV, handling accumulation and final normalization.
  • tests/device_tests/tensormap_and_ringbuffer/batch_paged_attention/kernels/aiv/aiv_softmax_prepare.cpp
    • Added a batched softmax preparation kernel for AIV, including scaling, row maximum, exponentiation, and row summation.
  • tests/device_tests/tensormap_and_ringbuffer/batch_paged_attention/kernels/kernel_config.py
    • Added a configuration file defining the AIC and AIV kernels and the orchestration function for batch paged attention.
  • tests/device_tests/tensormap_and_ringbuffer/batch_paged_attention/kernels/orchestration/paged_attention_orch.cpp
    • Added the orchestration function for batch paged attention, implementing a chunked batched architecture for efficient task scheduling.

@gemini-code-assist Bot left a comment

Code Review

This pull request adds a comprehensive device test for batched paged attention, including a Python golden model, C++ kernels for a custom accelerator, and orchestration logic. The implementation is well-structured and handles many important details for a production-scale test, such as using bfloat16, supporting variable sequence lengths, and implementing a chunked batching strategy. My review focuses on improving code clarity, maintainability, and robustness. I've suggested using static_cast for safer type conversions in C++, declaring a key configuration parameter as constexpr, and handling potential division-by-zero in the Python golden model to make it more robust.
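The division-by-zero concern in the golden model's final normalization can be addressed with a guarded divide; a sketch of that idea (the helper name `safe_normalize` is illustrative, not the reviewer's exact suggestion):

```python
import numpy as np

def safe_normalize(acc, row_sum):
    """Final softmax normalization that returns zeros (instead of
    NaN/inf) for rows whose denominator is zero, e.g. a fully masked
    or zero-length sequence in the variable-seq-len case."""
    valid = row_sum > 0
    denom = np.where(valid, row_sum, 1.0)  # dummy 1.0 avoids 0/0
    return np.where(valid, acc / denom, 0.0)
```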

Port batch_paged_attention from examples to device tests with:
- bfloat16 data type (replacing float16 from example)
- Production tile sizes (128x128/64x128) with runtime dispatch
- Production scale: batch=64, head_dim=128, context_len=8193
- Variable sequence length test case (CaseVarSeq)
- Tighter tolerance (RTOL/ATOL=1e-3 vs 1e-2 in example)
- Chunked batch orchestration with IN_CORE_BATCH=16
@ChaoWao force-pushed the test/paged-attention-taskring-ci branch from 8fd8610 to 22f76d9 on March 2, 2026 04:19
@ChaoWao merged commit a8560f2 into main on Mar 2, 2026
3 checks passed
@ChaoWao deleted the test/paged-attention-taskring-ci branch on March 2, 2026 06:55
PKUZHOU pushed a commit to PKUZHOU/simpler that referenced this pull request Mar 31, 2026
…w-native-sys#154)
