
fix(easy/rainbow-table): align hash helpers with uint32 iterative semantics #237

Open
agicy wants to merge 1 commit into AlphaGPU:main from agicy:main

Conversation


@agicy agicy commented Apr 5, 2026

Summary

Resolves #214.

The Rainbow Table challenge requires R rounds of iterative hashing: the output of one round feeds directly into the next, so the helper's output type must match its input type. Several starter templates previously declared their fnv1a_hash helpers with signed integer parameters or performed internal type casts, leaving the per-round state inconsistently typed.

This PR ensures the hash helpers consistently operate on unsigned 32-bit integers, leaving initial type normalization to the caller.

Changes

  • CUDA: Changed fnv1a_hash(int input) to unsigned int and the byte mask to 0xFFu.
  • Mojo: Changed fnv1a_hash(input: Int32) to UInt32.
  • JAX / CuTe: Removed internal type casting (astype / cute.Uint32) from the helper body; added Python type hints to CuTe.

(Note: PyTorch uses int64 to work around the framework's missing uint32 bitwise-op support. Triton remains untyped per Triton DSL conventions. Both are left unchanged.)

fix(easy/rainbow-table): align hash helpers with uint32 iterative semantics

The FNV-1a hash is applied iteratively R times, so the helper's input and
output must be consistently typed as unsigned 32-bit integers.

- CUDA: change `fnv1a_hash` input to `unsigned int` and mask to `0xFFu`
- Mojo: change `fnv1a_hash` input/mask to `UInt32`
- JAX/CuTe: remove internal casting so the helper expects normalized state
- PyTorch/Triton: left unchanged due to framework-specific constraints

Resolves AlphaGPU#214


Development

Successfully merging this pull request may close these issues.

[Easy][Rainbow Table] input and output's type mismatch
