forked from ggml-org/llama.cpp
Pull requests: shalinib-ibm/llama.cpp
#24 ggml-cpu : Add Power12 MMA+ INT8 matmul kernels (opened May 6, 2026 by shalinib-ibm, Owner)
#19 Precompute scale factor once and reuse inside kernel [label: ggml] (opened Oct 8, 2025 by shalinib-ibm, Owner)
#14 POC: Avoid PackTranspose in the TinyBLAS_PPC MMA kernel (opened Jul 15, 2025 by shalinib-ibm, Owner)
#13 PowerPC: Replace static_assert with runtime assertion (opened Jul 14, 2025 by shalinib-ibm, Owner)
#12 Exp: Perf Benefit with xvi8ger4pp signed version (opened Jun 16, 2025 by shalinib-ibm, Owner)
#10 Refactor: Move matrix packing outside GEMM kernels (opened Jun 5, 2025 by shalinib-ibm, Owner)
#9 Avoid PackTranspose calls in FP32 TinyBLAS kernels (opened May 30, 2025 by shalinib-ibm, Owner)
#4 PowerPC: Enable MMA for BF16 in llamafile_sgemm (opened Apr 28, 2025 by shalinib-ibm, Owner)
#2 PowerPC: Enable MMA for BF16 in llamafile_sgemm (opened Apr 28, 2025 by shalinib-ibm, Owner)