
Allow BraidingTensor to have a custom storage type#393

Open
kshyatt wants to merge 14 commits into main from ksh/cubt

Conversation

@kshyatt
Member

@kshyatt kshyatt commented Apr 3, 2026

This is probably the nicest way to unblock the various MPSKit changes. I found the similarmatrixtype approach the least gross-looking way to achieve this, but I'm open to other ideas.
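For context, a minimal sketch of the idea being discussed, with illustrative names that are not TensorKit's actual API: a storage type A is carried as a type parameter, and blocks are materialized through similar so that a custom array type (e.g. a GPU array) can be produced instead of a dense Matrix.

```julia
# Hypothetical sketch: threading a storage type A through a lazy tensor type.
# LazyBraiding and materialize_block are illustrative names, not TensorKit API.
struct LazyBraiding{T, A <: AbstractMatrix{T}}
    adjoint::Bool
end

# Default to dense Matrix storage when no storage type is given
LazyBraiding{T}(adjoint::Bool = false) where {T} = LazyBraiding{T, Matrix{T}}(adjoint)

# Materialize the (trivial-sector) braiding block in the requested storage
# type: rows are indexed by (i, j), columns by (j, i).
function materialize_block(::LazyBraiding{T, A}, m::Int, n::Int) where {T, A}
    data = similar(A, m * n, n * m)  # allocates an A, not necessarily a Matrix
    fill!(data, zero(T))
    # scalar loop for clarity; a GPU-friendly version would broadcast instead
    for i in 1:m, j in 1:n
        data[i + (j - 1) * m, j + (i - 1) * n] = one(T)
    end
    return data
end
```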

@kshyatt kshyatt requested review from Jutho and borisdevos April 3, 2026 09:27
@kshyatt
Member Author

kshyatt commented Apr 6, 2026

The only thing missing here is the parser preprocessing for @planar, where we need to extract the storagetype to hand it over to the BraidingTensor constructor.

@Jutho
Member

Jutho commented Apr 7, 2026

I will try to review ASAP, but I first have to catch up on the latest status of the tree-transformer work, as I never managed to review that part of the vectorize fusiontrees PR.

@codecov

codecov bot commented Apr 14, 2026

Codecov Report

❌ Patch coverage is 83.33333% with 13 lines in your changes missing coverage. Please review.

Files with missing lines                 | Patch % | Lines
-----------------------------------------|---------|-------------
src/tensors/braidingtensor.jl            | 83.92%  | 9 Missing ⚠️
ext/TensorKitAdaptExt.jl                 | 50.00%  | 2 Missing ⚠️
ext/TensorKitCUDAExt/TensorKitCUDAExt.jl | 77.77%  | 2 Missing ⚠️

Files with missing lines                 | Coverage Δ
-----------------------------------------|----------------------------
ext/TensorKitCUDAExt/cutensormap.jl      | 76.82% <100.00%> (+2.85%) ⬆️
ext/TensorKitAdaptExt.jl                 | 41.66% <50.00%> (-8.34%) ⬇️
ext/TensorKitCUDAExt/TensorKitCUDAExt.jl | 80.00% <77.77%> (-20.00%) ⬇️
src/tensors/braidingtensor.jl            | 68.07% <83.92%> (+1.18%) ⬆️

... and 1 file with indirect coverage changes


Comment thread src/tensors/braidingtensor.jl Outdated
Comment thread src/tensors/braidingtensor.jl
@kshyatt
Member Author

kshyatt commented Apr 15, 2026

Anything more need doing here or is this ready to go after #389?

@kshyatt kshyatt marked this pull request as ready for review April 15, 2026 13:09
# partial construction: only construct rowr and colr when needed
end
end
function BraidingTensor{T, S}(V1::S, V2::S, ::Type{A}, adjoint::Bool = false) where {T, S <: IndexSpace, A}
Member

I'm not completely convinced by this constructor syntax; at this point it feels like we might as well just call BraidingTensor{T, S, A} directly, rather than including the Type{A} as an argument. That would also cut down slightly on the number of constructors we need to have, under the less-is-more strategy :)
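To illustrate the two styles being compared, a hypothetical sketch (Wrapper and its fields are illustrative stand-ins, not TensorKit types):

```julia
# Wrapper stands in for a parametric type carrying a storage type A.
struct Wrapper{T, S, A}
    V1::S
    V2::S
end

# Style 1 (as in this PR): the storage type is passed as a positional
# ::Type{A} argument and forwarded to the fully parameterized constructor.
Wrapper{T, S}(V1::S, V2::S, ::Type{A}) where {T, S, A} = Wrapper{T, S, A}(V1, V2)

# Style 2 (the reviewer's suggestion): callers supply A as a type parameter
# directly, so no extra constructor method is needed.
w1 = Wrapper{Float64, Int}(1, 2, Matrix{Float64})
w2 = Wrapper{Float64, Int, Matrix{Float64}}(1, 2)
```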

Comment thread src/tensors/braidingtensor.jl
m * n == 0 && return data # s ∉ blocksectors(b)
fill!(data, zero(eltype(b)))

if sectortype(b) === Trivial
Member

Mostly out of curiosity, is this an optimization or was this required?

Member Author

Sorry, what's "this"?

Member

My bad, I meant the split between the trivial and non-trivial implementation.

Member Author

Oh, it was so that both could dispatch to _set_subblock! (which I could specialize) but do their own setup, and to clean up a quite long function.
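The pattern described here can be sketched as follows. The kernel body matches the _set_subblock! helper that appears in the formatting diff on this page; the surrounding setup functions are illustrative only.

```julia
# Sketch of the dispatch pattern: trivial and non-trivial sector cases each do
# their own setup, then funnel into one small broadcast kernel that can be
# specialized per array type (e.g. for GPU arrays).
function set_subblock!(data::AbstractArray{<:Any, 4}, val)
    f(I) = ((I[1] == I[4]) & (I[2] == I[3])) * val
    data .= f.(CartesianIndices(data))  # broadcast works on CPU and GPU arrays
    return data
end

# Illustrative setup for the trivial-sector case: a single block with an
# identity-like braiding pattern.
fill_trivial!(data) = set_subblock!(data, true)

# Illustrative setup for the non-trivial case: one call per fusion-tree pair,
# each with its own sector-dependent coefficient.
fill_sector!(data, coeff) = set_subblock!(data, coeff)
```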

Comment thread test/cuda/planar.jl
h′ = force_planar(h)

for alloc in
(TensorOperations.DefaultAllocator(), TensorOperations.CUDAAllocator())
Member

I think this might just run the same test twice (although I might be wrong): does the default allocator not coincide with the CUDAAllocator here?

Comment thread Project.toml
projects = ["test"]

[extras]
GPUArrays = "0c68f7d7-f131-5f86-a1c3-88cf8149b2d7"
Member

This might be a merge error, and probably belongs in a different Project.toml?

Member Author

Ah, no, this was here because of a GPUArrays bug that's now fixed.

Comment on lines +171 to +190


function TensorKit.add_kernel_nonthreaded!(
::TensorKit.FusionStyle,
tdst::CuTensorMap, tsrc::CuTensorMap, p, transformer::TensorKit.GenericTreeTransformer, α, β, backend...
)
# preallocate buffers
buffers = TensorKit.allocate_buffers(tdst, tsrc, transformer)

for subtransformer in transformer.data
# Special case without intermediate buffers whenever there is only a single block
if length(subtransformer[1]) == 1
TensorKit._add_transform_single!(tdst, tsrc, p, subtransformer, α, β, backend...)
else
cu_subtransformer = tuple(CUDA.adapt(CuArray, subtransformer[1]), subtransformer[2:end]...)
TensorKit._add_transform_multi!(tdst, tsrc, p, cu_subtransformer, buffers, α, β, backend...)
end
end
return nothing
end
Member

This is probably something separate from this PR, right? (Also, it might no longer be necessary if the mul! specializations of StridedViews are correctly handled in later versions of Strided.jl.)

Member Author

No, this was needed for a test to work, let me look up which one in a moment...

Co-authored-by: Lukas Devos <ldevos98@gmail.com>
@github-actions
Copy link
Copy Markdown
Contributor

Your PR requires formatting changes to meet the project's style guidelines.
Please consider running Runic (git runic main) to apply these changes.

Suggested changes:
diff --git a/src/tensors/braidingtensor.jl b/src/tensors/braidingtensor.jl
index 49d1b77..7d873ed 100644
--- a/src/tensors/braidingtensor.jl
+++ b/src/tensors/braidingtensor.jl
@@ -113,7 +113,7 @@ end
 
 function _set_subblock!(data, val)
     f(I) = ((I[1] == I[4]) & (I[2] == I[3])) * val
-    data .= f.(CartesianIndices(data))
+    return data .= f.(CartesianIndices(data))
 end
 
 
