Conversation
The only thing missing here is the parser preprocessing for
I will try to review ASAP, but I first have to catch up on the latest status of the `treetransform` stuff, as I never managed to review that part of the "vectorize fusiontrees" PR.
Is there anything more that needs doing here, or is this ready to go after #389?
```julia
    # partial construction: only construct rowr and colr when needed
    end
end

function BraidingTensor{T, S}(V1::S, V2::S, ::Type{A}, adjoint::Bool = false) where {T, S <: IndexSpace, A}
```
I think I'm not completely convinced by this constructor syntax; it feels like at this point we might as well just call `BraidingTensor{T, S, A}` directly, rather than including the `::Type{A}` as an argument. This also cuts down slightly on the number of constructors we need to have, under the less-is-more strategy :)
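For concreteness, the two styles could be contrasted with a toy stand-in type; `Braiding` below is a hypothetical placeholder, not the actual TensorKit definition:

```julia
# Toy stand-in (NOT the real BraidingTensor) illustrating why a ::Type{A}
# argument is redundant once A is available as a type parameter.
struct Braiding{T, S, A}
    V1::S
    V2::S
    adjoint::Bool
end

# Style under discussion: the storage type A is passed as a ::Type{A} argument.
function Braiding{T, S}(V1::S, V2::S, ::Type{A}, adjoint::Bool = false) where {T, S, A}
    return Braiding{T, S, A}(V1, V2, adjoint)
end

# Suggested alternative: call the fully parameterized constructor directly,
# making the extra method above unnecessary.
b1 = Braiding{Float64, Int}(1, 2, Vector{Float64})
b2 = Braiding{Float64, Int, Vector{Float64}}(1, 2, false)
```

Both calls construct the same fully parameterized object; the second form just skips the intermediate method.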
```julia
m * n == 0 && return data # s ∉ blocksectors(b)
fill!(data, zero(eltype(b)))

if sectortype(b) === Trivial
```
Mostly out of curiosity: is this an optimization, or was it required?
My bad, I meant the split between the trivial and non-trivial implementations.
Oh, it was so that both could dispatch to `_set_subblock!` (which I could specialize) while doing their own setup, and to clean up a rather long function.
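A minimal sketch of the pattern described here, assuming a shared specializable kernel with separate setup paths; all names are illustrative, not the actual TensorKit internals:

```julia
# Shared kernel that specializations can hook into; both code paths
# below funnel into it after doing their own setup.
set_subblock!(data, val) = (data .= val; return data)

function fill_trivial!(data, val)
    # trivial sector: a single dense block, no fusion-tree bookkeeping
    return set_subblock!(data, val)
end

function fill_nontrivial!(data, val)
    # non-trivial sectors: per-block / per-tree setup would go here
    return set_subblock!(data, val)
end
```

The benefit is that `set_subblock!` can be specialized once (e.g. for a GPU array type) while each entry point keeps its own, much shorter setup logic.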
```julia
h′ = force_planar(h)

for alloc in
        (TensorOperations.DefaultAllocator(), TensorOperations.CUDAAllocator())
```
I think this might just run the same test twice (although I might be wrong): does the default allocator not coincide with the `CUDAAllocator` here?
```toml
projects = ["test"]

[extras]
GPUArrays = "0c68f7d7-f131-5f86-a1c3-88cf8149b2d7"
```
This might be a merge error, and probably belongs in a different Project.toml?
Ah no, this was here because of a GPUArrays bug that's now fixed.
```julia
function TensorKit.add_kernel_nonthreaded!(
        ::TensorKit.FusionStyle,
        tdst::CuTensorMap, tsrc::CuTensorMap, p, transformer::TensorKit.GenericTreeTransformer, α, β, backend...
    )
    # preallocate buffers
    buffers = TensorKit.allocate_buffers(tdst, tsrc, transformer)

    for subtransformer in transformer.data
        # Special case without intermediate buffers whenever there is only a single block
        if length(subtransformer[1]) == 1
            TensorKit._add_transform_single!(tdst, tsrc, p, subtransformer, α, β, backend...)
        else
            cu_subtransformer = tuple(CUDA.adapt(CuArray, subtransformer[1]), subtransformer[2:end]...)
            TensorKit._add_transform_multi!(tdst, tsrc, p, cu_subtransformer, buffers, α, β, backend...)
        end
    end
    return nothing
end
```
This probably is something separate from this PR, right? (Also, it might no longer be necessary if the `mul!` specializations of StridedViews are correctly handled in later versions of Strided.jl.)
No, this was needed for a test to work; let me look up which one in a moment...
Co-authored-by: Lukas Devos <ldevos98@gmail.com>
Your PR requires formatting changes to meet the project's style guidelines. Suggested changes:

```diff
diff --git a/src/tensors/braidingtensor.jl b/src/tensors/braidingtensor.jl
index 49d1b77..7d873ed 100644
--- a/src/tensors/braidingtensor.jl
+++ b/src/tensors/braidingtensor.jl
@@ -113,7 +113,7 @@ end
 function _set_subblock!(data, val)
     f(I) = ((I[1] == I[4]) & (I[2] == I[3])) * val
-    data .= f.(CartesianIndices(data))
+    return data .= f.(CartesianIndices(data))
 end
```
This is probably the nicest way to unblock the various MPSKit changes. I found the `similarmatrixtype` approach the least gross-looking way to achieve this, but I'm open to other ideas.