X = @fill(3, N)           # 10 Allocate data on the new target
Y = @ones(N)              # ...
@parallel saxpy!(Y, α, X) # 11 Launch kernel on the new target without redefining anything
Y                         # 12 Observe correct results
```
Type `?@select_hardware` and `?@current_hardware` in the [Julia REPL] to see what runtime hardware targets are supported and which symbols to use to select them.
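As an illustration, switching the runtime target and querying the active one could look as follows. This is a minimal sketch: the symbol `:gpu` and the call style shown are assumptions for illustration only — the symbols actually supported on your system are listed by the REPL help for `@select_hardware`.

```julia
# Query the currently active runtime hardware target.
@current_hardware

# Select a different runtime target. `:gpu` is a hypothetical symbol here;
# type `?@select_hardware` in the Julia REPL to see the supported symbols.
@select_hardware(:gpu)
```

After such a switch, previously defined kernels like `saxpy!` can be launched on the new target without being redefined, as shown in the example above.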
Note that the KernelAbstractions backend comes with trade-offs: the convenience `Data`/`TData` modules for fixed data types and single-architecture backends are not available, nor are the warp-level primitives in `@parallel_indices` kernels (see [Support for architecture-agnostic low level kernel programming](#support-for-architecture-agnostic-low-level-kernel-programming)). Furthermore, the hide communication feature, described in the next section, is implemented to have no effect for KernelAbstractions (but it nevertheless executes correctly).