Source/Docs/articles/copytoasync-throughput-benchmarks.md (+2 −2)

@@ -1,6 +1,6 @@
# CopyToAsync Throughput Benchmark

- In-memory-based streams typically do not require asynchronous operations in most scenarios because all operations are synchronous and in-memory. However, there are cases where the data in the stream needs to be copied to another stream instance with actual asynchronous behavior, such as a **FileStream**. While this may not be a common use case, it is a real-world scenario. This is where the `CopyToAsync` method becomes relevant.
+ In-memory-based streams typically do not require asynchronous operations in most scenarios because all operations are synchronous and in-memory. However, there are cases where the data in the stream needs to be copied to another stream instance with actual asynchronous behavior, such as a **FileStream**. This is where the `CopyToAsync` method becomes relevant.

This benchmark scenario uses stream instances that are instantiated as expandable (dynamically growing) streams. The stream is filled with random data, similar to the [`Bulk Fill and Read`](./dynamic-throughput-benchmarks.md#bulk-fill-and-read) scenario.
@@ -35,7 +35,7 @@ A single benchmark operation consists of performing five loops of the following

1. Call CopyToAsync() on the stream, passing a mock asynchronous file I/O stream destination.
1. Dispose of the stream instance.

- **MemoryStreamSlim** and **RecyclableMemoryStream** classes are created with the option to zero out memory buffers when they are no longer used disabled to focus the benchmark performance on the **CopyToAsync()** call. The **MemoryStream** class does not have an option to zero out memory buffers (used memory is always cleared, i.e., internal buffers are allocated with new byte[]), so this parameter does not apply to that class.
+ The **MemoryStreamSlim** and **RecyclableMemoryStream** classes are created with buffer zeroing disabled (the option to zero out memory buffers when they are no longer used) to focus the benchmark on the **CopyToAsync()** call itself. The **MemoryStream** class has no such option (its used memory is always cleared because internal buffers are allocated with `new byte[]`), so this parameter does not apply to that class.
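The operation exercised by this benchmark can be sketched roughly as follows. The `MemoryStreamSlim.Create()` factory call is an assumption (consult the API reference for the actual creation overloads), and a `FileStream` opened with `useAsync: true` stands in for the benchmark's mock asynchronous file I/O destination.

```csharp
using System;
using System.IO;
using System.Threading.Tasks;
using KZDev.PerfUtils;

class CopyToAsyncSketch
{
    static async Task Main()
    {
        // Fill an expandable in-memory stream with random data
        // (mirrors the "Bulk Fill and Read" style setup described above).
        byte[] data = new byte[64 * 1024];
        Random.Shared.NextBytes(data);

        // Create() is assumed here as the factory method; check the
        // MemoryStreamSlim API reference for the exact overloads.
        using MemoryStreamSlim source = MemoryStreamSlim.Create();
        source.Write(data, 0, data.Length);
        source.Position = 0;

        // Copy to a stream with real asynchronous behavior.
        string path = Path.GetTempFileName();
        await using (FileStream destination = new(path, FileMode.Create,
                         FileAccess.Write, FileShare.None,
                         bufferSize: 4096, useAsync: true))
        {
            await source.CopyToAsync(destination);
        }
        Console.WriteLine(new FileInfo(path).Length); // same length as the source data
        File.Delete(path);
    }
}
```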
Source/Docs/articles/dynamic-throughput-benchmarks.md (+3 −3)
@@ -4,9 +4,9 @@ These benchmarks evaluate the performance of dynamic, expandable streams. The st

## Summary

- The benchmark results show that `MemoryStreamSlim` consistently allocates less memory than other classes, with throughput performance on par with or better than **RecyclableMemoryStream**. In some use cases, MemoryStreamSlim performs dramatically better. While these cases are generally edge scenarios, they demonstrate that MemoryStreamSlim provides more consistent and deterministic performance across a wide range of scenarios.
+ The benchmark results show that `MemoryStreamSlim` consistently allocates less memory than other classes, with throughput performance on par with or better than **RecyclableMemoryStream**. In some use cases, **MemoryStreamSlim** performs dramatically better. While these cases may not always represent real-world scenarios, they demonstrate that **MemoryStreamSlim** provides more consistent and deterministic performance across a wide range of scenarios.

- For security-sensitive applications, `MemoryStreamSlim` performs better in most cases when the option to zero out unused memory buffers is enabled ([`ZeroBuffers`](#zerobuffers) benchmark parameter). In these benchmarks, when zeroing out memory buffers is enabled, the MemoryStreamSlim option to clear memory buffers "on release" was used to provide a fair comparison to other classes. However, by default, a more efficient option to clear buffers out-of-band is used, which further improves throughput performance by avoiding the cost of clearing memory buffers at the time of release, instead performing this task in a background thread.
+ For security-sensitive applications, `MemoryStreamSlim` performs better in most cases when the option to zero out unused memory buffers is enabled ([`ZeroBuffers`](#zerobuffers) benchmark parameter). In these benchmarks, when zeroing out memory buffers is enabled, the **MemoryStreamSlim** option to clear memory buffers "on release" was used to provide a fair comparison to other classes. However, by default, a more efficient option to clear buffers out-of-band is used, which further improves throughput performance by avoiding the cost of clearing memory buffers at the time of release, instead performing this task on a background thread.

The results for segmented operations also show that **RecyclableMemoryStream** has a high memory allocation rate and incurs a large number of garbage collections when stream sizes grow large, especially when the initial capacity is not provided during instantiation ([`CapacityOnCreate`](#capacityoncreate) benchmark parameter is **false**).

@@ -77,7 +77,7 @@ The amount of data written to the stream in each loop of the operation. The data

- When **true**, the stream is created with the option to zero out memory buffers when they are no longer used.
- When **false**, the stream is created with the option to not zero out memory buffers.

- For the **MemoryStreamSlim** class, the [`ZeroBufferBehavior`](xref:KZDev.PerfUtils.MemoryStreamSlimOptions.ZeroBufferBehavior) option is set to `OnRelease` to provide a fair comparison to other classes. The MemoryStream class does not support zeroing out memory buffers (used memory is always cleared), so this parameter does not apply to that class.
+ When **ZeroBuffers** is **true**, the **MemoryStreamSlim** [`ZeroBufferBehavior`](xref:KZDev.PerfUtils.MemoryStreamSlimOptions.ZeroBufferBehavior) option is set to `OnRelease` to provide a fair comparison to other classes. The **MemoryStream** class does not support zeroing out memory buffers (used memory is always cleared), so this parameter does not apply to that class.
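The `ZeroBuffers` configuration described above might look like the following sketch. The options-passing shape (`Create` accepting an options delegate) is an assumption, as is the exact property assignment; only the `ZeroBufferBehavior` property and the `OnRelease` value are named by these docs, so verify against the `MemoryStreamSlim.Create` overloads.

```csharp
using System;
using KZDev.PerfUtils;

class ZeroBuffersSketch
{
    static void Main()
    {
        // ZeroBufferBehavior = OnRelease clears buffers synchronously when
        // they are released, matching how the ZeroBuffers=true benchmarks
        // configure MemoryStreamSlim. The delegate-based options shape
        // below is assumed, not confirmed by these docs.
        using MemoryStreamSlim stream = MemoryStreamSlim.Create(options =>
            options.ZeroBufferBehavior = MemoryStreamSlimZeroBufferOption.OnRelease);

        stream.Write(new byte[1024], 0, 1024);
        Console.WriteLine(stream.Length); // 1024
    }
}
```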
Source/Docs/articles/getting-started.md (+13 −13)
@@ -6,13 +6,13 @@ This article introduces the `KZDev.PerfUtils` library package, which includes **

The standard `MemoryStream` class in the .NET Class Library represents a stream of bytes stored in memory. While it is convenient for working with in-memory data, it has limitations. One major limitation is its reliance on a single byte array to store data, which can lead to significant garbage collection (GC) pressure when handling large amounts of memory or frequently creating and disposing of `MemoryStream` instances.

- The [`MemoryStreamSlim`](xref:KZDev.PerfUtils.MemoryStreamSlim) class is specifically designed to address these limitations. It improves performance in scenarios where MemoryStream would cause high GC pressure and also provides better overall throughput.
+ The [`MemoryStreamSlim`](xref:KZDev.PerfUtils.MemoryStreamSlim) class is specifically designed to address these limitations. It improves performance in scenarios where **MemoryStream** would cause high GC pressure and also provides better overall throughput.

Key topics for **MemoryStreamSlim** include:

- - The [`Memory Stream`](./memorystreamslim.md) topic, which explains how to use the MemoryStreamSlim class, its features, and its internal workings.
+ - The [`Memory Stream`](./memorystreamslim.md) topic, which explains how to use the **MemoryStreamSlim** class, its features, and its internal workings.
- The [`Memory Management`](./memory-management.md) topic, which provides insights into memory management and options for controlling it.
- - The [`Memory Monitoring`](./memory-monitoring.md) topic, which discusses how to monitor memory usage and allocations with MemoryStreamSlim using the .NET Metrics and Events features.
+ - The [`Memory Monitoring`](./memory-monitoring.md) topic, which discusses how to monitor memory usage and allocations with **MemoryStreamSlim** using the .NET Metrics and Events features.
- The [`Benchmarks`](./memorystream-benchmarks.md) topic, which covers performance benchmarks demonstrating the benefits of **MemoryStreamSlim**.

### Compression Helpers
@@ -23,29 +23,29 @@ For more details, see the [Compression](./memorystream-compression.md) topic, wh

## StringBuilderCache

- The **StringBuilder** class is a mutable string class that is more memory-efficient than repeated string concatenation. However, frequent allocation and deallocation of StringBuilder instances and their internal buffers can cause memory pressure in high-throughput scenarios.
+ The **StringBuilder** class is a mutable string class that is more memory-efficient than repeated string concatenation. However, frequent allocation and deallocation of **StringBuilder** instances and their internal buffers can cause memory pressure in high-throughput scenarios.

The [`StringBuilderCache`](xref:KZDev.PerfUtils.StringBuilderCache) class is a static, thread-safe cache of StringBuilder instances. It reduces the number of allocations and deallocations, improving performance in scenarios with frequent string manipulations.

Key topics for `StringBuilderCache` include:

- The [`StringBuilderCache`](./stringbuildercache.md) topic, which explains how to use the class and its benefits.
- - The [`Benchmarks`](./stringbuildercache-benchmarks.md) topic, which provides performance benchmarks for StringBuilderCache.
+ - The [`Benchmarks`](./stringbuildercache-benchmarks.md) topic, which provides performance benchmarks for **StringBuilderCache**.
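The cache-and-reuse pattern described above might look like the following sketch. The `Acquire`/`GetStringAndRelease` method names are assumptions modeled on the .NET runtime's internal `StringBuilderCache`; check the `KZDev.PerfUtils.StringBuilderCache` API reference for the actual members.

```csharp
using System;
using System.Text;
using KZDev.PerfUtils;

class StringBuilderCacheSketch
{
    static string BuildCsv(string[] values)
    {
        // Acquire a cached builder instead of allocating a new one.
        // (Method names assumed by analogy with the runtime's internal cache.)
        StringBuilder builder = StringBuilderCache.Acquire();
        for (int i = 0; i < values.Length; i++)
        {
            if (i > 0) builder.Append(',');
            builder.Append(values[i]);
        }
        // Returns the built string and places the builder back in the cache.
        return StringBuilderCache.GetStringAndRelease(builder);
    }

    static void Main()
    {
        Console.WriteLine(BuildCsv(new[] { "a", "b", "c" })); // a,b,c
    }
}
```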
## Interlocked Operations

The **Interlocked** class in the .NET Class Library provides atomic operations for thread-safe updates to shared variables. However, its functionality is limited to basic operations.

The [`InterlockedOps`](xref:KZDev.PerfUtils.InterlockedOps) class extends the functionality of **Interlocked** by providing additional atomic operations, including:

- - Xor: Performs an exclusive OR operation on any integer type.
- - ClearBits: Clears bits on any integer type.
- - SetBits: Sets bits on any integer type.
- - ConditionAnd: Conditionally updates bits using an AND operation.
- - ConditionOr: Conditionally updates bits using an OR operation.
- - ConditionXor: Conditionally updates bits using an XOR operation.
- - ConditionClearBits: Conditionally clears bits.
- - ConditionSetBits: Conditionally sets bits.
+ - **Xor**: Performs an exclusive OR operation on any integer type.
+ - **ClearBits**: Clears bits on any integer type.
+ - **SetBits**: Sets bits on any integer type.
+ - **ConditionAnd**: Conditionally updates bits using an AND operation.
+ - **ConditionOr**: Conditionally updates bits using an OR operation.
+ - **ConditionXor**: Conditionally updates bits using an XOR operation.
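The flag-manipulation operations in the list above might be used as in the following sketch. The `ref`-based signatures are assumptions by analogy with `System.Threading.Interlocked`; only the method names and their described semantics come from these docs.

```csharp
using System;
using KZDev.PerfUtils;

class InterlockedOpsSketch
{
    private static int _flags;

    const int ReadyFlag = 0b0001;
    const int BusyFlag  = 0b0010;

    static void Main()
    {
        // SetBits/ClearBits perform atomic read-modify-write updates; the
        // ref-based signatures shown here are assumed by analogy with
        // System.Threading.Interlocked -- check the API reference.
        InterlockedOps.SetBits(ref _flags, ReadyFlag | BusyFlag); // _flags: 0b0011
        InterlockedOps.ClearBits(ref _flags, BusyFlag);           // _flags: 0b0001

        // Xor atomically flips bits -- something Interlocked alone cannot
        // do without a hand-rolled compare-exchange loop.
        InterlockedOps.Xor(ref _flags, ReadyFlag);                // _flags: 0

        Console.WriteLine(_flags);
    }
}
```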
Source/Docs/articles/interlockedops.md (+1 −1)
@@ -62,7 +62,7 @@ If the predicate returns false, both values in the tuple will be the same, refle

### Predicate Overloads

- To avoid closures when the predicate requires additional arguments, overloads of the conditional methods accept a predicate with an additional condition data argument.
+ To avoid **closures** when the predicate requires additional arguments, overloads of the conditional methods accept a predicate with an additional condition data argument.
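The closure-avoiding overloads described above might be used as in this sketch. The exact `ConditionSetBits` signature is an assumption built from the description (a predicate receiving caller-supplied condition data, returning an original/updated tuple); verify against the `InterlockedOps` API reference before relying on it.

```csharp
using System;
using KZDev.PerfUtils;

class PredicateOverloadSketch
{
    private static int _state;

    static void Main()
    {
        int requiredMask = 0b0100; // condition data passed as an argument

        // Without the condition-data overload, the lambda would have to
        // capture requiredMask, forcing a closure allocation. With it,
        // the value flows in as an argument and the lambda can be static.
        // (Signature assumed from the article's description.)
        var (original, updated) = InterlockedOps.ConditionSetBits(
            ref _state,
            static (current, mask) => (current & mask) == 0, // predicate + condition data
            requiredMask,
            0b0001); // bits to set when the predicate passes

        Console.WriteLine($"{original} -> {updated}");
    }
}
```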
Source/Docs/articles/memory-management.md (+4 −4)
@@ -12,9 +12,9 @@ By default, **MemoryStreamSlim** zeroes out its memory buffers when they are no

You can control how memory buffers are cleared by setting the `MemoryStreamSlimOptions` [`ZeroBufferBehavior`](xref:KZDev.PerfUtils.MemoryStreamSlimOptions.ZeroBufferBehavior) property. This property can be set globally as a default value or per instance when instantiating **MemoryStreamSlim**. The ZeroBufferBehavior property can be set to one of the following [MemoryStreamSlimZeroBufferOption](xref:KZDev.PerfUtils.MemoryStreamSlimZeroBufferOption) values:

- - None: No clearing of memory buffers is performed. This is the fastest option but may leave potentially sensitive data in memory. For streams that do not contain sensitive data, this can improve performance.
- - OutOfBand: This is the default behavior. Memory buffers are cleared out-of-band, meaning the clearing is done on a separate thread. This reduces the latency impact on the thread disposing of or reducing the capacity of **MemoryStreamSlim**. However, it may briefly leave information in memory before it is cleared and returned to the buffer cache. This hybrid approach balances security and performance.
- - OnRelease: Memory buffers are cleared immediately when the **MemoryStreamSlim** instance is disposed or when buffers are released due to capacity reduction. This is the most secure option but can impact latency performance when disposing of **MemoryStreamSlim** instances with large memory buffers.
+ - **None**: No clearing of memory buffers is performed. This is the fastest option but may leave potentially sensitive data in memory. For streams that do not contain sensitive data, this can improve performance.
+ - **OutOfBand**: This is the default behavior. Memory buffers are cleared out-of-band, meaning the clearing is done on a separate thread. This reduces the latency impact on the thread disposing of or reducing the capacity of **MemoryStreamSlim**. However, it may briefly leave information in memory before it is cleared and returned to the buffer cache. This hybrid approach balances security and performance.
+ - **OnRelease**: Memory buffers are cleared immediately when the **MemoryStreamSlim** instance is disposed or when buffers are released due to capacity reduction. This is the most secure option but can impact latency performance when disposing of **MemoryStreamSlim** instances with large memory buffers.

```csharp
using KZDev.PerfUtils;
```

@@ -49,7 +49,7 @@ using KZDev.PerfUtils;

```csharp
MemoryStreamSlim.ReleaseMemoryBuffers();
```

- The primary goal of caching and reusing memory buffers is to reduce allocations and deallocations and prevent LOH fragmentation. Therefore, it is generally best to let the library manage memory automatically. However, in cases where you have used an exceptionally large **MemoryStreamSlim** instance and know it was a one-off use, you can call this method to quickly release the cached memory buffers.
+ The primary goal of caching and reusing memory buffers is to reduce allocations and deallocations and prevent LOH fragmentation. Therefore, it is generally best to let the library manage memory automatically. However, in cases where you have used an exceptionally large **MemoryStreamSlim** instance and know it was a one-off use, you can call this method to quickly release the excess cached memory buffers.

After calling `ReleaseMemoryBuffers`, future **MemoryStreamSlim** instances will allocate new memory buffers and rebuild the cache as needed. Old memory buffers will become eligible for garbage collection once all stream instances using them are disposed.
Source/Docs/articles/memorystream-benchmarks.md (+3 −3)
@@ -16,21 +16,21 @@ The benchmark scenarios are categorized into five groups:

- **Dynamic Throughput**

- Benchmarks using dynamically expandable streams instantiated with an initial zero length and capacity. [Read More](./dynamic-throughput-benchmarks.md)
+ Benchmarks using dynamically expandable streams instantiated with an initial zero length. [Read More](./dynamic-throughput-benchmarks.md)

- **CopyToAsync Throughput**

Benchmarks demonstrating the performance impact of copying the contents of a stream to another stream asynchronously. [Read More](./copytoasync-throughput-benchmarks.md)

- **Wrapper Throughput**

- Benchmarks for streams instantiated with an already allocated and available byte array to evaluate "wrapped" mode behavior. [Read More](./wrapper-throughput-benchmarks.md)
+ Benchmarks for streams instantiated with an already allocated and available byte array to evaluate "fixed" mode (wrapped array) behavior. [Read More](./wrapper-throughput-benchmarks.md)

## Reading Results

### Parameter Effect

- Not every parameter used in the benchmarks applies to every class being compared. For example, the MemoryStream class does not have an option to zero out memory buffers, as this is its default and only behavior.
+ Not every parameter used in the benchmarks applies to every class being compared. For example, the **MemoryStream** class does not have an option to zero out memory buffers, as this is its default and only behavior.
Source/Docs/articles/memorystream-compression-benchmarks.md (+3 −3)
@@ -40,7 +40,7 @@ A single benchmark operation consists of performing five loops of the following

For the benchmarks:

- **MemoryStreamSlim** and **RecyclableMemoryStream** are created with the option to zero out memory buffers disabled to focus the performance comparison on the compression operation.
- - The MemoryStream class does not support zeroing out memory buffers (its internal buffers are always allocated with new byte[]), so this option does not apply.
+ - The **MemoryStream** class does not support zeroing out memory buffers (its internal buffers are always allocated with `new byte[]`), so this option does not apply.

### Benchmark Parameters

@@ -55,9 +55,9 @@ The size of the source data that is compressed into the destination stream in ea

- When **true**, the destination stream is instantiated with the data size as the initial capacity.
- When **false**, the stream is created with the default capacity (no initial capacity specified).

- The results show no notable difference in throughput performance whether this parameter is 'true' or 'false'. However, there is a noticeable impact on memory allocations for **MemoryStream** and a significant impact for **RecyclableMemoryStream** when this is **false**.
+ The results show no notable difference in throughput performance whether this parameter is **true** or **false**. However, there is a noticeable impact on memory allocations for **MemoryStream** and a significant impact for **RecyclableMemoryStream** when this is **false**.

- The issue with this parameter is that the capacity set on the stream for the benchmark is a best guess based on the data size, and the actual size of the compressed data is not known until after the compression operation is complete. This means that the used capacity may be too small or too large, however the guess capacity used did result in generally better allocation results in these benchmarks.
+ The issue with this parameter is that the capacity set on the stream for the benchmark is a best guess based on the data size (the source data size is used), and the actual size of the compressed data is not known until after the compression operation is complete. This means that the chosen capacity may be too small or too large; however, the guessed capacity generally produced better allocation results in these benchmarks.