Commit bd486f0

Final documentation cleanup.
1 parent 65d4b2f commit bd486f0

9 files changed

Lines changed: 38 additions & 38 deletions

Source/Docs/articles/copytoasync-throughput-benchmarks.md

Lines changed: 2 additions & 2 deletions
@@ -1,6 +1,6 @@
 # CopyToAsync Throughput Benchmark
 
-In-memory-based streams typically do not require asynchronous operations in most scenarios because all operations are synchronous and in-memory. However, there are cases where the data in the stream needs to be copied to another stream instance with actual asynchronous behavior, such as a **FileStream**. While this may not be a common use case, it is a real-world scenario. This is where the `CopyToAsync` method becomes relevant.
+In-memory-based streams typically do not require asynchronous operations in most scenarios because all operations are synchronous and in-memory. However, there are cases where the data in the stream needs to be copied to another stream instance with actual asynchronous behavior, such as a **FileStream**. This is where the `CopyToAsync` method becomes relevant.
 
 This benchmark scenario uses stream instances that are instantiated as expandable (dynamically growing) streams. The stream is filled with random data, similar to the [`Bulk Fill and Read`](./dynamic-throughput-benchmarks.md#bulk-fill-and-read) scenario.
 
@@ -35,7 +35,7 @@ A single benchmark operation consists of performing five loops of the following
 1. Call CopyToAsync() on the stream, passing a mock asynchronous file I/O stream destination.
 1. Dispose of the stream instance.
 
-**MemoryStreamSlim** and **RecyclableMemoryStream** classes are created with the option to zero out memory buffers when they are no longer used disabled to focus the benchmark performance on the **CopyToAsync()** call. The **MemoryStream** class does not have an option to zero out memory buffers (used memory is always cleared, i.e., internal buffers are allocated with new byte[]), so this parameter does not apply to that class.
+**MemoryStreamSlim** and **RecyclableMemoryStream** classes are created with the option to zero out memory buffers when they are no longer used set to disabled in order to focus the benchmark performance on the **CopyToAsync()** call. The **MemoryStream** class does not have an option to zero out memory buffers (used memory is always cleared, i.e., internal buffers are allocated with new byte[]), so this parameter does not apply to that class.
 
 ### Asynchronous Stream Emulation

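The CopyToAsync scenario above can be sketched in code. This is a minimal illustration, not the benchmark harness itself: the file path and data size are made up, and `MemoryStreamSlim.Create()` is assumed to be the library's factory method for an expandable stream.

```csharp
using System;
using System.IO;
using System.Threading.Tasks;
using KZDev.PerfUtils;

class CopyToAsyncSketch
{
    static async Task Main()
    {
        // Fill an expandable in-memory stream with random data.
        await using MemoryStreamSlim source = MemoryStreamSlim.Create();
        byte[] data = new byte[64 * 1024];
        Random.Shared.NextBytes(data);
        source.Write(data, 0, data.Length);
        source.Position = 0;

        // Copy to a destination with real asynchronous I/O behavior.
        await using FileStream destination = new("copy-target.bin", FileMode.Create,
            FileAccess.Write, FileShare.None, bufferSize: 4096, useAsync: true);
        await source.CopyToAsync(destination);
    }
}
```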
Source/Docs/articles/dynamic-throughput-benchmarks.md

Lines changed: 3 additions & 3 deletions
@@ -4,9 +4,9 @@ These benchmarks evaluate the performance of dynamic, expandable streams. The st
 
 ## Summary
 
-The benchmark results show that `MemoryStreamSlim` consistently allocates less memory than other classes, with throughput performance on par with or better than **RecyclableMemoryStream**. In some use cases, MemoryStreamSlim performs dramatically better. While these cases are generally edge scenarios, they demonstrate that MemoryStreamSlim provides more consistent and deterministic performance across a wide range of scenarios.
+The benchmark results show that `MemoryStreamSlim` consistently allocates less memory than other classes, with throughput performance on par with or better than **RecyclableMemoryStream**. In some use cases, **MemoryStreamSlim** performs dramatically better. While these cases may not always represent real-world scenarios, they demonstrate that **MemoryStreamSlim** provides more consistent and deterministic performance across a wide range of scenarios.
 
-For security-sensitive applications, `MemoryStreamSlim` performs better in most cases when the option to zero out unused memory buffers is enabled ([`ZeroBuffers`](#zerobuffers) benchmark parameter). In these benchmarks, when zeroing out memory buffers is enabled, the MemoryStreamSlim option to clear memory buffers "on release" was used to provide a fair comparison to other classes. However, by default, a more efficient option to clear buffers out-of-band is used, which further improves throughput performance by avoiding the cost of clearing memory buffers at the time of release, instead performing this task in a background thread.
+For security-sensitive applications, `MemoryStreamSlim` performs better in most cases when the option to zero out unused memory buffers is enabled ([`ZeroBuffers`](#zerobuffers) benchmark parameter). In these benchmarks, when zeroing out memory buffers is enabled, the **MemoryStreamSlim** option to clear memory buffers "on release" was used to provide a fair comparison to other classes. However, by default, a more efficient option to clear buffers out-of-band is used, which further improves throughput performance by avoiding the cost of clearing memory buffers at the time of release, instead performing this task in a background thread.
 
 The results for segmented operations also show that **RecyclableMemoryStream** has a high memory allocation rate and incurs a large number of garbage collections when stream sizes grow large, especially when the initial capacity is not provided during instantiation ([`CapacityOnCreate`](#capacityoncreate) benchmark parameter is **false**).
 
@@ -77,7 +77,7 @@ The amount of data written to the stream in each loop of the operation. The data
 - When **true**, the stream is created with the option to zero out memory buffers when they are no longer used.
 - When **false**, the stream is created with the option to not zero out memory buffers.
 
-For the **MemoryStreamSlim** class, the [`ZeroBufferBehavior`](xref:KZDev.PerfUtils.MemoryStreamSlimOptions.ZeroBufferBehavior) option is set to `OnRelease` to provide a fair comparison to other classes. The MemoryStream class does not support zeroing out memory buffers (used memory is always cleared), so this parameter does not apply to that class.
+When **ZeroBuffers** is **true**, for the **MemoryStreamSlim** class, the [`ZeroBufferBehavior`](xref:KZDev.PerfUtils.MemoryStreamSlimOptions.ZeroBufferBehavior) option is set to `OnRelease` to provide a fair comparison to other classes. The **MemoryStream** class does not support zeroing out memory buffers (used memory is always cleared), so this parameter does not apply to that class.
 
 ### GrowEachLoop

Source/Docs/articles/getting-started.md

Lines changed: 13 additions & 13 deletions
@@ -6,13 +6,13 @@ This article introduces the `KZDev.PerfUtils` library package, which includes **
 
 The standard `MemoryStream` class in the .NET Class Library represents a stream of bytes stored in memory. While it is convenient for working with in-memory data, it has limitations. One major limitation is its reliance on a single byte array to store data, which can lead to significant garbage collection (GC) pressure when handling large amounts of memory or frequently creating and disposing of `MemoryStream` instances.
 
-The [`MemoryStreamSlim`](xref:KZDev.PerfUtils.MemoryStreamSlim) class is specifically designed to address these limitations. It improves performance in scenarios where MemoryStream would cause high GC pressure and also provides better overall throughput.
+The [`MemoryStreamSlim`](xref:KZDev.PerfUtils.MemoryStreamSlim) class is specifically designed to address these limitations. It improves performance in scenarios where **MemoryStream** would cause high GC pressure and also provides better overall throughput.
 
 Key topics for **MemoryStreamSlim** include:
 
-- The [`Memory Stream`](./memorystreamslim.md) topic, which explains how to use the MemoryStreamSlim class, its features, and its internal workings.
+- The [`Memory Stream`](./memorystreamslim.md) topic, which explains how to use the **MemoryStreamSlim** class, its features, and its internal workings.
 - The [`Memory Management`](./memory-management.md) topic, which provides insights into memory management and options for controlling it.
-- The [`Memory Monitoring`](./memory-monitoring.md) topic, which discusses how to monitor memory usage and allocations with MemoryStreamSlim using the .NET Metrics and Events features.
+- The [`Memory Monitoring`](./memory-monitoring.md) topic, which discusses how to monitor memory usage and allocations with **MemoryStreamSlim** using the .NET Metrics and Events features.
 - The [`Benchmarks`](./memorystream-benchmarks.md) topic, which covers performance benchmarks demonstrating the benefits of **MemoryStreamSlim**.
 
 ### Compression Helpers
@@ -23,29 +23,29 @@ For more details, see the [Compression](./memorystream-compression.md) topic, wh
 
 ## StringBuilderCache
 
-The **StringBuilder** class is a mutable string class that is more memory-efficient than repeated string concatenation. However, frequent allocation and deallocation of StringBuilder instances and their internal buffers can cause memory pressure in high-throughput scenarios.
+The **StringBuilder** class is a mutable string class that is more memory-efficient than repeated string concatenation. However, frequent allocation and deallocation of **StringBuilder** instances and their internal buffers can cause memory pressure in high-throughput scenarios.
 
 The [`StringBuilderCache`](xref:KZDev.PerfUtils.StringBuilderCache) class is a static, thread-safe cache of StringBuilder instances. It reduces the number of allocations and deallocations, improving performance in scenarios with frequent string manipulations.
 
 Key topics for `StringBuilderCache` include:
 
 - The [`StringBuilderCache`](./stringbuildercache.md) topic, which explains how to use the class and its benefits.
-- The [`Benchmarks`](./stringbuildercache-benchmarks.md) topic, which provides performance benchmarks for StringBuilderCache.
+- The [`Benchmarks`](./stringbuildercache-benchmarks.md) topic, which provides performance benchmarks for **StringBuilderCache**.
 
 ## Interlocked Operations
 
 The **Interlocked** class in the .NET Class Library provides atomic operations for thread-safe updates to shared variables. However, its functionality is limited to basic operations.
 
 The [`InterlockedOps`](xref:KZDev.PerfUtils.InterlockedOps) class extends the functionality of **Interlocked** by providing additional atomic operations, including:
 
-- Xor: Performs an exclusive OR operation on any integer type.
-- ClearBits: Clears bits on any integer type.
-- SetBits: Sets bits on any integer type.
-- ConditionAnd: Conditionally updates bits using an AND operation.
-- ConditionOr: Conditionally updates bits using an OR operation.
-- ConditionXor: Conditionally updates bits using an XOR operation.
-- ConditionClearBits: Conditionally clears bits.
-- ConditionSetBits: Conditionally sets bits.
+- **Xor**: Performs an exclusive OR operation on any integer type.
+- **ClearBits**: Clears bits on any integer type.
+- **SetBits**: Sets bits on any integer type.
+- **ConditionAnd**: Conditionally updates bits using an AND operation.
+- **ConditionOr**: Conditionally updates bits using an OR operation.
+- **ConditionXor**: Conditionally updates bits using an XOR operation.
+- **ConditionClearBits**: Conditionally clears bits.
+- **ConditionSetBits**: Conditionally sets bits.
 
 For more details, see the [`Interlocked Operations`](./interlockedops.md) topic.

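As a quick illustration of the first operation in the list above, an atomic XOR can toggle a flag bit in a shared integer, something the built-in `Interlocked` class does not offer directly. This is a minimal sketch; the field and flag names are illustrative, not part of the library.

```csharp
using KZDev.PerfUtils;

public static class FlagToggler
{
    private const int ProcessingFlag = 0b0100;
    private static int _flags;

    // Atomically toggle the processing flag from any thread.
    public static void ToggleProcessing() =>
        InterlockedOps.Xor(ref _flags, ProcessingFlag);

    public static bool IsProcessing => (_flags & ProcessingFlag) != 0;
}
```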
Source/Docs/articles/interlockedops.md

Lines changed: 1 addition & 1 deletion
@@ -62,7 +62,7 @@ If the predicate returns false, both values in the tuple will be the same, refle
 
 ### Predicate Overloads
 
-To avoid closures when the predicate requires additional arguments, overloads of the conditional methods accept a predicate with an additional condition data argument.
+To avoid **closures** when the predicate requires additional arguments, overloads of the conditional methods accept a predicate with an additional condition data argument.
 
 ```csharp
 public class ConditionXorExample

Source/Docs/articles/memory-management.md

Lines changed: 4 additions & 4 deletions
@@ -12,9 +12,9 @@ By default, **MemoryStreamSlim** zeroes out its memory buffers when they are no
 
 You can control how memory buffers are cleared by setting the `MemoryStreamSlimOptions` [`ZeroBufferBehavior`](xref:KZDev.PerfUtils.MemoryStreamSlimOptions.ZeroBufferBehavior) property. This property can be set globally as a default value or per instance when instantiating **MemoryStreamSlim**. The ZeroBufferBehavior property can be set to one of the following [MemoryStreamSlimZeroBufferOption](xref:KZDev.PerfUtils.MemoryStreamSlimZeroBufferOption) values:
 
-- None: No clearing of memory buffers is performed. This is the fastest option but may leave potentially sensitive data in memory. For streams that do not contain sensitive data, this can improve performance.
-- OutOfBand: This is the default behavior. Memory buffers are cleared out-of-band, meaning the clearing is done on a separate thread. This reduces the latency impact on the thread disposing of or reducing the capacity of **MemoryStreamSlim**. However, it may briefly leave information in memory before it is cleared and returned to the buffer cache. This hybrid approach balances security and performance.
-- OnRelease: Memory buffers are cleared immediately when the **MemoryStreamSlim** instance is disposed or when buffers are released due to capacity reduction. This is the most secure option but can impact latency performance when disposing of **MemoryStreamSlim** instances with large memory buffers.
+- **None**: No clearing of memory buffers is performed. This is the fastest option but may leave potentially sensitive data in memory. For streams that do not contain sensitive data, this can improve performance.
+- **OutOfBand**: This is the default behavior. Memory buffers are cleared out-of-band, meaning the clearing is done on a separate thread. This reduces the latency impact on the thread disposing of or reducing the capacity of **MemoryStreamSlim**. However, it may briefly leave information in memory before it is cleared and returned to the buffer cache. This hybrid approach balances security and performance.
+- **OnRelease**: Memory buffers are cleared immediately when the **MemoryStreamSlim** instance is disposed or when buffers are released due to capacity reduction. This is the most secure option but can impact latency performance when disposing of **MemoryStreamSlim** instances with large memory buffers.
 
 ```csharp
 using KZDev.PerfUtils;
@@ -49,7 +49,7 @@ using KZDev.PerfUtils;
 MemoryStreamSlim.ReleaseMemoryBuffers();
 ```
 
-The primary goal of caching and reusing memory buffers is to reduce allocations and deallocations and prevent LOH fragmentation. Therefore, it is generally best to let the library manage memory automatically. However, in cases where you have used an exceptionally large **MemoryStreamSlim** instance and know it was a one-off use, you can call this method to quickly release the cached memory buffers.
+The primary goal of caching and reusing memory buffers is to reduce allocations and deallocations and prevent LOH fragmentation. Therefore, it is generally best to let the library manage memory automatically. However, in cases where you have used an exceptionally large **MemoryStreamSlim** instance and know it was a one-off use, you can call this method to quickly release the excess cached memory buffers.
 
 After calling `ReleaseMemoryBuffers`, future **MemoryStreamSlim** instances will allocate new memory buffers and rebuild the cache as needed. Old memory buffers will become eligible for garbage collection once all stream instances using them are disposed.

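Per the `ZeroBufferBehavior` options described in this file, a stream can opt into the most secure clearing mode per instance. A minimal sketch, assuming an options-configuring overload of `Create`; check the API reference for the exact signature:

```csharp
using KZDev.PerfUtils;

// Opt this instance into immediate buffer clearing (the most secure of the
// three MemoryStreamSlimZeroBufferOption values).
using MemoryStreamSlim stream = MemoryStreamSlim.Create(options =>
    options.ZeroBufferBehavior = MemoryStreamSlimZeroBufferOption.OnRelease);

stream.Write(new byte[] { 1, 2, 3, 4 }, 0, 4);
// On Dispose, the underlying buffers are zeroed before returning to the cache.
```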
Source/Docs/articles/memorystream-benchmarks.md

Lines changed: 3 additions & 3 deletions
@@ -16,21 +16,21 @@ The benchmark scenarios are categorized into five groups:
 
 - **Dynamic Throughput**
 
-Benchmarks using dynamically expandable streams instantiated with an initial zero length and capacity. [Read More](./dynamic-throughput-benchmarks.md)
+Benchmarks using dynamically expandable streams instantiated with an initial zero length. [Read More](./dynamic-throughput-benchmarks.md)
 
 - **CopyToAsync Throughput**
 
 Benchmarks demonstrating the performance impact of copying the contents of a stream to another stream asynchronously. [Read More](./copytoasync-throughput-benchmarks.md)
 
 - **Wrapper Throughput**
 
-Benchmarks for streams instantiated with an already allocated and available byte array to evaluate "wrapped" mode behavior. [Read More](./wrapper-throughput-benchmarks.md)
+Benchmarks for streams instantiated with an already allocated and available byte array to evaluate "fixed" mode (wrapped array) behavior. [Read More](./wrapper-throughput-benchmarks.md)
 
 ## Reading Results
 
 ### Parameter Effect
 
-Not every parameter used in the benchmarks applies to every class being compared. For example, the MemoryStream class does not have an option to zero out memory buffers, as this is its default and only behavior.
+Not every parameter used in the benchmarks applies to every class being compared. For example, the **MemoryStream** class does not have an option to zero out memory buffers, as this is its default and only behavior.
 
 Two considerations apply in such cases:
 
Source/Docs/articles/memorystream-compression-benchmarks.md

Lines changed: 3 additions & 3 deletions
@@ -40,7 +40,7 @@ A single benchmark operation consists of performing five loops of the following
 For the benchmarks:
 
 - **MemoryStreamSlim** and **RecyclableMemoryStream** are created with the option to zero out memory buffers disabled to focus the performance comparison on the compression operation.
-- The MemoryStream class does not support zeroing out memory buffers (its internal buffers are always allocated with new byte[]), so this option does not apply.
+- The **MemoryStream** class does not support zeroing out memory buffers (its internal buffers are always allocated with new byte[]), so this option does not apply.
 
 ### Benchmark Parameters
 
@@ -55,9 +55,9 @@ The size of the source data that is compressed into the destination stream in ea
 - When **true**, the destination stream is instantiated with the data size as the initial capacity.
 - When **false**, the stream is created with the default capacity (no initial capacity specified).
 
-The results show no notable difference in throughput performance whether this parameter is 'true' or 'false'. However, there is a noticeable impact on memory allocations for **MemoryStream** and a significant impact for **RecyclableMemoryStream** when this is **false**.
+The results show no notable difference in throughput performance whether this parameter is **true** or **false**. However, there is a noticeable impact on memory allocations for **MemoryStream** and a significant impact for **RecyclableMemoryStream** when this is **false**.
 
-The issue with this parameter is that the capacity set on the stream for the benchmark is a best guess based on the data size, and the actual size of the compressed data is not known until after the compression operation is complete. This means that the used capacity may be too small or too large, however the guess capacity used did result in generally better allocation results in these benchmarks.
+The issue with this parameter is that the capacity set on the stream for the benchmark is a best guess based on the data size (the source data size is used), and the actual size of the compressed data is not known until after the compression operation is complete. This means that the chosen capacity may be too small or too large; however, the guessed capacity did result in generally better allocation results in these benchmarks.
 
 ## Benchmark Results

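The CapacityOnCreate behavior discussed above can be sketched as follows. This uses the BCL `DeflateStream` for the compression step; the library's own compression helpers may expose a different API, and the `Create(capacity)` presizing overload is assumed here.

```csharp
using System.IO;
using System.IO.Compression;
using KZDev.PerfUtils;

public static class CompressSketch
{
    // Compress sourceData into a MemoryStreamSlim destination. When
    // capacityOnCreate is true, presize with the source length as a
    // best-guess capacity (the compressed size is unknown until afterwards).
    public static MemoryStreamSlim Compress(byte[] sourceData, bool capacityOnCreate)
    {
        MemoryStreamSlim destination = capacityOnCreate
            ? MemoryStreamSlim.Create(sourceData.Length)
            : MemoryStreamSlim.Create();
        using (DeflateStream deflate =
               new(destination, CompressionMode.Compress, leaveOpen: true))
        {
            deflate.Write(sourceData, 0, sourceData.Length);
        }
        destination.Position = 0;
        return destination;
    }
}
```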