## Prerequisites

Before attempting this problem, you should be comfortable with:

- **NumPy Array Operations** - PyTorch tensors mirror NumPy arrays in API and behavior, so if you know NumPy, you already know 80% of PyTorch
- **Reshaping and Dimension Semantics** - Understanding what `dim=0` vs `dim=1` means in a 2D tensor is critical because getting the wrong dimension silently produces wrong results

---

## Concept

PyTorch is the most widely used deep learning framework. Its core data structure is the **tensor**, which works like a NumPy array but adds two superpowers: automatic differentiation (autograd) and GPU acceleration.
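
Here is a minimal sketch of both superpowers (assuming a standard PyTorch install; the CUDA branch only runs if a GPU is present):

::tabs-start
```python
import torch

# Autograd: PyTorch records operations on tensors that require gradients
x = torch.tensor([2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()  # y = x0^2 + x1^2
y.backward()        # populates x.grad with dy/dx = 2x
print(x.grad)       # tensor([4., 6.])

# GPU acceleration: the same tensor code runs on a CUDA device when available
if torch.cuda.is_available():
    x = x.detach().to("cuda")
```
::tabs-end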

**Reshaping** changes a tensor's dimensions without touching its data. A $(2, 4)$ tensor has 8 elements, and you can reshape it to $(4, 2)$, $(1, 8)$, or $(8, 1)$. The total element count must stay the same. This is used constantly: for example, flattening a 2D image into a 1D vector for a linear layer.
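
For example (a quick sketch with made-up values):

::tabs-start
```python
import torch

t = torch.arange(1, 9, dtype=torch.float32).reshape(2, 4)  # 8 elements, shape (2, 4)
print(torch.reshape(t, (4, 2)))        # [[1., 2.], [3., 4.], [5., 6.], [7., 8.]]
print(torch.reshape(t, (1, 8)).shape)  # torch.Size([1, 8])
print(torch.reshape(t, (-1,)).shape)   # torch.Size([8]) -- -1 infers the size
# torch.reshape(t, (3, 3)) would raise a RuntimeError: 9 elements != 8
```
::tabs-end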

**Averaging** along a dimension collapses that dimension. For a $(3, 2)$ tensor, averaging along `dim=0` (rows) produces a $(2,)$ tensor with column means. Averaging along `dim=1` (columns) produces a $(3,)$ tensor with row means. Think of `dim=` as "the dimension that disappears."
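
The same $(3, 2)$ example, concretely:

::tabs-start
```python
import torch

t = torch.tensor([[1., 2.],
                  [3., 4.],
                  [5., 6.]])   # shape (3, 2)
print(torch.mean(t, dim=0))  # tensor([3., 4.]) -- column means, shape (2,)
print(torch.mean(t, dim=1))  # tensor([1.5000, 3.5000, 5.5000]) -- row means, shape (3,)
```
::tabs-end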

**Concatenation** joins tensors along a dimension. Concatenating two $(2, 3)$ tensors along `dim=1` produces a $(2, 6)$ tensor. The tensors must agree on all other dimensions.
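
And the concatenation example in code:

::tabs-start
```python
import torch

a = torch.tensor([[1., 2., 3.],
                  [4., 5., 6.]])     # shape (2, 3)
b = torch.tensor([[7., 8., 9.],
                  [10., 11., 12.]])  # shape (2, 3)
print(torch.cat((a, b), dim=1).shape)  # torch.Size([2, 6]) -- row counts must match
print(torch.cat((a, b), dim=0).shape)  # torch.Size([4, 3]) -- column counts must match
```
::tabs-end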

**MSE Loss** is available as `torch.nn.functional.mse_loss`, computing $\frac{1}{N}\sum_{i=1}^{N}(\text{pred}_i - \text{target}_i)^2$. Using built-in loss functions is preferred because they handle edge cases and are numerically stable.
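
A quick check that the built-in matches the formula:

::tabs-start
```python
import torch
import torch.nn.functional as F

pred = torch.tensor([1.0, 2.0])
target = torch.tensor([1.5, 2.5])
print(F.mse_loss(pred, target))          # tensor(0.2500)
print(torch.mean((pred - target) ** 2))  # tensor(0.2500) -- same value by hand
```
::tabs-end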

---

## Solution

### Intuition

Each method exercises a core PyTorch operation. We use `torch.reshape` for reshaping, `torch.mean` with a dimension argument for averaging, `torch.cat` for concatenation, and `torch.nn.functional.mse_loss` for the loss computation.

### Implementation

::tabs-start
```python
import torch
import torch.nn.functional
from torchtyping import TensorType


class Solution:
    def reshape(self, to_reshape: TensorType[float]) -> TensorType[float]:
        # An M x N tensor holds M * N elements, so (M * N // 2, 2) keeps the count intact
        M, N = to_reshape.shape
        reshaped = torch.reshape(to_reshape, (M * N // 2, 2))
        return torch.round(reshaped, decimals=4)

    def average(self, to_avg: TensorType[float]) -> TensorType[float]:
        # dim=0 collapses the rows, leaving one mean per column
        averaged = torch.mean(to_avg, dim=0)
        return torch.round(averaged, decimals=4)

    def concatenate(self, cat_one: TensorType[float], cat_two: TensorType[float]) -> TensorType[float]:
        # dim=1 joins the tensors side by side; row counts must match
        concatenated = torch.cat((cat_one, cat_two), dim=1)
        return torch.round(concatenated, decimals=4)

    def get_loss(self, prediction: TensorType[float], target: TensorType[float]) -> TensorType[float]:
        # Built-in MSE: the mean of squared elementwise differences
        loss = torch.nn.functional.mse_loss(prediction, target)
        return torch.round(loss, decimals=4)
```
::tabs-end


### Walkthrough

| Operation | Input | Computation | Output |
|---|---|---|---|
| Reshape $(2,4) \to (4,2)$ | $[[1,2,3,4],[5,6,7,8]]$ | Split into pairs of 2 | $[[1,2],[3,4],[5,6],[7,8]]$ |
| Average `dim=0` | $[[1,2],[3,4],[5,6]]$ | Column means | $[3.0, 4.0]$ |
| Concat `dim=1` | $[[1,2],[3,4]]$ and $[[5,6],[7,8]]$ | Side by side | $[[1,2,5,6],[3,4,7,8]]$ |
| MSE Loss | pred $=[1.0,2.0]$, target $=[1.5,2.5]$ | $(0.25+0.25)/2$ | $0.25$ |
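
Every row can be reproduced with the class above (a sanity-check sketch, not part of the graded solution):

::tabs-start
```python
import torch

s = Solution()
print(s.reshape(torch.tensor([[1., 2., 3., 4.], [5., 6., 7., 8.]])))
print(s.average(torch.tensor([[1., 2.], [3., 4.], [5., 6.]])))
print(s.concatenate(torch.tensor([[1., 2.], [3., 4.]]),
                    torch.tensor([[5., 6.], [7., 8.]])))
print(s.get_loss(torch.tensor([1.0, 2.0]), torch.tensor([1.5, 2.5])))
```
::tabs-end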

### Time & Space Complexity

- Time: $O(n)$ for each operation, where $n$ is the total number of elements
- Space: $O(n)$ for the output tensors

---

## Common Pitfalls

### Wrong Dimension for Averaging

`dim=0` averages across rows (column-wise means), `dim=1` averages across columns (row-wise means). These are easy to confuse.

::tabs-start
```python
# Wrong: averages across columns instead of rows
averaged = torch.mean(to_avg, dim=1)

# Correct: averages across rows (column means)
averaged = torch.mean(to_avg, dim=0)
```
::tabs-end


### Mismatched Shapes for Concatenation

Concatenation along `dim=1` requires the same number of rows. Different row counts cause a runtime error.

::tabs-start
```python
# Wrong: different number of rows (2 vs 3)
torch.cat((torch.zeros(2, 3), torch.zeros(3, 3)), dim=1)

# Correct: same number of rows
torch.cat((torch.zeros(2, 3), torch.zeros(2, 3)), dim=1)
```
::tabs-end


---

## In the GPT Project

This becomes `foundations/pytorch_basics.py`. Every subsequent problem uses these operations. Reshaping appears when flattening logits for cross-entropy loss. Averaging is used in layer normalization. Concatenation joins multi-head attention outputs. MSE loss trains regression models.
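
The logits-flattening pattern, for instance, looks roughly like this (the sizes and the names `batch`, `seq_len`, and `vocab` are illustrative assumptions, not fixed by this problem):

::tabs-start
```python
import torch
import torch.nn.functional as F

batch, seq_len, vocab = 2, 5, 10  # hypothetical sizes
logits = torch.randn(batch, seq_len, vocab)
targets = torch.randint(0, vocab, (batch, seq_len))

# F.cross_entropy expects (N, C) logits and (N,) targets, so flatten first
loss = F.cross_entropy(logits.reshape(batch * seq_len, vocab),
                       targets.reshape(batch * seq_len))
```
::tabs-end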

---

## Key Takeaways

- PyTorch tensors mirror NumPy arrays in API but add autograd and GPU support, making them the foundation of modern deep learning.
- `torch.reshape`, `torch.mean`, and `torch.cat` are the three tensor manipulation functions you will use most often.
- Using built-in functions like `mse_loss` is safer than manual computation because they handle numerical edge cases automatically.