
Commit 4dab75c

committed
keyword-ed instances of matmul
1 parent 1b4bc2c commit 4dab75c

1 file changed: 2 additions & 2 deletions


chapters/Optimization.tex

@@ -47,7 +47,7 @@ \section{Some optimizations you are allowed to use}
 
 That being said, there are a few general rules that will prevent your code from accidentally becoming horrendously slow:
 \begin{itemize}
-\item\textbf{Don't reinvent the wheel.} Don't try to write your own matrix multiplication routine, use \texttt{matmul}, or the routines from BLAS (see \nameref{chap:Linear algebra}).
+\item\textbf{Don't reinvent the wheel.} Don't try to write your own matrix multiplication routine, use \keyword{matmul}, or the routines from BLAS (see \nameref{chap:Linear algebra}).
 Fortran has many built-in numerical functions that are much faster than anything you'll be able to write---use them! (Google `Fortran intrinsics' to get an overview.)
 \newpage
 \item\textbf{Use array operations.} You can add two arrays by writing
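The rule in the hunk above can be sketched in a few lines of Fortran (a minimal illustration, not part of the commit): prefer the `matmul` intrinsic and whole-array operations over hand-written loops.

```fortran
! Sketch (not part of the commit): "don't reinvent the wheel" in practice.
! The matmul intrinsic and whole-array expressions replace explicit loops.
program use_intrinsics
  implicit none
  real :: a(3, 3), b(3, 3), c(3, 3), d(3, 3)

  call random_number(a)
  call random_number(b)

  c = matmul(a, b)   ! built-in matrix multiplication
  d = a + b          ! whole-array addition, no explicit loops
  print *, c(1, 1), d(1, 1)
end program use_intrinsics
```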
@@ -134,7 +134,7 @@ \section{Scalability; or, the importance of big O}
 \item Solving a system of linear equations is also $\Oh{n^2}$, but the prefactor is much larger than for matrix-vector multiplication.
 \item \Naive\ matrix-matrix multiplication is $\Oh{n^3}$.
 Libraries such as BLAS and \indexentry{LAPACK} are smarter and do it in $\Oh{n^{\log_2(7)}}\approx\Oh{n^{2.807}}$.
-Another reason to avoid writing this yourself.\footnote{The Fortran intrinsic \texttt{matmul} is actually implemented using BLAS.}
+Another reason to avoid writing this yourself.\footnote{The Fortran intrinsic \keyword{matmul} is actually implemented using BLAS.}
 \item Taking the inverse of\ a matrix and calculating the eigenvalues/eigenvectors are both $\Oh{n^3}$.
 However, eigendecomposition has a much larger prefactor and is therefore significantly slower.
 It is logically the most expensive matrix operation, since all operations on a diagonal matrix are $\Oh{n}$.
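The $\Oh{n^{\log_2 7}}$ bound quoted in the hunk above is Strassen's algorithm, which multiplies two $n \times n$ matrices using 7 (rather than 8) products of $n/2$ blocks. A short derivation via the master theorem (an aside, not part of the commit; it reuses the chapter's \Oh macro):

```latex
% Strassen's recurrence: 7 half-size multiplications plus O(n^2) additions.
\[
  T(n) = 7\,T\!\left(\tfrac{n}{2}\right) + \Oh{n^2}
  \quad\Longrightarrow\quad
  T(n) = \Oh{n^{\log_2 7}} \approx \Oh{n^{2.807}}.
\]
```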
