The ScaledMatrix class

The ScaledMatrix class provides yet another method of running scale() on a matrix.
In other words, these three operations are equivalent:
mat <- matrix(rnorm(10000), ncol=10)
smat1 <- scale(mat)
head(smat1)
## [,1] [,2] [,3] [,4] [,5] [,6]
## [1,] -0.31139755 -0.29202680 0.11337637 0.3722894 0.79256390 -0.3856232
## [2,] 0.55319564 -1.14657537 0.77846638 0.1504829 1.30886184 -1.0455441
## [3,] -0.09344085 -0.05441405 1.55309843 -0.2849963 0.03048235 1.7687088
## [4,] 1.65844349 -0.83336967 0.37910160 0.5083530 1.31146678 -0.3796140
## [5,] 0.35908534 0.05507507 0.58397315 1.9988014 1.72120189 0.6394793
## [6,] 0.73717917 -0.21354296 0.07663761 -0.4763050 -0.93776563 -1.6875382
## [,7] [,8] [,9] [,10]
## [1,] -0.99270094 2.20514864 -1.0729933 -0.2584169
## [2,] 0.94425746 -0.08822147 0.5709287 -0.2672068
## [3,] -0.05168789 0.21701645 -0.4746384 -0.3455821
## [4,] 0.94624390 -0.95320291 -0.2127884 -0.3218301
## [5,] -0.24944553 0.60324367 -0.4495445 -0.8958174
## [6,] -0.31087299 0.89380181 1.0308307 0.2763101
library(DelayedArray)
smat2 <- scale(DelayedArray(mat))
head(smat2)
## <6 x 10> matrix of class DelayedMatrix and type "double":
## [,1] [,2] [,3] ... [,9] [,10]
## [1,] -0.31139755 -0.29202680 0.11337637 . -1.0729933 -0.2584169
## [2,] 0.55319564 -1.14657537 0.77846638 . 0.5709287 -0.2672068
## [3,] -0.09344085 -0.05441405 1.55309843 . -0.4746384 -0.3455821
## [4,] 1.65844349 -0.83336967 0.37910160 . -0.2127884 -0.3218301
## [5,] 0.35908534 0.05507507 0.58397315 . -0.4495445 -0.8958174
## [6,] 0.73717917 -0.21354296 0.07663761 . 1.0308307 0.2763101
library(ScaledMatrix)
smat3 <- ScaledMatrix(mat, center=TRUE, scale=TRUE)
head(smat3)
## <6 x 10> matrix of class ScaledMatrix and type "double":
## [,1] [,2] [,3] ... [,9] [,10]
## [1,] -0.31139755 -0.29202680 0.11337637 . -1.0729933 -0.2584169
## [2,] 0.55319564 -1.14657537 0.77846638 . 0.5709287 -0.2672068
## [3,] -0.09344085 -0.05441405 1.55309843 . -0.4746384 -0.3455821
## [4,] 1.65844349 -0.83336967 0.37910160 . -0.2127884 -0.3218301
## [5,] 0.35908534 0.05507507 0.58397315 . -0.4495445 -0.8958174
## [6,] 0.73717917 -0.21354296 0.07663761 . 1.0308307 0.2763101
The biggest difference lies in how they behave in downstream matrix operations.

smat1 is an ordinary matrix, with the scaled and centered values fully realized in memory. Nothing too unusual here.

smat2 is a DelayedMatrix and undergoes block processing, whereby chunks are realized and operated on, one at a time. This sacrifices speed for greater memory efficiency by avoiding a copy of the entire matrix. In particular, it preserves the structure of the original mat, e.g., from a sparse or file-backed representation.

smat3 is a ScaledMatrix that refactors certain operations so that they can be applied to the original mat without any explicit scaling or centering. This takes advantage of the original data structure to speed up matrix multiplication and row/column sums, albeit at some cost of numerical precision.

Given an original matrix \(\mathbf{X}\) with \(m\) rows and \(n\) columns, a vector of column centers \(\mathbf{c}\) and a vector of column scaling values \(\mathbf{s}\), our scaled matrix can be written as:
\[ \mathbf{Y} = (\mathbf{X} - \mathbf{1}_m \cdot \mathbf{c}^T) \mathbf{S} \]
where \(\mathbf{S} = \text{diag}(s_1^{-1}, ..., s_n^{-1})\) and \(\mathbf{1}_m\) is a length-\(m\) vector of ones. If we wanted to right-multiply it with another matrix \(\mathbf{A}\), we would have:
\[ \mathbf{YA} = \mathbf{X}\mathbf{S}\mathbf{A} - \mathbf{1}_m \cdot \mathbf{c}^T \mathbf{S}\mathbf{A} \]
The right-most term is simply the outer product of \(\mathbf{1}_m\) with \(\mathbf{c}^T \mathbf{S}\mathbf{A}\), i.e., the column sums of \(\mathbf{S}\mathbf{A}\) weighted by \(\mathbf{c}\). More important is the fact that we can use the matrix multiplication operator for \(\mathbf{X}\) with \(\mathbf{S}\mathbf{A}\), as this allows us to use highly efficient algorithms for certain data representations, e.g., sparse matrices.
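This refactoring is easy to verify in base R. The sketch below uses hypothetical names (X, ctr, scl, A) and checks that computing the product from the original matrix plus a correction term agrees with the direct scale()-then-multiply approach:

```r
set.seed(42)
X <- matrix(rnorm(60), nrow=10, ncol=6)
ctr <- colMeans(X)              # column centers, the vector c
scl <- apply(X, 2, sd)          # column scaling values, the vector s
A <- matrix(rnorm(18), nrow=6, ncol=3)

# Direct approach: realize the scaled matrix, then multiply by A.
direct <- scale(X, center=ctr, scale=scl) %*% A

# Refactored approach: multiply the original X by SA, then subtract
# the correction term arising from the centering.
SA <- A / scl                   # S %*% A, with S = diag(1/s)
refactored <- X %*% SA - rep(1, nrow(X)) %o% colSums(ctr * SA)

all.equal(direct, refactored, check.attributes=FALSE)
```

The key point is that X itself is never modified, so a fast sparse or file-backed `%*%` method can be used for the `X %*% SA` step.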
library(Matrix)
mat <- rsparsematrix(20000, 10000, density=0.01)
smat <- ScaledMatrix(mat, center=TRUE, scale=TRUE)
blob <- matrix(runif(ncol(mat) * 5), ncol=5)
system.time(out <- smat %*% blob)
## user system elapsed
## 0.020 0.004 0.024
# The slower way with block processing.
da <- scale(DelayedArray(mat))
system.time(out2 <- da %*% blob)
## user system elapsed
## 33.622 6.782 40.869
The same logic applies for left-multiplication and cross-products. This allows us to easily speed up high-level operations involving matrix multiplication by simply switching to a ScaledMatrix, e.g., in approximate PCA algorithms from the BiocSingular package.
library(BiocSingular)
set.seed(1000)
system.time(pcs <- runSVD(smat, k=10, BSPARAM=IrlbaParam()))
## user system elapsed
## 11.039 0.201 11.240
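For completeness, the left-multiplication case admits an analogous refactoring. The base R sketch below (again with hypothetical names B, ctr and scl) checks that rescaling the columns of the unmodified product and subtracting a centering correction reproduces the direct computation:

```r
set.seed(7)
X <- matrix(rnorm(80), nrow=10, ncol=8)
ctr <- colMeans(X)              # column centers
scl <- apply(X, 2, sd)          # column scaling values
B <- matrix(rnorm(30), nrow=3, ncol=10)

# Direct approach: realize the scaled matrix, then left-multiply.
direct <- B %*% scale(X, center=ctr, scale=scl)

# Refactored approach: multiply by the original X, rescale each column,
# and subtract the correction from the centering.
refactored <- sweep(B %*% X, 2, scl, "/") - rowSums(B) %o% (ctr / scl)

all.equal(direct, refactored, check.attributes=FALSE)
```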
Row and column sums are special cases of matrix multiplication and can be computed quickly:
system.time(rowSums(smat))
## user system elapsed
## 0.011 0.000 0.011
system.time(rowSums(da))
## user system elapsed
## 23.940 7.462 31.403
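The row-sum shortcut corresponds to setting \(\mathbf{A}\) to a column of ones, which collapses the whole computation to a single multiplication against the original matrix plus a constant shift. A small base R sketch of this special case, with hypothetical names:

```r
set.seed(99)
X <- matrix(rnorm(80), nrow=10, ncol=8)
ctr <- colMeans(X)              # column centers
scl <- apply(X, 2, sd)          # column scaling values

# Direct approach: realize the scaled matrix, then sum each row.
direct <- rowSums(scale(X, center=ctr, scale=scl))

# Refactored approach: one multiplication against the original X,
# then a constant shift from the centering and scaling.
refactored <- drop(X %*% (1 / scl)) - sum(ctr / scl)

all.equal(direct, refactored, check.attributes=FALSE)
```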
Subsetting, transposition and renaming of the dimensions are all supported without loss of the ScaledMatrix representation:
smat[,1:5]
## <20000 x 5> matrix of class ScaledMatrix and type "double":
## [,1] [,2] [,3] [,4] [,5]
## [1,] 0.0121071895 0.0008375371 -0.0032606849 0.0004692082 0.0162599926
## [2,] 0.0121071895 0.0008375371 -0.0032606849 0.0004692082 0.0162599926
## [3,] 0.0121071895 0.0008375371 -0.0032606849 0.0004692082 0.0162599926
## [4,] 0.0121071895 0.0008375371 -0.0032606849 0.0004692082 0.0162599926
## [5,] 0.0121071895 0.0008375371 -0.0032606849 0.0004692082 0.0162599926
## ... . . . . .
## [19996,] 0.0121071895 0.0008375371 -0.0032606849 0.0004692082 0.0162599926
## [19997,] 0.0121071895 0.0008375371 -0.0032606849 0.0004692082 0.0162599926
## [19998,] 0.0121071895 0.0008375371 -0.0032606849 0.0004692082 0.0162599926
## [19999,] 0.0121071895 0.0008375371 -0.0032606849 0.0004692082 0.0162599926
## [20000,] 0.0121071895 0.0008375371 -0.0032606849 0.0004692082 0.0162599926
t(smat)
## <10000 x 20000> matrix of class ScaledMatrix and type "double":
## [,1] [,2] [,3] ... [,19999]
## [1,] 0.0121071895 0.0121071895 0.0121071895 . 0.0121071895
## [2,] 0.0008375371 0.0008375371 0.0008375371 . 0.0008375371
## [3,] -0.0032606849 -0.0032606849 -0.0032606849 . -0.0032606849
## [4,] 0.0004692082 0.0004692082 0.0004692082 . 0.0004692082
## [5,] 0.0162599926 0.0162599926 0.0162599926 . 0.0162599926
## ... . . . . .
## [9996,] 0.0047321123 0.0047321123 0.0047321123 . 0.0047321123
## [9997,] -0.0053672241 -0.0053672241 -0.0053672241 . -0.0053672241
## [9998,] 0.0022485016 0.0022485016 0.0022485016 . 0.0022485016
## [9999,] 0.0076265979 0.0076265979 0.0076265979 . 0.0076265979
## [10000,] 0.0009521193 0.0009521193 0.0009521193 . 0.0009521193
## [,20000]
## [1,] 0.0121071895
## [2,] 0.0008375371
## [3,] -0.0032606849
## [4,] 0.0004692082
## [5,] 0.0162599926
## ... .
## [9996,] 0.0047321123
## [9997,] -0.0053672241
## [9998,] 6.1484943743
## [9999,] 0.0076265979
## [10000,] 0.0009521193
rownames(smat) <- paste0("GENE_", 1:20000)
smat
## <20000 x 10000> matrix of class ScaledMatrix and type "double":
## [,1] [,2] [,3] ... [,9999]
## GENE_1 0.0121071895 0.0008375371 -0.0032606849 . 0.0076265979
## GENE_2 0.0121071895 0.0008375371 -0.0032606849 . 0.0076265979
## GENE_3 0.0121071895 0.0008375371 -0.0032606849 . 0.0076265979
## GENE_4 0.0121071895 0.0008375371 -0.0032606849 . 0.0076265979
## GENE_5 0.0121071895 0.0008375371 -0.0032606849 . 0.0076265979
## ... . . . . .
## GENE_19996 0.0121071895 0.0008375371 -0.0032606849 . 0.0076265979
## GENE_19997 0.0121071895 0.0008375371 -0.0032606849 . 0.0076265979
## GENE_19998 0.0121071895 0.0008375371 -0.0032606849 . 0.0076265979
## GENE_19999 0.0121071895 0.0008375371 -0.0032606849 . 0.0076265979
## GENE_20000 0.0121071895 0.0008375371 -0.0032606849 . 0.0076265979
## [,10000]
## GENE_1 0.0009521193
## GENE_2 0.0009521193
## GENE_3 0.0009521193
## GENE_4 0.0009521193
## GENE_5 0.0009521193
## ... .
## GENE_19996 0.0009521193
## GENE_19997 0.0009521193
## GENE_19998 0.0009521193
## GENE_19999 0.0009521193
## GENE_20000 0.0009521193
Other operations will cause the ScaledMatrix to collapse to the general DelayedMatrix representation, after which point block processing will be used.
smat + 1
## <20000 x 10000> matrix of class DelayedMatrix and type "double":
## [,1] [,2] [,3] ... [,9999] [,10000]
## GENE_1 1.0121072 1.0008375 0.9967393 . 1.007627 1.000952
## GENE_2 1.0121072 1.0008375 0.9967393 . 1.007627 1.000952
## GENE_3 1.0121072 1.0008375 0.9967393 . 1.007627 1.000952
## GENE_4 1.0121072 1.0008375 0.9967393 . 1.007627 1.000952
## GENE_5 1.0121072 1.0008375 0.9967393 . 1.007627 1.000952
## ... . . . . . .
## GENE_19996 1.0121072 1.0008375 0.9967393 . 1.007627 1.000952
## GENE_19997 1.0121072 1.0008375 0.9967393 . 1.007627 1.000952
## GENE_19998 1.0121072 1.0008375 0.9967393 . 1.007627 1.000952
## GENE_19999 1.0121072 1.0008375 0.9967393 . 1.007627 1.000952
## GENE_20000 1.0121072 1.0008375 0.9967393 . 1.007627 1.000952
For the most part, the implementation of the multiplication assumes that the \(\mathbf{A}\) matrix and the matrix product are small compared to \(\mathbf{X}\). It is also possible to multiply two ScaledMatrix objects together if the underlying matrices have efficient operators for their product. However, if this is not the case, the ScaledMatrix offers little benefit over the increased overhead.
It is also worth noting that this speed-up is not entirely free. The expression above involves subtracting two matrices with potentially large values, which runs the risk of catastrophic cancellation. The example below demonstrates how the ScaledMatrix is more susceptible to loss of precision than a normal DelayedArray:
set.seed(1000)
mat <- matrix(rnorm(1000000), ncol=100000)
big.mat <- mat + 1e12
# The 'correct' value, unaffected by numerical precision.
ref <- rowMeans(scale(mat))
head(ref)
## [1] -0.0025584703 -0.0008570664 -0.0019225335 -0.0001039903 0.0024761772
## [6] 0.0032943203
# The value from scale'ing a DelayedArray.
library(DelayedArray)
smat2 <- scale(DelayedArray(big.mat))
head(rowMeans(smat2))
## [1] -0.0025583534 -0.0008571123 -0.0019226040 -0.0001039539 0.0024761618
## [6] 0.0032943783
# The value from a ScaledMatrix.
library(ScaledMatrix)
smat3 <- ScaledMatrix(big.mat, center=TRUE, scale=TRUE)
head(rowMeans(smat3))
## [1] -0.00480 0.00848 0.00544 -0.00976 -0.01056 0.01520
In most practical applications, though, this does not seem to be a major concern, especially as most values (e.g., log-normalized expression matrices) lie close to zero anyway.
sessionInfo()
## R version 4.2.1 (2022-06-23)
## Platform: x86_64-pc-linux-gnu (64-bit)
## Running under: Ubuntu 20.04.5 LTS
##
## Matrix products: default
## BLAS: /home/biocbuild/bbs-3.15-bioc/R/lib/libRblas.so
## LAPACK: /home/biocbuild/bbs-3.15-bioc/R/lib/libRlapack.so
##
## locale:
## [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
## [3] LC_TIME=en_GB LC_COLLATE=C
## [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
## [7] LC_PAPER=en_US.UTF-8 LC_NAME=C
## [9] LC_ADDRESS=C LC_TELEPHONE=C
## [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
##
## attached base packages:
## [1] stats4 stats graphics grDevices utils datasets methods
## [8] base
##
## other attached packages:
## [1] BiocSingular_1.12.0 ScaledMatrix_1.4.1 DelayedArray_0.22.0
## [4] IRanges_2.30.1 S4Vectors_0.34.0 MatrixGenerics_1.8.1
## [7] matrixStats_0.62.0 BiocGenerics_0.42.0 Matrix_1.5-0
## [10] BiocStyle_2.24.0
##
## loaded via a namespace (and not attached):
## [1] Rcpp_1.0.9 bslib_0.4.0
## [3] compiler_4.2.1 BiocManager_1.30.18
## [5] jquerylib_0.1.4 tools_4.2.1
## [7] DelayedMatrixStats_1.18.0 digest_0.6.29
## [9] jsonlite_1.8.0 evaluate_0.16
## [11] lattice_0.20-45 rlang_1.0.5
## [13] cli_3.4.0 parallel_4.2.1
## [15] yaml_2.3.5 xfun_0.32
## [17] fastmap_1.1.0 stringr_1.4.1
## [19] knitr_1.40 sass_0.4.2
## [21] grid_4.2.1 R6_2.5.1
## [23] BiocParallel_1.30.3 rmarkdown_2.16
## [25] bookdown_0.28 irlba_2.3.5
## [27] magrittr_2.0.3 codetools_0.2-18
## [29] htmltools_0.5.3 sparseMatrixStats_1.8.0
## [31] rsvd_1.0.5 beachmat_2.12.0
## [33] stringi_1.7.8 cachem_1.0.6