First of all, we need to install NewWave:
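A sketch of the standard Bioconductor installation route (assuming a Bioconductor-compatible R version):

```r
# Install NewWave from Bioconductor
if (!requireNamespace("BiocManager", quietly = TRUE))
    install.packages("BiocManager")
BiocManager::install("NewWave")
```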
NewWave is a new package that assumes a Negative Binomial distribution for dimensionality reduction and batch effect removal. In order to reduce memory consumption it uses a PSOCK cluster combined with the R package SharedObject, which allows a matrix to be shared between different cores without memory duplication. Thanks to this, we can massively parallelize the estimation process, with a large benefit in terms of computation time. We can reduce the time even further by using mini-batch approaches in the different steps of the optimization.
I am going to show how to use NewWave with example data generated with Splatter.
library(splatter)

params <- newSplatParams()
N <- 500
set.seed(1234)
data <- splatSimulateGroups(params, batchCells = c(N/2, N/2),
                            group.prob = rep(0.1, 10),
                            de.prob = 0.2,
                            verbose = FALSE)
Now we have a dataset with 500 cells and 10000 genes; I will use only the 500 most variable genes. NewWave takes raw counts as input, not normalized data.
set.seed(12359)
hvg <- rowVars(counts(data))
names(hvg) <- rownames(counts(data))
data <- data[names(sort(hvg,decreasing=TRUE))[1:500],]
As you can see, there is a variable called Batch in the colData section.
colData(data)
#> DataFrame with 500 rows and 4 columns
#> Cell Batch Group ExpLibSize
#> <character> <character> <factor> <numeric>
#> Cell1 Cell1 Batch1 Group2 41752.9
#> Cell2 Cell2 Batch1 Group7 59876.2
#> Cell3 Cell3 Batch1 Group6 55498.9
#> Cell4 Cell4 Batch1 Group9 49258.3
#> Cell5 Cell5 Batch1 Group1 61011.4
#> ... ... ... ... ...
#> Cell496 Cell496 Batch2 Group1 62947.8
#> Cell497 Cell497 Batch2 Group5 59063.6
#> Cell498 Cell498 Batch2 Group4 53274.1
#> Cell499 Cell499 Batch2 Group9 65240.0
#> Cell500 Cell500 Batch2 Group8 59956.5
IMPORTANT: for batch effect removal, the batch variable must be a factor.
We also have a variable called Group that represents the cell-type labels.
We can see how the cells are distributed between group and batch.
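For example, a quick cross-tabulation with base R's table() shows the group-by-batch composition (the exact counts depend on the simulation seed):

```r
# Cross-tabulate cell-type labels against batch assignment
table(data$Group, data$Batch)
```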
There is a clear batch effect between the cells.
Let’s try to correct it.
I am going to show the different implementations and the suggested way to use them depending on the available hardware.
Some advice:
This is how to supply the batch variable; other cell-level variables can be supplied in the same way through X, and gene-level variables can be supplied through V.
library(NewWave)

res <- newWave(data, X = "~Batch", K = 10, verbose = TRUE)
#> Time of setup
#> user system elapsed
#> 0.013 0.004 0.296
#> Time of initialization
#> user system elapsed
#> 0.051 0.000 0.592
#> Iteration 1
#> penalized log-likelihood = -1291868.62455124
#> Time of dispersion optimization
#> user system elapsed
#> 0.636 0.052 0.688
#> after optimize dispersion = -1055100.84502696
#> Time of right optimization
#> user system elapsed
#> 0.001 0.000 8.577
#> after right optimization= -1054375.89874989
#> after orthogonalization = -1054375.87808076
#> Time of left optimization
#> user system elapsed
#> 0.002 0.000 21.758
#> after left optimization= -1054048.74980816
#> after orthogonalization = -1054048.74663015
#> Iteration 2
#> penalized log-likelihood = -1054048.74663015
#> Time of dispersion optimization
#> user system elapsed
#> 0.884 0.016 0.899
#> after optimize dispersion = -1054041.97214316
#> Time of right optimization
#> user system elapsed
#> 0.002 0.000 6.413
#> after right optimization= -1054009.39792234
#> after orthogonalization = -1054009.39644108
#> Time of left optimization
#> user system elapsed
#> 0.000 0.000 5.088
#> after left optimization= -1053998.73019492
#> after orthogonalization = -1053998.73006203
To make it faster, you can increase the number of cores with the children parameter:
res2 <- newWave(data, X = "~Batch", K = 10, verbose = TRUE, children = 2)
#> Time of setup
#> user system elapsed
#> 0.015 0.008 0.274
#> Time of initialization
#> user system elapsed
#> 0.044 0.016 0.417
#> Iteration 1
#> penalized log-likelihood = -1291868.62487
#> Time of dispersion optimization
#> user system elapsed
#> 0.712 0.003 0.717
#> after optimize dispersion = -1055100.84476161
#> Time of right optimization
#> user system elapsed
#> 0.002 0.000 3.563
#> after right optimization= -1054375.89762047
#> after orthogonalization = -1054375.87695437
#> Time of left optimization
#> user system elapsed
#> 0.002 0.000 6.314
#> after left optimization= -1054048.76996822
#> after orthogonalization = -1054048.76681189
#> Iteration 2
#> penalized log-likelihood = -1054048.76681189
#> Time of dispersion optimization
#> user system elapsed
#> 1.165 0.007 1.175
#> after optimize dispersion = -1054041.99273706
#> Time of right optimization
#> user system elapsed
#> 0.007 0.027 9.137
#> after right optimization= -1054009.35477483
#> after orthogonalization = -1054009.35334615
#> Time of left optimization
#> user system elapsed
#> 0.004 0.000 4.713
#> after left optimization= -1053998.73179938
#> after orthogonalization = -1053998.73167744
If you do not have a high number of cores, this is the fastest way to run NewWave. The optimization consists of three steps iterated until convergence.
Each of these three steps can be accelerated using mini-batches; the number of observations used in each step is set with these parameters:
res3 <- newWave(data, X = "~Batch", verbose = TRUE, K = 10, children = 2,
                n_gene_disp = 100, n_gene_par = 100, n_cell_par = 100)
#> Time of setup
#> user system elapsed
#> 0.013 0.000 0.248
#> Time of initialization
#> user system elapsed
#> 0.043 0.008 0.366
#> Iteration 1
#> penalized log-likelihood = -1291868.62476215
#> Time of dispersion optimization
#> user system elapsed
#> 0.669 0.000 0.669
#> after optimize dispersion = -1055100.84441299
#> Time of right optimization
#> user system elapsed
#> 0.001 0.000 3.765
#> after right optimization= -1054375.89630668
#> after orthogonalization = -1054375.87564422
#> Time of left optimization
#> user system elapsed
#> 0.002 0.000 5.290
#> after left optimization= -1054048.7613862
#> after orthogonalization = -1054048.75821809
#> Iteration 2
#> penalized log-likelihood = -1054048.75821809
#> Time of dispersion optimization
#> user system elapsed
#> 0.420 0.008 0.428
#> after optimize dispersion = -1054048.44177912
#> Time of right optimization
#> user system elapsed
#> 0.004 0.000 1.778
#> after right optimization= -1054040.30191489
#> after orthogonalization = -1054040.29596669
#> Time of left optimization
#> user system elapsed
#> 0.003 0.000 1.200
#> after left optimization= -1054039.69194023
#> after orthogonalization = -1054039.69173985
If you have many cores available, or you want to estimate a genewise dispersion parameter, this is the fastest configuration:
res3 <- newWave(data, X = "~Batch", verbose = TRUE, K = 10, children = 2,
                n_gene_par = 100, n_cell_par = 100, commondispersion = FALSE)
#> Time of setup
#> user system elapsed
#> 0.016 0.008 0.641
#> Time of initialization
#> user system elapsed
#> 0.060 0.004 0.659
#> Iteration 1
#> penalized log-likelihood = -1291868.62480017
#> Time of dispersion optimization
#> user system elapsed
#> 1.434 0.032 1.484
#> after optimize dispersion = -1055100.84556799
#> Time of right optimization
#> user system elapsed
#> 0.004 0.000 7.608
#> after right optimization= -1054375.89912777
#> after orthogonalization = -1054375.87845208
#> Time of left optimization
#> user system elapsed
#> 0.000 0.002 4.955
#> after left optimization= -1054048.75474905
#> after orthogonalization = -1054048.75158629
#> Iteration 2
#> penalized log-likelihood = -1054048.75158629
#> Time of dispersion optimization
#> user system elapsed
#> 0.058 0.004 0.605
#> after optimize dispersion = -1049577.14834921
#> Time of right optimization
#> user system elapsed
#> 0.000 0.001 1.068
#> after right optimization= -1049570.61611706
#> after orthogonalization = -1049570.61544235
#> Time of left optimization
#> user system elapsed
#> 0.002 0.000 1.226
#> after left optimization= -1049534.99159779
#> after orthogonalization = -1049534.99079407
#> Iteration 3
#> penalized log-likelihood = -1049534.99079407
#> Time of dispersion optimization
#> user system elapsed
#> 0.095 0.000 0.413
#> after optimize dispersion = -1049534.99526173
#> Time of right optimization
#> user system elapsed
#> 0.000 0.002 0.601
#> after right optimization= -1049530.08402654
#> after orthogonalization = -1049530.08374441
#> Time of left optimization
#> user system elapsed
#> 0.002 0.000 0.673
#> after left optimization= -1049499.78944819
#> after orthogonalization = -1049499.78872764
NB: do not use n_gene_disp in this case; it would slow down the computation.
Now I can use the latent-dimension representation for visualization purposes:
library(Rtsne)

latent <- reducedDim(res)
tsne_latent <- data.frame(Rtsne(latent)$Y)
tsne_latent$batch <- data$Batch
tsne_latent$group <- data$Group
or for clustering:
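A minimal sketch of clustering on the latent space: kmeans with 10 centers is an illustrative choice here (matching the 10 simulated groups); any clustering method, such as mclust, could be used instead.

```r
# Cluster cells in the K-dimensional latent space returned by newWave
latent <- reducedDim(res)
cluster <- kmeans(latent, centers = 10)$cluster

# Compare the clusters with the true simulated groups
table(cluster, data$Group)
```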
sessionInfo()
#> R version 4.1.1 (2021-08-10)
#> Platform: x86_64-pc-linux-gnu (64-bit)
#> Running under: Ubuntu 20.04.3 LTS
#>
#> Matrix products: default
#> BLAS: /home/biocbuild/bbs-3.14-bioc/R/lib/libRblas.so
#> LAPACK: /home/biocbuild/bbs-3.14-bioc/R/lib/libRlapack.so
#>
#> locale:
#> [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
#> [3] LC_TIME=en_GB LC_COLLATE=C
#> [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
#> [7] LC_PAPER=en_US.UTF-8 LC_NAME=C
#> [9] LC_ADDRESS=C LC_TELEPHONE=C
#> [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
#>
#> attached base packages:
#> [1] stats4 stats graphics grDevices utils datasets methods
#> [8] base
#>
#> other attached packages:
#> [1] NewWave_1.4.0 mclust_5.4.7
#> [3] ggplot2_3.3.5 Rtsne_0.15
#> [5] irlba_2.3.3 Matrix_1.3-4
#> [7] splatter_1.18.0 SingleCellExperiment_1.16.0
#> [9] SummarizedExperiment_1.24.0 Biobase_2.54.0
#> [11] GenomicRanges_1.46.0 GenomeInfoDb_1.30.0
#> [13] IRanges_2.28.0 S4Vectors_0.32.0
#> [15] BiocGenerics_0.40.0 MatrixGenerics_1.6.0
#> [17] matrixStats_0.61.0
#>
#> loaded via a namespace (and not attached):
#> [1] sass_0.4.0 BiocSingular_1.10.0 jsonlite_1.7.2
#> [4] bslib_0.3.1 assertthat_0.2.1 highr_0.9
#> [7] GenomeInfoDbData_1.2.7 yaml_2.2.1 pillar_1.6.4
#> [10] backports_1.2.1 lattice_0.20-45 glue_1.4.2
#> [13] beachmat_2.10.0 SharedObject_1.8.0 digest_0.6.28
#> [16] XVector_0.34.0 checkmate_2.0.0 colorspace_2.0-2
#> [19] htmltools_0.5.2 pkgconfig_2.0.3 zlibbioc_1.40.0
#> [22] purrr_0.3.4 scales_1.1.1 ScaledMatrix_1.2.0
#> [25] BiocParallel_1.28.0 tibble_3.1.5 generics_0.1.1
#> [28] farver_2.1.0 ellipsis_0.3.2 withr_2.4.2
#> [31] magrittr_2.0.1 crayon_1.4.1 evaluate_0.14
#> [34] fansi_0.5.0 tools_4.1.1 lifecycle_1.0.1
#> [37] stringr_1.4.0 munsell_0.5.0 locfit_1.5-9.4
#> [40] DelayedArray_0.20.0 compiler_4.1.1 jquerylib_0.1.4
#> [43] rsvd_1.0.5 rlang_0.4.12 grid_4.1.1
#> [46] RCurl_1.98-1.5 bitops_1.0-7 labeling_0.4.2
#> [49] rmarkdown_2.11 gtable_0.3.0 DBI_1.1.1
#> [52] R6_2.5.1 knitr_1.36 dplyr_1.0.7
#> [55] fastmap_1.1.0 utf8_1.2.2 stringi_1.7.5
#> [58] parallel_4.1.1 Rcpp_1.0.7 vctrs_0.3.8
#> [61] tidyselect_1.1.1 xfun_0.27