RNAmodR.ML 1.8.0
Post-transcriptional modifications are found abundantly in rRNA and tRNA and can be detected classically via several strategies. However, difficulties arise if the identity and the position of the modified nucleotides are to be determined at the same time. Classically, a primer extension, a form of reverse transcription (RT), allows certain modifications to be accessed through blocks during the RT or through changes in the cDNA sequence. Other modifications need to be treated selectively with chemical reagents to influence the outcome of the reverse transcription.
With the increased availability of high throughput sequencing, these classical methods were adapted to high throughput strategies, allowing many more RNA molecules to be assayed at the same time. However, the patterns of some modifications cannot be detected from a small number of parameters. For these cases, machine learning models can be trained on data from positions known to be modified in order to detect additional modified positions.
To extend the functionality of the RNAmodR package and the classical detection strategies used for RiboMethSeq or AlkAnilineSeq (see the RNAmodR.RiboMethSeq and RNAmodR.AlkAnilineSeq packages) towards detection through machine learning models, RNAmodR.ML provides classes and an example workflow.
## snapshotDate(): 2021-10-18
library(rtracklayer)
library(GenomicRanges)
library(RNAmodR.ML)
library(RNAmodR.Data)
The ModifierML class extends the Modifier class from the RNAmodR package and adds one slot, mlModel, a getter/setter getMLModel/setMLModel, and an additional useMLModel function to be called from the aggregate function. The slot mlModel can either be an empty character or contain the name of a ModifierMLModel class, which is loaded upon creation of a ModifierML object and serves as a wrapper around a machine learning model. For different types of machine learning models, ModifierMLModel derived classes are available, which currently are:
- ModifierMLranger for models generated with the ranger package (Wright and Ziegler 2017)
- ModifierMLkeras for models generated with the keras package (Allaire and Chollet 2018)

To illustrate how to develop a machine learning model with help from the RNAmodR.ML package, an example is given below.
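As a quick orientation before the example, the classes described above can be inspected directly in an R session; a minimal sketch using standard S4 introspection:

library(RNAmodR.ML)
# the ModifierML class adds the 'mlModel' slot on top of the Modifier class
getClass("ModifierML")
slotNames("ModifierML")
# wrapper classes currently shipped for ranger and keras based models
getClass("ModifierMLranger")
getClass("ModifierMLkeras")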
Modifier class

As an example for this vignette, we will try to detect D positions in AlkAnilineSeq data. First, define a specific ModifierML class loading pileup and coverage data. In this example, the RNA-specific RNAModifierML class is used.
setClass("ModMLExample",
contains = c("RNAModifierML"),
prototype = list(mod = c("D"),
score = "score",
dataType = c("PileupSequenceData",
"CoverageSequenceData"),
mlModel = character(0)))
# constructor function for ModMLExample
ModMLExample <- function(x, annotation = NA, sequences = NA, seqinfo = NA,
...){
RNAmodR:::Modifier("ModMLExample", x = x, annotation = annotation,
sequences = sequences, seqinfo = seqinfo, ...)
}
setClass("ModSetMLExample",
contains = "ModifierSet",
prototype = list(elementType = "ModMLExample"))
# constructor function for ModSetMLExample
ModSetMLExample <- function(x, annotation = NA, sequences = NA, seqinfo = NA,
...){
RNAmodR:::ModifierSet("ModMLExample", x, annotation = annotation,
sequences = sequences, seqinfo = seqinfo, ...)
}
Since the mlModel slot contains an empty character, the creation of the object will not automatically trigger a search for modifications. However, it will aggregate the data in the format we want to use. The aggregate_example function is just an example; the aggregation of the data is part of the model building (see the Summary section).
setMethod(
f = "aggregateData",
signature = signature(x = "ModMLExample"),
definition =
function(x){
aggregate_example(x)
}
)
To gather training data, we just create a ModMLExample
object and let it do
its job.
me <- ModMLExample(files[[1]], annotation, sequences)
## Import genomic features from the file as a GRanges object ... OK
## Prepare the 'metadata' data frame ... OK
## Make the TxDb object ... OK
## Loading Pileup data from BAM files ... OK
## Loading Coverage data from BAM files ... OK
## Aggregating data and calculating scores ... Starting to search for 'Dihydrouridine' ...
## Warning: ML model not set. Skipped search for modifications ...
## done.
Afterwards we need to load/create coordinates for positions known to be modified as well as positions known to be unmodified.
data("dmod",package = "RNAmodR.ML")
# we just select the next U position from known positions
nextUPos <- function(gr){
nextU <- lapply(seq.int(1L,2L),
function(i){
subseq <- subseq(sequences(me)[dmod$Parent], start(dmod)+3L)
pos <- start(dmod) + 2L +
vapply(strsplit(as.character(subseq),""),
function(y){which(y == "U")[i]},integer(1))
ans <- dmod
ranges(ans) <- IRanges(start = pos, width = 1L)
ans
})
nextU <- do.call(c,nextU)
nextU$mod <- NULL
unique(nextU)
}
nondmod <- nextUPos(dmod)
nondmod <- nondmod[!(nondmod %in% dmod)]
coord <- unique(c(dmod,nondmod))
coord <- coord[order(as.integer(coord$Parent))]
With these coordinates, the aggregated data of the ModMLExample can be subset to a training data set using the function trainingData.
trainingData <- trainingData(me,coord)
trainingData <- unlist(trainingData, use.names = FALSE)
# converting logical labels to integer
trainingData$labels <- as.integer(trainingData$labels)
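Before training, it is worth checking how balanced the training data is, i.e. how many modified and unmodified positions it contains; for example:

# number of unmodified (0) and modified (1) positions in the training data
table(trainingData$labels)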
How a specific model can be trained, or what kind of strategies can be employed to successfully train a model, is out of scope for this vignette. For this example, a random forest is trained using the functionality provided by the ranger package.
library(ranger)
model <- ranger(labels ~ ., data.frame(trainingData))
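ranger reports the out-of-bag prediction error of the fitted forest, which gives a first rough impression of the model before any RNAmodR.ML specific evaluation:

# out-of-bag prediction error of the random forest
model$prediction.error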
Now, the trained model can be used to create a new ModifierMLModel class and object.
setClass("ModifierMLexample",
contains = c("ModifierMLranger"),
prototype = list(model = model))
ModifierMLexample <- function(...){
new("ModifierMLexample")
}
mlmodel <- ModifierMLexample()
To be able to use the model via the ModifierMLModel class, we also need to define an accessor for the predictions made by the model. This function is called useModel and is already predefined for the ModifierMLModel classes mentioned above in the section Using RNAmodR.ML.
getMethod("useModel", c("ModifierMLranger","ModifierML"))
## Method Definition:
##
## function (x, y)
## {
## data <- getAggregateData(y)
## model <- x@model
## if (!is(model, "ranger")) {
## stop("model is not a ranger model")
## }
## unlisted_data <- unlist(data, use.names = FALSE)
## p <- predict(x@model, data.frame(unlisted_data))
## relist(p$predictions, data)
## }
## <bytecode: 0x55aa2c414c10>
## <environment: namespace:RNAmodR.ML>
##
## Signatures:
## x y
## target "ModifierMLranger" "ModifierML"
## defined "ModifierMLranger" "ModifierML"
If the result of this function is not usable for a specific model, it can be redefined to suit your needs. As defined by RNAmodR.ML, the function returns a NumericList along the aggregated data of the ModifierML object.
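As an illustration, such a redefinition for a hypothetical wrapper class could mirror the ranger method shown above. The class name ModifierMLcustom and the use of a generic predict() call are assumptions made for this sketch only:

# hypothetical wrapper class; the 'model' slot is inherited from ModifierMLModel
setClass("ModifierMLcustom", contains = c("ModifierMLModel"))

setMethod("useModel", signature(x = "ModifierMLcustom", y = "ModifierML"),
    function(x, y){
        data <- getAggregateData(y)
        unlisted_data <- unlist(data, use.names = FALSE)
        # assumption: the stored model returns numeric predictions via predict()
        p <- predict(x@model, data.frame(unlisted_data))
        # return the predictions split along the aggregated data
        relist(as.numeric(p), data)
    })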
The generated ModifierMLexample wrapper can now be set for the ModifierML object using the setMLModel function. (If a model is saved as part of a package, this step is not necessary, because it can be part of the class definition.)
setMLModel(me) <- mlmodel
In order for the prediction data to be usable, another function has to be implemented to save the predictions in the aggregated data. This function is called useMLModel.
setMethod(f = "useMLModel",
signature = signature(x = "ModMLExample"),
definition =
function(x){
predictions <- useModel(getMLModel(x), x)
data <- getAggregateData(x)
unlisted_data <- unlist(data, use.names = FALSE)
unlisted_data$score <- unlist(predictions)
x@aggregate <- relist(unlisted_data,data)
x
}
)
By re-running the aggregate function and forcing an update of the data, the predictions are made and used to populate the score column as outlined above.
me <- aggregate(me, force = TRUE)
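A quick look at the aggregated data confirms that the score column is now populated with the predictions, for example for the first transcript:

# the 'score' column now contains the predictions of the random forest
head(getAggregateData(me)[[1L]])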
During the model building phase, some form of performance measurement is usually included. In addition to these model-specific measurements, RNAmodR.ML includes the functionality of the ROCR package inherited from the RNAmodR package. With this, the performance of a model can be evaluated over the training set or any coordinates.
plotROC(me, dmod)
ModifierML class

Since we want to use the ModifierML object to detect modifications, we also need to define the findMod function. Based on the information about the performance, we set a threshold of 0.8 for the prediction score for detecting D modifications. In the example below this threshold is hardcoded in the find_mod_example function, but it can also be implemented using the settings function.
setMethod(
f = "findMod",
signature = signature(x = "ModMLExample"),
definition =
function(x){
find_mod_example(x, 25L)
}
)
Now we can redefine the ModMLExample class with the slot mlModel already set to the ModifierMLexample class. The ModMLExample class is now complete.
rm(me)
setClass("ModMLExample",
contains = c("RNAModifierML"),
prototype = list(mod = c("D"),
score = "score",
dataType = c("PileupSequenceData",
"CoverageSequenceData"),
mlModel = "ModifierMLexample"))
me <- ModMLExample(files[[1]], annotation, sequences)
## Import genomic features from the file as a GRanges object ... OK
## Prepare the 'metadata' data frame ... OK
## Make the TxDb object ... OK
## Loading Pileup data from BAM files ... OK
## Loading Coverage data from BAM files ... OK
## Aggregating data and calculating scores ... Starting to search for 'Dihydrouridine' ... done.
The detected modifications can be accessed using the modifications function.
mod <- modifications(me)
mod <- split(mod, factor(mod$Parent,levels = unique(mod$Parent)))
mod
## GRangesList object of length 37:
## $`1`
## GRanges object with 2 ranges and 5 metadata columns:
## seqnames ranges strand | mod source type
## <Rle> <IRanges> <Rle> | <character> <character> <character>
## [1] Q0020_15S_RRNA 48 + | D RNAmodR.ML RNAMOD
## [2] Q0020_15S_RRNA 323 + | D RNAmodR.ML RNAMOD
## score Parent
## <numeric> <character>
## [1] 0.920805 1
## [2] 0.800471 1
## -------
## seqinfo: 60 sequences from an unspecified genome; no seqlengths
##
## $`3`
## GRanges object with 6 ranges and 5 metadata columns:
## seqnames ranges strand | mod source type score
## <Rle> <IRanges> <Rle> | <character> <character> <character> <numeric>
## [1] RDN18-1 229 + | D RNAmodR.ML RNAMOD 0.809967
## [2] RDN18-1 454 + | D RNAmodR.ML RNAMOD 0.819900
## [3] RDN18-1 494 + | D RNAmodR.ML RNAMOD 0.808833
## [4] RDN18-1 839 + | D RNAmodR.ML RNAMOD 0.807600
## [5] RDN18-1 1225 + | D RNAmodR.ML RNAMOD 0.822500
## [6] RDN18-1 1669 + | D RNAmodR.ML RNAMOD 0.830133
## Parent
## <character>
## [1] 3
## [2] 3
## [3] 3
## [4] 3
## [5] 3
## [6] 3
## -------
## seqinfo: 60 sequences from an unspecified genome; no seqlengths
##
## $`4`
## GRanges object with 9 ranges and 5 metadata columns:
## seqnames ranges strand | mod source type score
## <Rle> <IRanges> <Rle> | <character> <character> <character> <numeric>
## [1] RDN25-1 2 + | D RNAmodR.ML RNAMOD 0.846600
## [2] RDN25-1 14 + | D RNAmodR.ML RNAMOD 0.805367
## [3] RDN25-1 506 + | D RNAmodR.ML RNAMOD 0.805033
## [4] RDN25-1 748 + | D RNAmodR.ML RNAMOD 0.861533
## [5] RDN25-1 1415 + | D RNAmodR.ML RNAMOD 0.822000
## [6] RDN25-1 1720 + | D RNAmodR.ML RNAMOD 0.837267
## [7] RDN25-1 1818 + | D RNAmodR.ML RNAMOD 0.829833
## [8] RDN25-1 2297 + | D RNAmodR.ML RNAMOD 0.821367
## [9] RDN25-1 2571 + | D RNAmodR.ML RNAMOD 0.815067
## Parent
## <character>
## [1] 4
## [2] 4
## [3] 4
## [4] 4
## [5] 4
## [6] 4
## [7] 4
## [8] 4
## [9] 4
## -------
## seqinfo: 60 sequences from an unspecified genome; no seqlengths
##
## ...
## <34 more elements>
Some of the modifications found look reasonable. However, some of the positions appear to be noise.
options(ucscChromosomeNames=FALSE)
plotDataByCoord(sequenceData(me),mod[["4"]][1])
Several options exist to improve the model: either the threshold applied to the prediction score can be raised to a higher value, like 0.9, or the model can be retrained by including these positions in another training data set. In addition, the training data might be improved in general by a higher sequencing depth.
nonValidMod <- mod[c("1","4")]
nonValidMod[["18"]] <- nonValidMod[["18"]][2]
nonValidMod[["26"]] <- nonValidMod[["26"]][2]
nonValidMod <- unlist(nonValidMod)
nonValidMod <- nonValidMod[,"Parent"]
coord <- unique(c(dmod,nondmod,nonValidMod))
coord <- coord[order(as.integer(coord$Parent))]
As an example, a new model is trained including the wrongly identified positions from the previous model as unmodified positions.
trainingData <- trainingData(me,coord)
trainingData <- unlist(trainingData, use.names = FALSE)
trainingData$labels <- as.integer(trainingData$labels)
model2 <- ranger(labels ~ ., data.frame(trainingData), num.trees = 2000)
setClass("ModifierMLexample2",
contains = c("ModifierMLranger"),
prototype = list(model = model2))
ModifierMLexample2 <- function(...){
new("ModifierMLexample2")
}
mlmodel2 <- ModifierMLexample2()
me2 <- me
setMLModel(me2) <- mlmodel2
me2 <- aggregate(me2, force = TRUE)
After updating the ModifierMLexample class and aggregating the data again, the performance looks a bit better…
plotROC(me2, dmod, score="score")
… and leads to a better result for detecting D modifications. Some positions are no longer detected, which suggests that this model is still not an optimal solution and that other factors could and should be improved upon, as suggested above.
setMethod(
f = "findMod",
signature = signature(x = "ModMLExample"),
definition =
function(x){
find_mod_example(x, 25L)
}
)
me2 <- modify(me2, force = TRUE)
modifications(me2)
## GRanges object with 45 ranges and 5 metadata columns:
## seqnames ranges strand | mod source type
## <Rle> <IRanges> <Rle> | <character> <character> <character>
## [1] tA-AGC-D 47 + | D RNAmodR.ML RNAMOD
## [2] tA-TGC-A 47 + | D RNAmodR.ML RNAMOD
## [3] tC-GCA-B 19 + | D RNAmodR.ML RNAMOD
## [4] tC-GCA-B 46 + | D RNAmodR.ML RNAMOD
## [5] tE-CTC-D 20 + | D RNAmodR.ML RNAMOD
## ... ... ... ... . ... ... ...
## [41] tV-CAC-D 47 + | D RNAmodR.ML RNAMOD
## [42] tW-CCA-G1 16 + | D RNAmodR.ML RNAMOD
## [43] tW-CCA-G1 46 + | D RNAmodR.ML RNAMOD
## [44] tY-GTA-D 21 + | D RNAmodR.ML RNAMOD
## [45] tY-GTA-D 22 + | D RNAmodR.ML RNAMOD
## score Parent
## <numeric> <character>
## [1] 0.954300 6
## [2] 0.957833 7
## [3] 0.909648 8
## [4] 0.950142 8
## [5] 0.835575 11
## ... ... ...
## [41] 0.927017 57
## [42] 0.800117 59
## [43] 0.839833 59
## [44] 0.893017 60
## [45] 0.807850 60
## -------
## seqinfo: 60 sequences from an unspecified genome; no seqlengths
In addition to training a single model, several models can be trained and combined into a ModifierSet.
mse <- ModSetMLExample(list(one = me, two = me2))
An overall performance over several models can be analyzed, or the individual performances compared.
plotROC(mse, dmod, score= "score",
plot.args = list(avg = "threshold", spread.estimate = "stderror"))
If several models are trained and each provides useful information, these can be packaged into a single ModifierMLModel class to combine the output of several models. Some of the functions outlined above, e.g. useMLModel and/or useModel, need to be modified to provide one or more scores for detecting a modification.
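A minimal sketch of such a combination is given below, reusing the two ranger models trained above. The class name is an assumption, as is the use of the model slot to hold a list of models; the two prediction tracks are simply averaged:

# a hedged sketch: one wrapper holding both ranger models
setClass("ModifierMLexampleCombined",
         contains = c("ModifierMLModel"),
         prototype = list(model = list(model, model2)))

setMethod("useModel", signature(x = "ModifierMLexampleCombined", y = "ModifierML"),
    function(x, y){
        data <- getAggregateData(y)
        unlisted_data <- unlist(data, use.names = FALSE)
        # one prediction track per model ...
        scores <- vapply(x@model,
                         function(m){
                             predict(m, data.frame(unlisted_data))$predictions
                         },
                         numeric(nrow(unlisted_data)))
        # ... combined here by a simple per-position mean
        relist(rowMeans(scores), data)
    })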
If the created models can be saved to file, they can also be included in a package. This would allow others to use the models and the models can potentially be improved upon with increasing version numbers.
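For a ranger model this can be as simple as serializing the fitted object; the file name below is only an illustration:

# save the fitted model so it can be shipped, e.g. in a package's inst/extdata
saveRDS(model2, file = "ModMLExample_ranger_model.rds")
# ... and reload it when constructing the ModifierMLModel wrapper
model2 <- readRDS("ModMLExample_ranger_model.rds")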
RNAmodR.ML provides the interface for machine learning models to be used with RNAmodR to detect modified nucleotides in high throughput sequencing data. For your own work, defining a working model might take some time. We hope that by using RNAmodR.ML the steps surrounding this crucial task become a bit easier.
However, if some steps or design choices made for RNAmodR.ML do not suit your needs, let us know. Contributions are always welcome as well.
We also want to provide some additional hints and suggestions for developing machine learning models.
- … keras.
- Use cbind on the data from the SequenceData objects (cbind works on SequenceData objects if they are of the same type; convert each of them to a CompressedSplitDataFrameList using as(x,"CompressedSplitDataFrameList")).
- From trainingData, a 2D tensor is returned (sample, values). This can be converted into a 3D tensor by providing a flanking value.

sessionInfo()
## R version 4.1.1 (2021-08-10)
## Platform: x86_64-pc-linux-gnu (64-bit)
## Running under: Ubuntu 20.04.3 LTS
##
## Matrix products: default
## BLAS: /home/biocbuild/bbs-3.14-bioc/R/lib/libRblas.so
## LAPACK: /home/biocbuild/bbs-3.14-bioc/R/lib/libRlapack.so
##
## locale:
## [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
## [3] LC_TIME=en_GB LC_COLLATE=C
## [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
## [7] LC_PAPER=en_US.UTF-8 LC_NAME=C
## [9] LC_ADDRESS=C LC_TELEPHONE=C
## [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
##
## attached base packages:
## [1] stats4 stats graphics grDevices utils datasets methods
## [8] base
##
## other attached packages:
## [1] ranger_0.13.1 Rsamtools_2.10.0 RNAmodR.Data_1.7.0
## [4] ExperimentHubData_1.20.0 AnnotationHubData_1.24.0 futile.logger_1.4.3
## [7] ExperimentHub_2.2.0 AnnotationHub_3.2.0 BiocFileCache_2.2.0
## [10] dbplyr_2.1.1 RNAmodR.ML_1.8.0 RNAmodR_1.8.0
## [13] Modstrings_1.10.0 Biostrings_2.62.0 XVector_0.34.0
## [16] rtracklayer_1.54.0 GenomicRanges_1.46.0 GenomeInfoDb_1.30.0
## [19] IRanges_2.28.0 S4Vectors_0.32.0 BiocGenerics_0.40.0
## [22] BiocStyle_2.22.0
##
## loaded via a namespace (and not attached):
## [1] backports_1.2.1 Hmisc_4.6-0
## [3] plyr_1.8.6 lazyeval_0.2.2
## [5] splines_4.1.1 BiocParallel_1.28.0
## [7] ggplot2_3.3.5 digest_0.6.28
## [9] ensembldb_2.18.0 htmltools_0.5.2
## [11] magick_2.7.3 fansi_0.5.0
## [13] magrittr_2.0.1 checkmate_2.0.0
## [15] memoise_2.0.0 BSgenome_1.62.0
## [17] cluster_2.1.2 ROCR_1.0-11
## [19] matrixStats_0.61.0 prettyunits_1.1.1
## [21] jpeg_0.1-9 colorspace_2.0-2
## [23] blob_1.2.2 rappdirs_0.3.3
## [25] xfun_0.27 dplyr_1.0.7
## [27] crayon_1.4.1 RCurl_1.98-1.5
## [29] jsonlite_1.7.2 graph_1.72.0
## [31] survival_3.2-13 VariantAnnotation_1.40.0
## [33] glue_1.4.2 gtable_0.3.0
## [35] zlibbioc_1.40.0 DelayedArray_0.20.0
## [37] scales_1.1.1 futile.options_1.0.1
## [39] DBI_1.1.1 Rcpp_1.0.7
## [41] xtable_1.8-4 progress_1.2.2
## [43] htmlTable_2.3.0 foreign_0.8-81
## [45] bit_4.0.4 OrganismDbi_1.36.0
## [47] Formula_1.2-4 AnnotationForge_1.36.0
## [49] getopt_1.20.3 htmlwidgets_1.5.4
## [51] httr_1.4.2 gplots_3.1.1
## [53] RColorBrewer_1.1-2 ellipsis_0.3.2
## [55] pkgconfig_2.0.3 XML_3.99-0.8
## [57] Gviz_1.38.0 nnet_7.3-16
## [59] sass_0.4.0 utf8_1.2.2
## [61] tidyselect_1.1.1 rlang_0.4.12
## [63] reshape2_1.4.4 later_1.3.0
## [65] AnnotationDbi_1.56.0 biocViews_1.62.0
## [67] munsell_0.5.0 BiocVersion_3.14.0
## [69] tools_4.1.1 cachem_1.0.6
## [71] generics_0.1.1 RSQLite_2.2.8
## [73] evaluate_0.14 stringr_1.4.0
## [75] fastmap_1.1.0 yaml_2.2.1
## [77] knitr_1.36 bit64_4.0.5
## [79] caTools_1.18.2 purrr_0.3.4
## [81] KEGGREST_1.34.0 AnnotationFilter_1.18.0
## [83] RBGL_1.70.0 mime_0.12
## [85] formatR_1.11 xml2_1.3.2
## [87] biomaRt_2.50.0 compiler_4.1.1
## [89] rstudioapi_0.13 filelock_1.0.2
## [91] curl_4.3.2 png_0.1-7
## [93] interactiveDisplayBase_1.32.0 tibble_3.1.5
## [95] bslib_0.3.1 stringi_1.7.5
## [97] highr_0.9 GenomicFeatures_1.46.0
## [99] lattice_0.20-45 ProtGenerics_1.26.0
## [101] Matrix_1.3-4 vctrs_0.3.8
## [103] stringdist_0.9.8 BiocCheck_1.30.0
## [105] pillar_1.6.4 lifecycle_1.0.1
## [107] optparse_1.7.1 RUnit_0.4.32
## [109] BiocManager_1.30.16 jquerylib_0.1.4
## [111] data.table_1.14.2 bitops_1.0-7
## [113] httpuv_1.6.3 colorRamps_2.3
## [115] R6_2.5.1 BiocIO_1.4.0
## [117] latticeExtra_0.6-29 bookdown_0.24
## [119] promises_1.2.0.1 KernSmooth_2.23-20
## [121] gridExtra_2.3 codetools_0.2-18
## [123] lambda.r_1.2.4 dichromat_2.0-0
## [125] gtools_3.9.2 assertthat_0.2.1
## [127] SummarizedExperiment_1.24.0 rjson_0.2.20
## [129] withr_2.4.2 GenomicAlignments_1.30.0
## [131] GenomeInfoDbData_1.2.7 parallel_4.1.1
## [133] hms_1.1.1 grid_4.1.1
## [135] rpart_4.1-15 rmarkdown_2.11
## [137] MatrixGenerics_1.6.0 biovizBase_1.42.0
## [139] Biobase_2.54.0 shiny_1.7.1
## [141] base64enc_0.1-3 restfulr_0.0.13
Allaire, JJ, and François Chollet. 2018. Keras: R Interface to ’Keras’. https://CRAN.R-project.org/package=keras.
Wright, Marvin N., and Andreas Ziegler. 2017. “ranger: A Fast Implementation of Random Forests for High Dimensional Data in C++ and R.” Journal of Statistical Software 77 (1): 1–17. https://doi.org/10.18637/jss.v077.i01.