Single cell RNA-seq data analysis with R: video study notes (3)

Lecture 3: Normalization of scRNA-seq data
Video link: https://www.youtube.com/watch?v=gbIks6bA8nI&list=PLjiXAZO27elC_xnk7gVNM85I2IQl5BEJN&index=3&frags=wn

In normalization, we remove non-biological variation; you can also remove biological variation that is not interesting from your point of view. The aim is to make the contributions from different cells comparable. In single-cell sequencing with 3' tag data, usually only between-cell normalization is done, and you do not need to normalize for gene length. But if you have full-length data, for example from SMART-seq2, then gene length must also be taken into account.

So the aim of normalization is to remove the biases typical of single-cell data. After normalization, gene expression should no longer be clearly correlated with the sequencing depth of the cells, and the variation should mainly reflect biological rather than technical variation across the cells.

We already discussed that QC for scRNA-seq data is different from bulk RNA-seq. One reason is that the data are noisy: a single cell has low mRNA content, and there are differences in mRNA capture efficiency, sequencing depth, and other random variation. You also have different cell types within the same sample. So bulk RNA-seq normalization methods don't necessarily work well by themselves.

In order to separate the biological and technical variance, you have to estimate what kind of variance you have in the data. Sometimes this has been done with spike-in RNAs, but with droplet-based single-cell methods spike-ins are not usually used, because you would end up sequencing a lot of empty barcodes containing nothing but spikes, and the cost is not really worth it. So spike-ins are mainly used in plate-based methods. Otherwise, the technical variance needs to be estimated from the data as a whole. Here it is assumed that most genes do not change their expression, so the bulk of the variation in gene expression is probably due to technical reasons.

There are many different normalization methods. The most common ones are based on size factors in one way or another, but lately some new probabilistic normalization models have also been introduced.

CPM (counts per million) and TPM (transcripts per million) methods are by themselves not really sufficient, but they can be combined with other procedures, as we will see, to make them better suited for scRNA-seq data. If you sequence full-length transcripts with a plate-based method, you can also use gene-length normalization methods.
DESeq is generally not used for normalization here, because with the many dropouts the size factors can end up being 0.
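As a minimal sketch (not from the lecture itself): CPM can be computed directly from a raw count matrix, or with the Bioconductor package scater. Here `counts` is an assumed genes-by-cells matrix and `sce` an assumed SingleCellExperiment object:

```r
# Manual CPM: scale each cell (column) to one million total counts;
# `counts` is an assumed genes-by-cells matrix of raw counts
cpm_manual <- t(t(counts) / colSums(counts)) * 1e6

# The same via scater on a SingleCellExperiment object `sce`
library(scater)
cpm(sce) <- calculateCPM(sce)
```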

One group of size-factor-based methods, the global scaling methods, is commonly used. These assume that the total RNA level does not vary much between cells; how reasonable that assumption is depends on the data. A modified CPM normalization is used in the 10x Cell Ranger software: basically a CPM-like normalization followed by a log transformation. It seems to work well for many datasets; a sketch follows below.
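A minimal sketch of this kind of global scaling in base R, assuming a genes-by-cells UMI count matrix named `counts` (the name is illustrative):

```r
# CPM-like scaling followed by log transformation (Cell Ranger-style)
lib_sizes    <- colSums(counts)
size_factors <- lib_sizes / mean(lib_sizes)  # size factors centered around 1
norm_counts  <- t(t(counts) / size_factors)  # divide each cell by its factor
log_norm     <- log1p(norm_counts)           # log(x + 1) keeps zeros finite
```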

One quite recently published method is deconvolution pooling. In this method, you pool several cells together into a kind of pseudo-cell and sum up all the counts to get rid of the dropout problem, then normalize that against a reference pseudo-cell made by pooling all the cells together. You repeat this many times until every individual cell is part of several different pools, and from those pools you can deconvolute the cell-specific size factors.
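The deconvolution approach is implemented in the Bioconductor package scran; a sketch, assuming `sce` is a SingleCellExperiment holding raw counts:

```r
library(scran)
library(scater)

# Rough pre-clustering, so cells are only pooled with similar cells
clusters <- quickCluster(sce)

# Pool-based size factors, deconvolved back to individual cells
sce <- computeSumFactors(sce, clusters = clusters)
summary(sizeFactors(sce))  # sanity check: all factors should be positive

# Apply the size factors and log-transform
sce <- logNormCounts(sce)
```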

One more thing that I want to mention is Bayesian model-based methods. (These are suited to data that have spike-ins.)
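One example is BASiCS; a rough sketch, assuming `sce` carries spike-in counts (the MCMC settings here are illustrative only and far too short for real use):

```r
library(BASiCS)

# Bayesian normalization using spike-ins; N/Thin/Burn are placeholder values
chain <- BASiCS_MCMC(Data = sce, N = 1000, Thin = 10, Burn = 500,
                     WithSpikes = TRUE, Regression = TRUE)

# Counts with the cell-specific normalization factors divided out
denoised <- BASiCS_DenoisedCounts(Data = sce, Chain = chain)
```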

The next step before principal component analysis or clustering is selecting interesting genes. Usually you want to exclude genes that are not variably expressed, because they do not give you any interesting information anyway. By doing this you can improve the signal-to-noise ratio and also make the statistics and computations a little easier. This can be done with many different methods. One popular group of methods is selecting HVGs, highly variable genes. You can also look at gene correlations. And one quite obvious way is to do PCA and then choose the top genes on the PCs (sketched right below).
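A sketch of the PCA route in base R, assuming `log_norm` is a genes-by-cells matrix of log-normalized expression (the name is illustrative):

```r
# PCA on cells; genes appear as loadings in the rotation matrix
pca <- prcomp(t(log_norm), center = TRUE)

# Rank genes by their total absolute loading on the first 10 PCs
loading_score <- rowSums(abs(pca$rotation[, 1:10]))
top_by_pca <- names(sort(loading_score, decreasing = TRUE))[1:2000]
```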

For the highly variable genes, you are looking for genes that stand out from the null model describing the technical noise in your data. One common way of doing this is to fit some kind of mean-variance trend to your data (see the sketch after this paragraph). Another way is to look at the coefficient of variation. Yet another method is to look at the dropout rate of the genes: a high number of zeros for some genes may mean that they are cell-type-specifically expressed.
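The mean-variance trend fit is available in scran; a sketch, assuming `sce` already contains log-normalized counts:

```r
library(scran)

# Fit a mean-variance trend and split each gene's variance into a
# technical part (the trend) and a biological part (the residual)
dec <- modelGeneVar(sce)

# Keep the genes furthest above the trend, e.g. the top 2000
top_hvgs <- getTopHVGs(dec, n = 2000)
```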

You can also use gene correlations for selecting interesting genes. The idea is that if you have different cell types in your data, there will be groups of genes that are correlated in their expression. That correlation tells you that they are probably truly variable genes rather than just randomly fluctuating up and down. This method assumes that the technical noise is independent for each cell, which is obviously not always true: if you have batch effects, the assumption doesn't hold. (A sketch of this approach follows below.) So far I have only talked about unsupervised methods of choosing variable genes; you might also want to use a list of genes that you know are important in your biological model. For instance, if you have already done a similar experiment and got a list of variable genes, you might want to use the same list to analyze the new dataset. You just need to ensure that the new data doesn't contain some other variability that you are not aware of.
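scran also provides a pairwise-correlation test along these lines; a sketch reusing the `top_hvgs` set from above so that the number of gene pairs stays manageable:

```r
library(scran)

# Test pairwise correlations among the HVGs; significantly
# correlated pairs point to co-regulated groups of genes
cor_pairs <- correlatePairs(sce, subset.row = top_hvgs)
head(cor_pairs[order(cor_pairs$FDR), ])
```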

Appendix: hands-on normalization exercise (with very detailed code and downloadable practice data): https://github.com/NBISweden/excelerate-scRNAseq/blob/master/session-normalization/Normalization_with_answers.md
