Clinical Prediction Models + Machine Learning (R)

Chapters 0-1


0. Course Introduction

It is recommended to first take an R-based statistics course.

Definition

Model-building approaches

    1. Parametric and semi-parametric models have regression coefficients (beta), whereas most machine learning models have no such explicit coefficients.
    2. Regularization techniques serve as variable-selection methods, including ridge regression, lasso regression, and elastic net; by contrast, cluster analysis, principal components, and factor analysis are generally dimension-reduction methods (e.g., condensing 100+ variables into about 10). A lasso-based variable-selection sketch follows this list.
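As a minimal, hedged sketch of the variable-selection idea (not part of the course code; the glmnet package and the simulated x and y below are assumptions used purely for illustration):

# Lasso-based variable selection with glmnet: coefficients shrunk exactly to
# zero are effectively dropped from the model.
library(glmnet)
set.seed(1)
x <- matrix(rnorm(200 * 20), nrow = 200, ncol = 20)       # 20 candidate predictors
y <- rbinom(200, 1, plogis(x[, 1] - x[, 2]))              # outcome driven by only 2 of them
cvfit <- cv.glmnet(x, y, family = "binomial", alpha = 1)  # alpha = 1 -> lasso (0 = ridge, in between = elastic net)
coef(cvfit, s = "lambda.min")                             # nonzero rows are the selected variables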

Course schedule (schedule slides 1-8 omitted)

Chapter 1: Generalized Linear Models (GLM) in R


**A link function maps the mean of a non-normally distributed response onto the scale of a linear predictor, which is what lets GLMs handle outcomes such as binary or count data.**
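A minimal sketch of how the link function enters a glm() call (the toy data frame d below is an assumption, for illustration only):

# Logit link: the linear predictor lives on the log-odds scale, and the
# inverse link (plogis) maps it back to a probability.
set.seed(1)
d <- data.frame(x = rnorm(100))
d$y <- rbinom(100, 1, plogis(0.5 * d$x))
fit_logit <- glm(y ~ x, data = d, family = binomial(link = "logit"))
head(predict(fit_logit, type = "link"))          # linear predictor X*beta (log-odds)
head(plogis(predict(fit_logit, type = "link")))  # inverse link -> probabilities
head(predict(fit_logit, type = "response"))      # same probabilities, computed directly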


Model fitting and regression diagnostics

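As a minimal sketch of routine GLM diagnostics in base R (assuming a fitted glm object called fit, such as fit.full from the chapter code below):

# Base-R diagnostics for a fitted glm object `fit`.
par(mfrow = c(2, 2))
plot(fit)                        # residuals vs fitted, Q-Q, scale-location, leverage
par(mfrow = c(1, 1))
summary(residuals(fit, type = "deviance"))           # deviance residuals
head(sort(cooks.distance(fit), decreasing = TRUE))   # most influential observations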

## Chapter 1 code: begin
# 01) Generalized linear models
# requires packages AER, robust, qcc
# install.packages(c("AER", "robust", "qcc"))

#####################################################################################################
## 02) Logistic Regression  logistic回归

# get summary statistics
data(Affairs, package="AER")
#View(Affairs)
summary(Affairs)  # summary statistics
str(Affairs)      # inspect the data structure
table(Affairs$affairs)

# create binary outcome variable
Affairs$ynaffair[Affairs$affairs > 0] <- 1
Affairs$ynaffair[Affairs$affairs == 0] <- 0
Affairs$ynaffair <- factor(Affairs$ynaffair, 
                           levels=c(0,1),
                           labels=c("No","Yes"))
> table(Affairs$ynaffair)
  No Yes 
  451 150 
# fit full model
# Note: the outcome here is the newly created binary variable ynaffair (affair vs. no affair), so binary logistic regression is appropriate; modeling the raw affairs count directly would call for Poisson regression, since that outcome is a non-negative integer count.
fit.full <- glm(ynaffair ~ gender + age + yearsmarried + children + 
                  religiousness + education + occupation +rating,
                data=Affairs,family=binomial())  # family=binomial() specifies logistic (logit) regression
> summary(fit.full)  # coefficients flagged with * below are statistically significant
Coefficients:
              Estimate Std. Error z value Pr(>|z|)    
(Intercept)    1.37726    0.88776   1.551 0.120807    # (Intercept) is the constant term
gendermale     0.28029    0.23909   1.172 0.241083    
age           -0.04426    0.01825  -2.425 0.015301 *  
yearsmarried   0.09477    0.03221   2.942 0.003262 ** 
childrenyes    0.39767    0.29151   1.364 0.172508    
religiousness -0.32472    0.08975  -3.618 0.000297 ***
education      0.02105    0.05051   0.417 0.676851    
occupation     0.03092    0.07178   0.431 0.666630    
rating        -0.46845    0.09091  -5.153 2.56e-07 ***
---
Signif. codes:  
0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 675.38  on 600  degrees of freedom
Residual deviance: 609.51  on 592  degrees of freedom
AIC: 627.51  # AIC (Akaike information criterion) measures goodness of fit penalized for model complexity; lower is better
# fit reduced model: refit using only the predictors flagged with * in summary(fit.full)
fit.reduced <- glm(ynaffair ~ age + yearsmarried + religiousness + 
                     rating, data=Affairs, family=binomial())
> summary(fit.reduced)
Coefficients:
              Estimate Std. Error z value Pr(>|z|)    
(Intercept)    1.93083    0.61032   3.164 0.001558 ** 
age           -0.03527    0.01736  -2.032 0.042127 *  
yearsmarried   0.10062    0.02921   3.445 0.000571 ***
religiousness -0.32902    0.08945  -3.678 0.000235 ***
rating        -0.46136    0.08884  -5.193 2.06e-07 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 675.38  on 600  degrees of freedom
Residual deviance: 615.36  on 596  degrees of freedom
AIC: 625.36

# compare models
> anova(fit.reduced, fit.full, test="Chisq")  # compare the two models; the difference is not statistically significant
Analysis of Deviance Table

Model 1: ynaffair ~ age + yearsmarried + religiousness + rating
Model 2: ynaffair ~ gender + age + yearsmarried + children + religiousness + 
    education + occupation + rating
  Resid. Df Resid. Dev Df Deviance Pr(>Chi)
1       596     615.36                     
2       592     609.51  4   5.8474   0.2108
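
# Added sketch (not in the original script): the same comparison viewed
# through AIC; the reduced model has the lower AIC and is preferred.
AIC(fit.full, fit.reduced)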


# interpret coefficients
coef(fit.reduced)            # regression coefficients
exp(coef(fit.reduced))       # odds ratios (OR)
exp(confint(fit.reduced))    # confidence intervals for the ORs
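
# Added sketch (not in the original script): a compact table of ORs with
# their 95% confidence intervals.
exp(cbind(OR = coef(fit.reduced), confint(fit.reduced)))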

# calculate probability of extramarital affair by marital rating: build a new data frame that varies rating while holding the other predictors at their means
testdata <- data.frame(rating = c(1, 2, 3, 4, 5),
                       age = mean(Affairs$age),
                       yearsmarried = mean(Affairs$yearsmarried),
                       religiousness = mean(Affairs$religiousness))
> testdata$prob <- predict(fit.reduced, newdata=testdata, type="response")
> testdata
  rating      age yearsmarried religiousness      prob
1      1 32.48752     8.177696      3.116473 0.5302296
2      2 32.48752     8.177696      3.116473 0.4157377
3      3 32.48752     8.177696      3.116473 0.3096712
4      4 32.48752     8.177696      3.116473 0.2204547
5      5 32.48752     8.177696      3.116473 0.1513079

# calculate probabilities of extramarital affair by age
> testdata <- data.frame(rating = mean(Affairs$rating),
                       age = seq(17, 57, 10), 
                       yearsmarried = mean(Affairs$yearsmarried),
                       religiousness = mean(Affairs$religiousness))
> testdata$prob <- predict(fit.reduced, newdata=testdata, type="response")
> testdata  # younger respondents (around age 17-27) have the highest predicted probability of an affair
   rating age yearsmarried religiousness      prob
1 3.93178  17     8.177696      3.116473 0.3350834
2 3.93178  27     8.177696      3.116473 0.2615373
3 3.93178  37     8.177696      3.116473 0.1992953
4 3.93178  47     8.177696      3.116473 0.1488796
5 3.93178  57     8.177696      3.116473 0.1094738
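
# Added sketch (not in the original script): visualise how the predicted
# probability of an affair declines with age.
plot(testdata$age, testdata$prob, type = "b",
     xlab = "Age", ylab = "Predicted probability of affair")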

# evaluate overdispersion
> fit <- glm(ynaffair ~ age + yearsmarried + religiousness +
             rating, family = binomial(), data = Affairs)  # ordinary logistic regression
> fit.od <- glm(ynaffair ~ age + yearsmarried + religiousness +
                rating, family = quasibinomial(), data = Affairs)  # quasibinomial fit allowing for overdispersion ("robust" logistic regression)
> pchisq(summary(fit.od)$dispersion * fit$df.residual,  
       fit$df.residual, lower = F)  # chi-square p-value: if > 0.05, there is no evidence of overdispersion; otherwise use the quasibinomial model
[1] 0.340122




##########################################################################################
# Logistic regression, example 2: main effects, an interaction term, and dummy coding
Example11_4  <- read.table ("example11_4.csv", header=TRUE, sep=",")
attach(Example11_4)

fit1 <- glm(y~ x1 + x2, family= binomial(), data=Example11_4)
summary(fit1)
coefficients(fit1)
exp(coefficients(fit1))
exp (confint(fit1))

fit2 <- glm(y~ x1 + x2 + x1:x2 ,  family= binomial(), data=Example11_4)
summary(fit2)
coefficients(fit2)
exp(coefficients(fit2))
exp (confint(fit2))

Example11_4$x11  <- ifelse (x1==1 & x2==1, 1, 0)  # dummy variables for the four x1/x2 combinations (x00 is the reference)
Example11_4$x10  <- ifelse (x1==1 & x2==0, 1, 0)
Example11_4$x01  <- ifelse (x1==0 & x2==1, 1, 0)
Example11_4$x00  <- ifelse (x1==0 & x2==0, 1, 0)

fit3 <- glm(y~ x11 + x10 + x01, family= binomial(), data=Example11_4)
summary(fit3)
coefficients(fit3)
exp(coefficients(fit3))
exp(confint(fit3))
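
# Added sketch (not in the original script): fit3, which uses dummy variables
# with x00 as the reference category, is just a reparameterization of the
# interaction model fit2, so their fitted probabilities should agree.
all.equal(unname(fitted(fit2)), unname(fitted(fit3)))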
detach (Example11_4)

# Logistic regression, example 3: stepwise variable selection
Example11_5 <- read.table ("example11_5.csv", header=TRUE, sep=",")
attach(Example11_5)
fullfit <- glm(y~ x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8 ,  family= binomial(), data=Example11_5)
summary(fullfit)
nothing <- glm(y~ 1, family= binomial(), data=Example11_5)
summary(nothing)
bothways <- step(nothing, list(lower=formula(nothing), upper=formula(fullfit)),  direction="both")
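
# Added sketch (not in the original script): inspect the model chosen by the
# stepwise search before refitting it by hand as fit1/fit2 below.
formula(bothways)
summary(bothways)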
fit1 <- glm(y~ x6 + x5 + x8 + x1 + x2 ,  family= binomial(), data=Example11_5)
summary(fit1)
fit2 <- glm(y~ x6 + x5 + x8 + x1, family= binomial(), data=Example11_5)
summary(fit2)
coefficients(fit2)
exp(coefficients(fit2))
exp (confint(fit2))
detach (Example11_5)

# Conditional logistic regression, example 4 (matched case-control data)
#install.packages("survival")
library(survival)
Example11_6  <- read.table ("example11_6.csv", header=TRUE, sep=",")
attach(Example11_6)
model <- clogit(outcome ~ exposure + strata(id), data = Example11_6)  # strata(id) defines the matched sets
summary(model)
detach(Example11_6)

###################################################################################################
# 03) Nomogram
# Logistic regression, example 5: nomogram for low birth weight
library(foreign) 
library(rms)

mydata<-read.spss("lweight.sav")
mydata<-as.data.frame(mydata)
head(mydata)

mydata$low <- ifelse(mydata$low =="低出生体重",1,0)
mydata$race1 <- ifelse(mydata$race =="白种人",1,0)
mydata$race2 <- ifelse(mydata$race =="黑种人",1,0)
mydata$race3 <- ifelse(mydata$race =="其他种族",1,0)

attach(mydata)
dd<-datadist(mydata)
options(datadist='dd')

fit1<-lrm(low~age+ftv+ht+lwt+ptl+smoke+ui+race1+race2,data=mydata,x=T,y=T)
fit1
summary(fit1)
nom1 <- nomogram(fit1, fun=plogis,fun.at=c(.001, .01, .05, seq(.1,.9, by=.1), .95, .99, .999),lp=F, funlabel="Low weight rate")
plot(nom1)
cal1 <- calibrate(fit1, method='boot', B=1000)
plot(cal1,xlim=c(0,1.0),ylim=c(0,1.0))

mydata$race <- as.factor(ifelse(mydata$race=="白种人", "白种人","黑人及其他种族"))

dd<-datadist(mydata)
options(datadist='dd')

fit2<-lrm(low~age+ftv+ht+lwt+ptl+smoke+ui+race,data=mydata,x=T,y=T)
fit2
summary(fit2)

nom2 <- nomogram(fit2, fun=plogis,fun.at=c(.001, .01, .05, seq(.1,.9, by=.1), .95, .99, .999),lp=F, funlabel="Low weight rate")
plot(nom2)
cal2 <- calibrate(fit2, method='boot', B=1000)
plot(cal2,xlim=c(0,1.0),ylim=c(0,1.0))

fit3<-lrm(low~ht+lwt+ptl+smoke+race,data=mydata,x=T,y=T)
fit3
summary(fit3)

nom3 <- nomogram(fit3, fun=plogis,fun.at=c(.001, .01, .05, seq(.1,.9, by=.1), .95, .99, .999),lp=F, funlabel="Low weight rate")
plot(nom3)
cal3 <- calibrate(fit3, method='boot', B=1000)
plot(cal3,xlim=c(0,1.0),ylim=c(0,1.0))
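
# Added sketch (not in the original script; uses the rms package loaded above):
# bootstrap validation of fit3 gives an optimism-corrected Dxy, and the
# corrected C-statistic is C = Dxy/2 + 0.5.
val3 <- validate(fit3, method = "boot", B = 1000)
val3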


#########################################################################################################
#  04) Computing the C-statistic
library(foreign) 
library(rms)

mydata<-read.spss("lweight.sav")
mydata<-as.data.frame(mydata)
head(mydata)

mydata$low <- ifelse(mydata$low =="低出生体重",1,0)
mydata$race1 <- ifelse(mydata$race =="白种人",1,0)
mydata$race2 <- ifelse(mydata$race =="黑种人",1,0)
mydata$race3 <- ifelse(mydata$race =="其他种族",1,0)

attach(mydata)
dd<-datadist(mydata)
options(datadist='dd')

fit1<-lrm(low~age+ftv+ht+lwt+ptl+smoke+ui+race1+race2,data=mydata,x=T,y=T)
fit1  # the C-statistic can be read directly from the 'Rank Discrim.' index C in the model printout

mydata$predvalue<-predict(fit1)
library(ROCR)
pred <- prediction(mydata$predvalue, mydata$low)
perf<- performance(pred,"tpr","fpr")
plot(perf)
abline(0,1)
auc <- performance(pred,"auc")
auc  # the AUC is the C-statistic

#library(Hmisc)
somers2(mydata$predvalue, mydata$low) #somers2 {Hmisc}
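
# Added sketch (assumes the pROC package is installed: install.packages("pROC")):
# an alternative C-statistic (AUC) computation with a 95% confidence interval.
library(pROC)
roc1 <- roc(mydata$low, mydata$predvalue)
auc(roc1)
ci.auc(roc1)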

######################################################################################################
## 05) Forest plot for subgroup analysis
# Logistic regression, example 6: forest plot for subgroup analysis
library(forestplot)
rs_forest <- read.csv('rs_forest.csv',header = FALSE)
# When reading the data, be sure to set header = FALSE so that the first row is not treated as column names.
# tiff('Figure 1.tiff',height = 1600,width = 2400,res= 300)
forestplot(labeltext = as.matrix(rs_forest[,1:3]),
           # columns used as plain text in the plot; here the first three columns of the data are displayed
           mean = rs_forest$V4,  # point estimates
           lower = rs_forest$V5, # lower confidence limits
           upper = rs_forest$V6, # upper confidence limits
           is.summary = c(T,T,T,F,F,T,F,F,T,F,F),
           # logical vector marking which rows are summary rows; TRUE rows are printed in bold
           zero = 1, # reference line; ORs are plotted, so the reference value is 1, not 0
           boxsize = 0.4, # size of the point-estimate boxes
           lineheight = unit(10,'mm'), # row spacing
           colgap = unit(3,'mm'), # column spacing
           lwd.zero = 2, # width of the reference line
           lwd.ci = 1.5, # width of the confidence-interval lines
           col=fpColors(box='#458B00', summary= "#8B008B",lines = 'black',zero = '#7AC5CD'),
           # fpColors() sets the colours of, in order: point-estimate boxes, summary rows, CI lines, and the reference line
           xlab="The estimates", # x-axis label
           graph.pos = 3) # column in which the forest plot itself is drawn; 3 puts it in the third column

##################################################################################################################
## 06) Poisson Regression

# look at dataset
data(breslow.dat, package="robust")
names(breslow.dat)
summary(breslow.dat[c(6, 7, 8, 10)])
str(breslow.dat[c(6, 7, 8, 10)])
# plot distribution of post-treatment seizure counts
opar <- par(no.readonly=TRUE)
par(mfrow=c(1, 2))
attach(breslow.dat)
hist(sumY, breaks=20, xlab="Seizure Count", 
     main="Distribution of Seizures")
boxplot(sumY ~ Trt, xlab="Treatment", main="Group Comparisons")
par(opar)

# fit the Poisson regression
> fit <- glm(sumY ~ Base + Age + Trt, data=breslow.dat, family=poisson())
> summary(fit)
Call:
glm(formula = sumY ~ Base + Age + Trt, family = poisson(), data = breslow.dat)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-6.0569  -2.0433  -0.9397   0.7929  11.0061  
# which factors affect seizure counts? see the Coefficients table:
Coefficients:
               Estimate Std. Error z value Pr(>|z|)    
(Intercept)   1.9488259  0.1356191  14.370  < 2e-16 *** # intercept (constant term)
Base          0.0226517  0.0005093  44.476  < 2e-16 *** # significant effect
Age           0.0227401  0.0040240   5.651 1.59e-08 *** # significant effect
Trtprogabide -0.1527009  0.0478051  -3.194   0.0014 **  # significant effect
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 2122.73  on 58  degrees of freedom
Residual deviance:  559.44  on 55  degrees of freedom
AIC: 850.71

Number of Fisher Scoring iterations: 5



# interpret model parameters
> coef(fit)
(Intercept)         Base          Age Trtprogabide 
  1.94882593   0.02265174   0.02274013  -0.15270095 
> exp(coef(fit))  # exponentiated coefficients; Poisson regression coefficients are less straightforward to interpret
(Intercept)         Base          Age   Trtprogabide 
   7.0204403    1.0229102    1.0230007    0.8583864 

### Unlike logistic regression, Poisson results take more care to interpret: exp(coef) for Trtprogabide, 0.8583864, is the ratio of expected seizure counts in the progabide group relative to the control group.


# evaluate overdispersion
> deviance(fit)/df.residual(fit)
> library(qcc)
> qcc.overdispersion.test(breslow.dat$sumY, type="poisson")
Overdispersion test Obs.Var/Theor.Var Statistic p-value
       poisson data          62.87013  3646.468       0
## a p-value of 0 indicates overdispersion, so a robust (quasi-Poisson) fit is needed; after refitting, only one predictor remains significant
# fit model with quasipoisson: the qcc.overdispersion.test above indicates overdispersion, so refit with family = quasipoisson() and compare with the ordinary Poisson fit (where all three predictors appeared significant)
fit.od <- glm(sumY ~ Base + Age + Trt, data=breslow.dat,
              family=quasipoisson())
summary(fit.od)
Coefficients:
              Estimate Std. Error t value Pr(>|t|)    
(Intercept)   1.948826   0.465091   4.190 0.000102 ***
Base          0.022652   0.001747  12.969  < 2e-16 ***  ### only Base remains significant
Age           0.022740   0.013800   1.648 0.105085    
Trtprogabide -0.152701   0.163943  -0.931 0.355702     ### the drug effect is no longer significant; statistical and clinical significance can differ
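
# Added sketch (uses the MASS package shipped with R): a negative binomial
# model is another common way to handle overdispersed count data.
library(MASS)
fit.nb <- glm.nb(sumY ~ Base + Age + Trt, data = breslow.dat)
summary(fit.nb)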
## Chapter 1 code: end

Poisson distribution example

  • Logistic regression is for binary ("yes"/"no") outcomes.
  • Poisson regression is for non-negative integer counts (0, 1, 2, ...).
