Machine Learning, Homework 9, Neural Nets

April 15, 2019

Contents

Boston Housing with a Single Layer and R package nnet
    Problem
Digit Recognition with R package h2o
    Problem

Boston Housing with a Single Layer and R package nnet

Let's do a very simple example with single layer neural nets.

We'll do the Boston housing data with x = lstat and y = medv, so that we have one numeric x and a numeric y. We've used this classic data set a few times, so we are very familiar with it.

Let's get the data, pull off x and y, and standardize x.

library(MASS) ## a library of example datasets
attach(Boston)

## standardize lstat
rg = range(Boston$lstat)
lstats = (Boston$lstat - rg[1]) / (rg[2] - rg[1])

## make data frame with standardized lstat values sorted for plotting
ddf = data.frame(lstats, medv = Boston$medv)
oo = order(ddf$lstats) # order the data by x, convenient for plotting
ddf = ddf[oo, ]
head(ddf)

##          lstats medv
## 162 0.000000000 50.0
## 163 0.005242826 50.0
## 41  0.006898455 34.9
## 233 0.020419426 41.7
## 193 0.031456954 36.4
## 205 0.031732892 50.0

And here is the familiar plot:

plot(ddf)

[Figure: scatter plot of medv versus standardized lstats.]

Let's fit a simple neural net: one hidden layer with 5 units (neurons).

library(nnet)
set.seed(14)
nn1 = nnet(medv ~ lstats, ddf, size = 5, decay = .1, linout = T, maxit = 1000)

## # weights: 16
## initial value 274435.143486
## iter 10 value 14655.902880
## iter 20 value 13675.210318
## iter 30 value 13618.543249
## iter 40 value 13593.167670
## iter 50 value 13548.561442
## iter 60 value 13545.520754
## iter 70 value 13544.330448
## iter 80 value 13541.583759
## iter 90 value 13540.386199
## iter 100 value 13539.604916
## iter 110 value 13536.860853
## iter 120 value 13535.643158
## iter 130 value 13535.589069
## final value 13535.578458
## converged

summary(nn1)

## a 1-5-1 network with 16 weights
## options were - linear output units  decay=0.1
##  b->h1 i1->h1
##   1.06   0.69
##  b->h2 i1->h2
##   2.38 -38.17
##  b->h3 i1->h3
##   2.49  -7.61
##  b->h4 i1->h4
##   2.05   0.55
##  b->h5 i1->h5
##   2.53  -7.60
##  b->o  h1->o  h2->o  h3->o  h4->o  h5->o
##  4.67   3.64  21.22   9.19   3.48   8.93

Now let's plot the fit:

yhat1 = predict(nn1, ddf)
plot(ddf)
lines(ddf$lstats, yhat1, lty = 1, col = "red", lwd = 3)

[Figure: the data with the fitted curve from nn1 overlaid in red.]

Notice that you understand exactly how the single layer neural net fit did this!!!
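To see this concretely, here is a minimal sketch that rebuilds the fitted curve by hand from the weights printed by summary(nn1) above (rounded to two decimals): each hidden unit is a logistic sigmoid in x, and with linout=T the output is just a linear combination of the five units plus an intercept.

## rebuild the nn1 fit by hand from the summary(nn1) weights
s = plogis        # nnet's hidden units use the logistic sigmoid
xx = ddf$lstats
h = cbind(s(1.06 + 0.69*xx),   # h1
          s(2.38 - 38.17*xx),  # h2
          s(2.49 - 7.61*xx),   # h3
          s(2.05 + 0.55*xx),   # h4
          s(2.53 - 7.60*xx))   # h5
yhat.byhand = 4.67 + h %*% c(3.64, 21.22, 9.19, 3.48, 8.93)
## up to the two-decimal rounding of the printed weights,
## this matches predict(nn1, ddf)
plot(ddf)
lines(ddf$lstats, yhat.byhand, col = "blue", lwd = 2)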
Now let's fit the 5-unit neural net for a set of decay values.

Let's do this in parallel using the R parallel package. This is simple enough that we don't really need to speed it up, but we can illustrate the approach. You may want to use it for some of the more complicated model fits!

library(doParallel) # library for parallel computing

## Loading required package: foreach
## Loading required package: iterators
## Loading required package: parallel

registerDoParallel()
cat("number of workers is: ", getDoParWorkers(), "\n")

## number of workers is: 4

# you could pick the number of workers with:
# registerDoParallel(cores = num), where num is the number of workers.

Now we will use the function foreach to fit neural net models in parallel. First we set up a vector of decay values to try. Then we use foreach to run the neural net fits. foreach will return a list, with the ith list element corresponding to the results obtained in the ith loop iteration.

decv = c(.5, .1, .01, .005, .0025, .001, .0001, .00001)

# do a parallel loop over decay values
modsL = foreach(i = 1:length(decv)) %dopar% {
   library(nnet) # I did not have to do this when I was not in R Markdown.
   set.seed(5*i) # I did have to do this.
   nnfit = nnet(medv ~ lstats, ddf, size = 5, decay = decv[i], linout = T, maxit = 10000)
   nnfit
}

is.list(modsL)

## [1] TRUE

length(modsL)

## [1] 8

The function foreach will launch a bunch of R processes, so things like random number seeds may have to be reset for each process.

Now we can plot all the fits by looping over the list of models.

plot(ddf)
for(i in 1:length(modsL)) {
   yhat = predict(modsL[[i]], ddf)
   lines(ddf$lstats, yhat, col = i, lty = i, lwd = 2)
}

[Figure: the data with the eight fitted curves overlaid, one color/line type per decay value.]
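A quick numeric companion to the plot: a minimal sketch (reusing modsL, decv, and ddf from above) that computes the in-sample RMSE of each fit. Keep in mind these are training errors, so the smallest decay values will tend to look best even when they overfit.

## in-sample RMSE for each decay value
rmse = sapply(modsL, function(m) sqrt(mean((ddf$medv - predict(m, ddf))^2)))
print(cbind(decay = decv, rmse = rmse))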
Problem

- Fit the neural net model with size=100 and decay=.001 and plot the fits. How does it look? Try running the fit at least twice to see that it changes.
- Redo the loop over decay values with size=100. How does it look now? Do we need 100 units? Will decay be more important with 100 units than it was with 5?

Digit Recognition with R package h2o

First, let's fire up h2o.

print(date())

## [1] "Tue Apr 16 16:22:12 2019"

library(h2o)

## ----------------------------------------------------------------------
## Your next step is to start H2O:
##     > h2o.init()
## For H2O package documentation, ask for help:
##     > ??h2o
## After starting H2O, you can use the Web UI at http://localhost:54321
## For more information visit http://docs.h2o.ai
## ----------------------------------------------------------------------
##
## Attaching package: 'h2o'
## The following objects are masked from 'package:stats':
##     cor, sd, var
## The following objects are masked from 'package:base':
##     &&, %*%, %in%, ||, apply, as.factor, as.numeric, colnames,
##     colnames<-, log10, log1p, log2, round, signif, trunc

h2o.init()

## H2O is not running yet, starting it now...
##
## Note: In case of errors look at the following log files:
##     /tmp/RtmpzaPmRq/h2o_root_started_from_r.out
##     /tmp/RtmpzaPmRq/h2o_root_started_from_r.err
##
## Starting H2O JVM and connecting: . Connection successful!
##
## R is connected to the H2O cluster:
##     H2O cluster uptime:         1 seconds 203 milliseconds
##     H2O cluster timezone:       America/Phoenix
##     H2O data parsing timezone:  UTC
##     H2O cluster version:        3.20.0.8
##     H2O cluster version age:    6 months and 25 days !!!
##     H2O cluster name:           H2O_started_from_R_root_jrw534
##     H2O cluster total nodes:    1
##     H2O cluster total memory:   6.84 GB
##     H2O cluster total cores:    8
##     H2O cluster allowed cores:  8
##     H2O cluster healthy:        TRUE
##     H2O Connection ip:          localhost
##     H2O Connection port:        54321
##     H2O Connection proxy:       NA
##     H2O Internal Security:      FALSE
##     H2O API Extensions:         XGBoost, Algos, AutoML, Core V3, Core V4
##     R Version:                  R version 3.5.1 (2018-07-02)

## Warning in h2o.clusterInfo():
## Your H2O cluster version is too old (6 months and 25 days)!
## Please download and install the latest version from http://h2o.ai/download/

Now we can read in the data. In order to make things run faster, I'll down-sample to just ns = 10,000 observations.

train60D = read.csv("http://www.rob-mcculloch.org/data/mnist-train.csv")
train60D$C785 = as.factor(train60D$C785)
n = nrow(train60D)
set.seed(99)
ns = 10000
trainDS = train60D[sample(1:n, ns), ]
trainS = as.h2o(trainDS, "trainS")

testD = read.csv("http://www.rob-mcculloch.org/data/mnist-test.csv")
testD$C785 = as.factor(testD$C785)
test = as.h2o(testD, "test")

x = 1:784; y = 785
print(ls())

##  [1] "ddf"      "decv"     "i"        "lstats"   "modsL"    "n"
##  [7] "nn1"      "ns"       "oo"       "rg"       "test"     "testD"
## [13] "train60D" "trainDS"  "trainS"   "x"        "y"        "yhat"
## [19] "yhat1"

print(h2o.ls())

##      key
## 1   test
## 2 trainS
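Before fitting, it can help to look at a few observations. Each row of the training data is a 28 x 28 gray-scale image flattened into the pixel columns C1-C784, with the digit label in C785. Here is a minimal sketch to display one digit; it assumes the pixels are stored row by row, so if the image comes out rotated on your machine, adjust the transpose/flip.

## display the first down-sampled training digit
z = matrix(as.numeric(trainDS[1, 1:784]), nrow = 28, byrow = TRUE)
image(t(z)[, 28:1], col = gray(seq(1, 0, length.out = 256)),
      main = paste("label:", trainDS$C785[1]), axes = FALSE)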
Let's run h2o.deeplearning at settings similar to the ones that were found to work in the lecture notes. I dropped the layer/node architecture down to (50,50) so it would run faster. On my laptop it took about 90 seconds to run the one below. I don't know how long it will take on your machine.

fp = file.path("./files", "mDNNdrop")
if(file.exists(fp)) {
   mDNNdrop = h2o.loadModel(fp)
} else {
   tm = system.time({
      mDNNdrop = h2o.deeplearning(x, y, training_frame = trainS,
                                  hidden = c(50, 50),
                                  activation = "TanhWithDropout",
                                  hidden_dropout_ratios = c(.1, .1),
                                  l1 = 1e-4,
                                  epochs = 2000,
                                  model_id = "mDNNdrop",
                                  validation_frame = test)
   })
}

## Warning in .h2o.startModelJob(algo, params, h2oRestApiVersion): Dropping
## bad and constant columns: [C646, C645, C644, C365, C760, C51, C53, C52,
## C55, C54, C57, C56, C59, C58, C533, C253, C60, C703, C702, C701, C700, C1,
## C422, C2, C784, C3, C420, C783, C4, C782, C5, C143, C781, C6, C142, C780,
## C7, C141, C8, C9, C674, C673, C672, C393, C84, C83, C86, C85, C88, C87,
## C729, C728, C727, C726, C169, C561, C281, C11, C10, C12, C15, C617, C616,
## C17, C16, C19, C18, C699, C732, C731, C730, C450, C170, C20, C22, C21,
## C24, C23, C26, C25, C28, C505, C27, C29, C589, C225, C588, C31, C30, C32,
## C35, C759, C758, C757, C756, C755, C754, C115, C753, C477, C113, C112,
## C111, C197].

cat("the time is: ", tm, "\n")

## the time is: 0.617 0.005 80.098 0 0

print(h2o.confusionMatrix(mDNNdrop, valid = TRUE))

## Confusion Matrix: Row labels: Actual class; Column labels: Predicted class
##            0    1    2    3    4   5   6    7   8   9  Error             Rate
## 0        959    0    2    1    0   9   6    1   2   0 0.0214 =       21 / 980
## 1          0 1111    1    7    0   1   5    3   7   0 0.0211 =    24 / 1,135
## 2         20    3  954   14    7   1  10    7  16   0 0.0756 =    78 / 1,032
## 3          0    1   18  946    0  15   2   16   9   3 0.0634 =    64 / 1,010
## 4          1    0    3    0  938   1  11    4   3  21 0.0448 =      44 / 982
## 5          6    2    5   33   10 775  15   11  29   6 0.1312 =     117 / 892
## 6         10    4    5    1    9  12 911    3   3   0 0.0491 =      47 / 958
## 7          3    5   19    8    8   0   2  964   1  18 0.0623 =    64 / 1,028
## 8         11    6    7   23   12  13  10   11 874   7 0.1027 =     100 / 974
## 9          7    6    3   12   33   8   2   22   4 912 0.0961 =    97 / 1,009
## Totals  1017 1138 1017 1045 1017 835 974 1042 948 967 0.0656 = 656 / 10,000

missclass = h2o.performance(mDNNdrop, valid = TRUE)@metrics$mean_per_class_error
cat("the mean per class error is: ", missclass, "\n")

## the mean per class error is: 0.06676157

## if you like it, keep it
# h2o.saveModel(mDNNdrop, path = "./files")

print(date())

## [1] "Tue Apr 16 16:25:01 2019"

Problem

- I always used dropout. Is that a good idea? Change the settings to not use dropout (a starting sketch is given below). Is it worse or better? Do a couple of runs.
- Look at the help for h2o.deeplearning. Pick another option and try changing it to see if you can improve the prediction.
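As a starting point for the first part, only the activation needs to change: a minimal sketch reusing x, y, trainS, and test from above. "Tanh" is h2o's no-dropout counterpart of "TanhWithDropout"; the model_id "mDNN" is just an illustrative name, pick your own.

## same architecture and penalties as before, but no dropout
mDNN = h2o.deeplearning(x, y, training_frame = trainS,
                        hidden = c(50, 50),
                        activation = "Tanh",  # no hidden_dropout_ratios needed
                        l1 = 1e-4,
                        epochs = 2000,
                        model_id = "mDNN",
                        validation_frame = test)
print(h2o.confusionMatrix(mDNN, valid = TRUE))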