For more detail, download the HTML file here.
What is MySQL? SQL is short for Structured Query Language, and MySQL is the world's most popular open-source database. (Further information can be found on the wiki page and the MySQL page.)
And why use it? The important point to keep in mind is that, as a data scientist, your role will likely involve collecting data from a database, and maybe later putting some data back into it. But usually the basic data collection has already happened before you arrive, so you will typically be handed a database and have to get data out of it.
Now we will focus on how to access a MySQL database from R. First you need to install the R package RMySQL. Instructions can be found in my blog post How to install the RMySQL package on Windows. On a Mac, the usual install.packages("RMySQL") works. Then we will access the database and collect some information about it.
step 1: connect to the server with the dbConnect function.
library(RMySQL)
ucscDb <- dbConnect(MySQL(), user="genome", host="genome-mysql.cse.ucsc.edu")
step 2: run a query with the dbGetQuery function. (Here the result contains all the databases on this server.)
result <- dbGetQuery(ucscDb, "show databases;")
step 3: close the connection with the dbDisconnect function. (It is very important that whenever you are done analyzing or collecting data from a MySQL server, you disconnect from it.)
dbDisconnect(ucscDb)
Alternatively, you can use the dbSendQuery function in step 2. The difference between dbGetQuery and dbSendQuery is that:
1. dbSendQuery only submits and synchronously executes the SQL statement on the database engine. It does not extract any records; for that you need the function dbFetch, and then you must call dbClearResult when you finish fetching the records you need.
2. dbGetQuery comes with a default implementation that calls dbSendQuery and then, if dbHasCompleted is TRUE, uses dbFetch to return the results. on.exit is used to ensure the result set is always freed by dbClearResult. Subclasses should override this method only if they provide some sort of performance optimisation.
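The send-fetch-clear contract can be sketched with a small wrapper. The helper name getQuerySafely is our own invention, not part of DBI, and RSQLite's in-memory database stands in here for a live MySQL server so the example runs offline:

```r
library(DBI)
library(RSQLite)

# In-memory SQLite database stands in for a remote MySQL server
con <- dbConnect(SQLite(), ":memory:")
dbWriteTable(con, "probes", data.frame(misMatches = c(0, 1, 2, 3, 1)))

# Hypothetical helper mimicking what dbGetQuery does internally:
# send the statement, fetch the rows, and always clear the result
getQuerySafely <- function(con, statement) {
  res <- dbSendQuery(con, statement)
  on.exit(dbClearResult(res))   # result set is freed even if dbFetch errors
  dbFetch(res, n = -1)          # n = -1 fetches all remaining rows
}

out <- getQuerySafely(con, "select count(*) as n from probes")
dbDisconnect(con)
```

The on.exit call is the key point: the result set is released no matter how the function returns, which is exactly the guarantee dbGetQuery gives you for free.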
The code above lists all the databases on the connection. If we want to focus on a specific database, we pass its name through the dbname argument.
step 1: connect with the dbConnect function, adding the dbname argument.
hg19 <- dbConnect(MySQL(), user="genome", dbname="hg19", host="genome-mysql.cse.ucsc.edu")
allTables <- dbListTables(hg19)
length(allTables) # number of tables (data frames) in the hg19 database
allTables[1:4]
dbListFields(hg19, "affyU133Plus2")
num <- dbGetQuery(hg19, "select count(*) from affyU133Plus2")
num
oldw <- getOption("warn")
options(warn = -1)
affyData <- dbReadTable(hg19, "affyU133Plus2")
head(affyData)
options(warn = oldw)
Or use the dbSendQuery function.
oldw <- getOption("warn")
options(warn = -1)
query <- dbSendQuery(hg19, "select * from affyU133Plus2 where misMatches between 1 and 3")
options(warn = oldw)
affyMis <- fetch(query)
quantile(affyMis$misMatches)
affyMisSmall <- fetch(query, n=10)
dbClearResult(query)
dbDisconnect(hg19)
What is HDF? HDF stands for Hierarchical Data Format, and HDF5 is a data model, library, and file format for storing and managing data. (More details can be found here.)
Now we begin to play with HDF5.
First install the rhdf5 package, which is distributed through Bioconductor. (This installs packages from Bioconductor, which is aimed at genomics but also has good "big data" packages.)
#source("http://bioconductor.org/biocLite.R")
#biocLite("rhdf5")
library(rhdf5)
created <- h5createFile("./data/example.h5")
created <- h5createGroup("./data/example.h5", "foo")
created <- h5createGroup("./data/example.h5", "baa")
created <- h5createGroup("./data/example.h5", "foo/foobaa")
h5ls("./data/example.h5")
# matrix
A <- matrix(1:10, 5, 2)
# write the matrix to a particular group
h5write(A, "./data/example.h5", "foo/A")
# multidimension array
B <- array(seq(0.1, 2, by=0.2), dim = c(5, 2, 2))
# add attributes
attr(B, "scale") <- "liter"
# write the array to a particular group
h5write(B, "./data/example.h5", "foo/foobaa/B")
h5ls("./data/example.h5")
# data frame
df <- data.frame(1L:5L, seq(0, 1, length.out = 5), c("ab", "cde", "fghi", "a", "s"), stringsAsFactors = FALSE)
# write the data frame to a particular group
h5write(df, "./data/example.h5", "df")
h5ls("./data/example.h5")
# read HDF5 data
readA <- h5read("./data/example.h5", "foo/A")
readB <- h5read("./data/example.h5", "foo/foobaa/B")
readdf <- h5read("./data/example.h5", "df")
readA
h5write(c(12, 13, 15), "./data/example.h5", "foo/A", index = list(1:3, 1))
h5read("./data/example.h5", "foo/A")
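The index argument works for reads as well as writes. A minimal sketch, assuming rhdf5 is installed; it writes to a temporary file so it does not depend on the ./data directory used above:

```r
library(rhdf5)

# Temporary HDF5 file instead of ./data/example.h5
h5file <- tempfile(fileext = ".h5")
h5createFile(h5file)
h5write(matrix(1:10, 5, 2), h5file, "A")

# Read only rows 1-2 of column 1, mirroring the partial write above
subA <- h5read(h5file, "A", index = list(1:2, 1))
h5closeAll()
```

subA contains just the requested slice, so for large datasets you can pull a piece of an array without loading the whole thing into memory.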
Webscraping: programmatically extracting data from the HTML code of websites.
(1) The readLines function
Open a connection with the url function:
con <- url("http://scholar.google.com/citations?user=HI-I6C0AAAAJ&hl=en")
htmlCode <- readLines(con)
close(con)
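readLines works the same way on any connection, not just url(). A quick offline check with textConnection shows the pattern without touching the network:

```r
# textConnection wraps a string as a readable connection,
# so the read/close pattern can be exercised offline
con <- textConnection("<html>\n<title>demo</title>\n</html>")
htmlLines <- readLines(con)   # one element per line of "source"
close(con)
```

Each element of htmlLines is one line of the page source, which is why the real Scholar example yields a long character vector rather than a single string.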
#htmlCode
(2) The XML package
library(XML)
url <- "http://scholar.google.com/citations?user=HI-I6C0AAAAJ&hl=en"
html <- htmlTreeParse(url, useInternalNodes = TRUE)
xpathSApply(html, "//title", xmlValue)
xpathSApply(html, "//td[@id='col-citedby']", xmlValue)
(The last call may no longer return the right result, since Google Scholar's markup changes over time, but it still illustrates how to extract information from a website.)
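Because live pages drift, the XPath mechanics are easier to see on a fixed snippet. A sketch with a made-up HTML string (the id col-citedby mirrors the query above but the markup here is our own):

```r
library(XML)

# Fixed HTML stand-in for a live page; markup is invented for illustration
page <- "<html><head><title>My Citations</title></head>
         <body><table><tr><td id='col-citedby'>42</td></tr></table></body></html>"
doc <- htmlParse(page, asText = TRUE)

# Same two XPath queries as in the Scholar example
pageTitle <- xpathSApply(doc, "//title", xmlValue)
citedBy   <- xpathSApply(doc, "//td[@id='col-citedby']", xmlValue)
```

xpathSApply walks the parsed tree with an XPath expression and applies xmlValue to each match, returning the text content as a character vector.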
(3) The httr package
library(httr)
url <- "http://scholar.google.com/citations?user=HI-I6C0AAAAJ&hl=en"
html2 <- GET(url)
content2 <- content(html2, as = "text")
parseHtml <- htmlParse(content2, asText = TRUE)
xpathSApply(parseHtml, "//title", xmlValue)
pg1 <- GET("http://httpbin.org/basic-auth/user/passwd")
pg1
pg2 <- GET("http://httpbin.org/basic-auth/user/passwd", authenticate("user", "passwd"))
pg2
names(pg2)
What is an API?
API stands for application programming interface, through which software applications can interact with each other. For example, most internet companies, like Twitter or Facebook, have an application programming interface from which you can download data: which users are tweeting and what they are tweeting about, or what people are posting on Facebook.
step 1: create an account, not a user account but a developer account with the API of the particular organization. Then create a new application, which gives you credentials used later to authenticate the application from R and access data. (This is done on the website.)
step 2: start the authorization process for your application.
library(httr)
myapp <- oauth_app("twitter",
key="YourConsumerKey",
secret="YourConsumerSecret")
sig = sign_oauth1.0(myapp,
token="YourAccessToken",
token_secret="YourAccessTokenSecret")
homeTL = GET("https://api.twitter.com/1.1/statuses/home_timeline.json", sig)
# use the content function to extract the information
json1 <- content(homeTL)
json2 <- jsonlite::fromJSON(jsonlite::toJSON(json1))
json2[1,1:4]