There is a huge amount of data available on the web. Some of it comes as formatted, downloadable data sets that are easy to access. But most online data exists as web content, such as blogs, news stories, and cooking recipes. With formatted files, accessing the data is fairly straightforward: just download the file, unzip it if necessary, and import it into R.
For this "wild" data, however, getting it into an analyzable format is harder. Accessing online data of this sort is sometimes called "web scraping". You will need to download the target page from the internet and extract the information you need. Two R tools, readLines() from the base package and getURL() from the RCurl package, make this task possible.
web_page <- readLines("http://www.interestingwebsite.com")
As a (somewhat) practical example of web scraping, imagine a scenario in which we want to know the ten most frequent posters to the R-help mailing list in January 2009. Because the list archive sits on a secure site (i.e., it has https:// rather than http:// in the URL), we cannot easily access the live version with readLines(). So for this example, I have posted a local copy of the list archive on this site.
readLines() by itself can only acquire the data. You will need to use grep(), gsub(), or an equivalent to parse the data and keep only what you need. One of the key challenges in web scraping is finding a way to isolate the data you want from the rest of the elements on the page.
# Read the archived page as a character vector, one element per line
web_page <- readLines("http://www.programmingr.com/jan09rlist.html")
# Keep only the lines containing poster names
# (pattern assumes the archive wraps poster names in <I> tags)
author_lines <- web_page[grep("<I>", web_page)]
# Strip the surrounding HTML tags, leaving just the names
authors <- gsub("<I>|</I>", "", author_lines)
# Tabulate and show the ten most frequent posters
author_counts <- sort(table(authors), decreasing = TRUE)
author_counts[1:10]
To understand why this example is so simple, it helps to look at the underlying HTML. Honestly, this is about as user-friendly as HTML data gets "in the wild". The data element we care about (the poster's name) is the main element of its own line, so we can grab those lines quickly and easily with grep(). Once we have the lines we are interested in, we can strip out the unwanted HTML code with gsub().
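To make the grep()-then-gsub() step concrete, here is a toy sketch. The archive line below is hypothetical, and the <I> tag wrapping the poster's name is an assumption about the markup rather than a copy of the real page:
# Hypothetical archive line: the poster's name sits inside italics tags
toy_line <- '<LI><A HREF="000001.html">[R] subset question</A> <I>Jane Doe</I>'
# Pull out the tagged name, then strip the tags
tagged <- regmatches(toy_line, regexpr("<I>[^<]*</I>", toy_line))
gsub("<I>|</I>", "", tagged)
# [1] "Jane Doe"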
As an aside, for those of you who are also web developers, this can be a huge time-saver for repetitive tasks. If you are not handling anything highly sensitive, add a few simple "data dump" pages to your site and use readLines() to pull the data back whenever you need it. This is great for progress reports and status updates. Keep the page design simple: basic, well-formed HTML and minimal fluff.
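A minimal sketch of that idea, assuming a bare-bones status page at a hypothetical URL with one "metric: value" pair per line:
# Hypothetical status-dump page published by your own site
status_page <- readLines("http://www.example.com/status-dump.html")
# Keep only the lines that look like metric entries
metrics <- status_page[grep(":", status_page, fixed = TRUE)]
metrics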
Looking for a test project? Check out our web scraping project ideas!
# Install the RCurl package if necessary
install.packages("RCurl", dependencies = TRUE)
library("RCurl")
# Install the XML package if necessary
install.packages("XML", dependencies = TRUE)
library("XML")
# Get first quarter archives
jan09 <- getURL("https://stat.ethz.ch/pipermail/r-help/2009-January/date.html", ssl.verifypeer = FALSE)
jan09_parsed <- htmlTreeParse(jan09)
For basic web scraping tasks, readLines() will be enough and avoids over-complicating the job. For more difficult procedures, or for tasks that need other HTTP features, getURL() or other functions from the RCurl package may be required.
This is the first post in our series on web scraping; check out the later posts in the series for more.
JSON has become one of the common standards for sharing data on the web, particularly data that may be consumed by front-end JavaScript applications. JSON (JavaScript Object Notation) is a key:value format that gives the reader a high degree of context about what a value means. The key-value structure can be nested, allowing data groupings such as the following:
{"book": "Midsummer Night's Dream",
 "author": "William Shakespeare",
 "price": 5.99,
 "inventory": 12}
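As a quick sketch of what that record looks like once parsed in R (using the jsonlite package introduced below), it becomes a named list whose values can be used directly:
library(jsonlite)
book_json <- '{"book": "Midsummer Night\'s Dream", "author": "William Shakespeare", "price": 5.99, "inventory": 12}'
book <- fromJSON(book_json)
book$author                    # "William Shakespeare"
book$price * book$inventory    # value of the stock on hand: 71.88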
Several libraries have emerged for R users that make it easy to process and digest JSON data. We will provide an example using one of them, jsonlite, which is a fork of another leading library, RJSONIO. We chose this library for its relative ease of use.
We start with the preliminaries, since jsonlite is not part of R's standard library:
install.packages("jsonlite", dependencies = TRUE)
library("jsonlite")
json_file <- "https://jsonplaceholder.typicode.com/posts"
data <- fromJSON(json_file)
We will use a placeholder generator for JSON data: https://jsonplaceholder.typicode.com/posts. This service serves up a fake JSON list purporting to be a set of blog posts or news articles. Moving this information into an R data frame is fairly simple, and it gives us a nice data frame with the requested fields. For those who prefer to browse the data in a text editor or Excel, you can easily dump it to a CSV file with a one-liner (see the sketch after the list below). The package can support more advanced data retrieval, including:
Accessing APIs that require a key;
Pulling multi-page scrapes and combining them into a single data frame;
POST request operations with complex headers and data elements;
A set of examples (provided by the package author) is detailed here.
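As noted above, the CSV dump can be a one-liner. A minimal sketch, assuming the data frame returned by fromJSON() earlier and a hypothetical file name:
# Write the placeholder posts out for browsing in a text editor or Excel
write.csv(data, "placeholder_posts.csv", row.names = FALSE)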
This section lists some examples of public HTTP APIs that publish data in JSON format. They are a good way to get a feel for the complex structures that come up in real-world JSON data. All services are free, but some require registration/authentication. Each example returns a lot of data, so not all of the output is printed in this document.
library(jsonlite)
GitHub is an online code repository with APIs for getting live data on almost all of its activity. Below are some examples for a well-known R package and its author:
hadley_orgs <- fromJSON("https://api.github.com/users/hadley/orgs")
hadley_repos <- fromJSON("https://api.github.com/users/hadley/repos")
gg_commits <- fromJSON("https://api.github.com/repos/hadley/ggplot2/commits")
gg_issues <- fromJSON("https://api.github.com/repos/hadley/ggplot2/issues")
#latest issues
paste(format(gg_issues$user$login), ":", gg_issues$title)
[1] "jsta : fix broken stowers link"
[2] "krlmlr : Log transform on geom_bar() silently omits layer"
[3] "yutannihilation : Fix a broken link in README"
[4] "raubreywhite : Fix theme_gray's legend/panels for large base_size"
[5] "batuff : Add minor ticks to axes"
[6] "mcol : overlapping boxes with geom_boxplot(varwidth=TRUE)"
[7] "karawoo : Fix density calculations for groups with one or two elements" [8] "Thieffen : fix typo"
[9] "Thieffen : fix typo"
[10] "thjwong : `axis.line` works, but not `axis.line.x` and `axis.line.y`"
[11] "schloerke : scale_discrete not listening to 'breaks' arg"
[12] "hadley : Consider use of vwline"
[13] "JTapper : geom_polygon accessing data$y"
[14] "Ax3man : Added linejoin parameter to geom_segment."
[15] "LSanselme : geom_density with groups of 1 or 2 elements"
[16] "philstraforelli : (feature request) Changing facet_wrap strip colour based on variable in data frame"
[17] "eliocamp : geom_tile() + coord_map() is extremely slow."
[18] "eliocamp : facet_wrap() doesn't play well with expressions in facets. "
[19] "dantonnoriega : Request: Quick visual example for each geom at http://ggplot2.tidyverse.org/reference/"
[20] "randomgambit : it would be nice to have date_breaks('0.2 sec')"
[21] "adrfantini : Labels can overlap in coord_sf()"
[22] "adrfantini : borders() is incompatible with coord_sf() with projected coordinates"
[23] "adrfantini : coord_proj() is superior to coord_map() and could be included in the default ggplot"
[24] "adrfantini : Coordinates labels and gridlines are wrong in coord_map()"
[25] "jonocarroll : Minor typo: monotonous -> monotonic"
[26] "FabianRoger : label.size in geom_label is ignored when printing to pdf"
[27] "andrewdolman : Add note recommending annotate"
[28] "Henrik-P : scale_identity doesn't play well with guide = \"legend\""
[29] "cpsievert : stat_sf(geom = \"text\")"
[30] "hadley : Automatically fill in x for univariate boxplot"
CitiBike NYC: a single public API that shows the location, status, and current availability of all stations in the New York City bike-sharing initiative.
citibike <- fromJSON("http://citibikenyc.com/stations/json")
stations <- citibike$stationBeanList
colnames(stations)
[1] "id" "stationName"
[3] "availableDocks" "totalDocks"
[5] "latitude" "longitude"
[7] "statusValue" "statusKey"
[9] "availableBikes" "stAddress1"
[11] "stAddress2" "city"
[13] "postalCode" "location"
[15] "altitude" "testStation"
[17] "lastCommunicationTime" "landMark"
nrow(stations)
[1] 666
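A quick follow-up sketch, assuming the stations data frame above, to summarize current system-wide availability:
# Total bikes and docks currently available across all stations
sum(stations$availableBikes)
sum(stations$availableDocks)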
Ergast: the Ergast Developer API provides historical Formula 1 motor racing data.
res <- fromJSON('http://ergast.com/api/f1/2004/1/results.json')
drivers <- res$MRData$RaceTable$Races$Results[[1]]$Driver
colnames(drivers)
[1] "driverId" "code" "url" "givenName"
[5] "familyName" "dateOfBirth" "nationality" "permanentNumber"
drivers[1:10, c("givenName", "familyName", "code", "nationality")]
givenName familyName code nationality
1 Michael Schumacher MSC German
2 Rubens Barrichello BAR Brazilian
3 Fernando Alonso ALO Spanish
4 Ralf Schumacher SCH German
5 Juan Pablo Montoya MON Colombian
6 Jenson Button BUT British
7 Jarno Trulli TRU Italian
8 David Coulthard COU British
9 Takuma Sato SAT Japanese
10 Giancarlo Fisichella FIS Italian
ProPublica Nonprofit Explorer: the example below pulls several pages of tax-exempt organizations, ordered by revenue, and combines them into a single data frame.
#store all pages in a list first
baseurl <- "https://projects.propublica.org/nonprofits/api/v1/search.json?order=revenue&sort_order=desc"
pages <- list()
for(i in 0:10){
mydata <- fromJSON(paste0(baseurl, "&page=", i), flatten=TRUE)
message("Retrieving page ", i)
pages[[i+1]] <- mydata$filings
}
#combine all into one
filings <- rbind_pages(pages)
#check output
nrow(filings)
[1] 275
filings[1:10, c("organization.sub_name", "organization.city", "totrevenue")]
organization.sub_name organization.city totrevenue
1 KAISER FOUNDATION HEALTH PLAN INC OAKLAND 40148558254
2 KAISER FOUNDATION HEALTH PLAN INC OAKLAND 37786011714
3 KAISER FOUNDATION HOSPITALS OAKLAND 20796549014
4 KAISER FOUNDATION HOSPITALS OAKLAND 17980030355
5 PARTNERS HEALTHCARE SYSTEM INC SOMERVILLE 10619215354
6 UPMC PITTSBURGH 10098163008
7 UAW RETIREE MEDICAL BENEFITS TR DETROIT 9890722789
8 THRIVENT FINANCIAL FOR LUTHERANS MINNEAPOLIS 9475129863
9 THRIVENT FINANCIAL FOR LUTHERANS MINNEAPOLIS 9021585970
10 DIGNITY HEALTH SAN FRANCISCO 8718896265
New York Times: the New York Times offers several APIs through its developer network; the examples below use the article search, best sellers, and movie reviews endpoints, each of which requires an API key.
#search for articles
article_key <- "&api-key=b75da00e12d54774a2d362adddcc9bef"
url <- "http://api.nytimes.com/svc/search/v2/articlesearch.json?q=obamacare+socialism"
req <- fromJSON(paste0(url, article_key))
articles <- req$response$docs
colnames(articles)
[1] "web_url" "snippet" "lead_paragraph"
[4] "abstract" "print_page" "blog"
[7] "source" "multimedia" "headline"
[10] "keywords" "pub_date" "document_type"
[13] "news_desk" "section_name" "subsection_name"
[16] "byline" "type_of_material" "_id"
[19] "word_count" "slideshow_credits"
#search for best sellers
books_key <- "&api-key=76363c9e70bc401bac1e6ad88b13bd1d"
url <- "http://api.nytimes.com/svc/books/v2/lists/overview.json?published_date=2013-01-01"
req <- fromJSON(paste0(url, books_key))
bestsellers <- req$results$list
category1 <- bestsellers[[1, "books"]]
subset(category1, select = c("author", "title", "publisher"))
author title publisher
1 Gillian Flynn GONE GIRL Crown Publishing
2 John Grisham THE RACKETEER Knopf Doubleday Publishing
3 E L James FIFTY SHADES OF GREY Knopf Doubleday Publishing
4 Nicholas Sparks SAFE HAVEN Grand Central Publishing
5 David Baldacci THE FORGOTTEN Grand Central Publishing
#movie reviews
movie_key <- "&api-key=b75da00e12d54774a2d362adddcc9bef"
url <- "http://api.nytimes.com/svc/movies/v2/reviews/dvd-picks.json?order=by-date"
req <- fromJSON(paste0(url, movie_key))
reviews <- req$results
colnames(reviews)
[1] "display_title" "mpaa_rating" "critics_pick"
[4] "byline" "headline" "summary_short"
[7] "publication_date" "opening_date" "date_updated"
[10] "link" "multimedia"
reviews[1:5, c("display_title", "byline", "mpaa_rating")]
display_title byline mpaa_rating
1 Hermia & Helena GLENN KENNY
2 The Women's Balcony NICOLE HERRINGTON
3 Long Strange Trip DANIEL M. GOLD R
4 Joshua: Teenager vs. Superpower KEN JAWOROWSKI
5 Berlin Syndrome GLENN KENNY R
Sunlight Foundation
The Sunlight Foundation is a non-profit that helps make government transparent and accountable through data, tools, policy, and journalism. Register for a free key here. An example key is provided.
key <- "&apikey=39c83d5a4acc42be993ee637e2e4ba3d"
key <- "&apikey=39c83d5a4acc42be993ee637e2e4ba3d"
#Find bills about drones
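The query itself could look something like the sketch below. This is a minimal sketch, assuming an Open States-style bill-search endpoint (the exact URL and returned fields are assumptions), with the key defined above appended to the query string:
# Hypothetical bill search; the endpoint and field names are assumptions
drone_bills <- fromJSON(paste0("http://openstates.org/api/v1/bills/?q=drone", key))
drone_bills[1:5, c("title", "state")]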
Twitter
The Twitter API requires OAuth2 authentication. Some example code:
#Create your own application key at https://dev.twitter.com/apps
consumer_key = "EZRy5JzOH2QQmVAe9B4j2w";
consumer_secret = "OIDC4MdfZJ82nbwpZfoUO4WOLTYjoRhpHRAWj6JMec";
#Use basic auth
secret <- jsonlite::base64_enc(paste(consumer_key, consumer_secret, sep = ":"))
req <- httr::POST("https://api.twitter.com/oauth2/token",httr::add_headers(
"Authorization" = paste("Basic", gsub("\n", "", secret)),
"Content-Type" = "application/x-www-form-urlencoded;charset=UTF-8"),
body = "grant_type=client_credentials");
#Extract the access token
httr::stop_for_status(req, "authenticate with twitter")
token <- paste("Bearer", httr::content(req)$access_token)
#Actual API call
url <- "https://api.twitter.com/1.1/statuses/user_timeline.json?count=10&screen_name=Rbloggers"
req <- httr::GET(url, httr::add_headers(Authorization = token))
json <- httr::content(req, as = "text")
tweets <- fromJSON(json)
substring(tweets$text, 1, 100)
[1] "simmer 3.6.2 https://t.co/rRxgY2Ypfa #rstats #DataScience" [2] "Getting data for every Census tract in the US with purrr and tidycensus https://t.co/B3NYJS8sLO #rst"
[3] "Gender Roles with Text Mining and N-grams https://t.co/Rwj0IaTiAR #rstats #DataScience" [4] "Data Science Podcasts https://t.co/SaAuO82a7M #rstats #DataScience" [5] "Reflections on ROpenSci Unconference 2017 https://t.co/87kMldvrsd #rstats #DataScience" [6] "Summarizing big data in R https://t.co/GMaZZ9sWiL #rstats #DataScience" [7] "Mining CRAN DESCRIPTION Files https://t.co/gWEIAYaBZF #rstats #DataScience" [8] "New package polypoly (helper functions for orthogonal polynomials) https://t.co/MzzzcIySym #rstats #"
[9] "Hospital Infection Scores – R Shiny App https://t.co/Rf8wKNBPU6 #rstats #DataScience" [10] "New R job: Software Engineer in Test for RStudio https://t.co/X1bWkKlzYv #rstats #DataScience #jobs"