Commonly Used Machine Learning Datasets

1. Iris flower dataset (Iris data). This is the classic sample dataset for machine learning and statistical analysis. It contains the sepal and petal lengths and widths of Iris setosa, Iris versicolor, and Iris virginica. There are 150 samples in total, 50 per class. To load it in Python, use Scikit-learn's dataset functions as follows:
from sklearn import datasets
iris = datasets.load_iris()
print(len(iris.data))
150
print(len(iris.target))
150
print(iris.data[0]) # Sepal length, Sepal width, Petal length, Petal width
[ 5.1 3.5 1.4 0.2]
print(set(iris.target)) # I. setosa, I. virginica, I. versicolor
{0, 1, 2}
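The class balance stated above (150 samples, 50 per class) can be checked directly from the loaded arrays; a small sketch using only what the snippet already imports:

```python
from collections import Counter

from sklearn import datasets

iris = datasets.load_iris()
# Count how many samples fall into each of the three classes
class_counts = Counter(int(t) for t in iris.target)
print(sorted(class_counts.items()))  # [(0, 50), (1, 50), (2, 50)]
```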

2. Birth weight data. This sample dataset contains infant birth weights together with demographic and medical measurements of the mother and family history. There are 189 samples with 11 feature variables. Access the data in Python as follows:

import requests
birthdata_url = 'https://www.umass.edu/statdata/statdata/data/lowbwt.dat'
birth_file = requests.get(birthdata_url)
birth_data = birth_file.text.split('\r\n')[5:]
birth_header = [x for x in birth_data[0].split() if len(x)>=1]
birth_data = [[float(x) for x in y.split() if len(x)>=1] for y in birth_data[1:] if len(y)>=1]
print(len(birth_data))
189
print(len(birth_data[0]))
11
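The split-and-convert pattern above can be exercised offline on a small made-up sample; the two rows below are hypothetical stand-ins in the same whitespace-delimited, 11-column layout, not lines from the real file:

```python
# Two hypothetical whitespace-delimited rows with 11 columns each,
# standing in for the downloaded text
sample_text = ("85 0 19 182 2 0 0 0 1 0 2523\n"
               "86 0 33 155 3 0 0 0 0 3 2551\n")
sample_rows = [[float(x) for x in y.split() if len(x) >= 1]
               for y in sample_text.split('\n') if len(y) >= 1]
print(len(sample_rows))     # 2
print(len(sample_rows[0]))  # 11
```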
3. Boston housing data. This sample dataset is maintained in Carnegie Mellon University's machine learning repository. There are 506 house-price samples with 14 feature variables. Access the data in Python as follows:
import requests
housing_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/housing/housing.data'
housing_header = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE',
'DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV']
housing_file = requests.get(housing_url)
housing_data = [[float(x) for x in y.split() if len(x)>=1] for y in housing_file.text.split('\n') if len(y)>=1]
print(len(housing_data))
506
print(len(housing_data[0]))
14

4. MNIST handwritten digit dataset: MNIST is a subset of NIST's handwriting database, available at https://yann.lecun.com/exdb/mnist/. It contains 70,000 images of the digits 0 through 9, of which 60,000 are labeled as the training set and 10,000 as the test set. TensorFlow provides built-in functions to access it, and MNIST is commonly used for image-recognition training. In machine learning it is important to hold out a validation set to guard against overfitting, so TensorFlow sets aside 5,000 of the training images as a validation set. Access the data in Python as follows:

from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
print(len(mnist.train.images))
55000
print(len(mnist.test.images))
10000
print(len(mnist.validation.images))
5000
print(mnist.train.labels[1,:]) # The first label is a 3
[ 0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
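A one-hot label like the one printed above can be mapped back to its digit with argmax; a minimal sketch using NumPy rather than the TensorFlow reader, so it runs standalone:

```python
import numpy as np

# The one-hot row shown above: a 1 in position 3 means the digit is 3
label = np.array([0., 0., 0., 1., 0., 0., 0., 0., 0., 0.])
digit = int(np.argmax(label))
print(digit)  # 3
```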

5. Spam-ham text data. Access the SMS spam/ham text data as follows:

import requests
import io
from zipfile import ZipFile
zip_url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/00228/smsspamcollection.zip'
r = requests.get(zip_url)
z = ZipFile(io.BytesIO(r.content))
file = z.read('SMSSpamCollection')
text_data = file.decode()
text_data = text_data.encode('ascii',errors='ignore')
text_data = text_data.decode().split('\n')
text_data = [x.split('\t') for x in text_data if len(x)>=1]
[text_data_target, text_data_train] = [list(x) for x in zip(*text_data)]
print(len(text_data_train))
5574
print(set(text_data_target))
{'ham', 'spam'}
print(text_data_train[1])
Ok lar... Joking wif u oni...
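The in-memory ZipFile pattern above can be tested without the network by round-tripping a tiny zip built on the fly; the two messages below are hypothetical stand-ins in the same tab-separated layout:

```python
import io
from zipfile import ZipFile

# Build a small zip in memory with the same tab-separated layout
buffer = io.BytesIO()
with ZipFile(buffer, 'w') as zf:
    zf.writestr('SMSSpamCollection',
                'ham\tOk lar... Joking wif u oni...\n'
                'spam\tYou have won a prize\n')

# Read it back the same way the snippet above reads the real archive
z = ZipFile(io.BytesIO(buffer.getvalue()))
text = z.read('SMSSpamCollection').decode()
pairs = [x.split('\t') for x in text.split('\n') if len(x) >= 1]
targets, messages = [list(col) for col in zip(*pairs)]
print(set(targets))  # the two label values, 'ham' and 'spam'
```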

6. Movie review data. This sample dataset consists of movie reviews labeled as positive or negative, and can be downloaded from http://www.cs.cornell.edu/people/pabo/movie-review-data/. Process the data in Python as follows:

import requests
import io
import tarfile
movie_data_url = 'http://www.cs.cornell.edu/people/pabo/movie-review-data/rt-polaritydata.tar.gz'
r = requests.get(movie_data_url)
# Stream data into temp object
stream_data = io.BytesIO(r.content)
tmp = io.BytesIO()
while True:
    s = stream_data.read(16384)
    if not s:
        break
    tmp.write(s)
stream_data.close()
tmp.seek(0)
# Extract tar file
tar_file = tarfile.open(fileobj=tmp, mode="r:gz")
pos = tar_file.extractfile('rt-polaritydata/rt-polarity.pos')
neg = tar_file.extractfile('rt-polaritydata/rt-polarity.neg')
# Save pos/neg reviews (Also deal with encoding)
pos_data = []
for line in pos:
    pos_data.append(line.decode('ISO-8859-1').encode('ascii', errors='ignore').decode())
neg_data = []
for line in neg:
    neg_data.append(line.decode('ISO-8859-1').encode('ascii', errors='ignore').decode())
tar_file.close()
print(len(pos_data))
5331
print(len(neg_data))
5331
# Print out first negative review
print(neg_data[0])
simplistic , silly and tedious .
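The tarfile handling above can likewise be exercised offline by building a tiny gzipped tar in memory; the single review line used here is copied from the output above:

```python
import io
import tarfile

# Create an in-memory .tar.gz holding one member at the expected path
review = b'simplistic , silly and tedious .\n'
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode='w:gz') as tar:
    info = tarfile.TarInfo('rt-polaritydata/rt-polarity.neg')
    info.size = len(review)
    tar.addfile(info, io.BytesIO(review))
buf.seek(0)

# Extract it with the same extractfile/decode pattern
with tarfile.open(fileobj=buf, mode='r:gz') as tar:
    neg = tar.extractfile('rt-polaritydata/rt-polarity.neg')
    neg_lines = [line.decode('ISO-8859-1')
                     .encode('ascii', errors='ignore').decode()
                 for line in neg]
print(neg_lines[0].strip())  # simplistic , silly and tedious .
```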

7. CIFAR-10 image data. This image dataset, released by the CIFAR institute, is a subset of the 80 Million Tiny Images collection (labeled, 32×32 pixels). It contains 60,000 images in 10 classes: 50,000 training images and 10,000 test images. Because this dataset is large and is used in several ways in this book, the details of working with it are deferred until it is needed. It is available at http://www.cs.toronto.edu/~kriz/cifar.html.
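Although loading CIFAR-10 itself is deferred, the page above documents the on-disk format: each batch stores images as flat 3072-byte rows (1024 red, then green, then blue values, row-major). A sketch of the reshape into a 32×32×3 image, using a dummy row in place of real data:

```python
import numpy as np

# Stand-in for one flat CIFAR-10 image row (3 channels x 32 x 32 = 3072 values)
flat_row = np.arange(3072, dtype=np.uint8)
# Channels are stored first, so reshape to (3, 32, 32), then move channels last
image = flat_row.reshape(3, 32, 32).transpose(1, 2, 0)
print(image.shape)  # (32, 32, 3)
```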
8. Shakespeare text data. This text dataset comes from Project Gutenberg, which provides free e-books and has compiled all of Shakespeare's works. Access the text file in Python as follows:

import requests
shakespeare_url = 'http://www.gutenberg.org/cache/epub/100/pg100.txt'
# Get Shakespeare text
response = requests.get(shakespeare_url)
shakespeare_file = response.content
# Decode binary into string
shakespeare_text = shakespeare_file.decode('utf-8')
# Drop first few descriptive paragraphs.
shakespeare_text = shakespeare_text[7675:]
print(len(shakespeare_text)) # Number of characters
5582212

9. English-German sentence translation data. This dataset is published by Tatoeba (an online translation database) and is compiled and made available for download by ManyThings.org (http://www.manythings.org). Here we use the English-German sentence-pair text file (you can change the URL to get a text file for any language pair you need), as follows:

import requests
import io
from zipfile import ZipFile
sentence_url = 'http://www.manythings.org/anki/deu-eng.zip'
r = requests.get(sentence_url)
z = ZipFile(io.BytesIO(r.content))
file = z.read('deu.txt')
# Format Data
eng_ger_data = file.decode()
eng_ger_data = eng_ger_data.encode('ascii', errors='ignore')
eng_ger_data = eng_ger_data.decode().split('\n')
eng_ger_data = [x.split('\t') for x in eng_ger_data if len(x)>=1]
[english_sentence, german_sentence] = [list(x) for x in zip(*eng_ger_data)]
print(len(english_sentence))
137673
print(len(german_sentence))
137673
print(eng_ger_data[10])
['I won!', 'Ich habe gewonnen!']
