(Repost) How do I convert a scikit-learn Bunch dataset into a Pandas DataFrame?

Original source: [https://vimsky.com/article/4362.html]

from sklearn.datasets import load_iris
import pandas as pd
data = load_iris()
print(type(data))   
# Output: <class 'sklearn.utils.Bunch'>
data1 = pd. # Is there a Pandas method to accomplish this? 

Best approach
You can build the DataFrame manually with the pd.DataFrame constructor, providing a numpy array (data) and a list of column names (columns). To put everything into a single DataFrame, concatenate the features and the target (labels) into one numpy array with np.c_[...] (note the square-bracket operator: np.c_ is indexed, not called):

import numpy as np
import pandas as pd
from sklearn.datasets import load_iris

# save load_iris() sklearn dataset to iris
# if you'd like to check dataset type use: type(load_iris())
# if you'd like to view list of attributes use: dir(load_iris())
iris = load_iris()

# np.c_ stacks arrays column-wise (it is indexed with [], not called);
# here it concatenates the iris['data'] and iris['target'] arrays.
# For the pandas columns argument: concat the iris['feature_names'] list
# with a one-string list; you can name this anything you'd like.
# The original dataset would probably call it ['Species'].
data1 = pd.DataFrame(data= np.c_[iris['data'], iris['target']],
                     columns= iris['feature_names'] + ['target'])
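As a quick sanity check (not part of the original post), the resulting frame should have 150 rows and 5 columns; note also that scikit-learn 0.23+ offers load_iris(as_frame=True), which returns the data as a DataFrame directly.

```python
import numpy as np
import pandas as pd
from sklearn.datasets import load_iris

iris = load_iris()
data1 = pd.DataFrame(data=np.c_[iris['data'], iris['target']],
                     columns=iris['feature_names'] + ['target'])

# 150 samples, 4 feature columns + 1 target column
print(data1.shape)  # (150, 5)
print(data1.columns.tolist())
```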

# numpy: scientific-computing toolbox
import numpy as np
# use make_classification to build 1000 samples, each with 20 features
from sklearn.datasets import make_classification
X, y = make_classification(1000, n_features=20, n_informative=2, 
                           n_redundant=2, n_classes=2, random_state=0)
Saving the result as a DataFrame originally raised `TypeError: unsupported operand type(s) for +: 'range' and 'range'`: two range objects were being added. This is a Python 2 vs. Python 3 difference.

In Python 2, range() returns a list, so two ranges can be added directly, e.g. range(5) + range(10).

In Python 3, range() is a class, and two range objects cannot be added directly; wrap each in list() first, e.g. list(range(5)) + list(range(10)). To save memory, Python 3's range stores only its start, stop, and step and computes the remaining values on demand (it is essentially a lazy sequence); calling list() materializes all the values so the results can be concatenated.
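A minimal illustration of the Python 3 behavior described above:

```python
# In Python 3, adding two range objects raises a TypeError:
try:
    range(5) + range(10)
except TypeError as e:
    print(e)  # unsupported operand type(s) for +: 'range' and 'range'

# Materialize each range with list() first, then concatenate:
combined = list(range(5)) + list(range(10))
print(len(combined))  # 15
```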

from pandas import DataFrame
df = DataFrame(np.hstack((X, y[:, None])),columns = list(range(20)) + ["class"])
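To confirm the concatenation worked (a quick check, not from the original post): `y[:, None]` reshapes the 1-D label vector to a (1000, 1) column so it can be stacked next to the features.

```python
import numpy as np
from pandas import DataFrame
from sklearn.datasets import make_classification

X, y = make_classification(1000, n_features=20, n_informative=2,
                           n_redundant=2, n_classes=2, random_state=0)
# y[:, None] turns the (1000,) label vector into a (1000, 1) column
df = DataFrame(np.hstack((X, y[:, None])),
               columns=list(range(20)) + ["class"])
print(df.shape)  # (1000, 21)
```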


type(load_iris())
sklearn.utils.Bunch

dir(load_iris())
['DESCR', 'data', 'feature_names', 'target', 'target_names']

 'feature_names': ['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)'],
 'target': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
        1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
        1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
        2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
        2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]),
 'target_names': array(['setosa', 'versicolor', 'virginica'])

Second approach
The "best approach" above is not general enough for every scikit-learn dataset; for example, it does not work for the Boston housing dataset. Here is another, more general solution that also avoids numpy entirely.

from sklearn import datasets
import pandas as pd

boston_data = datasets.load_boston()
df_boston = pd.DataFrame(boston_data.data, columns=boston_data.feature_names)
df_boston['target'] = pd.Series(boston_data.target)
df_boston.head()

As a general function:

def sklearn_to_df(sklearn_dataset):
    df = pd.DataFrame(sklearn_dataset.data, columns=sklearn_dataset.feature_names)
    df['target'] = pd.Series(sklearn_dataset.target)
    return df

df_boston = sklearn_to_df(datasets.load_boston())
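Note that load_boston was deprecated in scikit-learn 1.0 and removed in 1.2, so on modern versions the same helper can be exercised with another dataset instead, e.g. load_diabetes (a sketch, substituting the dataset only):

```python
import pandas as pd
from sklearn import datasets

def sklearn_to_df(sklearn_dataset):
    df = pd.DataFrame(sklearn_dataset.data, columns=sklearn_dataset.feature_names)
    df['target'] = pd.Series(sklearn_dataset.target)
    return df

# load_diabetes: 442 samples, 10 features, plus the appended target column
df_diabetes = sklearn_to_df(datasets.load_diabetes())
print(df_diabetes.shape)  # (442, 11)
```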

Converting a DataFrame to a numpy array: df = df.values

Converting a numpy array back to a DataFrame:
import pandas as pd
df = pd.DataFrame(df)

df = df.values.flatten(): append flatten() when you need the values as a single flat (1-D) array, which is convenient for statistical analysis.
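Putting the two conversions together (a small round-trip sketch with made-up data):

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
arr = df.values               # DataFrame -> numpy array, shape (2, 2)
df2 = pd.DataFrame(arr)       # array -> DataFrame (default integer column names)
flat = df.values.flatten()    # flatten row-wise into a 1-D array for quick stats
print(arr.shape, flat.shape)  # (2, 2) (4,)
```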
