The argparse module's support for command-line interfaces is built around instances of argparse.ArgumentParser.
For example:
import argparse
# Create a parser object
parser = argparse.ArgumentParser(description='A simple example of argparse usage')
# Add arguments
parser.add_argument('--input_file', type=str, required=True, help='Path to the input file')
parser.add_argument('--output_file', type=str, required=True, help='Path to the output file')
parser.add_argument('--num_epochs', type=int, default=10, help='Number of epochs to train the model')
parser.add_argument('--learning_rate', type=float, default=0.01, help='Learning rate for the optimizer')
# Parse the command-line arguments
args = parser.parse_args()
# Use the parsed values
print(args.input_file)
print(args.output_file)
print(args.num_epochs)
print(args.learning_rate)
Explanation: we first create an argparse.ArgumentParser object and add four arguments: input_file, output_file, num_epochs, and learning_rate. We then call parse_args() to parse the command-line arguments and store the result in the args object. Finally, we access the parsed values through attributes of args.
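To try out such a parser without launching a real command line, you can pass an explicit argument list to parse_args(). A minimal sketch reusing two of the flags from the example above:

```python
import argparse

parser = argparse.ArgumentParser(description='A simple example of argparse usage')
parser.add_argument('--input_file', type=str, required=True, help='Path to the input file')
parser.add_argument('--num_epochs', type=int, default=10, help='Number of epochs')

# Simulates: python script.py --input_file data.csv --num_epochs 20
args = parser.parse_args(['--input_file', 'data.csv', '--num_epochs', '20'])
print(args.input_file)   # data.csv
print(args.num_epochs)   # 20 (converted to int because of type=int)
```

Note that '20' arrives as a string on the command line; type=int is what converts it to an integer.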
Python tuples are similar to lists, except that a tuple's elements cannot be modified.
Tuples use parentheses; lists use square brackets.
Creating a tuple is simple: put the elements inside parentheses, separated by commas.
tup1 = ('physics', 'chemistry', 1997, 2000)
tup2 = (1, 2, 3, 4, 5 )
tup3 = "a", "b", "c", "d"
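One detail worth noting (not shown above): a tuple with a single element needs a trailing comma, otherwise the parentheses are treated as ordinary grouping:

```python
not_a_tuple = (50)   # just the integer 50 in parentheses
a_tuple = (50,)      # a one-element tuple

print(type(not_a_tuple))  # <class 'int'>
print(type(a_tuple))      # <class 'tuple'>
```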
Accessing values by index
#!/usr/bin/python
tup1 = ('physics', 'chemistry', 1997, 2000)
tup2 = (1, 2, 3, 4, 5, 6, 7 )
print("tup1[0]:", tup1[0])    # tup1[0]: physics
print("tup2[1:5]:", tup2[1:5])  # tup2[1:5]: (2, 3, 4, 5)
The elements of a tuple cannot be modified,
but two tuples can be concatenated into a new one:
#!/usr/bin/python
# -*- coding: UTF-8 -*-
tup1 = (12, 34.56)
tup2 = ('abc', 'xyz')
# The following attempt to modify a tuple element is illegal:
# tup1[0] = 100
# Concatenation creates a new tuple
tup3 = tup1 + tup2
print(tup3)  # (12, 34.56, 'abc', 'xyz')
Individual elements of a tuple cannot be deleted,
but the whole tuple can be deleted with the del statement:
tup = ('physics', 'chemistry', 1997, 2000)
del tup
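After del, the name no longer exists, so referencing it raises a NameError. A small sketch:

```python
tup = ('physics', 'chemistry', 1997, 2000)
del tup

# Referencing the deleted name raises NameError
deleted = False
try:
    tup
except NameError:
    deleted = True
print(deleted)  # True
```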
L = ('spam', 'Spam', 'SPAM!')
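Like lists, tuples support negative indexing (counting from the end) and slicing; using the tuple L above:

```python
L = ('spam', 'Spam', 'SPAM!')

print(L[2])    # SPAM!  -- third item
print(L[-2])   # Spam   -- second item from the end
print(L[1:])   # ('Spam', 'SPAM!') -- slice from index 1 to the end
```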
A dictionary is a mutable container that can store objects of any type.
Each key-value pair is written as key:value, with a colon between key and value; pairs are separated by commas, and the whole dictionary is enclosed in curly braces {}:
d = {key1 : value1, key2 : value2 }
dic = {'name': 'Jack', 'age': 18, 'height': 180}
print(dic)
# Output: {'name': 'Jack', 'age': 18, 'height': 180}
dic = dict(name='Jack', age=18, height=180)
print(dic)
# Output: {'name': 'Jack', 'age': 18, 'height': 180}
lis = [('name', 'Jack'), ('age', 18), ('height', 180)]
dic = dict(lis)
print(dic)
# Output: {'name': 'Jack', 'age': 18, 'height': 180}
dic = dict(zip('abc', [1, 2, 3]))
print(dic)
# Output: {'a': 1, 'b': 2, 'c': 3}
dic = {i: i ** 2 for i in range(1, 5)}
print(dic)
# Output: {1: 1, 2: 4, 3: 9, 4: 16}
Accessing a value by key
#!/usr/bin/python
tinydict = {'Name': 'Zara', 'Age': 7, 'Class': 'First'}
print("tinydict['Name']:", tinydict['Name'])  # tinydict['Name']: Zara
print("tinydict['Age']:", tinydict['Age'])    # tinydict['Age']: 7
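Accessing a missing key with [] raises a KeyError; dict.get() returns None (or a supplied default) instead, which is often the safer choice:

```python
tinydict = {'Name': 'Zara', 'Age': 7}

print(tinydict.get('Age'))        # 7
print(tinydict.get('Salary'))     # None -- key absent, but no exception
print(tinydict.get('Salary', 0))  # 0 -- an explicit default value
```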
Deletion comes in three forms: deleting a single key's entry, deleting all key-value pairs, and deleting the dictionary itself.
tinydict = {'Name': 'Zara', 'Age': 7, 'Class': 'First'}
del tinydict['Name']  # remove the entry whose key is 'Name'
tinydict.clear()      # remove all entries
del tinydict          # delete the dictionary itself
A dictionary's values can be of any type, but its keys must be hashable: immutable types such as strings, numbers, and tuples work, while mutable types such as lists do not.
dic = dict.fromkeys(range(4), 'x')
print(dic)
# Output: {0: 'x', 1: 'x', 2: 'x', 3: 'x'}
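Because keys must be hashable, trying to use a mutable object such as a list as a key raises a TypeError:

```python
ok = {('a', 'b'): 1, 42: 2, 'name': 3}  # tuples, numbers, strings all work as keys

got_type_error = False
try:
    bad = {['a', 'b']: 1}  # a list cannot be a dictionary key
except TypeError:
    got_type_error = True
print(got_type_error)  # True
```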
# Keras MLP for regression (imports added; X_train is assumed to be defined)
from keras.models import Sequential
from keras.layers import Dense

model_mlp = Sequential()
model_mlp.add(Dense(100, activation='relu', input_dim=X_train.shape[1]))
model_mlp.add(Dense(1))
model_mlp.compile(loss='mse', optimizer='adam')
model_mlp.summary()
import torch.nn as nn

class MLP(nn.Module):
    '''
    Multilayer Perceptron.
    '''
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Flatten(),
            # input shape = 28*28
            # neurons in first dense layer = 64
            nn.Linear(28*28, 64),
            # relu activation
            nn.ReLU(),
            # 64 = neurons in first dense layer
            # 32 = neurons in second dense layer
            nn.Linear(64, 32),
            nn.ReLU(),
            # 32 = neurons in second dense layer
            # 10 = neurons in output layer (number of classes)
            nn.Linear(32, 10)
        )

    def forward(self, x):
        '''Forward pass'''
        return self.layers(x)
Creating a neural-network model (an MLP) in PyTorch requires subclassing nn.Module, and the class has two parts: __init__ (the constructor) and forward.
In __init__, nn.Sequential serves as a "container" model: layers are appended one after another and stored in the attribute self.layers.
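A minimal usage sketch (assuming torch is installed; the class definition is repeated so the snippet is self-contained): instantiate the model and pass a batch of 28×28 inputs through it. Calling the model invokes forward() under the hood.

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    '''Multilayer Perceptron.'''
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28*28, 64),
            nn.ReLU(),
            nn.Linear(64, 32),
            nn.ReLU(),
            nn.Linear(32, 10),
        )

    def forward(self, x):
        return self.layers(x)

model = MLP()
x = torch.randn(4, 28, 28)  # a batch of 4 fake 28x28 "images"
out = model(x)              # calls forward() under the hood
print(out.shape)            # torch.Size([4, 10]) -- one score per class
```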
print(f"Model structure: {model}\n\n")
for name, param in model.named_parameters():
    print(f"Layer: {name} | Size: {param.size()} | Values : {param[:2]} \n")
Purpose: a handy helper for preprocessing data for time-series forecasting.
Its full constructor signature is:
class pytorch_forecasting.data.timeseries.TimeSeriesDataSet(
data: DataFrame,
time_idx: str,
target: str | List[str],
group_ids: List[str],
weight: str | None = None,
max_encoder_length: int = 30,
min_encoder_length: int | None = None,
min_prediction_idx: int | None = None,
min_prediction_length: int | None = None,
max_prediction_length: int = 1,
static_categoricals: List[str] = [],
static_reals: List[str] = [],
time_varying_known_categoricals: List[str] = [],
time_varying_known_reals: List[str] = [],
time_varying_unknown_categoricals: List[str] = [],
time_varying_unknown_reals: List[str] = [],
variable_groups: Dict[str, List[int]] = {},
constant_fill_strategy: Dict[str, str | float | int | bool] = {},
allow_missing_timesteps: bool = False,
lags: Dict[str, List[int]] = {},
add_relative_time_idx: bool = False,
add_target_scales: bool = False,
add_encoder_length: bool | str = 'auto',
    target_normalizer: TorchNormalizer | NaNLabelEncoder | EncoderNormalizer | str | List[TorchNormalizer | NaNLabelEncoder | EncoderNormalizer] | Tuple[TorchNormalizer | NaNLabelEncoder | EncoderNormalizer] = 'auto',
    categorical_encoders: Dict[str, NaNLabelEncoder] = {},
    scalers: Dict[str, StandardScaler | RobustScaler | TorchNormalizer | EncoderNormalizer] = {},
    randomize_length: None | Tuple[float, float] | bool = False,
    predict_mode: bool = False)
Parameters: