Program Structure && Program Design
Program Structure && Program Design (2)
Program Structure && Program Design (3): Recursion
Program Structure && Program Design (3)
Program Structure && Program Design (4)
The purpose of a mask (an array or matrix made up of 0s and 1s) is to suppress (0) or select (1) parts of the original data.
One way to generate such a mask is to sample from a binomial distribution:
def get_corrupted_input(self, input, corruption_level):
    # each entry of the mask is 1 with probability 1 - corruption_level, else 0
    mask = self.theano_rng.binomial(
        n=1,
        p=1 - corruption_level,
        size=input.shape,
        dtype=theano.config.floatX
    )
    return mask * input
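Outside of Theano, the same masking idea can be written directly with NumPy; a minimal sketch (the data and corruption_level below are made up for illustration):

import numpy as np

corruption_level = 0.3
x = np.random.rand(4, 5)                                          # some original data
# each entry is kept (1) with probability 1 - corruption_level, dropped (0) otherwise
mask = np.random.binomial(n=1, p=1 - corruption_level, size=x.shape)
corrupted = mask * x                                              # selected entries pass through, the rest become 0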
import timeit

start_time = timeit.default_timer()
for j in xrange(epochs):
    for i in xrange(n_train_batches):
        ...
end_time = timeit.default_timer()
print('Elapsed time %.2fm' % ((end_time - start_time) / 60.))
Inside the timeit module, default_timer is chosen per platform:

if sys.platform == "win32":
    # On Windows, the best timer is time.clock()
    default_timer = time.clock
else:
    # On most other platforms the best timer is time.time()
    default_timer = time.time
When we call timeit.default_timer() from the outside, we do not need to care about these operating-system details. The purpose of encapsulation is precisely to hide the lower-level details.
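As a small illustration of that encapsulation idea (a sketch, not part of the original code), the timing boilerplate itself can be wrapped once in a context manager so that callers never see the platform detail:

import timeit
from contextlib import contextmanager

@contextmanager
def timed(label):
    # callers only see 'with timed(...)'; the choice of clock stays hidden here
    start = timeit.default_timer()
    yield
    print('%s took %.2fm' % (label, (timeit.default_timer() - start) / 60.))

with timed('training'):
    pass  # ... training loop goes here ...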
The general flow for tracking the best (smallest) validation loss is:
best_validation_loss = np.inf
for j in range(epochs):
    this_validation_loss = calc()
    if this_validation_loss < best_validation_loss:
        best_validation_loss = this_validation_loss
        # smaller is better
A more complete version looks like this:
best_validation_loss = np.inf
for j in range(epochs):
    for i in range(n_train_batches):
        iter = j * n_train_batches + i
        if iter % validation_freq == 0:
            validation_losses = [valid_model(k) for k in range(n_valid_batches)]
            this_validation_loss = np.mean(validation_losses)
            if this_validation_loss < best_validation_loss:
                best_validation_loss = this_validation_loss
We can refine this further by adding one more test:
improvement_thresh = 0.95
if this_validation_loss < best_validation_loss:
    if this_validation_loss < best_validation_loss * improvement_thresh:
        # i.e. the improvement in this iteration is large enough
        ...
This is where the notion of patience comes in: below that value we tolerate the lack of progress and let it pass; once it is exceeded, we take a different action, in other words, "my patience has a limit." Whether the patience value has been reached can be tested with an if statement, or with a modulo operation, as the following code shows:
patience = 10000
validation_freq = min(n_train_batches, patience / 2)
for j in range(epochs):
    for minibatch_idx in range(n_train_batches):
        iter = j * n_train_batches + minibatch_idx
        if iter % 100 == 0:
            print 'training @ iter = ', iter
        cost_ij = train_model(minibatch_idx)
        if (iter + 1) % validation_freq == 0:
            ...
An equivalent variant:

validation_freq = min(n_train_batches, patience / 2)
for j in range(epochs):
    for i in range(n_train_batches):
        iter = j * n_train_batches + i
        if iter % validation_freq == 0:
            ...
Now consider the following loop with two conditions combined with and:
epoch = 0
done_looping = False
while epoch < epochs and (not done_looping):
    epoch += 1
    ...
The outer loop runs over epochs; one epoch means one pass of learning (training) over the entire sample set. The inner loop splits the full set into mini-batches, so the inner loop iterates over those blocks.
for j in range(epochs):
    for mini_batch_idx in range(n_train_index):
        iter = j * n_train_index + mini_batch_idx
        # iter is counted in units of mini-batches
        if iter % 100 == 0:
            print 'training @ iter = ', iter
With the periodic validation check added:

for j in xrange(epochs):
    for i in xrange(n_train_batches):
        iter = j * n_train_batches + i
        if iter % 100 == 0:
            print 'training @ iter = ', iter
        if (iter + 1) % validation_freq == 0:
            print 'epoch {}, minibatch {}/{}'.format(j, i + 1, n_train_batches)
When both conditions must hold, while is often more convenient than for:
# single-condition constraint
for j in range(epochs):
    ...

# the equivalent while form
j = 0
while j < epochs:
    j += 1
    ...

# two-condition constraint
j = 0
done_looping = False
while j < epochs and not done_looping:
    j += 1
    ...
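Putting the patience counter, the periodic validation check, and the done_looping flag together gives the complete early-stopping pattern. The sketch below follows the structure above; train_model, valid_model, n_train_batches, n_valid_batches and epochs are placeholders, and patience_increase is an assumed multiplier:

patience = 10000
patience_increase = 2                  # assumed: a good improvement buys this many more iterations
improvement_thresh = 0.995
validation_freq = min(n_train_batches, patience // 2)
best_validation_loss = np.inf

j = 0
done_looping = False
while j < epochs and not done_looping:
    j += 1
    for i in range(n_train_batches):
        iter = (j - 1) * n_train_batches + i
        cost_ij = train_model(i)
        if (iter + 1) % validation_freq == 0:
            this_validation_loss = np.mean(
                [valid_model(k) for k in range(n_valid_batches)])
            if this_validation_loss < best_validation_loss:
                # a sufficiently large improvement extends the patience
                if this_validation_loss < best_validation_loss * improvement_thresh:
                    patience = max(patience, iter * patience_increase)
                best_validation_loss = this_validation_loss
        if patience <= iter:
            done_looping = True
            break

Note that break only exits the inner loop over mini-batches; it is done_looping that actually terminates the outer while.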
The usual script entry-point idiom:

def main():
    pass

if __name__ == '__main__':
    main()
At each epoch we want to record some intermediate results once the epoch finishes (for example in list containers: training_cost, training_accuracy = [], []), so that the process can be visualized at the end, e.g. how the global cost changes with the number of epochs and how classification accuracy changes with the number of epochs. Along the way we also define the corresponding calc_cost and calc_accuracy, and while the program runs we can print progress to the console: print('Epoch {}: cost: {}, accuracy: {}'.format(j, cost, accuracy)).
training_cost, training_accuracy = [], []
for j in range(epochs):
    cost = calc_cost()
    accuracy = calc_accuracy()
    training_cost.append(cost)
    training_accuracy.append(accuracy)
    print('Epoch {}: cost: {}, accuracy: {}'.format(j, cost, accuracy))
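These two lists can then be plotted at the end of training, for example with matplotlib (a sketch; it assumes one entry per epoch, and the exact figure layout is up to the reader):

import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(range(epochs), training_cost)        # cost vs. epoch
ax1.set_xlabel('epoch')
ax1.set_ylabel('cost')
ax2.plot(range(epochs), training_accuracy)    # accuracy vs. epoch
ax2.set_xlabel('epoch')
ax2.set_ylabel('accuracy')
plt.show()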
Counting the number of correct predictions:

sum(int(y_pred == y_test) for y_pred, y_test in zip(predictions, test_label))
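To turn this count into an accuracy in [0, 1], divide by the number of test samples; with NumPy the whole thing is one line (a sketch, assuming numpy is imported as np and both sequences are array-like):

accuracy = np.mean(np.asarray(predictions) == np.asarray(test_label))  # fraction of correct predictions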
Splitting a data matrix X into mini-batches of size batch_size:

n = X.shape[0]
batches = [X[k:k + batch_size, :] for k in range(0, n, batch_size)]
In SGD (stochastic gradient descent) learning algorithms, the dataset usually has to be shuffled before it is split into batches:

n = X.shape[0]
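# one common way (a sketch): shuffle the row indices, which keeps X and any labels aligned
idx = np.random.permutation(n)
X = X[idx]
batches = [X[k:k + batch_size, :] for k in range(0, n, batch_size)]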
// C++: counting word frequencies with std::map
#include <cstddef>
#include <fstream>
#include <map>
#include <string>

std::map<std::string, size_t> words;
std::ifstream ifs(filename);
std::string word;
while (ifs >> word)
{
    ++words[word];   // operator[] default-constructs the count to 0 on first access
}
# python, defaultdict
from collections import defaultdict

densities = defaultdict(float)
digit_counts = defaultdict(int)
for image, digit in zip(training_data[0], training_data[1]):
    digit_counts[digit] += 1
    densities[digit] += sum(image)   # accumulate total pixel intensity per digit
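If the goal is the average density per digit (an assumption about the intent of keeping both dictionaries), it is just the accumulated sum divided by the count:

# hypothetical follow-up: mean pixel intensity per digit class
avg_densities = {digit: densities[digit] / digit_counts[digit]
                 for digit in digit_counts}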
Here w is a 2-D matrix while a and b are vectors; σ(⋅) operates on a vector, and the resulting vector a is in turn fed back as input. This can be implemented with the assignment operation available in almost every programming language:
for w, b in zip(weights, biases):
    a = sigma(np.dot(w, a) + b)
Whether a parameter is a vector or a matrix is something to keep in mind when implementing machine-learning algorithms, partly because Python (NumPy) treats 1-D and 2-D arrays somewhat differently.
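A small self-contained NumPy sketch of that last point (the shapes are made up for illustration): a shape-(4,) vector and a shape-(4, 1) column matrix behave differently under np.dot.

import numpy as np

w = np.random.randn(3, 4)       # a 2-D weight matrix
a1 = np.random.randn(4)         # a 1-D vector, shape (4,)
a2 = np.random.randn(4, 1)      # a 2-D column vector, shape (4, 1)

print(np.dot(w, a1).shape)      # (3,)   -- the result stays 1-D
print(np.dot(w, a2).shape)      # (3, 1) -- the result stays a column matrix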