The problem of the C drive cache growing continuously during PyTorch training

Previously, the free space on my C drive kept shrinking epoch after epoch during training, which seriously disrupted training. Today I finally solved the problem by following another blogger's post, so I'm writing it down here.

Key point: free the memory after training finishes

def train():
	# ... forward pass, backward pass, optimizer step (omitted) ...
	del inputs, target, outputs, loss  # drop the references to the intermediate tensors
	torch.cuda.empty_cache()           # release the blocks held by PyTorch's caching allocator
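
To check that the cache really is being released, you can print PyTorch's memory counters around the call. Below is a minimal sketch; the helper name report_cuda_memory is just for illustration, while torch.cuda.memory_allocated() and torch.cuda.memory_reserved() are the standard counters for live-tensor memory and allocator-cached memory respectively.

import torch

def report_cuda_memory(tag):
	# memory_allocated: memory occupied by live tensors
	# memory_reserved: memory held by the caching allocator (this is what empty_cache releases)
	print('[%s] allocated: %.1f MB, reserved: %.1f MB' % (
		tag,
		torch.cuda.memory_allocated() / 1024 ** 2,
		torch.cuda.memory_reserved() / 1024 ** 2))

report_cuda_memory('before empty_cache')
torch.cuda.empty_cache()
report_cuda_memory('after empty_cache')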

If there are too many intermediate variables and changing the code is too much trouble, you can instead just clear the cache once at the end of training, i.e.

def train():
	# ... forward pass, backward pass, optimizer step (omitted) ...
	torch.cuda.empty_cache()

Example:

def train(epoch):
	running_loss = 0.0
	for batch_idx, data in enumerate(train_loader, 0):
		inputs, target = data
		inputs=inputs.to(device)
		target=target.to(device)
		optimizer.zero_grad()  # reset the gradients of all model parameters to zero
		# forward + backward + update
		outputs = model(inputs)
		loss = criterion(outputs, target)

		loss.backward()
		optimizer.step()
		running_loss += loss.item()
		if batch_idx % 300 == 299:
			print('[%d, %5d] loss: %.3f' % (epoch + 1, batch_idx + 1, running_loss / 300))
			running_loss = 0.0
	del inputs, target, outputs, loss
	torch.cuda.empty_cache()
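
For completeness, the train(epoch) function above would typically be driven by an outer loop like the sketch below; model, criterion, optimizer, train_loader and device are assumed to be defined elsewhere in the script, and the epoch count of 10 is just a placeholder.

if __name__ == '__main__':
	for epoch in range(10):
		train(epoch)
		# by this point the epoch's intermediate tensors have been deleted
		# and the CUDA cache has been released inside train()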
