The script below fails on its second iteration with:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
import torch
import torch.optim

x = torch.tensor([3, 6], dtype=torch.float32)
x.requires_grad_(True)
optimizer = torch.optim.SGD([x], lr=0.1, momentum=0)

f = (x ** 2).sum()  # the graph for f is built once, outside the loop
for i in range(100):
    optimizer.zero_grad()
    # the original snippet is cut off at "f.backwar"; retain_graph=True and the
    # step() call below are assumed completions, consistent with the reported error
    f.backward(retain_graph=True)
    optimizer.step()  # updates x in place
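The problem is that f is computed once, before the loop. On the first pass, optimizer.step() updates x in place; on the second pass, backward() needs the value of x that autograd saved when the graph was built (the gradient of x ** 2 is 2 * x), detects that x has been modified since then, and raises the error above. The usual fix is to rebuild the graph on every iteration by recomputing f inside the loop; a minimal sketch of that fix, reusing the names from the snippet:

import torch

x = torch.tensor([3.0, 6.0], requires_grad=True)
optimizer = torch.optim.SGD([x], lr=0.1, momentum=0)

for i in range(100):
    optimizer.zero_grad()
    f = (x ** 2).sum()  # fresh graph each iteration
    f.backward()        # plain backward; retain_graph is no longer needed
    optimizer.step()    # in-place update is now safe: nothing saved refers to the old x

print(x)  # gradient descent on sum(x**2) drives both entries toward 0

With lr=0.1 each step multiplies x by 0.8 (x - 0.1 * 2x), so after 100 iterations both entries are effectively zero.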