torch.autograd.backward(tensors,
                        grad_tensors=None,
                        retain_graph=None,
                        create_graph=False)
Purpose: automatically compute gradients
tensors: the tensors to differentiate, e.g. a loss
retain_graph: keep the computation graph after backward so it can be used again
create_graph: build a graph of the derivative computation itself, needed for higher-order derivatives
grad_tensors: weights for multiple gradients (used when the output being differentiated is not a scalar)
Note: y.backward() is a thin wrapper that calls torch.autograd.backward() internally, which is why the example below uses the method form.
import torch

w = torch.tensor([1.], requires_grad=True)
x = torch.tensor([2.], requires_grad=True)
# Build the computation graph
a = torch.add(w, x)   # a = x + w
b = torch.add(w, 1)   # b = w + 1
y = torch.mul(a, b)   # y = (x + w) * (w + 1)
y.backward()
print(w.grad)         # tensor([5.])
The printed w.grad is 5: dy/dw = b * da/dw + a * db/dw = (w + 1) + (x + w) = 2 + 3 = 5.
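
retain_graph matters whenever backward passes through the same graph twice: by default the graph buffers are freed after the first call, and a second call raises a RuntimeError. A minimal sketch (rebuilding the same tensors as above so it runs on its own):

w = torch.tensor([1.], requires_grad=True)
x = torch.tensor([2.], requires_grad=True)
a = torch.add(w, x)
b = torch.add(w, 1)
y = torch.mul(a, b)
y.backward(retain_graph=True)  # keep the graph alive for another pass
y.backward()                   # second call now succeeds
print(w.grad)                  # tensor([10.]) -- the two passes accumulate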
# Rebuild the graph (the previous backward() call freed it)
w = torch.tensor([1.], requires_grad=True)
x = torch.tensor([2.], requires_grad=True)
a = torch.add(w, x)
b = torch.add(w, 1)
y0 = torch.mul(a, b)               # y0 = (x + w) * (w + 1), dy0/dw = 5
y1 = torch.add(a, b)               # y1 = (x + w) + (w + 1), dy1/dw = 2
loss = torch.cat([y0, y1], dim=0)
# grad_tensors sets a weight for the gradient of each output
grad_tensors = torch.tensor([1., 2.])
loss.backward(gradient=grad_tensors)
print(w.grad)                      # tensor([9.]) = 1 * 5 + 2 * 2
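
What grad_tensors does is equivalent to back-propagating a weighted sum of the outputs. A minimal sketch checking that equivalence (the weights 1 and 2 match the example above):

w = torch.tensor([1.], requires_grad=True)
x = torch.tensor([2.], requires_grad=True)
a = torch.add(w, x)
b = torch.add(w, 1)
weighted = 1. * torch.mul(a, b) + 2. * torch.add(a, b)  # 1 * y0 + 2 * y1
weighted.backward()
print(w.grad)  # tensor([9.]) -- same result as with grad_tensors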
torch.autograd.grad(outputs,
                    inputs,
                    grad_outputs=None,
                    retain_graph=None,
                    create_graph=False)
Purpose: compute and return gradients
outputs: the tensors to differentiate, e.g. a loss
inputs: the tensors the gradients are taken with respect to
grad_outputs: weights for multiple gradients (same role as grad_tensors above)
# Example: using torch.autograd.grad to compute a second derivative
x = torch.tensor([3.], requires_grad=True)
y = torch.pow(x, 2)  # y = x ** 2
# create_graph must be True, otherwise the second derivative cannot be taken
grad_1 = torch.autograd.grad(y, x, create_graph=True)  # grad_1 = dy/dx = 2x = 2 * 3 = 6
print(grad_1)   # a tuple: (tensor([6.], grad_fn=...),)
grad_2 = torch.autograd.grad(grad_1[0], x)  # grad_2 = d(dy/dx)/dx = d(2x)/dx = 2
print(grad_2)   # (tensor([2.]),)
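
Since grad_outputs plays the same weighting role for torch.autograd.grad that grad_tensors plays for backward, here is a minimal sketch of it weighting a non-scalar output (the weight values [1., 3.] are illustrative):

x = torch.tensor([1., 2.], requires_grad=True)
y = x ** 2                           # non-scalar output, dy_i/dx_i = 2 * x_i
grads = torch.autograd.grad(y, x, grad_outputs=torch.tensor([1., 3.]))
print(grads)  # (tensor([2., 12.]),) -- [2*1, 2*2] weighted by [1, 3]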
1. Gradients are not cleared automatically; they accumulate across backward() calls, so clear them manually with .grad.zero_() (see the sketch below).
2. Nodes that depend on leaf nodes have requires_grad=True by default.
3. In-place operations cannot be performed on leaf tensors that require grad.
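
A minimal sketch demonstrating all three points (the tensor values are illustrative):

import torch

w = torch.tensor([1.], requires_grad=True)
x = torch.tensor([2.], requires_grad=True)

# 1. Gradients accumulate until cleared manually
for _ in range(2):
    y = torch.mul(torch.add(w, x), torch.add(w, 1))
    y.backward()
print(w.grad)   # tensor([10.]) -- the gradient of 5 accumulated twice
w.grad.zero_()  # clear before the next step

# 2. Nodes built from leaf tensors require grad automatically
a = torch.add(w, x)
print(a.requires_grad)  # True, without being set explicitly

# 3. In-place ops on a leaf that requires grad raise a RuntimeError
try:
    w.add_(1)
except RuntimeError as e:
    print(e)  # "a leaf Variable that requires grad is being used in an in-place operation."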