
SGD

Implements stochastic gradient descent, optionally with momentum, weight decay, and Nesterov momentum.

Usage

from tinygrad.nn.state import get_parameters
from tinygrad.nn import optim
from models.resnet import ResNet
 
model = ResNet()
 
optimizer = optim.SGD(get_parameters(model), lr=0.01)
 
for _ in range(5):
  ...  # train, eval

Arguments

params

The parameters of the model to optimize.

lr (default: 0.001)

The learning rate of the SGD optimizer.

momentum (default: 0.0)

The momentum factor.

weight_decay (default: 0.0)

The weight decay (L2 regularization) factor.

nesterov (default: False)

Whether to apply Nesterov momentum. Only meaningful when momentum is greater than 0.
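
These arguments combine into the standard SGD update rule. As a rough plain-Python sketch (this is not tinygrad's actual implementation; `sgd_step` is a hypothetical helper for illustration), a single step on one parameter looks like:

```python
def sgd_step(param, grad, buf, lr=0.001, momentum=0.0, weight_decay=0.0, nesterov=False):
  # weight_decay adds an L2 penalty term to the gradient: g = g + wd * p
  g = grad + weight_decay * param
  # the momentum buffer accumulates an exponentially decayed sum of gradients
  buf = momentum * buf + g
  # nesterov applies the "look-ahead" correction; otherwise use the buffer directly
  step = g + momentum * buf if nesterov else buf
  return param - lr * step, buf

# one step on a scalar parameter, starting with an empty momentum buffer
p, buf = 1.0, 0.0
p, buf = sgd_step(p, grad=0.5, buf=buf, lr=0.1, momentum=0.9)
```

With momentum > 0 the buffer carries gradient history across steps, which is why the optimizer must be constructed once, outside the training loop.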
