Application Scenarios:
Sometimes you perform specific operations on a model and need to check that the code you wrote is correct, i.e. that the parameters are being updated the way you intend.
Description of the problem
My final report required the model to be frozen at some point, meaning its parameters must not be allowed to update. After writing the code I needed to verify that the frozen parameters were neither being updated nor accidentally zeroed out by a mistake of my own, so I decided to print the training parameters and check whether their values changed.
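For context, here is a minimal sketch of what "freezing" usually looks like in PyTorch (my own addition, not the original code): setting requires_grad to False on the relevant parameters so the optimizer no longer touches them. The submodule passed in is only illustrative.

import torch.nn as nn

def freeze_module(module: nn.Module) -> None:
    # Turn off gradient tracking so the optimizer no longer updates these parameters.
    for parameter in module.parameters():
        parameter.requires_grad = False

# e.g. freeze_module(self.model.domain_encoder)  # illustrative call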
The first method is the simplest and most convenient: iterate over all the parameters (if there were only a weight and a bias, it would iterate over those two) and print them at every training step. The inconvenient part is that it outputs the parameters of every layer.
for name, parameter in self.model.named_parameters():
    print(name, parameter)
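A quick variant of the same loop (my own addition) prints each parameter's requires_grad flag instead of the full tensor, which makes it easy to see at a glance which parameters are still trainable:

for name, parameter in self.model.named_parameters():
    # requires_grad is False for frozen parameters, True for trainable ones
    print(name, parameter.requires_grad)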
But in general we only want to see the output of the last layer, so you can select the parameter updates of just the specific layer you care about.
# This is my domain_encoder module.
self.domain_encoder = nn.Sequential(
    nn.Linear(512, 512),
    nn.BatchNorm1d(512),
    nn.ReLU(),
    nn.Linear(512, 512),
    nn.BatchNorm1d(512),
    nn.ReLU(),
    nn.Linear(512, 512),
    nn.BatchNorm1d(512),
    nn.ReLU()
)
# Print the current values of the parameters (weight and bias) of the layer at index 6, i.e. the third nn.Linear.
print(self.model.domain_encoder[6].state_dict())
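If judging the raw tensor printout by eye is hard, one option (a sketch I am adding, assuming you can trigger a single training step yourself) is to clone this layer's state_dict before the step and compare it afterwards:

import torch

layer = self.model.domain_encoder[6]
# Snapshot the current weight and bias values.
snapshot = {key: value.detach().clone() for key, value in layer.state_dict().items()}
run_one_training_step()  # placeholder for your own forward/backward/optimizer step
for key, value in layer.state_dict().items():
    status = "unchanged" if torch.equal(snapshot[key], value) else "changed"
    print(key, status)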
To be more specific, if you want to look directly at the gradient of a single parameter in a middle layer of your choice:
print(self.model.category_classifier[0].weight.grad)
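As a related check (my addition), after loss.backward() the .grad of a parameter that was frozen from the start will typically still be None, while a trainable parameter should have a populated gradient tensor:

grad = self.model.category_classifier[0].weight.grad
if grad is None:
    # No gradient was computed, e.g. the parameter is frozen or backward() was not called.
    print("grad is None")
else:
    print("grad norm:", grad.norm().item())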