Caffe loss does not seem to decrease
Some of the parameters:

base_lr: 0.04
max_iter: 170000
lr_policy: "poly"
batch_size = 8
iter_size = 16
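For reference, Caffe's "poly" policy decays the learning rate as base_lr * (1 - iter / max_iter) ^ power. Below is a standalone sketch of that schedule with the values above; the power of 1.0 is only an assumption for illustration, since it is not given in the settings.

# Standalone sketch of the "poly" policy: lr = base_lr * (1 - iter / max_iter) ** power
# power = 1.0 is assumed here purely for illustration; it is not stated above.
base_lr, max_iter, power = 0.04, 170000, 1.0

def poly_lr(iteration):
    return base_lr * (1.0 - iteration / float(max_iter)) ** power

for it in (0, 1800, 50000, 100000, 170000):
    print(it, round(poly_lr(it), 6))
# 0 -> 0.04, 1800 -> ~0.0396, 50000 -> ~0.0282, 100000 -> ~0.0165, 170000 -> 0.0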
This is how the training process has looked so far: the loss seems stagnant. Is this a problem, or is it normal?
The solution for me was to lower the base learning rate by a factor of 10 before resuming training from the solverstate snapshot.
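As a minimal sketch of that manual approach, assuming pycaffe is available (the file names below are hypothetical): lower base_lr in solver.prototxt, then restore the solverstate and keep training.

# A minimal sketch, assuming pycaffe is on PYTHONPATH; file names are hypothetical.
import caffe

caffe.set_mode_gpu()  # or caffe.set_mode_cpu()

# solver.prototxt should already contain the lowered base_lr (e.g. 0.004 instead of 0.04).
solver = caffe.get_solver('solver.prototxt')

# Restore the optimizer state (iteration count, momentum history) from the snapshot.
solver.restore('snapshot_iter_20000.solverstate')

# Continue training until max_iter is reached.
solver.solve()

The same thing can be done from the command line with the caffe binary's train command and its -snapshot flag.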
To achieve the same effect automatically, you can set the "gamma" and "stepsize" parameters in solver.prototxt. Note that gamma and stepsize are ignored by the "poly" policy, so lr_policy also has to be changed to "step" for them to take effect:

base_lr: 0.04
lr_policy: "step"
gamma: 0.1
stepsize: 10000
max_iter: 170000
batch_size = 8
iter_size = 16

This reduces base_lr by a factor of 10 every 10,000 iterations.
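As a quick sanity check of that schedule (a standalone sketch, not part of Caffe), the "step" policy computes base_lr * gamma ^ floor(iter / stepsize):

# Standalone sketch of the "step" policy: lr = base_lr * gamma ** (iter // stepsize)
base_lr, gamma, stepsize = 0.04, 0.1, 10000

def step_lr(iteration):
    return base_lr * gamma ** (iteration // stepsize)

for it in (0, 9999, 10000, 20000, 30000):
    print(it, step_lr(it))
# 0 -> 0.04, 9999 -> 0.04, 10000 -> 0.004, 20000 -> 0.0004, 30000 -> 4e-05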
Please note that it is normal for the loss to fluctuate between values and even hover around a constant value before making a dip. Since that may be the cause of the issue here, I suggest training beyond 1800 iterations before falling back on the implementation above. Compare against graphs of Caffe train loss logs; one way to produce such a graph is sketched below.
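A sketch for plotting the loss, assuming the training output was redirected to a file named train.log (a hypothetical name); it pulls the iteration and loss out of the solver's "Iteration N, ... loss = X" lines. Caffe also ships log-parsing helpers under tools/extra (parse_log.py and plot_training_log.py.example).

# A sketch, assuming training output was redirected to train.log (hypothetical name).
import re
import matplotlib.pyplot as plt

iters, losses = [], []
pattern = re.compile(r'Iteration (\d+).*, loss = ([-+\d.eE]+)')
with open('train.log') as f:
    for line in f:
        m = pattern.search(line)
        if m:
            iters.append(int(m.group(1)))
            losses.append(float(m.group(2)))

plt.plot(iters, losses)
plt.xlabel('iteration')
plt.ylabel('training loss')
plt.show()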
Additionally, please direct future questions to the Caffe mailing group, which serves as a central location for Caffe questions and solutions.
I struggled with this myself and didn't find solutions anywhere before I figured it out. I hope what worked for me works for you!