lua - 'Unshare' parameters in Torch


I am building a multi-stream neural network, which looks like this:

local model = nn.MapTable(10, true)
model:add(modelOfSingleStream)

where modelOfSingleStream contains the implementation of a single stream. With this code, the neural network is copied ten times horizontally, and these copies share their weights.
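To illustrate, here is a minimal usage sketch of what this setup gives me (the input shape below is only a placeholder for illustration; it depends on what modelOfSingleStream actually expects):

local inputs = {}
for i = 1, 10 do
    inputs[i] = torch.randn(1, 64, 8, 16)  -- placeholder shape, adjust to the real stream input
end
local outputs = model:forward(inputs)  -- a table of 10 outputs, all computed with the same shared weights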

Within the definition of modelOfSingleStream, I use batch-normalization layers like so:

model:add(nn.SpatialConvolutionLocal(64, 64, 16, 8, 1, 1))
     :add(nn.SpatialBatchNormalization(64))
     :add(nn.ReLU())

and

model:add(nn.Linear(inputDim, outputDim))
     :add(nn.BatchNormalization(outputDim, nil, 0.9))
     :add(nn.ReLU())
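For reference, these batch-norm layers keep non-learnable running statistics next to gamma and beta; as far as I can tell the relevant fields are roughly these (field names may differ slightly between nn versions):

local bn = nn.SpatialBatchNormalization(64)
print(bn.weight)        -- gamma, learnable
print(bn.bias)          -- beta, learnable
print(bn.running_mean)  -- moving average, updated during training
print(bn.running_var)   -- moving variance (called running_std in older versions of nn)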

However, I do not want to share the moving average and the moving standard deviation between the streams. From what I've understood, that is exactly what happens when I set the 'clone' parameter of nn.MapTable. The scaling factor and the additive factor of batch normalization (gamma and beta) are shared, as expected.

How can I allow each stream (each input within the MapTable) to have its own moving average and moving standard deviation?
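One workaround I am considering (just a rough sketch, I am not sure it is the intended way): build the ten streams by hand with nn.ParallelTable and clone the single stream while sharing only the learnable tensors, so each copy keeps its own running statistics:

local nStreams = 10
local streams = nn.ParallelTable()   -- expects a table of 10 inputs, one per stream
streams:add(modelOfSingleStream)
for i = 2, nStreams do
    -- clone(...) with tensor names shares only those tensors;
    -- running_mean / running_var are not listed, so each clone keeps its own
    streams:add(modelOfSingleStream:clone('weight', 'bias', 'gradWeight', 'gradBias'))
end

Is something like this the right approach, or is there a way to get the same behaviour from nn.MapTable directly?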

