Back-propagation of "generated" features
I have a convolutional layer and I apply max pooling over its output. Instead of the max-pooling layer outputting only the maximum value from each window, I would also like it to output the position of that maximum within the signal. I believe this is a useful feature for my data and task.
My problem is: how do I back-propagate through such a value? When back-propagation reaches the max-pooling layer, I know I need to pass the error on the value down to the neuron that produced the maximum, but what should I do with the position output I added? Should I just ignore it?
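To make the setup concrete, here is a minimal NumPy sketch of the layer I have in mind (1-D input, non-overlapping windows; all function names are my own). In the backward pass I currently just drop the gradient arriving on the position output, on the reasoning that argmax is piecewise constant in the inputs, so its derivative is zero almost everywhere:

```python
import numpy as np

def maxpool_with_pos_forward(x, pool=2):
    """For each non-overlapping window of size `pool`, output both the
    max value and the index of the max within the window.
    x has shape (n,) with n divisible by pool."""
    windows = x.reshape(-1, pool)
    idx = windows.argmax(axis=1)                 # position of the max in each window
    vals = windows[np.arange(len(idx)), idx]     # the max values themselves
    return vals, idx                             # both are fed to the next layer

def maxpool_with_pos_backward(grad_vals, grad_pos, idx, n, pool=2):
    """Route the gradient on the *value* back to the winning neuron,
    exactly as in standard max-pool backprop. The gradient on the
    *position* output (grad_pos) is dropped: argmax is piecewise
    constant, so its gradient w.r.t. the inputs is zero almost
    everywhere. Whether dropping it is the right thing to do is
    precisely my question."""
    grad_x = np.zeros(n)
    rows = np.arange(len(idx))
    grad_x.reshape(-1, pool)[rows, idx] = grad_vals  # view assignment writes into grad_x
    return grad_x
```

For example, with input `[1, 3, 2, 0]` and `pool=2`, the forward pass yields values `[3, 2]` and positions `[1, 0]`, and the backward pass scatters the value gradients back to indices 1 and 2 of the input while the position gradients have nowhere to go.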