This paper presents a novel adaptive vector quantization scheme based on the self-organizing feature map (SOFM) neural network. All adaptation is performed directly from the quantized image, with no explicit adaptation information transmitted or stored; thus the network learns an input distribution it has never actually seen. Training sets are generated from the received image by scaling the image to approximate the statistics of the original and by selecting blocks so as to capture edges and other image features. These data are fed to the SOFM network to update the codebook. A new method is also presented for ensuring that all neurons are well used, by estimating directly from the quantized image how much distortion each neuron introduces. The ability of this scheme to adapt successfully is verified by simulation.
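The codebook update described above follows the standard SOFM learning rule: for each training block, the best-matching neuron (codeword) is found, and it and its grid neighbors are moved toward the block. The sketch below illustrates one such step on a 1-D neuron grid; the learning rate, neighborhood width, and block dimensionality are illustrative assumptions, not values from the paper.

```python
import numpy as np

def sofm_update(codebook, block, lr=0.1, sigma=1.0):
    """One SOFM training step on a 1-D neuron grid (illustrative sketch).

    codebook: (n_neurons, dim) float array of codewords
    block:    (dim,) training vector, e.g. a flattened image block
    lr, sigma: learning rate and neighborhood width (assumed values)
    """
    # Find the best-matching (winning) neuron for this block.
    dists = np.linalg.norm(codebook - block, axis=1)
    winner = int(np.argmin(dists))
    # Gaussian neighborhood function over neuron indices on the grid.
    idx = np.arange(codebook.shape[0])
    h = np.exp(-((idx - winner) ** 2) / (2.0 * sigma ** 2))
    # Move the winner and its neighbors toward the training vector.
    codebook += lr * h[:, None] * (block - codebook)
    return winner
```

In the scheme described here, the training vectors would be blocks drawn from the received (quantized) image after scaling, so the codebook adapts without any side information.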
Proceedings of the 1995 IEEE International Conference on Neural Networks, Perth, Western Australia, Australia, 27 November-1 December 1995, Vol. 4, pp. 2071-2076.