Divisive normalization is a canonical computation in the brain, observed across neural systems, that is often considered to be an implementation of the efficient coding principle. We provide a theoretical result that makes the conditions under which divisive normalization is an efficient code analytically precise: we show that, in a low-noise regime, encoding an n-dimensional stimulus via divisive normalization is efficient if and only if its prevalence in the environment is described by a multivariate Pareto distribution. We generalize this multivariate analog of histogram equalization to allow for arbitrary metabolic costs of the representation, and show how different assumptions on costs are associated with different shapes of the distributions that divisive normalization efficiently encodes. Our result suggests that divisive normalization may have evolved to efficiently represent stimuli with Pareto distributions. We demonstrate that this efficiently encoded distribution is consistent with stylized features of naturalistic stimulus distributions, such as their characteristic conditional variance dependence, and we provide empirical evidence suggesting that it may capture the statistics of filter responses to naturalistic images. Our theoretical finding also yields empirically testable predictions across sensory domains on how the divisive normalization parameters should be tuned to features of the input distribution.

The brain has to make efficient use of its limited resources to represent and respond to the wide range of stimuli in its environment. An important mechanism by which this can be achieved is divisive normalization (1, 2), which is thought to be a canonical computation in the brain (3). This gain control mechanism, according to which the response of a neuron to its preferred stimulus is suppressed by the intensity of nonpreferred stimuli, permits the representation of potentially unbounded stimuli by biophysically feasible bounded firing rates. Originally proposed for individual neurons in the primary visual cortex (1, 4, 5), this computation has since also been observed at the population level in the primary visual cortex (6–8) and throughout the visual hierarchy (9, 10), as well as in several other neural systems, including olfactory pathways (11), the middle temporal area (12, 13), the inferotemporal cortex (14), the hippocampus (15), and multisensory integration (16). In addition, divisive normalization has been shown to play an important role in value representations (17, 18) and in choice behavior, where it has been proposed to account for violations of the independence of irrelevant alternatives (IIA) axiom of rational choice (19–23, but see refs.). The nonlinear computation has also been suggested to play a role in attentional modulation (12, 26, 27), the modulation of response variability (28), the representation of visual uncertainty (29), and probabilistic inference (30, 31). It is further used in neural network models of the visual system (32, 33) as well as in computer vision and image compression (34). This ubiquitous array of functions raises the question of what overarching objective the divisive normalization computation achieves. In this paper, we consider this computation's information-theoretic properties and provide testable conditions for its efficiency that are both simple and general, making them applicable across many of the aforementioned settings.
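The gain control mechanism described above can be illustrated with a minimal sketch of the standard divisive normalization form from the literature, in which each response is a ratio of the driving input to a normalization pool. The gain, semisaturation constant, exponent, and uniform pooling weights used here are illustrative assumptions, not parameters estimated in this paper.

```python
# Minimal sketch of divisive normalization (standard ratio form).
# All parameter values below are illustrative assumptions.

def divisive_normalization(x, gamma=1.0, sigma=1.0, alpha=2.0, weights=None):
    """Compute r_i = gamma * x_i**alpha / (sigma**alpha + sum_j w_j * x_j**alpha)."""
    if weights is None:
        weights = [1.0] * len(x)  # uniform normalization pool (an assumption)
    pool = sigma ** alpha + sum(w * xj ** alpha for w, xj in zip(weights, x))
    return [gamma * xi ** alpha / pool for xi in x]

# Responses stay bounded even as the (potentially unbounded) input grows:
weak = divisive_normalization([1.0, 1.0, 1.0])      # each response is 0.25
strong = divisive_normalization([100.0, 100.0, 100.0])  # each approaches gamma/3
print(weak, strong)
```

Note how the pooled denominator implements the suppression by nonpreferred stimuli: scaling all inputs up leaves each response below the ceiling set by `gamma`, which is how unbounded stimuli can be represented by bounded firing rates.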