In vision science, cascades of Linear+Nonlinear transforms explain a wide range of perceptual experiences [Carandini&Heeger Nat.Rev.Neur.12]. However, the conventional literature focuses mainly on describing the forward input-output transform. In this work we present the mathematical details of such cascades beyond the forward transform, namely the derivatives and the inverse. These analytic results (usually omitted) are important for three reasons: (a) they are strictly necessary in novel experimental methods based on the synthesis of visual stimuli with interesting geometrical properties, (b) they are convenient for analyzing classical experiments when fitting the model, and (c) they represent a promising way to include model information in blind visual decoding methods. Moreover, the statistical properties of the model become more intuitive when vector representations are used. As an example, we develop a differentiable and invertible vision model consisting of a cascade of modules that account for brightness, contrast, energy masking, and wavelet masking. To stress the generality of this modular setting, we show examples where some of the canonical Divisive Normalizations [Carandini&Heeger Nat.Rev.Neur.12] are substituted by equivalent modules such as the Wilson-Cowan interaction [Wilson&Cowan J.Biophys.72] (at the V1 cortex) or a tone mapping [Cyriac et al. SPIE 15] (at the retina). We illustrate the presented theory with three applications. First, we show how the Jacobian with respect to the input plays a major role in setting the model, by allowing the use of novel psychophysics based on the geometry of the neural representation (as in [Malo&Simoncelli SPIE 15]). Second, we show how the Jacobian with respect to the parameters can be used to find the model that best reproduces image-distortion psychophysics. Third, we show that the analytic inverse may improve regression-based brain-decoding techniques.
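The three analytic ingredients (forward transform, Jacobian, and inverse) can be sketched for a single Divisive Normalization module of the Carandini&Heeger form. This is a minimal illustration, not the paper's full cascade: it assumes positive inputs, a scalar exponent gamma, and an interaction matrix H small enough for the linear system of the inverse to be well conditioned; all function names are illustrative.

```python
import numpy as np

def dn_forward(x, b, H, g):
    """Forward Divisive Normalization: y_i = x_i**g / (b_i + sum_j H_ij x_j**g).

    Assumes x > 0 so that x**g is real and differentiable.
    """
    e = x**g
    return e / (b + H @ e)

def dn_jacobian(x, b, H, g):
    """Analytic Jacobian dy/dx of the forward transform.

    dy_i/dx_k = g * x_k**(g-1) * (delta_ik / d_i - e_i * H_ik / d_i**2),
    with e = x**g and d = b + H @ e.
    """
    n = x.size
    e = x**g
    d = b + H @ e
    return (np.eye(n) / d[:, None] - (e / d**2)[:, None] * H) * (g * x**(g - 1))[None, :]

def dn_inverse(y, b, H, g):
    """Analytic inverse: from y_i * d_i = e_i one gets the linear system
    (I - diag(y) H) e = y * b, solved for e = x**g."""
    n = y.size
    e = np.linalg.solve(np.eye(n) - y[:, None] * H, y * b)
    return e**(1.0 / g)
```

The inverse exploits the fact that, once the nonlinearity is expressed in the auxiliary variable e = x**g, the normalization equation becomes linear in e; the Jacobian can be validated numerically against finite differences of `dn_forward`.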