Model reduction aims to construct reduced models of large-scale numerical solvers with dramatically smaller complexity. Such reduced models are extremely useful for many-query problems involving complicated physical systems described by parametrized partial differential equations. However, classical model reduction methods rely on the solutions lying close to a low-dimensional linear subspace, and so are inefficient when applied to wave propagation problems. This talk will focus on two recently proposed ideas that can overcome this limitation. The first is a nonlinear generalization of reduced models that take the form of deep neural networks, called reduced deep networks (RDNs). We will show that they exhibit a type of depth separation in one spatial dimension. The second is the use of the Radon transform to exploit a similar structure in multi-dimensional problems, leading to a generalization of the classical Lax-Phillips representation. An efficient discretization of the Radon transform, called the approximate discrete Radon transform (ADRT), and its key properties will also be discussed.
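
As a rough illustration of the last point (not part of the abstract itself): the ADRT has an open-source implementation in the adrt Python package, and a minimal usage sketch might look as follows. The package choice, the toy input, and the variable names here are assumptions made for illustration, not details confirmed by the talk.

    # Minimal sketch, assuming the open-source `adrt` package (PyPI) and a toy input.
    import numpy as np
    import adrt

    n = 64                                  # ADRT inputs are square with power-of-two side length
    image = np.zeros((n, n), dtype=np.float64)
    image[24:40, 24:40] = 1.0               # a simple box as a stand-in for a wave field

    sinogram = adrt.adrt(image)             # forward ADRT, computed in O(n^2 log n) operations
    print(sinogram.shape)                   # sums along digital lines, grouped into four quadrants

The log-linear cost per row is what makes the ADRT attractive as a building block for the multi-dimensional reduction strategy mentioned above.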