Parametrized mathematical models are ubiquitous in science and engineering. However, the often infinite-dimensional output of these models may only be observable through a finite number of possibly indirect measurements. Existing operator learning methods, which map between function spaces, are not suited for approximating such parameter-to-observable (PtO) maps. This talk introduces Fourier Neural Mappings (FNMs) as a principled generalization of Fourier Neural Operators to finite-dimensional input or output spaces, or both, and proves universal approximation theorems for the method. Often the underlying PtO map is defined implicitly through the solution operator of a partial differential equation. In this case, is it more data-efficient to learn the PtO map end-to-end or to first learn the solution operator and then extract the observable from it? A theoretical analysis of Bayesian nonparametric linear functional regression suggests that the end-to-end approach actually has worse sample complexity. Going beyond the theory, numerical results for the proposed FNMs applied to three nonlinear PtO maps demonstrate the benefits of the operator learning perspective that this talk adopts.
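To make the comparison in the abstract concrete, here is a minimal formalization of the setup it describes; the symbols $\Psi$, $\mathcal{O}$, $\mathcal{F}$, and the sample size $N$ are notation chosen for illustration, not taken from the talk itself.

\[
\mathcal{F} \;=\; \mathcal{O} \circ \Psi, \qquad
\Psi : \mathcal{U} \to \mathcal{V} \ \ \text{(PDE solution operator)}, \qquad
\mathcal{O} : \mathcal{V} \to \mathbb{R}^{q} \ \ \text{($q$ indirect measurements)}.
\]

Under this notation, the end-to-end approach fits the PtO map $\mathcal{F}$ directly from parameter-observable pairs $\{(u_n, \mathcal{F}(u_n))\}_{n=1}^{N}$, whereas the alternative first estimates the full-field solution operator $\Psi$ from pairs $\{(u_n, \Psi(u_n))\}_{n=1}^{N}$ and then composes that estimate with the known observation map $\mathcal{O}$; the sample complexity question in the abstract asks which of these two estimators of $\mathcal{F}$ needs fewer samples $N$ to reach a given accuracy.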