Bayesian inference provides an optimal framework for learning models from data with quantified uncertainty. In many practical applications, where models are represented by, e.g., differential equations or deep neural networks, the dimension of the model parameters is very high or even infinite. It is a longstanding challenge to solve such high-dimensional Bayesian inference problems accurately and efficiently due to the curse of dimensionality: the computational complexity grows rapidly (often exponentially) with the parameter dimension. In this talk, I will present a class of transport-based projected variational methods to tackle the curse of dimensionality. We project the high-dimensional parameters onto intrinsically low-dimensional, data-informed subspaces and employ transport-based variational methods to push samples drawn from the prior to a projected posterior. I will present error bounds for the projected posterior distribution measured in Kullback–Leibler divergence. Numerical experiments will demonstrate the properties of our methods, including improved accuracy, fast convergence with complexity independent of the parameter dimension and the number of samples, strong parallel scalability in the number of processor cores, and weak data scalability in the data dimension.
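
To make the project-then-transport idea concrete, below is a minimal, illustrative Python sketch; it is not the method presented in the talk. It assumes a standard Gaussian prior, builds a data-informed subspace from the leading eigenvectors of the averaged outer product of log-likelihood gradients (an active-subspace-style surrogate), and uses Stein variational gradient descent as the transport that pushes projected prior samples toward a projected posterior. All names (`data_informed_basis`, `projected_svgd`) and the linear-Gaussian test problem are hypothetical.

```python
# Minimal, illustrative sketch (NOT the talk's implementation) of a
# transport-based projected variational method:
#   1. build a data-informed subspace from log-likelihood gradients,
#   2. project prior samples onto that subspace,
#   3. transport the projected samples toward the projected posterior
#      with Stein variational gradient descent (SVGD),
#   4. recombine with the untouched prior complement.
# Assumes a standard Gaussian prior N(0, I) on the parameters.
import numpy as np

def data_informed_basis(grad_loglik, prior_samples, r):
    """Leading r eigenvectors of the averaged gradient outer product
    (an active-subspace-style surrogate for a likelihood-informed subspace)."""
    G = np.stack([grad_loglik(x) for x in prior_samples])     # (N, d)
    H = G.T @ G / len(prior_samples)                          # (d, d)
    _, eigvec = np.linalg.eigh(H)                             # ascending eigenvalues
    return eigvec[:, ::-1][:, :r]                             # (d, r) orthonormal columns

def svgd_step(Z, grad_logpost, step):
    """One SVGD update of the r-dimensional coefficients Z with an RBF kernel."""
    N = Z.shape[0]
    sq = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    h = 0.5 * np.median(sq) / np.log(N + 1) + 1e-12           # median bandwidth heuristic
    K = np.exp(-sq / (2.0 * h))                                # kernel matrix
    grad_K = (Z[:, None, :] - Z[None, :, :]) * K[:, :, None] / h
    S = np.stack([grad_logpost(z) for z in Z])                 # (N, r) score at each particle
    return Z + step * (K @ S + grad_K.sum(axis=1)) / N

def projected_svgd(grad_loglik, d, r, n_samples=100, n_iters=500, step=5e-2, seed=0):
    rng = np.random.default_rng(seed)
    X0 = rng.standard_normal((n_samples, d))                   # samples from the N(0, I) prior
    Psi = data_informed_basis(grad_loglik, X0, r)               # data-informed basis
    Z = X0 @ Psi                                                # projected coefficients (N, r)
    # Projected posterior score on the coefficients: N(0, I_r) prior score (-z)
    # plus the likelihood score evaluated on the reconstructed parameter Psi @ z.
    grad_logpost = lambda z: -z + Psi.T @ grad_loglik(Psi @ z)
    for _ in range(n_iters):
        Z = svgd_step(Z, grad_logpost, step)
    # Transported low-dimensional part + prior samples in the complement subspace.
    return Z @ Psi.T + (X0 - (X0 @ Psi) @ Psi.T)

# Hypothetical test problem: linear-Gaussian likelihood y = A x + noise, d = 1000.
d, r, sigma = 1000, 5, 0.5
rng = np.random.default_rng(1)
A = rng.standard_normal((10, d)) / np.sqrt(d)
y = A @ np.ones(d) + sigma * rng.standard_normal(10)
grad_loglik = lambda x: A.T @ (y - A @ x) / sigma**2
posterior_samples = projected_svgd(grad_loglik, d, r)           # (100, 1000) array
```

In this sketch the transport acts only on the r projected coefficients while the prior complement is left unchanged, so the per-iteration cost of the variational update scales with r rather than with the full parameter dimension d, which mirrors the dimension-independent complexity highlighted in the abstract.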