Mean field games (MFG) and mean field control (MFC) describe the behavior of agents interacting in a symmetric fashion in the limit where the number of agents grows to infinity. The first theory captures a notion of Nash equilibrium for selfish players, whereas the second one focuses on a notion of social optimum for cooperative agents. In this talk, we propose several numerical methods for mean field control problems and mean field games, in both the ergodic and finite time horizon settings. These methods are based on machine learning tools such as function approximation via neural networks and optimization via stochastic gradient descent. We investigate the numerical analysis of these methods and prove bounds on the approximation error. We then consider numerical test cases, including examples that are difficult to tackle with deterministic methods such as numerical schemes based on finite differences. If time permits, we will also discuss model-free methods for mean field problems in a reinforcement learning framework. This is based on joint work with René Carmona (Princeton University).
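
For concreteness, the sketch below illustrates the kind of method the abstract alludes to: a feedback control parametrized by a neural network, trained by stochastic gradient descent on a Monte Carlo estimate of a finite-horizon mean field control cost, with the population distribution approximated by an interacting-particle system. The dynamics, the linear-quadratic-style costs, and all parameter values are illustrative assumptions for this sketch, not the model or algorithm from the talk.

    import torch
    import torch.nn as nn

    # Illustrative toy model (hypothetical, not from the talk):
    #   dynamics  dX_t = alpha_t dt + sigma dW_t
    #   running cost  f = 0.5 * |alpha|^2 + |x - mean(mu)|^2
    #   terminal cost g = |x|^2
    T, n_steps, n_particles, sigma = 1.0, 20, 256, 0.3
    dt = T / n_steps

    # Neural network parametrizing the feedback control alpha_theta(t, x).
    control = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))
    opt = torch.optim.Adam(control.parameters(), lr=1e-3)

    for it in range(2000):
        x = torch.randn(n_particles, 1)        # initial states X_0 ~ N(0, 1)
        cost = torch.zeros(())
        for k in range(n_steps):
            t = torch.full((n_particles, 1), k * dt)
            a = control(torch.cat([t, x], dim=1))
            mean_x = x.mean()                  # empirical proxy for the mean field
            # running cost: control effort plus attraction to the population mean
            cost = cost + dt * (0.5 * a.pow(2) + (x - mean_x).pow(2)).mean()
            # Euler-Maruyama step for the controlled particle dynamics
            x = x + a * dt + sigma * dt**0.5 * torch.randn_like(x)
        cost = cost + x.pow(2).mean()          # terminal cost g(X_T)
        opt.zero_grad()
        cost.backward()                        # pathwise gradient through the simulation
        opt.step()

Because the gradient flows through the empirical mean, this sketch targets the cooperative (MFC) problem. For the game (MFG) version, one would instead freeze the population flow while optimizing (e.g. detach mean_x) and iterate to a fixed point, mirroring the Nash equilibrium versus social optimum distinction above.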