Recently there has been an explosion of interest in studying the effects of low-precision computation on machine learning applications. This is because purpose-built, low-precision hardware accelerators can lower both the time and energy needed to complete a task. Despite this interest, the statistical effects of low-precision computation during training are not well understood. This is due to a tradeoff typically found with low-precision training algorithms: as the number of bits is lowered, noise that limits statistical accuracy is added. But is this tradeoff fundamental to computation using low-precision arithmetic? In this talk, I will describe a technique called “bit centering” that avoids this tradeoff by using a small amount of infrequent high-precision computation to reduce the error caused by low-precision computation. I will also describe a new training algorithm called High-Accuracy Low-Precision (HALP), which combines variance reduction with bit centering. On strongly convex problems, HALP converges at a linear rate down to a level of accuracy limited only by the precision of the high-precision numbers used, even though all the computations done in its inner loop use low-precision arithmetic.
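
To give a feel for how variance reduction and bit centering fit together, here is a minimal NumPy sketch, illustrated on least-squares regression. It is only an illustration of the idea under stated assumptions, not the authors' implementation: the quantization scheme, hyperparameters, and problem setup below are all assumptions, and low precision is simulated by rounding the inner-loop offset to a fixed-point grid that is re-centered and re-scaled around each high-precision snapshot.

```python
import numpy as np

# Toy least-squares problem: minimize 0.5 * mean((A w - b)^2).
rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d)

def grad_i(w, i):                     # stochastic gradient of 0.5*(a_i^T w - b_i)^2
    return (A[i] @ w - b[i]) * A[i]

def full_grad(w):                     # full gradient, computed in high precision
    return A.T @ (A @ w - b) / n

def quantize(v, scale, bits=8):       # simulate fixed-point rounding to `bits` bits
    step = scale / (2 ** (bits - 1))  # grid spacing for the chosen dynamic range
    return np.clip(np.round(v / step), -(2 ** (bits - 1)), 2 ** (bits - 1) - 1) * step

w_tilde = np.zeros(d)                 # high-precision snapshot (the "center")
lr, epochs, T = 0.01, 30, 2 * n       # illustrative hyperparameters, not tuned
for _ in range(epochs):
    g_tilde = full_grad(w_tilde)      # infrequent high-precision work (bit centering)
    scale = np.linalg.norm(g_tilde) / 0.1 + 1e-12  # shrink the low-precision range as we converge
    z = np.zeros(d)                   # low-precision offset: w = w_tilde + z
    for _ in range(T):
        i = rng.integers(n)
        w = w_tilde + z               # reconstruct the iterate around the center
        v = grad_i(w, i) - grad_i(w_tilde, i) + g_tilde  # SVRG-style variance-reduced gradient
        z = quantize(z - lr * v, scale)                   # low-precision inner-loop update
    w_tilde = w_tilde + z             # fold the offset back into the high-precision snapshot
    print(f"loss = {0.5 * np.mean((A @ w_tilde - b) ** 2):.3e}")
```

Because the snapshot and the re-scaled quantization range both tighten as the iterates approach the optimum, the rounding error shrinks along with the solution error, which is the intuition behind the linear convergence claim above.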