Enabling Training of Neural Networks on Noisy Hardware


Abstract: "Deep neural networks (DNNs) are typically trained using the conventional stochastic gradient descent (SGD) algorithm. However, SGD performs poorly when applied to train networks on non-ideal analog hardware composed of resistive device arrays with non-symmetric conductance modulation characteristics. Recently, we proposed a new algorithm, the Tiki-Taka algorithm, that overcomes t... » read more
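For reference, the conventional SGD baseline that the abstract contrasts with Tiki-Taka is just the repeated update w ← w − lr·∇f(w). The sketch below shows that update on a toy quadratic loss; the loss function, learning rate, and step count are illustrative choices, not values from the paper, and the Tiki-Taka modification itself is not reproduced here.

```python
# Minimal sketch of conventional SGD, the baseline the abstract refers to.
# Toy loss f(w) = (w - 3)^2, whose minimum is at w = 3.

def grad(w):
    # Gradient of f(w) = (w - 3)^2.
    return 2.0 * (w - 3.0)

def sgd(w, lr=0.1, steps=100):
    # Repeated update rule: w <- w - lr * grad(w).
    for _ in range(steps):
        w = w - lr * grad(w)
    return w

w_final = sgd(w=0.0)  # converges toward the minimum at w = 3
```

On ideal hardware this update converges cleanly; the abstract's point is that when each update is realized as an asymmetric conductance change on a resistive device, the accumulated error degrades training, which is what the proposed algorithm addresses.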

Easier And Faster Ways To Train AI


Training an AI model takes an extraordinary amount of effort and data. Leveraging existing training can save time and money, accelerating the release of new products that use the model. But there are a few ways to do this, most notably transfer learning and incremental learning, and each has its own applications and tradeoffs. Transfer learning and incremental learning both take pre... » read more
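The core idea behind transfer learning described above can be sketched in a few lines: keep a pretrained feature extractor frozen and train only a small new head on the target task. The feature function, toy data, and training loop below are illustrative stand-ins, not any particular framework's API.

```python
# Hedged sketch of transfer learning: reuse a fixed, "pretrained" feature
# extractor and fit only a new linear head on the target task.

def features(x):
    # Stand-in for a frozen pretrained network's learned representation.
    return [x, x * x]

def train_head(data, lr=0.01, steps=2000):
    # Fit y ~ w0*f0 + w1*f1 + b with plain gradient descent on squared
    # error; only these head parameters are updated, never the features.
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(steps):
        for x, y in data:
            f = features(x)
            err = (w[0] * f[0] + w[1] * f[1] + b) - y
            w[0] -= lr * err * f[0]
            w[1] -= lr * err * f[1]
            b -= lr * err
    return w, b

# Toy target task: y = x^2 + 1, learnable from the frozen features alone.
data = [(x * 0.5, (x * 0.5) ** 2 + 1.0) for x in range(-4, 5)]
w, b = train_head(data)
```

Because only the small head is trained, far less data and compute are needed than for training the full model from scratch, which is the tradeoff the article highlights.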

AI Training Chips


Kurt Shuler, vice president of marketing at Arteris IP, talks with Semiconductor Engineering about how to architect an AI training chip, how different processing elements are used to accelerate training algorithms, and how to achieve improved performance. https://youtu.be/4cnBCX-9jlk See other tech talk videos here. » read more