Dr. Philip Wong
Professor of Electrical Engineering
Willard R. and Inez Kerr Bell Professor in the School of Engineering
Stanford University, Stanford, CA
Reaching for the N3XT 1,000× of Computing Energy Efficiency
Brain-inspired computing is making rapid progress toward meeting the demands of abundant-data processing, using a variety of techniques including spiking neural networks, hyperdimensional computing with sparse vectors, deep neural nets, deep belief nets, restricted Boltzmann machines, and their variants. It is therefore crucial to create a scalable and flexible brain-inspired technology platform that supports all the essential elements and can be adapted to a wide variety of neural computational models.
The key element of a scalable, fast, and energy-efficient computation platform that may provide another 1,000× in computing performance (energy × execution-time product) for future computing workloads is massive on-chip memory co-located with highly energy-efficient computation, enabled by monolithic 3D integration with ultra-dense, fine-grained connectivity. There will be multiple layers of analog and digital memories interleaved with computing logic, sensors, and application-specific devices. We call this technology platform N3XT – Nanoengineered Computing Systems Technology. N3XT will support conventional computing architectures as well as computation methods that embrace sparsity, stochasticity, and device variability, including those that are neuromorphic and learning-based.
In this talk, I will give an overview of nanoscale memory and logic technologies for implementing N3XT. In particular, I will focus on the use of nanoscale analog non-volatile memory devices for implementing brain-inspired computing. I will give examples of nanosystems that have been built using these technologies, and provide projections on their eventual performance.
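As background for the analog non-volatile memory approach mentioned above, the following sketch illustrates the core in-memory compute primitive commonly associated with it: a resistive crossbar performing a matrix-vector multiply in a single analog step, with the result degraded gracefully by device variability. All device parameters and noise levels here are illustrative assumptions, not measured values from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def program_conductances(weights, g_min=1e-6, g_max=1e-4, sigma=0.0):
    """Map a weight matrix in [0, 1] to device conductances (siemens),
    with optional multiplicative programming noise modeling variability.
    g_min/g_max and sigma are assumed, illustrative values."""
    g = g_min + weights * (g_max - g_min)
    if sigma > 0:
        g = g * rng.normal(1.0, sigma, size=g.shape)
    return g

def crossbar_mvm(g, v):
    """Analog matrix-vector multiply: input voltages v drive the rows,
    and each column current sums G^T v by Kirchhoff's current law."""
    return g.T @ v  # column currents, in amperes

weights = rng.random((4, 3))          # 4 inputs x 3 outputs, in [0, 1]
v = np.array([0.2, 0.0, 0.5, 0.3])    # input voltages (volts)

ideal = crossbar_mvm(program_conductances(weights), v)
noisy = crossbar_mvm(program_conductances(weights, sigma=0.05), v)

# Variability perturbs the result but preserves the computation;
# architectures that embrace stochasticity tolerate this error.
rel_err = np.linalg.norm(noisy - ideal) / np.linalg.norm(ideal)
```

The energy advantage comes from the multiply-accumulate happening in the memory array itself, so weights never move across a memory bus; the residual error above is the price paid, which learning-based methods can absorb.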