Welcome everyone to the 2020 ValleyML AI Hardware Conference series, focused on Accelerator Hardware Solutions for AI
As the volume of cloud data explodes, the ability to analyze, categorize, translate, and identify patterns in that information becomes exponentially more challenging. The traditional server CPU has gone through decades of design optimization for fast decision making, but it is poorly suited to AI/ML tasks.
CPU “accelerator” processors have been around for some time. The graphics processor (GPU) pixel shader engine became the first device to complement the processing capabilities of a CPU by performing repetitive, high-throughput data processing tasks. This adaptation of the GPU for data processing has since expanded to include Field Programmable Gate Arrays (FPGAs) as CPU data accelerators.
Until recently, data accelerators have been re-purposed processors. We now see the emergence of “purpose-built,” data-center-appropriate hardware accelerators starting to be deployed for machine learning and inference applications.
The Accelerator conference series includes keynotes from Dell, Cerebras, and Groq. The conference also includes presentations from ODSA and Rain Neuromorphics on innovative topics you may not be aware of.
Join us to find out more, ask the tough questions, show creative solutions, and help inspire innovation in Accelerator AI Hardware.
Bill Orner is the Director of Systems Engineering for Esperanto Technologies, a semiconductor startup. Bill has 30+ years’ experience in the electronics industry in silicon and product design projects. He has worked for GoPro, Philips, Transmeta, MIPS, and Lexicon, as well as at several startup companies.
Bill has a BSEET from Northeastern University and an MSTM from Pepperdine University. He currently serves on the Board of Governors for the IEEE CT Society and has participated in standards work for JEDEC, CTA, USB, and VESA.