As storage demands explode and per-device capacity continues to grow, the manufacturing test process must become more efficient and accurate; this growth drives significant capital expenditure in areas such as factory floor space and test equipment. To address this challenge, a form of Narrow AI is gaining traction in HDD manufacturing: an ML model that manages specific tasks in the test process. These problems have typically been approached with high-end edge or cloud data-center resources that communicate with the testing environment.
Recently, we have extended this Narrow AI approach by using the HDD's own compute resources. We embed low-footprint NN inference capabilities in legacy products that lack C++11 support, with no additional hardware assist and without offloading the work to testers, hosts, or network resources.
By utilizing a high volume of resource-constrained processors, we demonstrated how an ML-native operation can robustly augment or replace a legacy operation in a mission-critical environment, realizing a tangible increase in backend test throughput on legacy platforms. In this talk we will discuss our case study, the lessons learned, and the best practices we developed for this class of application.