Intel FPGAs Power AI in Microsoft Azure
Microsoft Corporation has introduced a preview of Azure Machine Learning Hardware Accelerated Models, powered by Project Brainwave and integrated with the Microsoft Azure Machine Learning SDK. Customers gain access to industry-leading artificial intelligence (AI) inferencing performance for their models through Azure's large-scale deployments of Intel field programmable gate array (FPGA) technology.
This will enable data scientists and developers to easily use deep neural networks for a variety of real-time workloads, including those in manufacturing, retail and healthcare, across the world's largest accelerated cloud. They can train a model and then deploy it on Project Brainwave, which leverages Intel FPGAs, either in the cloud or at the edge.
Project Brainwave unlocks the future of AI by harnessing the programmability of Intel FPGAs to deliver real-time AI. The FPGA-powered architecture is economical and power-efficient, with throughput high enough to run ResNet 50, an industry-standard deep neural network requiring almost 8 billion calculations per image, without batching. AI customers do not need to choose between high performance and low cost.
Using the Azure Machine Learning SDK for Python, customers will be able to specialize ResNet 50-based models for their own image-recognition tasks by retraining them on their own data. The compute intensity of real-time AI workloads demands a dedicated hardware accelerator, and Intel FPGAs allow Azure to configure the hardware exactly for the task to deliver peak performance.
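The retraining step described above follows the familiar transfer-learning pattern: treat ResNet 50 as a featurizer and train a small classifier head on the customer's own images. The minimal sketch below illustrates that pattern with TensorFlow/Keras rather than the preview SDK itself; the data directory, image size, class count and training settings are assumptions for illustration.

```python
# Minimal transfer-learning sketch: ResNet 50 as a frozen featurizer plus a
# small trainable classifier head. Illustrative only; the dataset path and
# number of classes are placeholders, not part of the Azure preview.
import tensorflow as tf

NUM_CLASSES = 5          # assumption: five customer-specific categories
IMAGE_SIZE = (224, 224)  # ResNet 50's expected input resolution

# Load customer images from a directory of per-class subfolders (hypothetical path).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMAGE_SIZE, batch_size=32)

# ResNet 50 pretrained on ImageNet, with the classification top removed.
base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                      pooling="avg")
base.trainable = False  # keep the featurizer fixed; only the head is retrained

inputs = tf.keras.Input(shape=IMAGE_SIZE + (3,))
x = tf.keras.applications.resnet50.preprocess_input(inputs)
x = base(x, training=False)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=3)
```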
FPGAs can be further tuned or completely reconfigured to match the specific requirements of an Azure workload. Azure's architecture, built with Intel FPGAs and Intel Xeon processors, lets customers pursue accelerated AI on their own terms, with software and hardware configured for their needs. Customers can access the Project Brainwave public preview.
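For context, the general Azure Machine Learning workflow that the preview plugs into registers a trained model in a workspace and then deploys it as a web service. The sketch below shows only that generic pattern using azureml-core; the FPGA-specific model conversion and Brainwave deployment targets from the preview are not shown, and the file paths, scoring script and service name are assumptions.

```python
# Rough sketch of the generic Azure Machine Learning register-and-deploy flow
# (azureml-core). The Brainwave preview layers FPGA-specific conversion and
# deployment targets on top of this; those steps are omitted here.
# Paths, names, and the scoring script are placeholders.
from azureml.core import Workspace, Environment
from azureml.core.model import Model, InferenceConfig
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()  # assumes a config.json describing the workspace

# Register the retrained model files with the workspace.
model = Model.register(workspace=ws,
                       model_path="outputs/resnet50_retrained",  # placeholder
                       model_name="resnet50-retrained")

# Scoring script and environment that load the model and handle requests.
env = Environment.from_conda_specification(name="infer-env",
                                           file_path="environment.yml")
inference_config = InferenceConfig(entry_script="score.py", environment=env)

# Deploy as a web service (ACI shown for simplicity; the Brainwave preview
# targets FPGA-backed infrastructure instead).
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=2)
service = Model.deploy(ws, "resnet50-service", [model],
                       inference_config, deployment_config)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)
```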