
Design and fabrication of a power-efficient AI-Processor for edge computing

Artificial Intelligence (AI) is an emerging technology that is steadily making its way into day-to-day life. Today, AI workloads run mostly in the cloud, where thousands of specialized computing circuits are gathered in localized enclosures to execute AI-related tasks as fast as possible, connected over the internet. In the future, AI will be present in every possible tiny electronic device, posing a significant challenge: reducing the carbon footprint of the energy consumed by this enormous volume of AI computation, whether in the cloud, on internet-of-things (IoT) devices, or on edge computing devices.

For a ballpark estimate of the power involved, every Google search we perform consumes roughly the energy of a 60 W light bulb left on for 17 seconds [1]-[4]. Some estimates put a single Google search at about seven grams of CO2 emitted [1]-[4]. Now multiply that by the number of Google searches each of us performs per day, by the total population using the internet, and by the number of days in a year. That is just basic search; imagine the other operations we take for granted over the internet. From such figures we can estimate the power required to support this growing human need and, more relevantly, the carbon footprint generated. AI will be another significant component added to this massive power-hungry mechanism.

Aarish has developed a hardware computing solution that is fundamentally different in approach from the prevailing parallel-processing paradigm, which typically relies on load-store instruction set architectures. This enables us to provide the highest compute-per-watt density for machine learning applications on the planet. Aarish's approach to optimizing AI computation is two-pronged, optimizing both the software algorithm and the hardware computation. Every tiny electronic device implementing AI can then avoid communicating with the cloud and compute locally in a highly power-efficient manner. This efficiency will reduce internet traffic and the demand on cloud compute servers, yielding the massive carbon footprint reduction that AI alone can contribute.

As a market entry strategy, we will target two specific applications for our AI-Processor solution. The AI-Processor could show strong results in many applications, but in the initial phase we will focus on the following.

The first application is the GPUs used in data centers. These GPUs perform many AI inference workloads that are too large to be deployed on edge devices. GPUs were engineered as general-purpose heavy computation engines; they consume a great deal of power and are expensive. With our AI-Processor offering a 1000x power-to-performance advantage over GPUs, along with cost benefits, we intend to shift GPU workloads onto our AI-Processors. This will reduce both the carbon footprint and the operating cost of existing data centers. Our AI-Processors will be made available on PCIe cards, with multiple AI-Processor ICs per card, for data centers to use.

The second application targets edge devices, allowing AI inference tasks to be offloaded from cloud servers onto our AI-Processor. At present, many AI algorithms intended for IoT and edge devices are computed in the cloud. Such systems are expensive to operate, require robust internet connectivity at all times, are prone to privacy issues because they are exposed to hacker attacks, and, most importantly, have a very large carbon footprint that is not easily visible to society at large.
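The per-search figures cited above (60 W for 17 seconds, seven grams of CO2) can be scaled up with simple arithmetic. The sketch below reproduces that back-of-envelope calculation; the searches-per-day and internet-user counts are illustrative assumptions, not figures from the text.

```python
# Back-of-envelope sketch of the per-search energy and CO2 estimate.
# The 60 W, 17 s, and 7 g CO2 figures come from the text above;
# the scale-up inputs below are illustrative assumptions only.

POWER_W = 60          # equivalent light-bulb power (from the text)
DURATION_S = 17       # seconds the bulb stays on (from the text)
CO2_PER_SEARCH_G = 7  # grams of CO2 per search (from the text)

energy_per_search_j = POWER_W * DURATION_S           # joules per search
energy_per_search_wh = energy_per_search_j / 3600    # convert J -> Wh

# Illustrative scale-up (assumed values, not measurements):
SEARCHES_PER_USER_PER_DAY = 3        # assumption
INTERNET_USERS = 4_000_000_000       # assumption
DAYS_PER_YEAR = 365

searches_per_year = SEARCHES_PER_USER_PER_DAY * INTERNET_USERS * DAYS_PER_YEAR
annual_co2_tonnes = searches_per_year * CO2_PER_SEARCH_G / 1_000_000  # g -> tonnes
annual_energy_gwh = searches_per_year * energy_per_search_wh / 1e9    # Wh -> GWh

print(f"{energy_per_search_wh:.3f} Wh per search")
print(f"{annual_co2_tonnes:,.0f} tonnes CO2/year (illustrative)")
print(f"{annual_energy_gwh:,.0f} GWh/year (illustrative)")
```

Even with these modest assumed inputs, search alone lands in the tens of millions of tonnes of CO2 per year, which is the scale of the problem the abstract describes.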
We show that our power-savings estimates can yield net savings of over 83,000 tonnes of CO2-equivalent emissions and over $133M in total value over a period of 10 years.

Zeljko Zilic

Professor
McGill University

CRIBIQ's contribution

$135,381


Partners

Industrial participants:

Aarish Technologies

QPRI*
*Quebec public research institutes:

McGill University