" "

Qualcomm kicked off its annual AI Day conference in San Francisco with a bang. It took the wraps off three new system-on-chips bound for smartphones, tablets, and other mobile devices, and, as if that weren’t enough, it announced a product tailored for edge computing: the Qualcomm Cloud AI 100.

“It’s a whole new signal processor that we designed specifically for AI inference processing,” said senior vice president of product management Keith Kressin, adding that sampling will begin in the second half of this year ahead of production next year. “[We’re not] just reusing a mobile chip in the datacenter.”

The Cloud AI 100 — which will be available in a variety of modules, form factors, and power levels from original equipment manufacturers — integrates a full range of developer tools, including compilers, debuggers, profilers, monitors, servicing, chip debuggers, and quantizers. Moreover, it supports runtimes including ONNX, Glow, and XLA, as well as machine learning frameworks like Google’s TensorFlow, Facebook’s PyTorch, Keras, MXNet, Baidu’s PaddlePaddle, and Microsoft’s Cognitive Toolkit.
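Qualcomm hasn’t made its toolchain public, so as a rough sketch of what the advertised ONNX support implies in practice, here is the standard hand-off on the framework side: a trained PyTorch model is exported to ONNX before a vendor compiler and quantizer (like the ones Qualcomm describes) take over. The resnet18 stand-in and the model.onnx filename are illustrative, not anything Qualcomm has shown.

```python
# Minimal sketch of the ONNX hand-off implied by the Cloud AI 100's
# advertised runtime support. Model choice and filename are hypothetical.
import torch
import torchvision.models as models

# A pretrained network as a stand-in for a production model.
model = models.resnet18(pretrained=True)
model.eval()

# ONNX export traces the model with a dummy input of the expected shape.
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=11)
# The resulting model.onnx would then go through the accelerator vendor's
# own compiler/quantizer -- the tools Qualcomm says ship with the chip.
```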

Qualcomm estimates peak performance at 3 to 50 times that of the Snapdragon 855 and Snapdragon 820, and it claims that, compared with traditional field-programmable gate arrays (FPGAs) — integrated circuits designed to be configured after manufacturing — the chip is about 10 times faster on average at inferencing tasks. Moreover, measured in tera operations per second (TOPS) — a common performance metric for high-performance chips — the Cloud AI 100 can hit “far greater” than 100 TOPS. (For comparison’s sake, the Snapdragon 855 maxes out at around 7 TOPS.)
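Taking the quoted figures at face value, a quick back-of-the-envelope check using only the numbers above (the 100 TOPS floor and the Snapdragon 855’s roughly 7 TOPS) lands comfortably inside the 3x-to-50x range Qualcomm cites:

```python
# Back-of-the-envelope check using only the figures quoted above; the
# 100 TOPS number is a stated floor ("far greater"), so this is a lower bound.
cloud_ai_100_tops = 100   # Cloud AI 100, stated lower bound
snapdragon_855_tops = 7   # Snapdragon 855, approximate peak

speedup = cloud_ai_100_tops / snapdragon_855_tops
print(f"lower-bound speedup: {speedup:.1f}x")  # ~14.3x, inside the quoted 3-50x range
```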

" "

“FPGAs or GPUs [can often do] AI inference processing more efficiently … [because] a GPU is a much more parallel machine, [while] the CPU is a more serial machine, [and] parallel machines are better for AI processing,” Kressin explained. “But still, a GPU is more so designed for graphics, and you can get a significant improvement if you design a chip from the ground up for AI acceleration. There’s about an order of magnitude improvement from a CPU to an FPGA or GPU. There’s another order of magnitude improvement opportunity for a custom-built AI accelerator.”
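Kressin’s ladder (CPU to GPU/FPGA to dedicated accelerator, roughly an order of magnitude per step) comes down to how much of the arithmetic runs in parallel. As a loose illustration only, not a measurement of any of these chips, the same gap shows up even on an ordinary CPU when identical math is run one element at a time versus through a vectorized kernel; the sketch below assumes NumPy is available:

```python
import time
import numpy as np

# A loose, CPU-only analogy for the serial-versus-parallel point above:
# the same dot product computed one element at a time versus handed to a
# vectorized kernel. Timings vary by machine; the gap is what matters.
a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

start = time.perf_counter()
serial = sum(x * y for x, y in zip(a, b))  # one multiply-add per step
t_serial = time.perf_counter() - start

start = time.perf_counter()
vectorized = a @ b  # batched in optimized native code
t_vectorized = time.perf_counter() - start

print(f"serial: {t_serial:.3f}s  vectorized: {t_vectorized:.5f}s")
```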

" "

Qualcomm’s foray into cloud inferencing comes after a chief rival, Huawei, unveiled what it said was the industry’s highest-performance Arm-based processor, dubbed Kunpeng 920. In SPECint — a benchmark suite of 12 programs designed to test integer performance — the chip scored over 930, or almost 25 percent higher than the industry benchmark, while drawing 30 percent less power than “that offered by industry incumbents.”

It’s hardly the only one.

In January at the Consumer Electronics Show in Las Vegas, Intel detailed its forthcoming Nervana Neural Network Processor (NNP-I), which will reportedly deliver up to 10 times the AI training performance of competing graphics cards. Google last year debuted the Edge TPU, a purpose-built ASIC for inferencing, and Alibaba announced in December that it aimed to launch its first self-developed AI inference chip in the second half of this year.

On the FPGA side of the equation, Amazon recently took the wraps off its own AI cloud accelerator chip — AWS Inferentia — and Microsoft previewed a comparable platform in Project Brainwave. Facebook in March open-sourced Kings Canyon, a server chip for AI inference, and just this month Intel announced a family of chipsets — Agilex — optimized for AI and big data workloads.

But Qualcomm is confident that the Cloud AI 100’s performance advantage will give it a leg up in a deep learning chipset market forecast to reach $66.3 billion by 2025.

“So many are putting network hardware on the cloud edge, like a content delivery network, for different types of processing, whether it’s cloud gaming or AI processing. So this is really another key trend. And Qualcomm has the opportunity to participate all the way from the end user input technology all the way to the cloud edge,” Kressin said.

Its other potential advantage? Ecosystem support. In November, Qualcomm pledged $100 million toward a startup fund focused on edge and on-device AI, specifically in the autonomous vehicles, robotics, computer vision, and internet of things domains. And last May, it partnered with Microsoft to create a vision AI developer kit for the AI accelerators embedded in many of its system-on-chips.

“In terms of market size, inferencing [is] becoming a significant-sized market for silicon,” Kressin said. “[As] time progresses, [we expect that] inference [will become a] bigger part of it — from 2018 to 2025, about 10 times growth. We’re quite confident we’ll be positioned to be the power performance leader for AI processing in the datacenter.”
