
This AI Supercomputer Has 13.5 Million Cores—and Was Built in Just Three Days

by WeeklyAINews

Artificial intelligence is on a tear. Machines can converse, write, play games, and generate original images, video, and music. But as AI's capabilities have grown, so too have its algorithms.

A decade ago, machine learning algorithms relied on tens of millions of internal connections, or parameters. Today's algorithms regularly reach into the hundreds of billions and even trillions of parameters. Researchers say scaling up still yields performance gains, and models with tens of trillions of parameters may arrive in short order.

To train models that big, you need powerful computers. While AI in the early 2010s ran on a handful of graphics processing units (GPUs), computer chips that excel at the parallel processing central to AI, computing needs have grown exponentially, and top models now require hundreds or thousands of them. OpenAI, Microsoft, Meta, and others are building dedicated supercomputers to handle the task, and they say these AI machines rank among the fastest in the world.
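To get a feel for why the chip counts climb so quickly, consider memory alone. The sketch below is a back-of-envelope estimate, not a measurement: it assumes roughly 16 bytes per parameter for mixed-precision training state (a commonly cited rule of thumb) and an 80 GB accelerator, and it ignores activations and parallelism overheads entirely.

```python
# Back-of-envelope: devices needed just to hold a model's training state.
# Assumption: ~16 bytes per parameter (fp16 weights and gradients plus
# fp32 Adam optimizer state), a rough rule of thumb for mixed-precision
# training. Activations and communication overheads are ignored.

BYTES_PER_PARAM = 16       # rough assumption, see note above
DEVICE_MEMORY_GB = 80      # e.g., an 80 GB accelerator

def devices_needed(num_params: float) -> float:
    total_gb = num_params * BYTES_PER_PARAM / 1e9
    return total_gb / DEVICE_MEMORY_GB

for n in (175e9, 1e12, 10e12):   # GPT-3-sized, 1 trillion, 10 trillion params
    print(f"{n/1e9:>6.0f}B parameters -> ~{devices_needed(n):,.0f} devices (memory only)")
```

Even under these generous assumptions, a trillion-parameter model needs hundreds of devices before a single training step runs, which is why dedicated AI supercomputers have become the norm.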

But even as GPUs have been central to AI scaling (Nvidia's A100, for instance, is still one of the fastest, most commonly used chips in AI clusters), stranger alternatives designed specifically for AI have popped up in recent years.

Cerebras offers one such alternative.

Making a Meal of AI

The size of a dinner plate, about 8.5 inches to a side, the company's Wafer Scale Engine is the largest silicon chip in the world, boasting 2.6 trillion transistors and 850,000 cores etched onto a single silicon wafer. Each Wafer Scale Engine serves as the heart of the company's CS-2 computer.

Alone, the CS-2 is a beast, but last year Cerebras unveiled a plan to link CS-2s together with an external memory system called MemoryX and an interconnect called SwarmX. The company said the new technology could link up to 192 chips and train models two orders of magnitude larger than today's biggest, most advanced AIs.


“The industry is moving past 1-trillion-parameter models, and we are extending that boundary by two orders of magnitude, enabling brain-scale neural networks with 120 trillion parameters,” Cerebras CEO and cofounder Andrew Feldman said.

At the time, all this was theoretical. But last week, the company announced it had linked 16 CS-2s together into a world-class AI supercomputer.

Meet Andromeda

The new machine, called Andromeda, has 13.5 million cores capable of speeds over an exaflop (one quintillion operations per second) at 16-bit half precision. Because of the unusual chip at its heart, Andromeda isn't easily compared to supercomputers running on more traditional CPUs and GPUs, but Feldman told HPC Wire Andromeda is roughly equivalent to Argonne National Laboratory's Polaris supercomputer, which ranks 17th fastest in the world, according to the latest Top500 list.
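The headline core count follows almost directly from the cluster arithmetic. A quick sanity check, using only the figures quoted above (16 CS-2s, 850,000 cores each):

```python
# Sanity check on the headline numbers, using figures quoted in the article.
cs2_systems = 16
cores_per_cs2 = 850_000

total_cores = cs2_systems * cores_per_cs2
print(f"{total_cores:,} cores")    # 13,600,000 -- in line with the ~13.5 million advertised

# An exaflop is 10**18 operations per second ("one quintillion").
exaflop = 10**18
print(f"{exaflop:.0e} ops/sec")
```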

In addition to performance, Andromeda's rapid build time, cost, and footprint are notable. Argonne began installing Polaris in the summer of 2021, and the supercomputer went live about a year later. It takes up 40 racks, the filing-cabinet-like enclosures housing supercomputer components. By comparison, Andromeda cost $35 million (a modest price for a machine of its power), took just three days to assemble, and uses a mere 16 racks.

Cerebras tested the system by training five versions of OpenAI's large language model GPT-3 as well as EleutherAI's open source GPT-J and GPT-NeoX. And according to Cerebras, perhaps the most important finding is that Andromeda demonstrated what the company calls "near-perfect linear scaling" of AI workloads for large language models. In short, that means that as more CS-2s are added, training times decrease proportionately.
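To make the claim concrete, scaling efficiency is typically measured by comparing the actual speedup against the ideal N-fold speedup from N systems. Here is a minimal sketch of that calculation; the training times are invented purely for illustration and are not Cerebras's numbers.

```python
# Scaling efficiency: how close measured speedup comes to the ideal case
# (N systems -> N-fold speedup). Timings below are made up for illustration.

def scaling_efficiency(t_single: float, t_cluster: float, n_systems: int) -> float:
    """Return efficiency in [0, 1], where 1.0 is perfect linear scaling."""
    speedup = t_single / t_cluster
    return speedup / n_systems

# Hypothetical training times (hours) on 1, 2, 4, 8, and 16 systems.
runs = {1: 160.0, 2: 80.5, 4: 40.6, 8: 20.4, 16: 10.3}
t1 = runs[1]
for n, t in runs.items():
    print(f"{n:>2} systems: {t:6.1f} h, efficiency {scaling_efficiency(t1, t, n):.1%}")
```

Efficiencies that stay close to 100 percent as systems are added are what "near-perfect linear scaling" refers to; in most clusters, communication overhead drags that number down well before 16 systems.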

Usually, the company said, performance gains diminish as you add more chips. Cerebras's WSE chip, however, may prove to scale more efficiently because its 850,000 cores are connected to one another on the same piece of silicon. What's more, each core has a memory module right next door. Taken together, the chip slashes the amount of time spent shuttling data between cores and memory.


"Linear scaling means when you go from one to two systems, it takes half as long for your work to be done. That is a very unusual property in computing," Feldman told HPC Wire. And, he said, it can scale beyond 16 connected systems.

Beyond Cerebras's own testing, the linear scaling results were also demonstrated during work at Argonne National Laboratory, where researchers used Andromeda to train the GPT-3-XL large language algorithm on long sequences of the Covid-19 genome.

Of course, though the system may scale beyond 16 CS-2s, the degree to which linear scaling persists remains to be seen. Also, we don't yet know how Cerebras performs head-to-head against other AI chips. AI chipmakers like Nvidia and Intel have begun participating in regular third-party benchmarking by the likes of MLPerf. Cerebras has yet to take part.

Room to Spare

Still, the approach does appear to be carving out its own niche in the world of supercomputing, and continued scaling in large language AI is a prime use case. Indeed, Feldman told Wired last year that the company was already talking to engineers at OpenAI, a leader in large language models. (OpenAI cofounder Sam Altman is also an investor in Cerebras.)

On its launch in 2020, OpenAI's large language model GPT-3 changed the game in terms of both performance and size. Weighing in at 175 billion parameters, it was the largest AI model at the time and surprised researchers with its abilities. Since then, language models have reached into the trillions of parameters, and larger models may be forthcoming. There are rumors (just that, so far) that OpenAI will release GPT-4 in the not-too-distant future, and that it will be another leap beyond GPT-3. (We'll have to wait and see on that count.)


That said, despite their capabilities, large language models are neither perfect nor universally adored. Their flaws include output that can be false, biased, and offensive. Meta's Galactica, trained on scientific texts, is a recent example. Despite a dataset one might assume is less prone to toxicity than training on the open internet, the model was easily provoked into generating harmful and inaccurate text and was pulled down in just three days. Whether researchers can solve language AI's shortcomings remains uncertain.

But it seems likely that scaling up will continue until diminishing returns kick in. The next leap could be just around the corner, and we may already have the hardware to make it happen.

Image Credit: Cerebras

