this post was submitted on 21 Mar 2024

nvidia

This is an unofficial forum for nvidia products and technologies.


#What is Nvidia Blackwell?

Named after David Blackwell, the first Black scholar inducted into the National Academy of Sciences, Blackwell (GB200, which stands for Grace-Blackwell 200) follows the success of its predecessor, Hopper (GH200, which stands for Grace-Hopper 200), which boosted Nvidia’s sales.

For anybody who is not familiar with the naming conventions of Nvidia’s chip world: G, or Grace, is the CPU used in Nvidia’s AI computing modules called DGX, and B, or Blackwell, is the GPU, an upgrade from the previous H, or Hopper.

Nvidia Blackwell and the whole DGX GB200 SuperPOD AI computer

Apart from the benchmark scores that Nvidia proudly wears on its chest, the most astonishing part is that Nvidia has taken two dies and merged them into a single chip without any loss of computational capability, giving birth to GB200, which is insane from an engineering perspective.

AMD’s MI300, one of Blackwell’s direct competitors, will surely be shivering at these numbers.

The new Blackwell chip is 4–5 times faster than its predecessor, Hopper. But when integrated into its DGX module environment, it becomes a beast and delivers almost 30x the inference performance (yes, not 30%, but 30 times), all while consuming 25x less power (again, not 25%, but 25 times). This allows LLMs with over 27 trillion parameters to be trained; for comparison, the famous GPT-4 is estimated to have around 1.7 trillion parameters. Blackwell will be made in partnership with Nvidia’s best friend, TSMC, on a custom 4 nm-class (4NP) fabrication process. It has 208 billion transistors across two dies, linked by a chip-to-chip interconnect running at 10 TB/s. Amazing, right?
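Taking the quoted marketing figures at face value, the performance and power claims can be folded into a single perf-per-watt number. A minimal back-of-the-envelope sketch (normalized units, not measured data):

```python
# Back-of-the-envelope combination of the figures quoted above
# (roughly 30x inference throughput at 25x lower power).
# These are Nvidia's marketing numbers, not independent measurements.

hopper_perf = 1.0           # normalized inference throughput of a Hopper system
hopper_power = 1.0          # normalized power draw of a Hopper system

blackwell_perf = 30 * hopper_perf     # "almost 30x the inference performance"
blackwell_power = hopper_power / 25   # "25x less power"

gain = (blackwell_perf / blackwell_power) / (hopper_perf / hopper_power)
print(f"Implied perf-per-watt improvement: {gain:.0f}x")  # -> 750x
```

By those numbers, the implied perf-per-watt gain works out to about 750x.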

#How is Blackwell (GB200) better than the Hopper GPU (GH100)?

The previous generation of AI-optimized GPUs was called Hopper. Blackwell is between 2 and 30 times faster, depending on how you measure it. Huang explained that it took 8,000 GPUs, 15 megawatts, and 90 days to train the GPT-MoE-1.8T model. With the new system, you could use just 2,000 GPUs and 25% of the power.
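Taking Huang’s figures at face value, and assuming the new system runs for the same 90 days (the quote only gives the duration for the Hopper run), the total energy of each training run works out as a quick sketch:

```python
# Energy comparison for Huang's GPT-MoE-1.8T training example,
# assuming the same 90-day run for both systems (the quote only
# states the duration for the Hopper run).

hours = 90 * 24                       # 90-day training run

hopper_gpus, hopper_mw = 8_000, 15.0  # figures quoted by Huang
blackwell_gpus = 2_000
blackwell_mw = hopper_mw * 0.25       # "25% of the power"

hopper_energy_mwh = hopper_mw * hours        # 32,400 MWh
blackwell_energy_mwh = blackwell_mw * hours  # 8,100 MWh

print(f"Hopper:    {hopper_gpus:,} GPUs, {hopper_energy_mwh:,.0f} MWh")
print(f"Blackwell: {blackwell_gpus:,} GPUs, {blackwell_energy_mwh:,.0f} MWh")
```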

Blackwell is genuinely a worthy successor to the Hopper GPU because it builds on GH100’s strengths and fills in its gaps and limitations, such as communication bottlenecks across a larger number of racks, and it has no memory locality or cache storage issues.

This essentially means there are far fewer bottlenecks when sharing data between servers, dramatically reducing the idle time for each GPU on each server. Efficiency is thus boosted, since more work gets done in the same number of clock cycles.

GB200 offers improved connectivity and data processing for AI tasks and is part of Nvidia’s “super chip” lineup, complementing its central processing unit, Grace.

Nvidia will definitely help AI reach new heights. Click the link to visit and read the full blog.

[–] [email protected] 2 points 7 months ago (1 children)

What the fuck is 25x less power supposed to mean?

[–] [email protected] 1 points 7 months ago

It means that if the previous chip, Hopper, used 25 watts to perform a task, Blackwell can do the same task in the same time using just 1 watt.