Samsung has just announced a higher-capacity memory that should make AI training and inference faster than before. The new memory is called HBM3E 12H DRAM and it uses the company's advanced TC NCF technology. For those unaware, the ‘HBM’ in the name stands for “high bandwidth memory” and that is exactly what it is.

The “12H” in the name is simply the number of chips that have been stacked on top of each other in this memory: in this case, 12 vertically stacked chips. This allows Samsung to fit more memory into a small package. The company has achieved 36GB with the 12H stack, 50% more than the 24GB of an 8H stack. The bandwidth, however, remains unchanged at 1.2 terabytes per second.
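The capacity math above is easy to sanity-check. A minimal sketch, assuming each DRAM die in the stack has equal capacity (derived from the 36GB/12-die figure quoted in the announcement):

```python
# Back-of-the-envelope check of the capacity figures quoted above.
# Per-die capacity is derived from Samsung's 36GB / 12-die numbers,
# not independently confirmed.
GB_PER_STACK_12H = 36
DIES_12H = 12
DIES_8H = 8

gb_per_die = GB_PER_STACK_12H / DIES_12H        # 3GB per die
gb_8h = gb_per_die * DIES_8H                    # 24GB for an 8H stack
capacity_gain = (GB_PER_STACK_12H - gb_8h) / gb_8h

print(f"8H stack: {gb_8h:.0f}GB, 12H gain: {capacity_gain:.0%}")
# → 8H stack: 24GB, 12H gain: 50%
```

The 50% figure Samsung quotes falls straight out of going from 8 to 12 dies of the same capacity.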

As for the TC NCF technology, the acronym stands for Thermal Compression Non-Conductive Film, the material sandwiched between the stacked chips. Samsung has slimmed this film down to a mere 7µm, which lets the 12H stack match the height of a conventional 8H stack and therefore use the same HBM packaging.

TC NCF also brings better cooling thanks to thermal upgrades, which should keep the chip from throttling under stress. An additional perk of the new HBM3E 12H DRAM is a notable improvement in production yields.

Samsung projects that the added capacity of the 12H design will speed up AI training by 34% and allow inference services to handle “over 11.5 times” as many users as before.

This type of memory will largely be used to train AI models, which require a substantial amount of RAM. Samsung is a notable supplier for Nvidia when it comes to high-bandwidth memory and the company is responsible for making some mind-boggling designs.

Nvidia’s newly announced H200 Tensor Core GPU, used in AI data centers, comes with 141GB of HBM3E that runs at a total of 4.8 terabytes per second. To put this into perspective, a typical consumer-grade gaming GPU ships with around 16GB of memory. The top-of-the-line RTX 4090 from Nvidia comes with 24GB of GDDR6X that runs at just 1 terabyte per second.
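Those headline numbers can be put side by side with some quick arithmetic. A rough sketch using the figures mentioned above (the RTX 4090's ~1 TB/s is an approximation):

```python
# Rough comparison of the two memory subsystems mentioned above,
# using the headline figures from the article.
h200_capacity_gb, h200_bw_tbps = 141, 4.8       # Nvidia H200, HBM3E
rtx4090_capacity_gb, rtx4090_bw_tbps = 24, 1.0  # RTX 4090, GDDR6X (approx.)

capacity_ratio = h200_capacity_gb / rtx4090_capacity_gb
bandwidth_ratio = h200_bw_tbps / rtx4090_bw_tbps

print(f"Capacity ratio: {capacity_ratio:.1f}x")   # H200 vs RTX 4090
print(f"Bandwidth ratio: {bandwidth_ratio:.1f}x")
```

Even against Nvidia's flagship consumer card, the data-center part carries roughly six times the capacity and almost five times the bandwidth.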

Nvidia has recently hit a valuation of $2 trillion, putting it in the ranks of Apple and Microsoft.