The tech world heard one of its biggest announcements, courtesy of Cerebras Systems. The Los Altos-based chip maker announced that it has created the world's most powerful processor. The new Cerebras processor is faster, more powerful and larger than anything from its competitors, including Nvidia. The company did not stop there: known for its Artificial Intelligence hardware, it also announced a specialized server built around the chip, aimed squarely at enabling, improving and building AI capabilities and applications.
Cerebras calls the new chip the ‘Wafer Scale Engine’ (WSE). To give a quick sense of its processing prowess, the company claims the WSE is nearly fifty-seven times larger than Nvidia’s biggest offering; no graphics processing unit (GPU) has ever approached its scale. Built to power demanding AI applications, the chip also carries roughly 3,000 times more on-chip memory than a leading GPU and is expected to churn through terabytes of data with ease. Its main applications will be in cloud servers and large-scale SaaS businesses.
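As a rough sanity check on the size claim, the figures circulated at the time of the announcement put the WSE at about 46,225 mm² and 1.2 trillion transistors, against roughly 815 mm² and 21.1 billion transistors for the largest contemporary GPU die (Nvidia’s V100). The snippet below simply recomputes the ratio from those reported figures; it is a back-of-envelope check, not an independent verification.

```python
# Back-of-envelope comparison of the WSE against the largest GPU die of the
# time, using figures reported around the August 2019 announcement.
wse_area_mm2 = 46_225          # reported WSE die area
wse_transistors = 1.2e12       # reported WSE transistor count
gpu_area_mm2 = 815             # Nvidia V100 die area (largest GPU die then)
gpu_transistors = 21.1e9       # V100 transistor count

print(f"Area ratio:       {wse_area_mm2 / gpu_area_mm2:.1f}x")        # ~56.7x
print(f"Transistor ratio: {wse_transistors / gpu_transistors:.1f}x")  # ~57x
```

Both ratios land close to the “nearly fifty-seven times” figure quoted above.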
Cerebras CEO Andrew Feldman told Data Center Knowledge that the new WSE chip will enable tech businesses to take their AI capabilities to the next level. Feldman, who founded the chip company SeaMicro and sold it to AMD for a record $334 million in 2012, told the publication that this is the most revolutionary chip to come out of any major manufacturer in recent times.
Does size matter?
Semiconductor companies have spent decades developing ever-smaller chips that can be combined into super-powerful processors, so why build a standalone AI mega-chip?
According to Cerebras, the answer is that hooking lots of small chips together introduces delays that slow the training of AI models, creating a major bottleneck for the industry. The company’s chip instead ties 400,000 processing cores tightly together on a single piece of silicon to speed up data-crunching, and it can move data between memory and those cores extremely quickly.
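The toy model below illustrates why crossing chip boundaries becomes the bottleneck when a model is split across many devices. Every bandwidth and latency number in it is an illustrative assumption, not a Cerebras or Nvidia figure; the point is only that per-hop latency and lower link bandwidth dominate once data has to leave the chip.

```python
# Toy model: time to exchange one layer's activations during training when the
# work is split across many small chips versus kept on one large chip.
# All bandwidth/latency values below are illustrative assumptions.

def exchange_time_s(bytes_to_move, link_bandwidth_Bps, hop_latency_s, hops):
    """Transfer time = fixed per-hop latency plus bytes over link bandwidth."""
    return hops * hop_latency_s + bytes_to_move / link_bandwidth_Bps

activations = 100e6  # 100 MB of activations per step (assumed)

# Many small chips: data crosses chip-to-chip links on a board or across nodes.
multi_chip = exchange_time_s(activations, link_bandwidth_Bps=50e9,
                             hop_latency_s=2e-6, hops=8)

# One large chip: data stays on the on-wafer fabric with far higher bandwidth.
single_chip = exchange_time_s(activations, link_bandwidth_Bps=5e12,
                              hop_latency_s=1e-7, hops=1)

print(f"multi-chip exchange : {multi_chip * 1e6:.0f} us")   # ~2000 us
print(f"single-chip exchange: {single_chip * 1e6:.0f} us")  # ~20 us
```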
Fault tolerance:
But if this incredible chip is going to win over the AI world, it will have to clear some significant hurdles. One of these is manufacturing. If contaminants sneak into a wafer being used to make lots of tiny chips, only a few of those chips are ruined and the rest can still be used; but if the whole wafer is a single mega-chip, one defect could spoil the entire part. Cerebras says it has found ways to ensure that contaminants won’t endanger a whole chip, but we don’t yet know whether these will hold up in volume production.
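Cerebras has described its approach publicly in terms of spare cores and redundant fabric links that let the chip route around defective areas. The sketch below is only an illustrative model of that general idea; the core counts, spare budget and defect locations are invented for the example and are not Cerebras numbers.

```python
# Illustrative sketch of defect tolerance via spare cores: logical cores are
# remapped onto the physical array, skipping any core flagged as defective.
# Core counts and defect positions here are invented for illustration only.

physical_cores = list(range(1000))   # physical cores in one patch of the wafer
defective = {17, 204, 811}           # cores ruled out by wafer test (assumed)
spare_budget = 10                    # extra cores reserved as spares (assumed)

usable = [core for core in physical_cores if core not in defective]
logical_count = len(physical_cores) - spare_budget

if len(usable) < logical_count:
    raise RuntimeError("too many defects: this wafer patch cannot be salvaged")

# Map each logical core ID to a working physical core; the on-chip fabric
# would then be configured to route traffic to these physical locations.
logical_to_physical = {logical: usable[logical] for logical in range(logical_count)}

print(f"{len(defective)} defective cores absorbed by {spare_budget} spares")
```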
Power play:
Another difficulty is energy efficiency. AI chips are spectacularly power-hungry, which has both economic and environmental implications. Shifting data between lots of small AI chips is a massive power drain, so Cerebras should have an advantage here. If it can help ease this energy challenge, the startup’s chip could prove that for AI, big silicon really is better.
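A rough calculation shows why keeping data on-chip matters for power. Commonly cited circuit-level figures put an off-chip DRAM access at on the order of a hundred times the energy of an on-chip SRAM access; the per-byte numbers below are rounded versions of those rule-of-thumb values, not Cerebras measurements.

```python
# Back-of-envelope energy comparison for moving one gigabyte of training data,
# using commonly cited per-access energies (process-dependent rules of thumb,
# assumed here for illustration, not vendor measurements).

bytes_moved = 1e9
energy_per_byte_offchip_J = 160e-12   # ~640 pJ per 32-bit off-chip DRAM access
energy_per_byte_onchip_J = 1.25e-12   # ~5 pJ per 32-bit on-chip SRAM access

offchip_J = bytes_moved * energy_per_byte_offchip_J
onchip_J = bytes_moved * energy_per_byte_onchip_J

print(f"off-chip: {offchip_J:.3f} J per GB moved")   # ~0.16 J
print(f"on-chip : {onchip_J:.4f} J per GB moved")    # ~0.001 J
print(f"ratio   : ~{offchip_J / onchip_J:.0f}x")     # ~128x
```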
Cerebras has done its homework when it comes to the WSE. Before making the announcement, the company had already started working with a number of partners on the chip’s AI capabilities. Feldman was proud that the chip would be particularly helpful to researchers in several fields, including those studying neural networks, who want to make the most of this next-generation technology. In other words, the impact of this chip on future advances could be huge, to say the least.
Kevin Krewell, principal analyst at Tirias Research, said these are interesting and exciting times for the industry, with Cerebras pushing the boundaries of what is possible in chip design. Research and technologies that have been limited and curtailed until now could power through with the help of WSE chips.