Engineering Simulations Exploit GPUs

September 21, 2022


Once the domain of specialists with high-powered computers, CAE is becoming a workstation application for designers. Earlier this year, Jon Peddie Research conducted a series of interviews with leading CAE software vendors, including Altair, Ansys, Dassault Systèmes, Hexagon, and Siemens Digital Industries Software. JPR also worked with Nvidia to understand how the industry is changing in response to the GPU acceleration now available in many CAE applications and workflows. The results of those interviews are available in an e-book titled Accelerating and Advancing CAE.

The transition has not been a quick one. GPUs were introduced at the end of 1999 in response to high demand from enthusiastic gamers who were fervently embracing 3D gaming. The first GPUs were expressly designed for games, and game developers could simply write to the GPU's built-in functions. As application programming interfaces (APIs) evolved, those functions multiplied rapidly. The effect was immediate: the number of games written for GPUs increased quickly, games ran faster, and they looked better.

With the benefit of hindsight, we now know the same transformation was on the way for engineering and scientific calculations, but a lot had to happen before the design industry was ready. All the software had been built for CPUs, and customers were used to depending on the power of their systems' CPUs to run complex, resource-heavy simulation and analysis software. They were also used to analysis software being complex and taking a very long time to run.

Everything goes faster


As the pace of innovation accelerates, each successive generation of GPUs gains new features that are useful for CAE, including hardware-accelerated matrix math and AI; memory is faster, and bandwidth is higher. Software tools for programming GPUs are also proliferating. Introduced in 2007, Nvidia's CUDA enabled the development of specialized libraries for uses across the computing universe. AMD has also been working on open software approaches, and so has Intel, which is introducing new high-powered GPUs to complement its CPUs.
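To make the programming model concrete, here is a minimal CUDA sketch of the kind of data-parallel kernel that sits beneath GPU-accelerated solver libraries: a SAXPY operation (y = a*x + y) in which each array element is handled by its own GPU thread. The kernel, array size, and launch configuration are illustrative assumptions, not code from any of the vendors interviewed.

// Minimal, illustrative CUDA SAXPY kernel: y = a*x + y, one thread per element.
// Element-wise operations like this are the building blocks of the dense
// matrix and vector math that GPU-accelerated solvers rely on.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;                           // 1M elements (arbitrary)
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));        // unified memory for brevity
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 2.0f, x, y);       // launch across the GPU
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);                     // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}

In practice, CAE developers rarely hand-write such kernels; they typically call vendor libraries such as cuBLAS and cuSPARSE, which package tuned versions of these operations, or use portable approaches such as OpenCL, SYCL, and HIP on other hardware.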

In our interviews with the CAE companies, we were told that GPUs outperform CPUs by many multiples, depending on the specific task. Despite this obvious advantage, we were also told that some engineers worry that GPU-accelerated applications might have to pay for the speed boost with accuracy. That has not proved to be the case. Instead, developers and their customers are finding the results of GPU-accelerated calculations to be as accurate as those from CPU-based solvers.

GPU acceleration is enabling workstations to do jobs that were previously performed by HPC machines. As a result, those jobs can be less expensive in terms of both energy use and cost. The ability to perform more iterations, less expensively and more sustainably, is enabling more designers to take advantage of simulation earlier in the design process and to have confidence in the results.

Industries do not transition overnight, and the CAE industry is a particularly good example. Some products are based on very old code, originally written for CPUs in the 1960s and 1970s, so how these companies take advantage of GPUs may vary. The companies we interviewed told us that they are in the early stages of working with GPUs for CAE. They have been aware of the benefits of GPUs for a long time, but they are even more aware of the pitfalls of moving too fast. Developers must consider the installed hardware base at their customer sites and the kinds of problems their customers are trying to solve.

Since their introduction, GPUs have been evolving to support all digital industries. They have gained more transistors, larger memories, and higher bandwidth. New types of accelerator cores, such as Nvidia's CUDA Cores and AMD's Stream Processors, have been developed, and Tensor cores have been added to accelerate AI/ML applications. Real-time ray-tracing cores have been introduced, and a plethora of developer software tools and libraries have appeared. The net result is that the GPU has leaped ahead of the CPU in reducing the time needed to process simulation meshes, as characterized in Figure 1.

Figure 1. As the core count increases, the time to compute decreases; when properly employed, the GPU provides astonishing acceleration. Source: Jon Peddie Research

Some ISVs are sticking with CPUs, while some newer companies and new programs are built entirely on the GPU. In our opinion, the most sensible approach is a hybrid one that lets users employ whatever GPU capabilities they have.
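As a rough illustration of what such a hybrid approach can look like, the sketch below (hypothetical, not drawn from any vendor's code) probes for a CUDA-capable device at run time and falls back to a CPU path when none is found; the solver step functions are placeholders.

// Hypothetical hybrid-dispatch sketch: use a GPU when one is present,
// otherwise run the same solver step on the CPU.
#include <cstdio>
#include <cuda_runtime.h>

static bool gpu_available()
{
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);    // probe for CUDA devices
    return err == cudaSuccess && count > 0;
}

static void solve_step_cpu()                         // placeholder CPU path
{
    printf("Running solver step on the CPU\n");
}

static void solve_step_gpu()                         // placeholder GPU path
{
    printf("Running solver step on the GPU\n");
}

int main()
{
    if (gpu_available())
        solve_step_gpu();
    else
        solve_step_cpu();
    return 0;
}

A production solver would likely make this decision at a much finer granularity (per solver, per problem size, or per matrix), but the principle is the same: the code adapts to whatever hardware the customer already has.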

There are several clear takeaways from this project. The use of GPU acceleration in established CAE products is increasing, and we are also seeing new products written from the ground up to take advantage of GPUs. Sustainability has become an important consideration for developers and customers. And finally, CAE has the potential to become a more integrated part of the design process, which leads to better designs and more sustainable products.
