Note: This is Part 1 of a continuing series giving comprehensive coverage to the subject of CFD validation. Please subscribe to the blog or check back regularly for subsequent installments.
As simulation consultants, we consider CFD software one of the primary tools of our trade. As with any tool, we benchmark the software on a regular basis against reliable test data. Of course, software vendors do the same thing with each new release or update, so why bother with our own testing?
Although the occasional bug may slip through a vendor's QA process, our main reason is to validate our own technique. Other benefits include fully understanding the impact of any changes or new features (e.g., turbulence models).
A good analogy is welding, where the welding machine is comparable to the CFD software. Both welding and CFD simulation require an operator to provide the correct technique and input settings to get reliable results. For critical applications, welds are often validated using non-destructive testing (x-ray, ultrasonic). For CFD, we rely upon comparing results with reliable physical data or closed-form solutions.
Welding and CFD are similar in that they both require a skilled operator, using the proper techniques, to get reliable results.
When benchmarking, the first step is to make certain that the results being used for comparison are reliable before you attempt to replicate them in simulation software. I recall two interesting cases from my career; by chance, both dealt with transient thermal simulation.

While working at Algor (acquired by Autodesk in 2009) early in my career, I handled a support case where a customer's FEA results were not matching a textbook formula. It turned out the textbook had a typo in the formula, which we reported to the publisher.

Later, while working at CFdesign (acquired by Autodesk in 2011), I had a services project where my thermal results were not matching the client's measured data. I triple-checked all of my inputs to no avail. Finally, I decided to tweak the transient input curve provided by the client until the output matched. When I shared the adjusted input curve, the client checked their test rig and found an instrumentation error.
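To make the comparison against a closed-form solution concrete, here is a minimal sketch (not from the original post; the case, numbers, and tolerance are invented for illustration). It uses the standard lumped-capacitance cooling solution, T(t) = T∞ + (T0 − T∞)·e^(−t/τ), as the reference, with a simple explicit time-stepping loop standing in for solver output:

```python
import math

def exact_temp(t, T_inf, T0, tau):
    """Closed-form lumped-capacitance solution: T(t) = T_inf + (T0 - T_inf) * exp(-t/tau)."""
    return T_inf + (T0 - T_inf) * math.exp(-t / tau)

def simulated_temp(t_end, dt, T_inf, T0, tau):
    """Stand-in for solver output: explicit Euler integration of dT/dt = -(T - T_inf)/tau."""
    T = T0
    for _ in range(int(round(t_end / dt))):
        T += dt * (T_inf - T) / tau
    return T

# Hypothetical benchmark case: a hot part cooling in ambient air.
T_inf, T0, tau = 25.0, 100.0, 60.0  # ambient (deg C), initial (deg C), time constant (s)

for t in (30.0, 60.0, 120.0):
    T_sim = simulated_temp(t, dt=0.1, T_inf=T_inf, T0=T0, tau=tau)
    T_ref = exact_temp(t, T_inf, T0, tau)
    # Normalize the error by the initial excess temperature, not the raw value.
    rel_err = abs(T_sim - T_ref) / abs(T0 - T_inf)
    print(f"t={t:6.1f} s  sim={T_sim:7.3f}  exact={T_ref:7.3f}  rel_err={rel_err:.2e}")
    assert rel_err < 1e-2, "Benchmark failed: simulation deviates from closed-form solution"
```

The same pattern scales up: swap the Euler loop for exported solver data and the exact formula for measured data, and the acceptance tolerance becomes part of your documented validation technique.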