I’ve spent much of the past semester of my aerospace engineering degree practicing one thing above all: building models. The physical world obliges engineers to devise ways of modeling phenomena so that we can understand and take advantage of them. The reality, of course, is that every model must approximate reality. So we theorize, mathematize, create, simulate, analyze, and iterate until the behavior of materials or fields or fluids or what-have-you is elucidated enough to build whatever the specific project has defined. That, at least in my pea-sized undergraduate brain, is the crux of engineering: ideate, define, ideate, approximate, ideate, iterate. And so it goes.
Notwithstanding the freaky nature of quantum effects like the indeterminacy principle, approximations are necessary not only because of resource constraints (often the critical factor) but also because the tools we use are simply imperfect. That includes, and perhaps especially, the computer and the numerical evaluation of mathematical models of real-world phenomena. In practice, this means using computational power and specialized software to evaluate rather involved equations, or systems of equations, iteratively. Instead of extracting symbolic answers from the equations, we accept an approximation within some error bound. With this, engineers (and really anyone with a regular computer) can approximate how a design or material will respond to real-world physics without building it. The implications of this paradigm, which took hold around the 70s, have been profound, and there is still much room to grow.
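To make that concrete with a toy example (nothing to do with any particular engineering model, just an equation with no tidy symbolic root): iterate a scheme until the answer stops moving by more than a chosen tolerance, and call that the approximation.

```python
# Minimal illustration of numerical iteration to a tolerance:
# approximate a root of f(x) = x^3 - 2x - 5 with Newton's method,
# stopping once the update falls below a chosen error bound.

def f(x):
    return x**3 - 2*x - 5

def f_prime(x):
    return 3*x**2 - 2

def newton(x0, tol=1e-10, max_iter=50):
    x = x0
    for i in range(max_iter):
        step = f(x) / f_prime(x)
        x -= step
        if abs(step) < tol:          # converged within the error bound
            return x, i + 1
    raise RuntimeError("did not converge")

root, iterations = newton(x0=2.0)
print(f"root ~ {root:.12f} after {iterations} iterations")
```

There is no neat closed-form answer to hand to a designer here, but a few iterations land within any error bound we care to name, which is the whole trade.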

With the rapid increase in computing power over the 20th and 21st centuries, this type of analysis has been applied ever more widely: in engineering and the physical sciences to solve for the mechanics of planets and other celestial bodies, but also in the biological sciences, finance, and business, as the models that powerful computers can crunch become complex enough to be worth implementing while still converging properly. More on convergence in a moment, but first a few of the applications I find most interesting:
Computational biology is already a vast and ever-expanding field, but one of the coolest applications is to neuroscience: using numerically optimized methods to filter and qualify neuroimaging data, and increasingly to build models complex enough to infer inter-node connections within the brain’s structure from empirical data (LINK). Here is an absolute treasure trove of research on dynamical models and controversies in computational neuroscience research.
Numerical approaches have been used in finance and derivatives markets since the early 70s, when Eduardo Schwartz realized he could price options by solving the Black-Scholes partial differential equation with numerical schemes such as the finite difference method (LINK). The downside was that the solution had to be computed over a large grid of values of the underlying, which led researchers in the late 70s and 80s to show that Monte Carlo path simulation is an excellent way to compound uncertainty. When no closed-form solution exists for valuing a derivative and the model is not too complex, its value is best found numerically, provided the resources are available for suitable convergence. Here is a fantastic academic report on using common numerical schemes for options pricing that I hope to return to in a later post.
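As a toy sketch of the Monte Carlo idea (the contract parameters below are invented, and this is nowhere near a production pricer), one can simulate terminal prices of the underlying and check the simulated value of a European call against the Black-Scholes closed form:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical contract parameters, chosen only for illustration.
S0, K, r, sigma, T = 100.0, 105.0, 0.03, 0.2, 1.0
n_paths = 1_000_000

rng = np.random.default_rng(0)

# Monte Carlo: simulate terminal prices under risk-neutral geometric
# Brownian motion, then discount the average payoff.
Z = rng.standard_normal(n_paths)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
payoffs = np.maximum(ST - K, 0.0)
mc_price = np.exp(-r * T) * payoffs.mean()
mc_stderr = np.exp(-r * T) * payoffs.std(ddof=1) / np.sqrt(n_paths)

# Black-Scholes closed form for the same European call, as a sanity check.
d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
bs_price = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

print(f"Monte Carlo:   {mc_price:.4f} +/- {mc_stderr:.4f}")
print(f"Black-Scholes: {bs_price:.4f}")
```

The appeal is exactly the one above: the simulation does not care whether a closed form exists, it just needs enough paths for the error bound to shrink to something acceptable.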
I also want to highlight the use of numerical approaches in operations: airline ticket pricing, crew assignment, and fuel planning are famously hard to pin down (finding the optimal fare structure between two locations resembles Boolean satisfiability, an NP-complete problem whose worst-case instances cannot be solved exactly within the lifetime of the universe). In 1972, Koopman showed that traffic through the world’s busiest airports at the time could be bounded by mathematical “worst” and “best” case estimates, which gave rise to the earliest numerical approximation schemes for computing airport delays. In that spirit, airline operations has been at the forefront of numerical optimization, with airside operations, air traffic, capacity, pricing, and delays all used to test numerically driven insights with OR algorithms (LINK). There is no small chance this has been a direct contributor to the explosion of air transportation traffic over the past 50 years.
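To give a flavor of Koopman’s bounding idea (this is only a caricature, not his time-dependent model, and the arrival and service rates below are invented), a runway can be treated as a single-server queue whose average delay is bracketed by a deterministic-service best case (M/D/1) and an exponential-service worst case (M/M/1):

```python
# Rough illustration of bounding average runway delay between a "best case"
# (M/D/1, perfectly regular service) and a "worst case" (M/M/1, fully random
# service) steady-state estimate. The rates are made up, not real airport data.

def mm1_wait(lam, mu):
    """Mean wait in queue for an M/M/1 queue (standard steady-state result)."""
    rho = lam / mu
    return rho / (mu * (1.0 - rho))

def md1_wait(lam, mu):
    """Mean wait in queue for an M/D/1 queue: exactly half the M/M/1 value."""
    return 0.5 * mm1_wait(lam, mu)

lam = 28.0   # arrivals per hour (hypothetical demand)
mu = 30.0    # runway service rate per hour (hypothetical capacity)

lo = md1_wait(lam, mu) * 60  # minutes
hi = mm1_wait(lam, mu) * 60

print(f"average delay bounded between {lo:.1f} and {hi:.1f} minutes per aircraft")
```

Even this crude bracket makes the operational point: push demand close to capacity and the delay bounds explode, which is why the real numerical models earn their keep.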

What does the future of numerical simulation hold? I will save much of this discussion for a later, much-deserved post on Moore’s law, since it bears heavily on the future of computing, but a rule of thumb for now: wherever there is no fixed expectation of accuracy, someone will get paid to push their own, or their institution’s own, definition of accuracy further. Many of these numerical solutions treat efficiency and speed as paramount; they are used to compete. In slower-moving settings, many simulations will likely just leverage growing computing power to become increasingly detailed and “accurate” approximations. This is exactly why it is crucial to constantly ground numerical findings and designs in empirical measurements: much of the added realism from academic refinement is drowned out by measurement uncertainties external to the models (e.g. loads, boundary stiffness). Garbage in, garbage out. In my opinion, much of the tangible improvement over the next decade or so will come from increasingly sophisticated automation of modelling and convergence (provided it isn’t BS) and from much more system-level optimization and engineering (LINK), as projects begin to stomach higher-dimensional optimization in order to gain an edge.
Of course, the ever-widening availability of tools that enable essentially virtual design-and-develop loops will push ever more traditional industrial engineering projects to mirror the rapid iteration loops, and the profitability, that have characterized software endeavors for the past 20+ years.
Computational Fluid Dynamics
I drank from the CFD firehose this semester, jumping into the simulated fluid dynamics (pool) party via OpenFOAM, a free and open-source CFD package whose functionality rivals Ansys (an industry standard), with a large user base and adequate documentation. In my course, we used the resources of the Texas Advanced Computing Center (TACC), home to the fastest academic supercomputer and the fifth fastest in the world.
I won’t get into the particulars of all the simulations I have run with OpenFOAM on TACC; most of my work so far has built on the tutorials available in OpenFOAM’s online database. Much of it explored the discretization of the fluid dynamics equations the software was solving and applied it to various flows, such as laminar flow entering a pipe or viscous flow within a cavity. I will highlight one case, however, because I want to briefly discuss discretization and convergence.
When simulating the equations governing fluid dynamics (and many other dynamical equations), there are two domains we have to discretize, each often in several dimensions of its own. The space the fluid moves through gets divided into a grid in 2D or 3D (whatever suits the goal of the simulation), and the time over which it moves is likewise divided into discrete steps. The size of each of these discretizations is up to us but extremely influential on the convergence of the simulation: for a given flux velocity of the variable of interest, the discretization sizes can make or break convergence, which becomes obvious after some number of time or spatial steps. These factors are related mathematically, through the scheme’s amplification factor, by a dimensionless quantity generally referred to as the Courant number, C = u·Δt/Δx. Convergence is simply the idea, sketched earlier, that the numerical solution, iterated on under the governing equations, approaches a steady, constant value. That value is the approximation engineers use for design and researchers use for data; if the solution does not converge, it simply does not exist. The upshot is that numerical simulation quickly becomes something of an art (I use that term loosely and with conviction; all engineering is creation): as shown below, different numerical methodologies are sensitive to flux and discretization in different ways, and this too must be analyzed.
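To make the role of the Courant number concrete, here is a textbook one-dimensional toy (not one of my OpenFOAM cases): first-order upwind advection, where the identical scheme either behaves or blows up purely depending on how the time step compares to the grid spacing.

```python
import numpy as np

# 1D linear advection u_t + a*u_x = 0 with a first-order upwind scheme.
# The explicit update is stable only when the Courant number C = a*dt/dx
# satisfies C <= 1; the grid size and step count below are arbitrary.

def advect(courant, nx=200, n_steps=400, a=1.0):
    dx = 1.0 / nx
    dt = courant * dx / a
    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    u = np.exp(-200.0 * (x - 0.3) ** 2)         # initial Gaussian pulse
    for _ in range(n_steps):
        u = u - courant * (u - np.roll(u, 1))   # upwind difference, periodic domain
    return np.max(np.abs(u))

for C in (0.5, 0.9, 1.1):
    print(f"Courant number {C}: max |u| after 400 steps = {advect(C):.3e}")
```

With the first two Courant numbers the pulse just diffuses slightly; at C = 1.1 the amplification factor exceeds one and the solution grows without bound, which is the same failure mode that shows up as divergence in a full CFD run.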

In a report I worked on with a few classmates for my course, exploring the effect of discretization on the simulation of 2D flow around a cylindrical body, there are very obvious effects that come from playing with the discretization of the spatial and temporal domains (which, for various numerical solutions, could be any domains, though one is usually time).
In the first figure, we see the x-velocity magnitude heat map and the contour plots for a coarse (large-mesh) spatial discretization. In the second, we see the same external parameters (boundary conditions) but a much finer spatial discretization.


Using the results from the finer simulation, we can plot the velocity streamlines (curves everywhere tangent to the local velocity vector) derived from the solution of the Euler equations in OpenFOAM.
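(As an aside, this kind of post-processing does not require ParaView: given velocity components sampled on a grid, something like the sketch below will trace streamlines. The field here is ideal potential flow past a cylinder, a stand-in for actual solver output, so treat it as illustrative only.)

```python
import numpy as np
import matplotlib.pyplot as plt

# Generic streamline sketch: sample a velocity field on a grid and let
# matplotlib integrate the streamlines. The field is ideal (potential)
# flow past a cylinder of radius R, used only as a stand-in for CFD output.
U_inf, R = 1.0, 0.5
x = np.linspace(-2.0, 2.0, 200)
y = np.linspace(-1.5, 1.5, 150)
X, Y = np.meshgrid(x, y)

r2 = X**2 + Y**2
u = U_inf * (1.0 - R**2 * (X**2 - Y**2) / r2**2)
v = -2.0 * U_inf * R**2 * X * Y / r2**2

# Mask the interior of the cylinder so no streamlines are drawn through it.
inside = r2 < R**2
u = np.ma.array(u, mask=inside)
v = np.ma.array(v, mask=inside)

plt.streamplot(X, Y, u, v, density=1.5, linewidth=0.7)
plt.gca().add_patch(plt.Circle((0, 0), R, color="gray"))
plt.gca().set_aspect("equal")
plt.savefig("cylinder_streamlines.png", dpi=150)
```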

These streamlines come from a converged solution, so we get real results, but it is important to note that they are only as accurate as an empirical test will show them to be. Here is another visual of my simulation compared to a Kármán vortex street, an artifact of wake instability and vortex shedding (shown above) that is still not entirely well understood, generated by the interaction of atmospheric winds with Mount Halla on the island of Jeju. The period of oscillation is off by half a period here, but I think the point is fairly obvious.


Here is a nice little animation of the Kármán vortex street I was able to generate in ParaView, a visualization tool that reads and manipulates OpenFOAM files quite well:
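(If you would rather script the animation than click through the GUI, something along these lines should work in pvpython; the case path is hypothetical and exact property names can vary between ParaView versions.)

```python
# Hypothetical pvpython sketch; property names may differ by ParaView version.
from paraview.simple import *

# Point the OpenFOAM reader at a (hypothetical) case file and render it.
reader = OpenFOAMReader(FileName='cylinder/case.foam')
Show(reader)
Render()

# Step the animation through the solver's time steps and write it to disk.
scene = GetAnimationScene()
scene.UpdateAnimationUsingDataTimeSteps()
SaveAnimation('vortex_street.avi', FrameRate=15)
```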

These numerical solutions are, ultimately, no substitute for the real world; the universe is not a computer, at least not like this (perhaps another post). But with a bit of programming overhead, I can flip a switch and run a bluff body through a virtual wind tunnel thousands of times in a few days if I want. The virtualization of engineering has already happened, and with open-source tools like OpenFOAM, the power to innovate beyond software, to prototype virtually, is more democratized and inexpensive than it has ever been. I think we are seeing the second-order effects of this now, from SpaceX’s rapid Starship iteration saga (LINK) to the stark increase in “unsexy” industrial startups and the manufacturing demand growth to match (LINK). Prototyping will be in an interesting place for the next decade and beyond. Traditional materials will likely become less expensive as that vertical becomes demand-constrained, but this may open up vast new channels of demand for rapid manufacturing and prototyping techniques. Cough, cough, Starship. Here, it will clearly be the case that those with the most frictionless channels, or perhaps the best vertically integrated industrials, will take the pie.
My own work on numerical simulations is rudimentary: you can find what I have done for coursework and research grounded in much of the introductory material on the OpenFOAM website & wiki. But I have found it quite fascinating, and I hope the breadth of this field has captured your interest as it has mine.
Further Reading
Below is a post on the topic that is far superior to mine, with lots of great discussion in the comments.
