Preamble
Introduction
Our next speaker is Professor David Zingg from the University of Toronto. Professor Zingg specializes in flow solvers, aerodynamics, shape optimization, and unconventional aircraft configurations, driven by the need to mitigate climate change impacts.
In his talk, he will discuss nonlinearly stable high-order methods with improved efficiency. Professor Zingg will present the innovative work of two of his students, who have revitalized an existing concept to develop new methodologies.
Thank you for the introduction, and I appreciate the invitation to speak at this esteemed symposium. I will now proceed directly to my presentation, as this audience is well-versed in high-order methods and does not require introductory explanations.
In this presentation, I will discuss the summation by parts property, which may be familiar to some of you. However, I hope to introduce it to those who are encountering it for the first time.
We are gathered here for three main reasons. I would like to acknowledge CJ and Tonga, but I only have a slide prepared for Anthony. I appreciate your understanding, as I have not included a slide for you, Jam.
High-Order Methods For CFD
Anthony Jameson, though he may not be present today, is a remarkable individual. It is challenging to offer new insights about him that haven't already been shared. Here, I have compiled a list of some of his academic mentees. I apologize to anyone who is not included; this is not intended to be a comprehensive list, but rather a selection of individuals I am familiar with.
When I first attended conferences as a newcomer, Anthony appeared to be an imposing figure. He always dressed elegantly, often wearing an ascot, and his talks were so popular that attendees had to arrive early to secure a seat, as the rooms were typically crowded. Engaging with him was difficult, as he was often surrounded by admirers. Fortunately, I was able to connect with him through mutual friends, Tom Pulliam and Bob MacCormack, which allowed me to see a different side of him. Contrary to his public persona, Anthony is a thoughtful, engaging, and warm individual.
Regarding high-order methods, I do not have much to add, and I am not promoting anything specific at this time.
Now, turning to our research focus, we aim to explore the utility of methods on simplices. While there is ongoing debate about the appropriate mesh types, our premise is that simplex methods hold specific advantages in certain contexts. We noted that higher-order methods on quadrilaterals and hexahedra exhibit greater efficiency compared to those on simplices. Our goal is to bridge this efficiency gap by leveraging the tensor product structure found in quadrilaterals and hexahedra to enhance simplex methods.
Entropy Stable High-Order Methods
A key aspect of our approach is the Summation by Parts (SBP) property, which is essential for ensuring provable entropy stability. We are committed to developing efficient operators on simplices that incorporate the SBP property. In the following discussion, we will explore the advantages of the SBP property and its implications for our work.
The summation-by-parts property means that an operator satisfies a discrete analogue of integration by parts. Integration by parts is fundamental to the weak formulation of problems, so an SBP scheme simultaneously addresses both the strong and weak forms, which is highly beneficial.
While traditionally associated with finite difference methods, the summation-by-parts property is also applicable to spectral element methods and discontinuous spectral elements. It facilitates general proofs of stability by replicating the well-posedness proof of the continuous problem. However, it is important to note that proofs of well-posedness for the Navier-Stokes equations remain elusive, and further research is needed in this area.
The property extends to nonlinear problems through entropy stability, with simultaneous approximation terms (SATs) used for boundary conditions. Unlike typical stability proofs presented in CFD courses, which often rely on idealized scenarios, these proofs apply to the actual problems being solved, including cases with distorted meshes.
Linear Stability Analysis
To illustrate this concept, consider the linear convection equation in one dimension, defined over a specific domain with an initial condition and a boundary condition at the inflow. By multiplying the equation by the solution and integrating over the domain, we find that the rate of change of the total energy, represented by the integral of ( u^2 ) over the domain, is determined solely by the boundary terms, specifically, what enters and exits the domain.
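Written out, the continuous energy method for this model problem (taking the domain as ( [0, L] ) with constant wave speed ( a > 0 ), an assumption made here for illustration) proceeds as follows:

```latex
u_t + a\,u_x = 0
\quad\Longrightarrow\quad
\int_0^L u\,u_t\,dx + a\int_0^L u\,u_x\,dx = 0
\quad\Longrightarrow\quad
\frac{d}{dt}\int_0^L u^2\,dx = a\left(u(0,t)^2 - u(L,t)^2\right).
```

The total energy thus changes only through what flows in at the left boundary and out at the right boundary.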
To ensure that our discrete formulation mimics this principle, we introduce a derivative operator matrix ( D ). This matrix is expressed as ( D = H^{-1} Q ), the product of the inverse of a symmetric positive definite norm matrix ( H ) and a matrix ( Q ) that satisfies the required conditions; here we consider the simplest one-dimensional case.
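As a minimal sketch (the classical second-order finite-difference SBP operator, not the operators used in this talk; the grid size and spacing are arbitrary choices), such an operator can be built and checked in a few lines:

```python
import numpy as np

n, h = 6, 0.2  # illustrative grid: n nodes, spacing h
# Norm matrix H: diagonal, SPD, a quadrature (trapezoidal rule here)
H = h * np.diag([0.5] + [1.0] * (n - 2) + [0.5])
# Q: nearly skew-symmetric, with Q + Q^T = E
Q = np.zeros((n, n))
for i in range(n - 1):
    Q[i, i + 1], Q[i + 1, i] = 0.5, -0.5
Q[0, 0], Q[-1, -1] = -0.5, 0.5
D = np.linalg.solve(H, Q)        # D = H^{-1} Q approximates d/dx
E = np.zeros((n, n))
E[0, 0], E[-1, -1] = -1.0, 1.0   # E picks off the boundary values
assert np.allclose(Q + Q.T, E)           # the SBP property
x = h * np.arange(n)
assert np.allclose(D @ (2 * x + 1), 2.0) # exact for linear functions
```

The interior rows of ( D ) are ordinary central differences; only the rows near the boundaries are modified so that the SBP property holds exactly.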
This slide focuses on the discrete approximation of the linear convection equation. We analyze the matrix ( Q + Q^T ), where ( Q ) is nearly skew-symmetric. Mimicking the continuous analysis, we use ( H ) as a quadrature, that is, a discrete approximation of the integral.
When we multiply by the solution and integrate discretely over the domain using ( H ), we arrive at a formulation that mirrors our previous continuous analysis. Utilizing the properties of ( Q ), we can express this in a form that represents the discrete version of the original energy estimate.
Additionally, the rate of change of the approximated integral of ( u^2 ) within the domain is determined solely by the values at the boundaries, specifically by what enters and exits the domain.
Introducing a simultaneous approximation term (SAT) to enforce the boundary condition preserves the boundedness and stability observed in the continuous solution.
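As a hedged sketch of this step, here is a SAT for the inflow boundary of the linear convection equation, built on a standard second-order SBP operator with penalty coefficient sigma = a (a common choice); all names and sizes are illustrative:

```python
import numpy as np

# Standard second-order SBP operator on an illustrative grid
n, h, a = 6, 0.2, 1.0
H = h * np.diag([0.5] + [1.0] * (n - 2) + [0.5])
Q = np.zeros((n, n))
for i in range(n - 1):
    Q[i, i + 1], Q[i + 1, i] = 0.5, -0.5
Q[0, 0], Q[-1, -1] = -0.5, 0.5
D = np.linalg.solve(H, Q)

e0 = np.zeros(n)
e0[0] = 1.0                      # selects the inflow node
g = 0.0                          # homogeneous inflow data
rng = np.random.default_rng(0)
u = rng.standard_normal(n)       # arbitrary discrete solution state
# Semi-discrete scheme with SAT: du/dt = -a D u + a H^{-1} e0 (g - u0)
dudt = -a * D @ u + a * np.linalg.solve(H, e0) * (g - u[0])
dE = 2 * u @ H @ dudt            # d/dt of the discrete energy u^T H u
assert dE <= 1e-12               # energy is non-increasing when g = 0
```

Expanding ( dE ) with the SBP property gives exactly the boundary terms of the continuous estimate plus a dissipative penalty contribution, which is why the scheme inherits the continuous bound.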
In one dimension, integration by parts is represented as follows. We replicate this process in a discrete setting, which allows us to achieve the desired property. It is important to note that this property holds exactly rather than merely to within truncation error; it is true on any mesh, similar to the principle of conservation. Thus, conservation is maintained not only in the limit of mesh refinement but also on a finite mesh.
In this context, we observe that while the individual components do not exactly approximate the desired result, their summation yields the correct outcome. Here, D represents the derivative operator, H approximates a volume integral, and E approximates a surface integral. In one dimension, the surface integral is less apparent as it corresponds to just two points. However, in two dimensions, the integration by parts becomes more complex, and the surface integral is clearly represented by E.
The multidimensional SBP operator must accurately approximate derivatives. It must be decomposable as ( D = H^{-1} Q ) with the norm matrix ( H ) symmetric positive definite. Additionally, it is essential that the relationship ( Q + Q^T = E ) holds, where ( E ) approximates the surface integral.
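In symbols, writing only the x-direction operator (the other coordinate directions are analogous), these three requirements read:

```latex
\mathsf{D}_x = \mathsf{H}^{-1}\mathsf{Q}_x \ \ \text{(degree-$p$ accurate)}, \qquad
\mathsf{H} = \mathsf{H}^{T} \succ 0, \qquad
\mathsf{Q}_x + \mathsf{Q}_x^{T} = \mathsf{E}_x,
```

```latex
\text{where}\qquad
\mathbf{v}^{T}\mathsf{E}_x\,\mathbf{u} \;\approx\; \oint_{\Gamma} v\,u\,n_x\,d\Gamma .
```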
In the context of multidimensional operators on simplices, we categorize them into three classes. The first class, SBP-Ω, features distinct volume quadrature nodes represented by circles and facet quadrature nodes represented by orange squares. In the second class, SBP-Γ, some volume quadrature nodes are located on the facets, but they do not necessarily coincide with the facet quadrature nodes. The third class, SBP-E, has volume quadrature nodes on the facets that coincide with the facet quadrature nodes, leading to a diagonal matrix that can enhance computational efficiency.
While the previous discussion focused on linear convection and its linear stability, our interest now shifts to nonlinear stability. The slide presents the equation for entropy, expressed as an equality for smooth solutions. This auxiliary equation must be satisfied in addition to the conservation equations, which facilitates various proofs.
Entropy Stability And Conservation
Bounding the entropy provides an L2 bound on the solution, provided that the thermodynamic variables remain positive. However, ensuring positivity is often challenging. When we consider the possibility of non-smooth solutions, this relationship becomes an inequality.
The foundational work by Fisher and Carpenter combined the SBP property with two-point flux functions to develop provably entropy-stable schemes. This approach was subsequently extended to simplices by Jared Crean, a student of Jason Hicken at Rensselaer.
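As an illustration of the idea (a sketch using Burgers' equation rather than the Euler equations treated in the talk, on an assumed periodic grid), pairing an SBP operator with the entropy-conservative two-point flux f(u, v) = (u^2 + uv + v^2)/6 conserves the discrete entropy exactly:

```python
import numpy as np

# Periodic 1D grid: H = h*I, Q skew-symmetric (central differences)
n, h = 32, 1.0 / 32
Q = np.zeros((n, n))
for i in range(n):
    Q[i, (i + 1) % n], Q[i, (i - 1) % n] = 0.5, -0.5
D = Q / h                        # D = H^{-1} Q with H = h*I

def f_ec(ui, uj):
    """Two-point entropy-conservative flux for Burgers' equation."""
    return (ui * ui + ui * uj + uj * uj) / 6.0

x = h * np.arange(n)
u = np.sin(2 * np.pi * x) + 0.5
# Flux-differencing form: (du/dt)_i = -2 * sum_j D_ij f_ec(u_i, u_j)
F = f_ec(u[:, None], u[None, :])     # all n^2 two-point fluxes
dudt = -2.0 * np.sum(D * F, axis=1)
# Discrete entropy u^2/2 is conserved: u^T H du/dt = 0 to roundoff
assert abs(h * (u @ dudt)) < 1e-12
```

Note the ( n^2 ) two-point flux evaluations in forming F; this is exactly the cost issue discussed next.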
One disadvantage of entropy stable schemes is the high computational cost associated with two-point flux functions. It may seem counterintuitive, but it is necessary to compute the two-point flux between every pair of points within an element. This is where the tensor product structure proves advantageous, as it reduces the penalty associated with multidimensional operators.
For a polynomial degree ( P = 2 ), consider a single node; it requires a two-point flux calculation with every other node in the element. As the polynomial degree increases to ( P = 4 ) or ( P = 6 ), the number of two-point flux functions grows as ( P^{2D} ). In contrast, for a tensor product operator, such as those used on quadrilaterals or hexahedra, the number of evaluations scales as ( P^{D+1} ). This less favorable scaling for the multidimensional operator is one reason why simplex schemes are less efficient.
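A rough counting sketch makes the gap concrete (the node counts here are representative choices for illustration, not the exact node sets from the talk):

```python
from math import comb

def md_fluxes(p, d):
    # Dense multidimensional operator: every node pairs with every other node.
    # Simplex node count taken as C(p+d, d), a representative choice.
    n = comb(p + d, d)
    return n * (n - 1) // 2            # grows like p^(2d)

def tp_fluxes(p, d):
    # Tensor-product operator: nodes couple only along coordinate lines.
    n1 = p + 1                         # nodes per 1D line
    lines = d * n1 ** (d - 1)          # coordinate lines through the element
    return lines * (n1 * (n1 - 1) // 2)  # grows like p^(d+1)

for p in (2, 4, 6, 10):
    print(p, md_fluxes(p, 3), tp_fluxes(p, 3))
```

At low degree the tensor-product count can even be larger, which is consistent with the observation later in the talk that the benefit is minimal at low degrees and grows substantially at high degrees.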
To address these inefficiencies, my students proposed two strategies that leverage the tensor product structure. The first idea involves using collapse coordinates, while the second suggests converting triangles into quadrilaterals or tetrahedra into hexahedra. I will elaborate on each of these approaches.
New SBP Methods On Simplices
In this discussion, I would like to reference a well-known concept: Nektar++, which utilizes collapsed coordinates. Our contribution is to extend this framework to incorporate the Summation by Parts (SBP) property, facilitating more straightforward entropy stability proofs. It is important to note that the concepts of spectral element formulations and collapsed coordinates are established ideas, not original to our work, and they contribute to the efficiency of the code. Furthermore, the advantages of this approach become increasingly significant at higher polynomial degrees.
To clarify, the collapsed coordinate transformation is precisely what its name suggests.
The concept of collapsed coordinates involves transforming a quadrilateral into a triangle by repositioning a vertex. For a hexahedron to be converted into a tetrahedron, the process is more complex and requires multiple transformations. This results in a singularity at one vertex in the triangular case, which is why we typically avoid placing a volume quadrature node at that location.
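One common form of such a map in 2D (the Duffy transformation, shown here as an illustrative example rather than necessarily the exact map used in this work) sends the unit square to the unit triangle:

```latex
(\xi,\eta)\in[0,1]^2 \;\mapsto\; (x,y) = \bigl(\xi\,(1-\eta),\; \eta\bigr),
\qquad
\det J = 1-\eta .
```

The entire edge ( \eta = 1 ) collapses onto the vertex ( (0,1) ), where the Jacobian vanishes; that is the singular vertex at which no volume quadrature node is placed.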
By utilizing this approach, we can apply the same tensor product algorithm, which reduces the number of flux evaluations needed on the simplex. I will provide a high-level overview today, and at the conclusion of my presentation, I will reference relevant papers for those seeking more detailed information.
In our discussion of the Gauss nodal distribution on the reference element for ( p = 4 ), we note that the two-point flux evaluation scales as ( p^{D+1} ). Using a collocated formulation leads to significant time step restrictions due to the node spacing near the collapsed vertex. To address this issue, we opted for a modal formulation, which alleviates the problem.
A key factor in enhancing efficiency is the weight-adjusted approximation developed by the next speaker, Jesse Chan.
In the accompanying slide, dashed lines represent entropy conservative fluxes, while solid lines indicate entropy stable fluxes. Although the entropy stable method introduces a slight error, all results are symmetric. The upper section of the slide presents 2D results, while the lower section shows 3D results based on the Euler equations.
The comparison focuses on the collapsed-coordinate tensor-product implementation versus a truly multidimensional approach. The primary takeaway is that both methods yield identical accuracy, as evidenced by the overlapping curves. Thus, we have not sacrificed accuracy by utilizing collapsed coordinates. The next consideration is the associated cost.
Furthermore, the analysis of the number of two-point flux evaluations highlights the advantages of the tensor product approach. It demonstrates a favorable trend toward the expected asymptotic scaling. The multidimensional scheme reflects a ( p^{2D} ) complexity, while the tensor product scheme exhibits a complexity of ( p^{D+1} ), where ( D ) represents the number of dimensions. The benefits of the tensor product approach increase significantly with the degree, showing an order of magnitude improvement for ( p = 10 ).
The slide presents the number of floating point operations involved in local operator evaluation, highlighting similar trends as observed previously. While actual CPU time would provide more insight, the complexities of coding necessitate the use of these metrics instead.
At lower degrees, the benefits are minimal; however, a significant advantage is evident at higher degrees, particularly in three-dimensional or tetrahedral contexts. Additionally, it is important to note that the curve for the tensor product approach extends further than the multidimensional scheme. This is due to the finite number of quadrature rules applicable to multiple dimensions on simplices, in contrast to one-dimensional scenarios where the degree can increase indefinitely. By leveraging the tensor product, we can utilize one-dimensional quadrature rules, allowing for much higher degrees of accuracy.
Operator Based On Split Simplices
The second concept we are exploring is the idea of splitting simplices, a method that has been considered previously. For instance, a triangle can easily be divided into three quadrilaterals. This raises the question: why would one use a triangular mesh instead of simply subdividing it? Similarly, a tetrahedron can be split into hexahedra.
Previous attempts to generate a fully unstructured simplex mesh and then subdivide it have shown that while this approach works well for regular meshes, it struggles with highly distorted meshes. To address this, we propose a different strategy: instead of merely chopping up the simplices, we will split them into quadrilaterals or hexahedra. We will then apply tensor product SBP operators within these quadrilaterals or hexahedra where feasible. The final step involves reassembling these components into a genuinely multidimensional operator using a continuous interface formulation.
This method ensures continuity across the interfaces, maintaining discontinuity only at the actual element boundaries. The SBP property facilitates a straightforward extension to entropy stability. It is important to note that previous attempts to split elements did not yield optimal results, thus underscoring the necessity for this refined approach.
The following slide presents various nodal distributions that illustrate the underlying tensor product structures in the three quadrilaterals. To achieve successful implementation, the one-dimensional operator must be of higher order than the desired final multidimensional operator, specifically ( D - 1 ) orders higher, which is one order higher in two dimensions.
The results shown compare two-dimensional data on the left with three-dimensional data on the right. The notation used includes TPSS, which stands for Tensor-Product Split-Simplex, and SBP, which refers to the multidimensional summation-by-parts operator.
The open symbols represent the multidimensional operator, while the solid symbols denote the new operators. By matching colors, you can observe a significant advantage in solution error relative to degrees of freedom.
This example focuses on an isentropic vortex problem. Notably, we applied extensive warping and skewing to the meshes. Interestingly, despite these distortions, we are not encountering the issues typically expected with highly distorted meshes. Additionally, we present normalized comparisons of residual computation times.
The presented data compares the performance of the two approaches: the Tensor-Product Split-Simplex operator and the multidimensional SBP operator. In the best case, with a 3D polynomial degree of 5, we observe significant reductions in solution error. Specifically, for the same computational effort, the error can be reduced by approximately a factor of four. Alternatively, when targeting a specific error level, the time savings can be as much as a factor of 20.
Additionally, we note that the performance varies with different polynomial orders. Both methods implement high-order schemes on simplices while ensuring entropy stability. This positions them as competitive options for solving complex problems, including the isentropic vortex problem, even when dealing with highly distorted meshes.
Concluding Remarks
In this presentation, we explored the development of entropy-stable high-order schemes on simplices. These advanced schemes aim to enhance competitiveness with existing methods.
Thank you for your attention. For further details, please refer to the provided resources.