Advanced modeling and simulation in engineering sciences

Vol. 5, num. 1, p. 1-54

DOI: 10.1186/s40323-018-0109-4

Date of publication: 2018-12

Abstract:

An entire design-through-analysis workflow solution for isogeometric B-Rep analysis (IBRA), including both the interface to existing CAD systems and the analysis procedure, is presented. Possible approaches are elaborated for the full scope of structural analysis solvers, ranging from low to high isogeometric simulation fidelity, based on a systematic investigation of solver designs suitable for IBRA. A theoretically ideal IBRA solver has all CAD capabilities and information accessible at any point; realistic scenarios, however, typically do not allow this level of information. Even a classical FE solver can be included in the CAD-integrated workflow, which is achieved by a newly proposed meshless approach. This simple solution eases the implementation of the solver backend. The interface to the CAD system is modularized by defining a database that provides I/O capabilities on the basis of a standardized data exchange format. This database is designed to store not only geometrical quantities but also all the numerical information needed to realize the computations, which allows its use also in codes that do not provide full isogeometric geometry-handling capabilities. The raw geometry information for computation is enhanced with boundary topology information, which implies trimming and coupling of NURBS-based entities. This direct use of multi-patch trimmed CAD geometries follows the principle of embedding objects into a background parametrization; consequently, redefinition and meshing of the geometry are avoided. Several examples, from illustrative cases to industrial problems, demonstrate the application of the proposed approach and explain the proposed exchange formats in detail.
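The abstract describes a database record that carries both geometrical quantities and the numerical information needed for analysis. As a purely illustrative sketch (the field names below are hypothetical and are not the actual exchange schema of the paper), such a record for a single trimmed NURBS surface patch might look like:

```python
# Hypothetical record sketch, NOT the paper's actual IBRA exchange format:
# one trimmed NURBS patch carrying geometry, topology, and analysis data.
patch = {
    "id": 1,
    "geometry": {
        "degrees": [2, 2],                      # polynomial degrees per direction
        "knot_vectors": [[0, 0, 0, 1, 1, 1],
                         [0, 0, 0, 1, 1, 1]],
        "control_points": [...],                # [x, y, z, weight] per point
    },
    "topology": {
        "trimming_loops": [...],                # curves in the patch parameter space
        "coupling_edges": [...],                # references to neighbouring patches
    },
    "analysis": {
        "integration_points": [...],            # location, weight, basis values
        "refinement": {"level": 0},
    },
}
```

Storing precomputed integration-point data alongside the geometry is what would let a solver without isogeometric geometry handling consume such a record directly.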

Advanced modeling and simulation in engineering sciences

Vol. 5, num. 1, p. 1-40

DOI: 10.1186/s40323-018-0113-8

Date of publication: 2018-12

Abstract:

The paper presents a robust algorithm that implicitly describes and tracks immersed geometries within a background mesh. The background mesh is assumed to be unstructured and discretized by tetrahedra. The contained geometry is assumed to be given as a triangulated surface. Within the background mesh, the immersed geometry is described implicitly using a discontinuous distance function based on a level-set approach. This distance function can handle both “double-sided” geometries, such as membrane or shell structures, and “single-sided” objects for which an enclosed volume is unambiguously defined. In the second case, the discontinuous distance function is complemented by a continuous signed distance function, with ray casting applied to identify the closed volume regions. Furthermore, adaptive mesh refinement is employed to provide the necessary resolution of the background mesh. The proposed algorithm can handle arbitrarily complicated geometries, possibly containing modeling errors (i.e., gaps, overlaps, or a non-unique orientation of surface normals). Another important advantage of the algorithm is the embarrassingly parallel nature of its operations, which allows for a straightforward parallelization using MPI. All developments were implemented within the open-source framework “KratosMultiphysics” and are available under the BSD license. The capabilities of the implementation are demonstrated with various application examples involving practice-oriented geometries. The results show that the algorithm is able to describe highly complicated geometries within a background mesh, while the approximation quality can be controlled directly by mesh refinement.
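The sign determination by ray casting can be illustrated in a few lines: counting ray-surface crossings decides inside versus outside (an odd count means inside), which fixes the sign of the distance function. The sketch below assumes a watertight triangulated surface and NumPy, approximates the unsigned distance crudely by the nearest surface vertex, and is only a minimal illustration, not the KratosMultiphysics implementation (which also tolerates modeling errors and runs in parallel):

```python
import numpy as np

def ray_hits_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    # Moller-Trumbore ray/triangle intersection; counts only hits with t > 0.
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:          # ray parallel to triangle plane
        return False
    inv = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return False
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return False
    return np.dot(e2, q) * inv > eps

def signed_distance(point, triangles):
    # Crude unsigned distance: nearest surface vertex (illustration only).
    # Sign from ray-casting parity: odd number of crossings => inside.
    d = min(np.linalg.norm(point - v) for tri in triangles for v in tri)
    hits = sum(ray_hits_triangle(point, np.array([1.0, 0.0, 0.0]), *tri)
               for tri in triangles)
    return -d if hits % 2 == 1 else d

# Unit cube as a triangulated surface (vertex index = x*4 + y*2 + z).
V = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
faces = [(0, 1, 3, 2), (4, 6, 7, 5), (0, 4, 5, 1),
         (2, 3, 7, 6), (0, 2, 6, 4), (1, 5, 7, 3)]
triangles = []
for a, b, c, d in faces:                 # split each quad face in two
    triangles.append((V[a], V[b], V[c]))
    triangles.append((V[a], V[c], V[d]))

d_inside = signed_distance(np.array([0.3, 0.4, 0.5]), triangles)
d_outside = signed_distance(np.array([2.0, 0.5, 0.5]), triangles)
```

A point inside the cube yields a negative value, a point outside a positive one; degenerate rays through edges, which a production code must treat carefully, are simply avoided here.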

Advanced modeling and simulation in engineering sciences

Vol. 5, num. 1, p. 1-20

DOI: 10.1186/s40323-018-0122-7

Date of publication: 2018-12

Abstract:

In this paper we present a collection of techniques used to formulate a projection-based reduced-order model (ROM) for the zero-Mach-limit thermally coupled Navier–Stokes equations. The formulation derives from a standard proper orthogonal decomposition (POD) model reduction and includes modifications to mitigate the drawbacks caused by the inherent non-linearity of the Navier–Stokes equations: a hyper-ROM technique based on mesh coarsening; an implicit ROM subscales formulation based on a variational multi-scale (VMS) framework; and a Petrov–Galerkin projection, necessary in the case of non-symmetric terms. At the end of the article, we test the proposed ROM formulation using 2D and 3D versions of the same example: a differentially heated cavity.
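The POD projection at the core of such a ROM can be sketched on a toy linear problem: collect snapshots of a full-order solve, extract a truncated basis by SVD, and project the operator onto it. The operator and dimensions below are invented for illustration, and the paper's additional ingredients (hyper-ROM, VMS subscales, Petrov–Galerkin projection) are omitted:

```python
import numpy as np

def pod_basis(snapshots, energy=0.9999):
    # snapshots: (n_dof, n_snap). Keep the smallest number of left singular
    # vectors capturing the requested fraction of snapshot "energy".
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1
    return U[:, :r]

# Full-order model: a simple linear diffusion-like system du/dt = A u.
n = 200
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
u0 = np.exp(-0.5 * ((np.arange(n) - n / 2) / 10.0) ** 2)

def integrate(A, u0, dt=0.01, steps=300):
    u, snaps = u0.copy(), []
    for _ in range(steps):
        u = u + dt * (A @ u)        # explicit Euler time stepping
        snaps.append(u.copy())
    return np.array(snaps).T

S = integrate(A, u0)                # snapshot matrix, one column per step
Phi = pod_basis(S)
A_r = Phi.T @ A @ Phi               # Galerkin-projected reduced operator
q = Phi.T @ u0                      # reduced initial condition
for _ in range(300):
    q = q + 0.01 * (A_r @ q)        # same scheme in the reduced space
u_rom = Phi @ q                     # lift back to the full space
err = np.linalg.norm(u_rom - S[:, -1]) / np.linalg.norm(S[:, -1])
```

For a genuinely non-linear problem this plain Galerkin ROM is precisely where the drawbacks named in the abstract appear, since evaluating the non-linear term still scales with the full dimension; that is what hyper-reduction addresses.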

Advanced modeling and simulation in engineering sciences

Vol. 3, num. 1, p. 1-22

DOI: 10.1186/s40323-016-0078-4

Date of publication: 2016-12

Abstract:

This work deals with the computational design of structural materials by resorting to computational homogenization and topological optimization techniques. The goal is to minimize the structural (macro-scale) compliance by appropriately designing the material distribution (microstructure) at a lower scale (micro-scale), which, in turn, rules the mechanical properties of the material. The specific features of the proposed approach are: (1) the cost function to be optimized (structural stiffness) is defined at the macro-scale, whereas the design variables defining the micro-structural topology lie on the lower scale; therefore a coupled, two-scale (macro/micro) optimization problem is solved, unlike classical single-scale topological optimization problems. (2) To overcome the exorbitant computational cost stemming from the multiplicative character of the aforementioned multiscale approach, a specific strategy is used, based on consulting a discrete material catalog of micro-scale optimized topologies (Computational Vademecum). The Computational Vademecum is computed in an offline process, performed only once for every constitutive material, and can subsequently be consulted as many times as desired in the online design process. This results in a large reduction of the computational cost, which makes the proposed methodology affordable for multiscale computational material design. Some representative examples assess the performance of the proposed approach.
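The offline/online split can be illustrated with a deliberately simplified sketch: a stand-in "micro-scale" response is tabulated once over a design parameter, and the online design loop then consults the table instead of re-solving the micro-scale problem. The response function and its parameterization below are invented for illustration and are not the paper's homogenization:

```python
import numpy as np

def effective_stiffness(theta):
    # Stand-in for an expensive micro-scale homogenization solve: here
    # just a smooth invented function of one design parameter theta.
    return 1.0 + 0.5 * np.cos(theta)

# Offline stage: build the catalog once, sampling the design space.
thetas = np.linspace(0.0, np.pi, 64)
vademecum = np.array([effective_stiffness(t) for t in thetas])

def lookup(theta):
    # Online stage: consult the precomputed catalog (nearest sample,
    # for simplicity) instead of repeating the micro-scale solve.
    i = int(np.argmin(np.abs(thetas - theta)))
    return vademecum[i]

k0, k_pi = lookup(0.0), lookup(np.pi)
```

In the actual method each catalog entry is itself an optimized micro-topology, so the offline cost is substantial but amortized over arbitrarily many online consultations.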

Advanced modeling and simulation in engineering sciences

Vol. 2, num. 1, p. 1-19

DOI: 10.1186/s40323-015-0048-2

Date of publication: 2015-12

Abstract:

Friction Stir Welding (FSW) is a relatively recent welding process (patented in 1991). FSW is a solid-state joining process during which the materials to be joined are not melted. During the FSW process, the behaviour of the material lies at the interface between solid mechanics and fluid mechanics. In this paper, a 3D numerical model of the FSW process with a non-cylindrical tool based on a solid formulation is compared to one based on a fluid formulation. Both models use advanced numerical techniques such as the Arbitrary Lagrangian Eulerian (ALE) formulation, remeshing, and the Orthogonal Sub-Grid Scale (OSS) method. It is shown that the two formulations deliver essentially the same results.

Advanced modeling and simulation in engineering sciences

Vol. 2, num. 1, p. 1-30

DOI: 10.1186/s40323-015-0050-8

Date of publication: 2015-11-26

Abstract:

Model order reduction (MOR) is one of the most appealing choices for real-time simulation of non-linear solids. In this work a method is presented in which real-time performance is achieved by means of the off-line solution of a (high-dimensional) parametric problem that provides a sort of response surface or computational vademecum. This solution is then evaluated in real time at feedback rates compatible with, for instance, haptic devices (i.e., more than 1 kHz). The high-dimensional problem can be solved without the limitations imposed by the curse of dimensionality by employing Proper Generalized Decomposition (PGD) methods. Essentially, PGD assumes a separated representation for the essential field of the problem. Here, an error estimator is proposed for this type of solution that takes into account the non-linear character of the studied problems. This error estimator allows computing the number of modes needed to approximate the solution within a prescribed error tolerance in a given quantity of interest.
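The separated representation that PGD assumes can be illustrated with a greedy rank-one enrichment in two variables: each new mode is an alternating fixed point on the current residual. This is a generic sketch of the separation idea, not the paper's PGD solver or its error estimator:

```python
import numpy as np

def separated_approx(F, n_modes=3, n_iter=30):
    """Greedy separated approximation F(x, y) ~= sum_k X_k(x) * Y_k(y).

    Each mode is computed by an alternating fixed point on the current
    residual, then subtracted from it (the enrichment step)."""
    rng = np.random.default_rng(0)
    R = F.copy()
    modes = []
    for _ in range(n_modes):
        x = rng.standard_normal(F.shape[0])   # random start for the mode
        for _ in range(n_iter):
            y = R.T @ x / (x @ x)             # optimal Y for fixed X
            x = R @ y / (y @ y)               # optimal X for fixed Y
        modes.append((x, y))
        R = R - np.outer(x, y)                # enrich: remove the new mode
    return modes, R

# A rank-2 field sampled on a grid: exactly separable with two terms.
t = np.linspace(0.0, 1.0, 50)
F = np.outer(np.sin(np.pi * t), np.cos(np.pi * t)) + 0.1 * np.outer(t, t**2)
modes, R = separated_approx(F, n_modes=3)
rel_residual = np.linalg.norm(R) / np.linalg.norm(F)
```

The number of modes kept is exactly what the abstract's error estimator is meant to control: enrichment stops once the quantity of interest is resolved to the prescribed tolerance.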

Advanced modeling and simulation in engineering sciences

Vol. 2, num. 1, p. 1-14

DOI: 10.1186/s40323-015-0052-6

Date of publication: 2015-11-25

Abstract:

The Proper Generalized Decomposition (PGD) requires separability of the input data (e.g. physical properties, source term, boundary conditions, initial state). In many cases the input data is not expressed in a separated form and has to be replaced by some separable approximation. These approximations constitute a new error source that, in some cases, may dominate the standard ones (discretization, truncation...) and control the final accuracy of the PGD solution. In this work the relation between errors in the separated input data and the errors induced in the PGD solution is discussed. Error estimators proposed for homogenized problems and oscillation terms are adapted to assess the behaviour of the PGD errors resulting from approximated input data. The PGD is stable with respect to errors in the separated data, with no critical amplification of the perturbations. Interestingly, we identified a high sensitivity of the resulting accuracy to the selection of the sampling grid used to compute the separated data. The separation has to be performed on the basis of values sampled at integration points: sampling at the nodes defining the functional interpolation results in an important loss of accuracy. For the case of a Poisson problem separated in the spatial coordinates (a complex diffusivity function requires a separable approximation), the final PGD error is linear in the truncation error of the separated data. This relation is used to estimate the number of terms required in the separated data, which has to be in good agreement with the truncation error accepted in the PGD truncation (the tolerance for the stopping criterion in the enrichment procedure). A sensible choice of the prescribed accuracy of the PGD solution has to be kept within the limits set by the errors in the separated input data.