Introduction
It is well known that the mechanical properties of engineering materials depend on their composition, processing and resultant microstructural features. Macroscale mechanical behaviour is determined by a combination of physical processes occurring at lower length scales, involving, for example, line defects such as dislocations, or grain boundaries within the crystal. These mechanisms need to be fully understood in order to predict material behaviour. When designing a product, it is very important to understand how material performance at the component level changes during service. Linking properties to chemistry, crystallographic and morphological microstructure attributes, and determining how they change during manufacturing, is an important step towards designing materials for a specific purpose.
Over the years, many techniques have been developed to characterise such phenomena both experimentally and computationally. Nowadays, it is possible to interrogate behaviour at the atomic scale, for example using Atom Probe Tomography, or to use modelling tools to uncover insights into important mechanisms. Despite the success of these techniques in rationalising anomalous phenomena, they remain used mainly for research purposes and have not been fully adopted by industry. The main obstacle is their practicality: they are very expensive tools in terms of cost and time, and they require skilled practitioners who understand the physics and the modelling tools.
These technologies are also very important for the digitalisation of manufacturing processes and operational components. Digital twins aim to assure performance from the manufacturing stage through to service while at the same time increasing efficiency. One of the main challenges in digitalising these processes is assuring a sufficient level of accuracy between the virtual representation and the real system. This requires that the information about the system established through modelling and/or experimental characterisation is accurate enough for the predictions made to be trusted. Achieving this demands an understanding and quantification of the uncertainty of these models and of how it propagates to the digital twin.
This research activity focuses on breaking down the barrier between the advanced scientific approaches usually adopted by academia and the practical approaches used in industry for alloy design and material characterisation. This will be achieved by reviewing the state of the art in material modelling and characterisation methods from the nanoscale to the macroscale and identifying viable activities for alloy design that can be exploited in industrial practice.
Material Design
Material design comprises a series of engineering decision-making processes to select or develop materials that ensure the required performance, in terms of chemical and physical response, for a specific engineering application. The manufacturing process needs to be considered during this activity, since it also affects the final properties by changing the microstructural features. In order to respond to constraints such as cost and sustainability, material and process design need to be integrated into the product design.
Traditional Engineering Design
Engineering design is the decision-making process used by engineers to create functional products and processes, driven by basic sciences, mathematics and engineering science. The initial identification of one or more working principles is a very important stage, and it impacts the whole-life cost of the product. Traditionally, the choice of materials and manufacturing processes is not integrated at this stage; it is usually made later in the design process, when the specifications of the product are already defined and the design engineer is working on refining and detailing the product description. As a result, the relationship between product specification and material/process specification is very weak, leading to excess conservatism, undefined safety margins and higher costs.
Design for Material
The material can either be chosen through a screening process or designed from scratch. The first approach is called Material Selection: the classical method where the material is chosen to assure performance and minimise cost based on the designed geometry, application and operational environment, following the process illustrated by Ashby [1] (a minimal screening-and-ranking sketch is given after this list):
- reducing the number of feasible materials based on product geometry, loading and environmental conditions through preliminary screening and ranking techniques;
- eliminating materials that perform poorly for the overall product application, based on expert evaluation, handbook data and specialised software calculations;
- making the final choice based on local conditions.
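To make the screening-and-ranking step concrete, the minimal Python sketch below filters a candidate set by a stiffness constraint and ranks the survivors by the Ashby material index for a light, stiff beam, M = E^(1/2)/rho [1]. The candidate materials, property values and threshold are illustrative assumptions, not data from any handbook.

```python
import numpy as np

# Hypothetical candidate set: (name, Young's modulus E [GPa], density rho [kg/m^3])
candidates = [
    ("Al alloy",  70.0, 2700.0),
    ("Steel",    210.0, 7800.0),
    ("Ti alloy", 110.0, 4500.0),
    ("CFRP",     120.0, 1600.0),
]

# Screening: keep only materials above a minimum stiffness (illustrative constraint).
feasible = [c for c in candidates if c[1] >= 100.0]

# Ranking: Ashby index for a light, stiff beam, M = sqrt(E) / rho (higher is better).
ranked = sorted(feasible, key=lambda c: np.sqrt(c[1] * 1e9) / c[2], reverse=True)
for name, E, rho in ranked:
    print(f"{name:10s} M = {np.sqrt(E * 1e9) / rho:.2f}")
```

In a real selection exercise the same pattern is repeated with several constraints and indices, typically reading properties from a materials database rather than a hard-coded list.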
The selection approach requires that materials have been extensively characterised in terms of their physical-chemical properties and applications.
The other approach is Materials Design, where a new material is developed to fulfil the design requirements of the product. This is pursued through exploratory research into the chemical, physical and biological properties of the new material, with possible applications identified only later.
Identifying the material for a product or application is not an easy task. It requires a deep understanding of the connection between composition, structure and physical-chemical response. Historically, both approaches have relied on experimental techniques for material property characterisation. The complexity of the physical and chemical aspects behind material performance makes these experimental studies very expensive, tedious and non-trivial. In recent years, the development of Multi-Scale Modelling techniques in combination with Advanced Characterisation has allowed the cost and time of these processes to be cut.
Ashby chart for materials selection: Young's modulus vs density (material family chart). Chart created using CES EduPack 2019, ANSYS Granta © 2020 Granta Design.
Design for Manufacture
The manufacturing route from raw material to final product comprises different stages. Designing all the manufacturing steps together can be very complex, so the common approach is to select each process one at a time. A single manufacturing step is defined as a process that produces a change in the material features, such as the microstructure, and/or in the product/component shape. Each process step is designed considering its implications for chemical variation and morphology in the material. This method simplifies the design process, but it makes it hard to satisfy the demand for reduced environmental impact and for recyclable and reusable technologies.
Integrated Material-Process-Product Design
In recent years, the need for greener technologies has been changing the conventional design paradigm. The design process no longer focuses only on meeting performance targets and reducing cost; it also aims to extend product life, reduce material waste, energy consumption and CO2 emissions during manufacturing, and replace traditional materials with recyclable ones. This can be achieved through Integrated Material-Process-Product Design, where material and manufacturing process design are integrated with the design of the final product or technology. Material design and manufacturing process design thus become application-requirement-driven processes. The design goal is to find the most suitable combination of composition/structure that assures the chemical-thermal-physical properties required for the application considered. This is a very complex problem that depends on different chemical-physical mechanisms. For example, the microstructure depends on the material chemistry but is also changed by the manufacturing processes. At the same time, the material chemistry may limit the choice of the manufacturing process chain and the process parameters for each step. Manufacturing also affects the final geometry of the product, and the material choice can determine the final product weight. These logical relations are usually summarised by a Venn diagram showing the links between the three design sets. This methodology requires a deep understanding of the relationships that link chemistry and structure to the mechanical behaviour of the material.
Process-structure-properties-performance (PSPP) relation
The integrated approach is based on the linear chain connecting process, structure, properties and performance shown by Olson [2]. The flow from process to performance is the deductive, cause-and-effect or bottom-up direction, while the opposite flow is the inductive, goal-means or top-down direction. The bottom-up approach is applied by materials scientists to understand the links between composition/microstructure and properties. The top-down approach is instead usually used in engineering design, where the material characteristics are specified to serve a particular purpose.
Venn diagram of integrated Material-Process-Product design.
Schematic representation of the process-structure-properties-performance link. This relationship is investigated using bottom-up or top-down approaches.
REFERENCES
[1] Ashby, M. F. “Materials selection in conceptual design.” Materials Science and Technology 5, no. 6 (1989): 517-525.
[2] Olson, G. B. “Computational design of hierarchically structured materials.” Science 277, no. 5330 (1997): 1237-1242.
Advanced Characterisation
The constant push towards the limits of material performance and optimisation has driven a continual development of experimental technologies that facilitate the understanding, forecasting and control of material physics. Since material behaviour depends strictly on hierarchical physical phenomena, these tools help in understanding such processes by disclosing important aspects and by informing and validating mathematical/physical models.
Multi-Scale Modelling
Multi-scale modelling frameworks aim to bridge simulations across different length scales in order to study the lower-scale phenomena that occur inside materials and affect the final material properties. These approaches are very useful for understanding the composition-structure-performance link, helping material and process design, and ultimately making predictions of material behaviour at different life stages, from production to end of life.
Multi-Scale Modelling
Multi-scale modelling is a well-known problem-solving approach used in different fields to tackle very complex problems such as weather forecasting (Global Environmental Multiscale model [1]) or cancer evolution [2].
Materials are very complicated systems: their performance in the real world depends on physical and chemical phenomena that may be controlled by electronic interactions or by the dynamics of mesoscale entities such as dislocations. Solving this puzzle all at once is too complicated and not achievable with present technologies. The multi-scale modelling approach reduces the complexity using a ‘divide and conquer’ concept. Instead of modelling the entire system, the solution is obtained by connecting together smaller simulations of mechanisms that occur at the nanoscale, microscale and macroscale. Connecting each piece of information in a way that allows the chemical-physical behaviour of a full-size component to be understood and predicted is still very challenging. The ability to bring all this knowledge together is the core of multi-scale approaches, and it is usually referred to as scale bridging.
Scale Bridging
In multi-scale modelling, scale-bridging methods are models or algorithms that couple processes at various scales, enabling information to be retained from one scale to another. The methodologies range from sampling and homogenisation to constitutive models, to cite just a few examples. Which approach to use depends on the application; however, two strategies are commonly employed: hierarchical and concurrent.
In hierarchical scale bridging, the lower scale is separated from the higher scale and the necessary information is passed from one scale to the other. This approach can be applied when the lower-level behaviour is well known and it is understood how it affects the higher-level mechanisms. The quality of the final output relies on the reliability and robustness of all the simulations and on the correctness and completeness of the information passed. Examples include statistical approaches such as homogenisation, sampling and constitutive models (a minimal homogenisation sketch is given below).
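As a concrete illustration of hierarchical bridging, the following minimal Python sketch homogenises the stiffness of a two-phase microstructure into effective Voigt (iso-strain) and Reuss (iso-stress) bounds that could be passed up to a continuum simulation. The phase moduli and volume fractions are illustrative assumptions, not data for any specific material.

```python
import numpy as np

def voigt_modulus(E, f):
    """Upper (iso-strain) bound: volume-fraction-weighted arithmetic mean."""
    return np.sum(f * E)

def reuss_modulus(E, f):
    """Lower (iso-stress) bound: volume-fraction-weighted harmonic mean."""
    return 1.0 / np.sum(f / E)

# Hypothetical two-phase microstructure: stiff particles in a softer matrix.
E = np.array([70e9, 400e9])   # phase Young's moduli [Pa] (assumed values)
f = np.array([0.8, 0.2])      # phase volume fractions (must sum to 1)

print(f"Voigt bound: {voigt_modulus(E, f) / 1e9:.1f} GPa")
print(f"Reuss bound: {reuss_modulus(E, f) / 1e9:.1f} GPa")
```

The true effective modulus of the aggregate lies between the two bounds; either bound, or a more refined estimate, can then serve as the constitutive input to the higher-scale model.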
When there is no a priori knowledge of the relation between the lower-scale phenomena and the higher-scale behaviour, concurrent methods are used. In this case, the domain is divided into two or more subdomains that are solved simultaneously by different models. The higher-resolution model is applied to the region of interest, while the other subdomains are treated with lower accuracy. This improves efficiency in terms of computational and time resources. For example, some techniques couple atomistic simulations with continuum descriptions of the surrounding region [3].
Schematic representation of the classification of modelling methods based on the entity described.
Review of modelling techniques
Most of the modelling methods used in materials science overlap across length and time scales; therefore, using a scale-based system to classify material modelling methods may not be practical. However, material behaviour can be described through the behaviour of four fundamental entities: electrons, atoms, particles and continuum volumes. Modelling technologies can then be better classified by which of these four entities they intend to describe.
Electronic models are used to follow the evolution of atomic or molecular systems at the quantum level. They describe electronic interactions, where the positions, momenta and spins of the electrons in the system are represented by electronic wave functions and the Hamiltonian operator represents the total energy.
The Schrödinger equation is used in most electronic models, with an approximation of the Hamiltonian or of the wave function. Other approaches are based on the Kohn-Sham formulation, which solves the many-particle problem by mapping the interacting electrons onto non-interacting electrons in an effective potential. Alternatively, the many-body lattice problem can be solved by mapping it onto a single-site problem, known as an impurity model. This method uses the same ideas as classical mean-field theory but at the quantum level; compared with the full many-body problem, the impurity model is much easier to solve. Other techniques apply Green’s functions to solve the Schrödinger equation.
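For reference, the two central equations mentioned above can be written compactly as follows (standard notation; the Kohn-Sham form is given in atomic units):

```latex
% Time-independent Schroedinger equation: the Hamiltonian \hat{H} acting on
% the many-body wave function \Psi yields the total energy E.
\hat{H}\,\Psi(\mathbf{r}_1,\dots,\mathbf{r}_N) = E\,\Psi(\mathbf{r}_1,\dots,\mathbf{r}_N)

% Kohn-Sham equation: non-interacting electrons in an effective potential
% v_{\mathrm{eff}} that depends on the electron density n(\mathbf{r}).
\left[ -\tfrac{1}{2}\nabla^2 + v_{\mathrm{eff}}[n](\mathbf{r}) \right] \phi_i(\mathbf{r})
  = \varepsilon_i\,\phi_i(\mathbf{r})
```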
The outputs of these models are used to derive parameters for atomistic, mesoscopic or continuum simulations, such as chemical reaction coefficients, diffusion coefficients and activation energies. These models are also used to interrogate the thermodynamic stability and kinetics of atomic defects. They are useful for calibrating the interatomic potentials or partial atomic charge distributions used in atomistic simulations. Their applications also include the evaluation of electronic, conductive and optical properties.
Atomistic models use molecular or classical mechanics to describe the behaviour of atoms and molecules. The electronic structure is ignored; instead, force fields or interatomic potentials are implemented to describe how atoms interact. These potentials allow much larger systems (10^2-10^9 atoms) to be simulated compared with electronic-structure methods.
Two main groups can be identified: statistical and deterministic methods. The atomic system can be treated as a large statistical population whose state is determined by the applied atomic forces. Depending on which variables are kept constant, the atomic ensemble is referred to as microcanonical (constant number of particles, volume and total energy), canonical (constant number of particles, volume and temperature) or grand canonical (constant chemical potential, volume and temperature). The most common statistical methods are Monte Carlo (MC) based approaches such as Markov chain MC or kinetic Monte Carlo. Molecular Dynamics methods are instead deterministic: they solve the classical equations of motion to determine the atomic trajectories, as in the sketch below.
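The following minimal Python sketch illustrates the deterministic route: velocity-Verlet integration of Newton’s equations for a toy cluster of atoms interacting through a Lennard-Jones potential. The reduced units, initial positions and timestep are illustrative assumptions.

```python
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces in reduced units (no cutoff, for brevity)."""
    forces = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            r2 = np.dot(rij, rij)
            sr6 = (sigma**2 / r2) ** 3
            # F = 24*eps*(2*(sigma/r)^12 - (sigma/r)^6) / r^2 * r_vec
            f = 24.0 * eps * (2.0 * sr6**2 - sr6) / r2 * rij
            forces[i] += f
            forces[j] -= f
    return forces

# Toy system: three atoms near equilibrium spacing, unit mass, reduced units.
pos = np.array([[0.0, 0.0, 0.0],
                [1.1, 0.0, 0.0],
                [0.0, 1.2, 0.0]])
vel = np.zeros_like(pos)
dt = 0.001
f = lj_forces(pos)
for step in range(1000):
    # Velocity-Verlet: update positions, then velocities with the averaged force.
    pos += vel * dt + 0.5 * f * dt**2
    f_new = lj_forces(pos)
    vel += 0.5 * (f + f_new) * dt
    f = f_new
print("final positions:\n", pos)
```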
Examples of applications of atomistic models include the simulation of diffusion mechanisms, phase transformations (evaporation, melting or solidification), the evaluation of surface and interface energies, and the characterisation of fundamental dislocation properties such as core energies.
Mesoscale models describe the behaviour of nanoparticles, grains or molecules at length scales ranging from nanometres to millimetres. The details of individual atomic motions are not resolved; instead, they are averaged out or replaced by stochastic terms.
Some of these models focus on predicting microstructural evolution. The phase-field method is one example, in which a diffuse-interface model follows topological changes of the interfaces (a minimal sketch is given below). Precipitation models are based on theories of nucleation, growth and coarsening. The cellular automata method is also applied in this field; its results depend on the set of rules chosen for the nucleation and growth of grains/phases. Another class of mesoscale models simulates the mechanical behaviour of the material. For example, Discrete Dislocation Dynamics methods simulate the motion of dislocations and their interactions with defects under an external load.
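As an illustration of the diffuse-interface idea, a minimal one-dimensional Allen-Cahn time-stepping loop in Python follows; the mobility, gradient-energy coefficient, grid and double-well potential are generic illustrative choices, not a model of any specific alloy.

```python
import numpy as np

# Minimal 1D Allen-Cahn phase-field sketch (non-dimensional):
# d(phi)/dt = M * (kappa * d2(phi)/dx2 - dW/dphi),
# with double-well potential W(phi) = phi^2 * (1 - phi)^2.
N, dx, dt = 200, 1.0, 0.1
M, kappa = 1.0, 2.0                                  # mobility and gradient coefficient (assumed)
phi = np.where(np.arange(N) < N // 2, 1.0, 0.0)      # sharp initial interface

for step in range(2000):
    lap = (np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)) / dx**2  # periodic BCs
    dW = 2 * phi * (1 - phi) * (1 - 2 * phi)                      # dW/dphi
    phi += dt * M * (kappa * lap - dW)

# phi now exhibits a diffuse interface of finite width between the two phases.
```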
Continuum models treat the material as if it continuously fills a region of the simulation domain. The continuum volume is divided into finite volumes, cells or elements. These models can simulate processes from the nanoscale to the macroscale, so they are sometimes divided into micro-models and macro-models, with the micro-models used to generate inputs for the macro-models.
Five main types of model can be identified depending on the application. Solid mechanics, for example, interrogates the behaviour of solid matter under one or more external conditions such as thermal and/or mechanical loading or electrochemical reactions. Fluid mechanics concerns the motion of fluids, including liquids, gases and dense plasmas. Other methods focus on energy conservation (thermodynamic models), chemical reactions (reaction-chemistry models) or electrically charged particles (electromagnetism). All these models require the formulation of a set of equations describing the process under investigation together with a constitutive law describing the material behaviour. For example, fluid mechanics is based on solving the conservation laws of energy, momentum and mass to follow the variation of velocity, temperature, pressure and density, while linear elasticity is a constitutive law that describes the material strain response as directly proportional to the applied stress.
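As a minimal illustration of this governing-equation-plus-constitutive-law structure, the momentum balance of solid mechanics paired with Hooke’s law can be written in standard index notation:

```latex
% Balance of linear momentum: stress sigma, body force b, density rho,
% displacement u (double dot denotes the second time derivative).
\frac{\partial \sigma_{ij}}{\partial x_j} + \rho\, b_i = \rho\, \ddot{u}_i

% Linear-elastic constitutive law (Hooke's law): stress proportional to
% strain through the fourth-order stiffness tensor C.
\sigma_{ij} = C_{ijkl}\,\varepsilon_{kl},
\qquad
\varepsilon_{kl} = \tfrac{1}{2}\!\left(
  \frac{\partial u_k}{\partial x_l} + \frac{\partial u_l}{\partial x_k}
\right)
```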
REFERENCES
[1] Côté, J., Gravel, S., Méthot, A., Patoine, A., Roch, M., and Staniforth, A. “The operational CMC–MRB Global Environmental Multiscale (GEM) model. Part I: Design considerations and formulation.” Monthly Weather Review 126, no. 6 (1998): 1373-1395.
[2] Deisboeck, T. S., Wang, Z., Macklin, P., and Cristini, V. “Multiscale cancer modeling.” Annual Review of Biomedical Engineering 13 (2011): 127-155.
[3] Kohlhoff, S., Gumbsch, P., and Fischmeister, H. F. “Crack propagation in bcc crystals studied with a combined finite-element and atomistic model.” Philosophical Magazine A 64, no. 4 (1991): 851-878.
Uncertainty Quantification
The uncertainty affecting the data may not always be reducible; an example is material heterogeneity in terms of composition and microstructure. Nevertheless, the accuracy of the information gathered through multi-scale modelling and experimental characterisation affects the outcome of material design and of digital integration. Therefore, methodologies have been developed to quantify output errors and how they affect model predictions.
Introduction
All physical measurements, experimental data and modelling outputs are affected by errors. These uncertainties arise from the stochastic nature of the system examined, from the impossibility of observing the whole system, and from a lack of knowledge or understanding of the system physics. They can be reduced only to some extent; it is therefore important to quantify the uncertainties of any modelling and characterisation outputs in order to understand the accuracy of the predictions made, for example, in material design, process design or the construction of a digital twin.
Uncertainty quantification (UQ) is a scientific method for measuring the uncertainties affecting data produced either through modelling or through experimentation. Two types of uncertainty are usually identified: aleatoric and epistemic.
Aleatoric uncertainty is an intrinsic uncertainty due to the natural variability of a quantity, for example the variation of microstructural features (grain size, orientation) caused by process parameter fluctuations. This type of uncertainty cannot be reduced and is usually represented by a probability distribution. Conducting UQ on these variations is very important because they result in fluctuations in material performance. In this case, the UQ investigation is deductive, from the inputs to the outputs.
Epistemic uncertainty is due to a lack of knowledge of the system, model simplifications, numerical errors or model inputs that are difficult to measure or unknown. These uncertainties can be reduced by improving the modelling tools, the experimental techniques and our understanding of the physics. They are more difficult to measure because of their unknown nature. Here, UQ is performed inductively, from the outputs to the inputs, and it can help uncover new mechanisms or relationships.
Modelling For UQ
The modelling tools used in UQ can be implemented alongside the model, which is treated as a black box; these algorithms are called non-intrusive (a minimal example is sketched below). In intrusive approaches, the model itself is modified to take the uncertainties into account internally.
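The following minimal Python sketch shows the non-intrusive idea: aleatoric input uncertainty is propagated through a black-box model by plain Monte Carlo sampling. The model (cantilever-beam tip deflection), the input distributions and all parameter values are hypothetical illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(E, d):
    """Hypothetical black-box model: tip deflection of a circular cantilever
    beam under an end load, delta = F L^3 / (3 E I), with I = pi d^4 / 64."""
    F, L = 100.0, 1.0                 # fixed load [N] and length [m] (assumed)
    I = np.pi * d**4 / 64.0
    return F * L**3 / (3.0 * E * I)

# Assumed aleatoric scatter in the inputs: Young's modulus and beam diameter.
n = 100_000
E = rng.normal(210e9, 10e9, n)        # Young's modulus [Pa]
d = rng.normal(0.02, 0.0005, n)       # diameter [m]

# Non-intrusive Monte Carlo propagation: sample inputs, evaluate the model,
# then summarise the output distribution.
delta = model(E, d)
print(f"mean deflection : {delta.mean() * 1e3:.2f} mm")
print(f"std deviation   : {delta.std() * 1e3:.2f} mm")
print(f"95th percentile : {np.percentile(delta, 95) * 1e3:.2f} mm")
```

Because only input samples and output evaluations are exchanged, the same wrapper works unchanged for any simulation code treated as a black box.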