Fast statistical homogenization procedure (FSHP) for particle random composites using virtual element method

The mechanical behaviour of particle composite materials is of growing interest for engineering applications. A computational homogenization procedure, in conjunction with a statistical approach, has been successfully adopted for the definition of the representative volume element (RVE) size, which in random media is an unknown of the problem, and of the related equivalent elastic moduli. A drawback of such a statistical approach to homogenization is the high computational cost, which prevents performing series of parametric analyses. In this work, we propose a so-called fast statistical homogenization procedure (FSHP), developed within an integrated framework that automates all the steps of the procedure. Furthermore, within the FSHP we adopt the numerical framework of the virtual element method for the simulations, in order to reduce the computational burden. The computational strategies and the discretization adopted allow us to efficiently solve the series (hundreds) of simulations and to rapidly converge to the RVE size detection.


Introduction
Particle-based composites, used in many engineering applications or present in nature, exhibit a microstructure made of randomly distributed inclusions (particles) embedded in a continuous matrix; examples of such materials are polymer, ceramic and metal matrix composites, but also granular materials, concrete, masonry made of crushed stones randomly arranged in the mortar, and even porous rocks. A key aspect, recently investigated by many researchers, is the evaluation of appropriate mechanical properties to be adopted for the study of their behaviour [11,23,25,30,31], also with the purpose of designing innovative smart materials [13,22].
Homogenization procedures have been widely adopted for the definition at the macroscale of material properties emerging from an internal periodic microstructure [1,8,9,21], and have provided significant results also in the framework of non-classical continuum theories [7,14,33,35,36,40].
In the case of materials with random microstructure, the lack of periodicity in the particle disposition makes it difficult to perform the homogenization process, with particular reference to the possibility of identifying a representative volume element (RVE) and proper boundary conditions to apply at the RVE boundary. For many years, a variety of approaches aimed at defining and implementing the effective properties of non-uniform composite microstructures have been proposed in the literature [6,12,15,16,18-20,24,26,27,43], also with reference to non-classical continua [28,37,38]. Among these models, we focus our attention on the possibility of approaching the RVE using finite-size scaling of intermediate control volume elements, named statistical volume elements (SVEs), and proceeding to homogenization (e.g. Ostoja-Starzewski [24]).

(Corresponding author: M. Pingaro, marco.pingaro@uniroma1.it, Department of Structural and Geotechnical Engineering, Sapienza University of Rome, via Gramsci 53, Rome, Italy.)
Here a homogenization procedure, consistent with the Hill-Mandel condition [17], is adopted in conjunction with a statistical procedure, by which scale-dependent bounds on the classical moduli are obtained using Dirichlet and Neumann boundary conditions (BCs) for solving boundary value problems (BVPs). The statistically-based homogenization procedure is suitable to determine the effective elastic moduli of random particle composites. The procedure developed and implemented here, introduced in [38], is focused on the study of the scale-dependent effective response of a heterogeneous random material described as a two-phase composite. This procedure has provided significant results [37-39], with particular reference to the debated problem of convergence in the presence of materials with very low (or very high) contrast, defined as the ratio between the elastic moduli of inclusions and matrix [24,26,32]. The limits of the procedure are essentially ascribable to the huge number of simulations needed to define the RVE size, which compromises the applicability of the method [28].
In this paper, an improvement of the above described procedure is proposed, limited for the sake of simplicity to the classical continuum homogenization. This allows us to perform series of parametric analyses to compute the effective moduli of random composites. The improvement regards both the statistical homogenization procedure itself and the computational tools adopted.
On the one hand, all the steps of the procedure have been automated and integrated into a specifically developed in-house code. This allows performing, quickly and efficiently, a high number of parametric analyses, making the adoption of the procedure attractive for engineering applications. The automated procedure has been named 'fast statistical homogenization procedure' (FSHP).
On the other hand, an innovative computational method, the virtual element method (VEM) [3,4], has been adopted to improve the computational efficiency. The application of this approach to problems of structural mechanics is attracting increasing interest [2,10,41,42].
The VEM methodology has many computational advantages, such as a robust stiffness matrix (which can be computed exactly, to machine precision) and accuracy with respect to the number of degrees of freedom (DOFs). The admissible VEM elements are polygons of any shape with n nodes; in particular, it is possible to use elements with internal inclusions, concave elements, and aligned nodes (hanging nodes). Thanks to these characteristics, the adoption of the VEM entails a significant reduction of the computational burden: a single virtual element is used for each inclusion, while the matrix is meshed with finite elements according to a standard Delaunay triangulation. This aspect plays a relevant role in materials with very high (or very low) contrast, for which large windows are needed to determine the RVE. Using standard FEM codes, in fact, the number of DOFs becomes high and the computational cost significantly increases. The adoption of a single virtual element for each inclusion, instead, strongly reduces the DOFs.
The results obtained by adopting the FSHP, inclusive of a combined VEM-FEM methodology, are here compared with the results previously obtained by some of the authors [28,38] using a standard commercial FEM code. Several simulations are then performed by modifying the material contrast: the size of the RVE and the effective moduli of various kinds of two-phase random composites-either with inclusions stiffer or softer than the matrix-are determined.
The outline of the paper is as follows. In Sect. 2 the homogenization approach adopted and the FSHP developed are described in detail. In Sect. 2.1 the constitutive model is defined, while in Sect. 2.2 the attention is focused on both the automation of the procedure and the mixed VEM-FEM model adopted. In Sect. 3 the basic assumptions of the VEM and the weak formulation of the elastic problem are reported. Section 4 provides the results of parametric analyses on materials with stiff and soft inclusions, with increasing or decreasing material contrast. Moreover, in order to evaluate the improvement in terms of computational costs, a comparison is proposed between analyses performed adopting virtual elements for the inclusions and finite elements for the matrix, versus finite elements for both matrix and inclusions. In Sect. 5 some final remarks are presented, highlighting the advantages expected from parametric studies performed by varying the material contrast, representative of series of particle composite materials.

Constitutive model
The homogenization procedure for defining the constitutive response of a non-periodic heterogeneous material requires the definition of the size of a representative volume element (RVE) larger than the microscale characteristic length, corresponding to the diameter of the inclusions, d. A scale parameter, δ = L/d, L being the side of a square test window, is introduced. According to the approach presented in Trovalusci et al. [38], as well as in [12,25], the presented procedure requires the statistical generation of a number of realizations of the possible microstructure, called statistical volume elements (SVEs), sampled in a Monte Carlo sense, which allows us to determine series of scale-dependent upper and lower bounds for the overall elastic moduli and to approach the RVE size, δ_RVE, using a statistical criterion to stop the procedure.
At both the macroscopic and the microscopic scale a classical two-dimensional continuum is adopted. The analysis is developed within a kinematically linearised framework. The governing equations are formally the same as the ones defined at the microscopic level, except for the constitutive law, which is not defined 'a priori' at the macroscopic level but directly descends from the lower level as a result of the homogenization procedure. In the following, lower case letters always refer to the microscale, while upper case letters refer to the macroscale. We assume that at the lower level each material phase is characterized by linear elastic isotropic behaviour, with the stress-strain relations written as:

\sigma_{ij} = \lambda \, \varepsilon_{kk} \, \delta_{ij} + 2 \mu \, \varepsilon_{ij} , \quad (1)

where \varepsilon_{ij}, \sigma_{ij} (i, j, k = 1, 2) are the components of the microstrain and microstress tensors, \lambda and \mu are the Lamé constants, and \delta_{ij} is the Kronecker symbol. At the macroscopic level, the general anisotropic 2D stress-strain relations read:

\Sigma_{ij} = C_{ijhk} \, E_{hk} , \quad (2)

where E_{ij}, \Sigma_{ij} (i, j = 1, 2) are the components of the macrostrain and macrostress tensors and C_{ijhk} are the equivalent moduli, components of the macroscopic elastic tensor. They are obtained via a homogenization procedure based on Hill's macrohomogeneity condition [17]:

\Sigma_{ij} E_{ij} = \frac{1}{A_\delta} \int_{B_\delta} \sigma_{ij} \, \varepsilon_{ij} \, dA , \quad (3)

which states the equivalence between the stress power density in the macroscopic model and the mean power evaluated at the microscopic level within a domain (window), B_\delta, occupying a region of area A_\delta. The set of Dirichlet and Neumann boundary conditions (BCs) to apply at the microscale, on the boundary \partial B_\delta, for the solution of the BVPs, directly derives from the fulfilment of the macrohomogeneity condition (3).
The Dirichlet BCs are:

u_i = E_{ij} \, x_j \quad \text{on } \partial B_\delta , \quad (4)

u_i being the components of the displacement vector and x_j (i, j = 1, 2) the j-th coordinate of the generic point on the boundary, \partial B_\delta, with respect to a reference system with origin in the geometric centre of the window B_\delta; while the Neumann BCs are:

t_i = \Sigma_{ij} \, n_j \quad \text{on } \partial B_\delta , \quad (5)

where t_i are the components of the traction vector and n_j the components of the outward normal to \partial B_\delta (i, j = 1, 2).
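As a minimal sketch of how the Dirichlet and Neumann conditions translate into nodal data on a discretized window boundary (the function and variable names are illustrative assumptions, not the paper's code; boundary coordinates are taken with origin in the window centre):

```python
def dirichlet_displacements(boundary_nodes, E_macro):
    """Prescribe u_i = E_ij x_j at each boundary node (Dirichlet BCs).

    boundary_nodes: list of (x1, x2) coordinates on the window boundary,
    measured from the geometric centre of the window.
    E_macro: 2x2 macroscopic strain tensor (list of lists).
    Returns the list of prescribed displacement vectors (u1, u2).
    """
    disp = []
    for (x1, x2) in boundary_nodes:
        u1 = E_macro[0][0] * x1 + E_macro[0][1] * x2
        u2 = E_macro[1][0] * x1 + E_macro[1][1] * x2
        disp.append((u1, u2))
    return disp


def neumann_tractions(boundary_normals, S_macro):
    """Prescribe t_i = Sigma_ij n_j at each boundary point (Neumann BCs).

    boundary_normals: list of outward unit normals (n1, n2).
    S_macro: 2x2 macroscopic stress tensor.
    """
    trac = []
    for (n1, n2) in boundary_normals:
        t1 = S_macro[0][0] * n1 + S_macro[0][1] * n2
        t2 = S_macro[1][0] * n1 + S_macro[1][1] * n2
        trac.append((t1, t2))
    return trac
```

For instance, a uniaxial macrostrain E_11 = 0.01 prescribes at the boundary point (2, 3) the displacement (0.02, 0).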

Fast statistical homogenization procedure
The procedure has been implemented in an integrated computational code which allows us to perform all the steps automatically. The estimation of the average of an effective modulus ⟨C_ijhk⟩_δ (i, j, h, k = 1, 2), for a given scale parameter δ, is reached by performing a series of 'moving window' simulations using a control window B_δ of size δ. This amounts to simulating various representations of the microstructure, obtained following a Poisson point process [38]. When the value of this modulus falls within an interval corresponding to a confidence level set at 95% over a standard normal distribution, the estimation is obtained. More precisely, fixing a tolerance value Tol, depending on the dispersion of the material data, the minimum number of simulations necessary to stop the procedure is obtained as reported below (Step 5). In other words, we stop the simulations when the results in terms of average elastic moduli do not change within the selected tolerance interval.

(Fig. 1: schematic representation of the convergence trend for the upper and lower bounds of an elastic modulus.)
The simulations are repeated for increasing values of the window size, δ, and the convergence to the RVE size is finally achieved at a value, δ_RVE, for which N_min is less than a conveniently small value. The effective moduli are then estimated as the mean value between the two convergence values obtained by applying the Dirichlet and Neumann boundary conditions to the control window B_δRVE. Note that the Dirichlet and Neumann values respectively provide upper and lower bounds for the solution, as schematically shown in Fig. 1.
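The statistical stopping rule described above can be sketched as follows. This is a minimal illustration of a 95% confidence criterion under the normal approximation; the exact expression used in the paper is not reproduced in the text, so the formula for N_min below should be read as an assumption:

```python
import math
import statistics


def minimum_simulations(samples, tol):
    """Estimate the minimum number of Monte Carlo simulations N_min so that
    the 95% half-width of the confidence interval on the mean modulus,
    1.96 * sigma / sqrt(N), stays within tol * mean (normal approximation).

    samples: homogenized moduli obtained from the SVE realizations so far.
    tol: relative tolerance on the average modulus (the paper's Tol).
    """
    mean = statistics.mean(samples)
    sigma = statistics.stdev(samples)   # sample standard deviation
    cv = sigma / mean                   # coefficient of variation
    return math.ceil((1.96 * cv / tol) ** 2)
```

With this rule, a sample whose coefficient of variation is 10% and a tolerance of 5% requires 16 simulations; halving the tolerance quadruples the required number.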
As the homogenized elastic tensor has mainly been found to be isotropic, we focus our attention on the bulk modulus as a representative material parameter, which for the 2D material writes:

K = \frac{1}{2} \left( C_{1111} + C_{1122} \right) .

The mesh corresponding to each realization of the procedure has been optimized using a mixed VEM/FEM discretization: a single virtual element for each inclusion and triangular finite elements for the matrix, generated using a code for random meshes of Delaunay type (Triangle [34]), perfectly linked with the implemented software. The Delaunay triangulation has been properly graded: fine near the inclusions and coarse away from them; this allowed us to refine the so-called hard-core regions, where the stress gradients are high (Fig. 2b). The adopted procedure also permits the automatic cutting of the inclusions at the windows' edges, accounting for the presence of inclusions that randomly cross the windows' edges, as required by the randomness of the medium. It is worth noting that neglecting the presence of inclusions that intersect the windows' edges, a less realistic hypothesis widely used in the literature [29], provides results significantly different from those obtained taking cut inclusions at the windows' edges into account [37]. An example of a realization of the microstructure and of the related generated mesh is shown in Fig. 2a, b, respectively.
All the steps of FSHP are fully integrated into a specifically developed in-house Python code. The entire computational stochastic homogenization procedure is structured as follows and summarized in the flow chart of Fig. 4.
Step 1 Set the nominal volume fraction of the medium (ρ ≤ 40%) as the ratio between the total area of the inclusions and the area of a test window. The particles are disk-shaped inclusions, with diameter d, randomly distributed on the basis of a Poisson point field process with a 'hard-core' distance, s, between the particles, fixed in order to avoid very narrow necks between inclusions. Define the dimensionless scale factor δ = L/d. Fix the mechanical parameters of each phase: the Young moduli of inclusions and matrix, E_i and E_m, and the Poisson coefficients, ν_i and ν_m. Set the minimum number of simulations for convergence, N_lim, and a tolerance parameter, Tol, based on data dispersion, as defined above.
Step 3 Simulate random dispositions of the disks' centres according to a uniform distribution, that is, the realizations (SVEs) of portions of the random medium. Each realization is assumed independent of any previous one (Fig. 3a).
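A hard-core rejection sampling of the disk centres can be sketched as follows. This is only an illustration with hypothetical names: it assumes s is the minimum surface-to-surface gap (so centres must be at least d + s apart), and it omits the paper's handling of disks that cross the window edges:

```python
import math
import random


def sample_centres(L, d, s, n_disks, rng, max_tries=10000):
    """Uniformly sample n_disks disk centres in an L x L window, rejecting
    any candidate closer than d + s (centre-to-centre) to an accepted one,
    so that a minimum gap s is kept between particle surfaces."""
    centres = []
    for _ in range(max_tries):
        if len(centres) == n_disks:
            break
        c = (rng.uniform(0.0, L), rng.uniform(0.0, L))
        if all(math.hypot(c[0] - p[0], c[1] - p[1]) >= d + s
               for p in centres):
            centres.append(c)
    return centres
```

Each call with a fresh random stream produces an independent realization, matching the requirement that every SVE be independent of the previous ones.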
Step 4 For each SVE, generate the related mesh (Fig. 2b) and solve both the Dirichlet (Eq. 4) and the Neumann (Eq. 5) BVPs, sketched in Tables 1 and 2, and compute the homogenized constitutive parameters.
Step 5 Compute the average bulk modulus, ⟨K_δ⟩, the relative standard deviation, σ(K_δ), and the coefficient of variation, C_V(K_δ) = σ(K_δ)/⟨K_δ⟩; then evaluate the minimum number of simulations,

N_{min} = \left( \frac{1.96 \, C_V(K_\delta)}{Tol} \right)^2 ,

which ensures that the confidence interval of the average homogenized constitutive parameter, set at 95% and evaluated over the standard normal distribution, is within the tolerance allowed (depending on the data dispersion of the medium). Repeat Steps 3-4 until N_min < N, N being the number of simulations performed.
Step 6 If the number of realizations necessary for ensuring the requirement at Step 5 is small enough, stop the procedure. Otherwise choose an increased value of δ and go to Step 3 (Fig. 3).
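Putting Steps 3-6 together, the outer FSHP loop can be sketched as follows. The SVE solver is stubbed out as a callable, the confidence formula matches the criterion of Step 5 under the normal approximation, and all names and caps are illustrative assumptions rather than the paper's implementation:

```python
import math
import statistics


def fshp(solve_sve, deltas, tol, n_lim, n_max=500):
    """Sketch of the FSHP driver: for each window size delta, accumulate
    homogenized moduli from independent SVE realizations until the number
    of performed simulations N exceeds N_min (95% confidence within tol);
    stop growing the window when N_min itself drops below n_lim (Step 6)."""
    for delta in deltas:
        samples = [solve_sve(delta) for _ in range(3)]  # warm-up draws
        while True:
            cv = statistics.stdev(samples) / statistics.mean(samples)
            n_min = math.ceil((1.96 * cv / tol) ** 2)
            if len(samples) >= n_min or len(samples) >= n_max:
                break
            samples.append(solve_sve(delta))       # Steps 3-4: one more SVE
        if n_min <= n_lim:                         # Step 6: RVE size reached
            return delta, statistics.mean(samples)
    return deltas[-1], statistics.mean(samples)    # fall back to largest window
```

With a solver whose scatter decreases as the window grows, the loop terminates at a finite δ, mirroring the convergence behaviour described above.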
The fulfilment of the requirement at Step 6 means that the values of the homogenized constitutive coefficients are distributed around their averages with a vanishing coefficient of variation, as schematically shown in Fig. 1. The adopted statistical criterion allows us to detect the RVE size [38] also when the Dirichlet and Neumann solutions do not tend to the same value. The computational strategies and the discretization adopted allow us to very efficiently solve the series (hundreds) of BVPs and to rapidly converge to the RVE solution. All the steps of the procedure are summarized in Fig. 4.

Virtual element discretization
In this section we briefly recall the weak formulation of the classical linear elastic problem and describe the related virtual element space [3,4], as well as the construction of the bilinear form resulting from the weak form. In the following we use vectorial notation, suited to the proposed formulation.

The linear elastic problem
Let Ω be a two-dimensional domain and let ∂Ω be its boundary. We consider the deformation problem of a linearly elastic body subjected to a volume force, represented by the vector f ∈ L²(Ω)² (the standard Lebesgue space), and given boundary conditions, under the hypothesis of small deformations. We use homogeneous Dirichlet boundary conditions and consider the Sobolev space, V, of the admissible displacement fields, represented by the vector v. The weak form of the linear elastostatic problem, provided by the virtual work principle, reads:

Find u ∈ V such that: a(u, v) = F(v) \quad \forall v \in V . \quad (6)

In (6) the continuous bilinear form a(·, ·) : V × V → R reads:

a(u, v) = \int_\Omega \varepsilon(v) : C \, \varepsilon(u) \, d\Omega , \quad (7)

and the linear functional F(·) : V → R reads:

F(v) = \int_\Omega f \cdot v \, d\Omega , \quad (8)

where

\varepsilon(u) := \frac{1}{2} \left( \nabla u + \nabla u^T \right) \quad (9)

is the linearised strain tensor and C is the elastic tensor, R being the set of the real numbers. By using the linear elastic constitutive law it is possible to rearrange Eq. (7) as follows:

a(u, v) = \int_\Omega \left[ 2 \mu \, \varepsilon(u) : \varepsilon(v) + \lambda \, (\mathrm{div}\, u)(\mathrm{div}\, v) \right] d\Omega , \quad (10)

where λ and μ are the Lamé coefficients and div is the divergence operator. The bilinear form a(·, ·) is symmetric, continuous and coercive on V, so that problem (6) is well posed.

Virtual element formulation
In this subsection we introduce the virtual element discretization used in the following simulations.
In order to approximate the solution of the problem (6), we consider a decomposition T_h of the domain into non-overlapping polygonal elements E (such as in Fig. 5). In the following, we denote by e the straight edges of the mesh T_h and, for all e ∈ ∂E, n_i (i = 1, 2) denotes the components of the unit normal vector to e pointing outward with respect to ∂E (Fig. 6a). The symbol n represents the number of edges of the polygon E. Let k be an integer ≥ 1, and let us denote by P_k(ω) the space of polynomials, living on the set ω ⊆ R², of degree less than or equal to k.
By the discretization introduced, it is possible to write the bilinear form (10) as:

a(u, v) = \sum_{E \in T_h} a^E(u, v) . \quad (11)

The discrete virtual element space, V_h, is:

V_h := \{ v_h \in V : v_h|_E \in V_{h|E} \ \forall E \in T_h \} , \quad (12)

where the local space, V_{h|E}, is defined as:

V_{h|E} := \{ v_h \in (H^1(E))^2 : v_h|_{\partial E} \in (C^0(\partial E))^2 , \ v_h|_e \in (P_k(e))^2 \ \forall e \in \partial E , \ A_{\lambda,\mu} v_h \in (P_{k-2}(E))^2 \} , \quad (13)

where A_{λ,μ}u is the elliptic operator associated with the bilinear form a(·,·), whose components involve the partial derivatives u_{(·),(·)} of the components u_{(·)} with respect to the (·)-coordinate. We can observe that, in contrast with the standard finite element approach, the local space V_{h|E} is not fully explicit and v_h has the following features:

- v_h is a polynomial of degree ≤ k on each edge e of E;
- v_h is globally continuous on ∂E.

The problem (6) restricted to the discrete space V_h becomes:

Find u_h ∈ V_h such that: a_h(u_h, v_h) = F_h(v_h) \quad \forall v_h \in V_h , \quad (14)

where a_h(·,·) is the discrete bilinear form approximating the continuous form a(·,·) and F_h(v_h) is the term approximating the virtual work of the body forces, which in this case is null. By the above definition of the local space (13) the following important observations can be made:

- the functions v_h ∈ V_{h|E} are explicitly known on ∂E;
- the functions v_h ∈ V_{h|E} are not explicitly known inside the element E.

The related degrees of freedom for the space V_{h|E} are:

- the values of v_h at the n vertices of the polygon E;
- the values of v_h at the k − 1 internal nodes y^e_j, j = 1, . . . , k − 1, of each edge e;
- the k(k − 1) scalar moments of the unknown field over the element (i.e. \frac{1}{|E|} \int_E v_h, where |E| is the area of the element), not associated with a specific location over E.
The dimension of the space V_{h|E} then is:

\dim V_{h|E} = 2 n k + k (k - 1) , \quad (15)

n being the number of vertices (and edges) of the polygon. It can be noticed that virtual elements of degree k = 1 are uniquely identified by the values at the vertices of the polygon E (Fig. 6b) and contain the linear polynomials, (P_1(E))^2 ⊂ V_{h|E}.

Projection operator and local stiffness matrix
As shown in Sect. 3.2, the space V_h is not explicitly known, so the bilinear forms a^E_h cannot be evaluated in the standard finite element way (Gauss integration). The VEM proceeds by introducing the local projection operator \Pi^\nabla_E : V_{h|E} \to (P_k(E))^2, defined by:

\int_E \nabla p : \nabla \left( v_h - \Pi^\nabla_E v_h \right) = 0 \quad \forall p \in (P_k(E))^2 . \quad (18)

The condition (18) defines \Pi^\nabla_E v_h only up to constants [5]. These are fixed by prescribing a projection operator onto constants, P_0 : V_{h|E} \to (P_0(E))^2, and by requiring:

P_0 \left( v_h - \Pi^\nabla_E v_h \right) = 0 . \quad (19)

Possible choices of P_0 v_h are the average of the vertex values, P_0 v_h = \frac{1}{n} \sum_{i=1}^{n} v_h(V_i), or the average over the boundary, P_0 v_h = \frac{1}{|\partial E|} \int_{\partial E} v_h . \quad (20)

It can be shown that \Pi^\nabla_E is computable from the degrees of freedom of the local space V_{h|E}.
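For the lowest-order case k = 1, the average gradient entering the projector can be computed from vertex values alone via the divergence theorem, ∫_E ∇v = ∮_∂E v n. A scalar sketch (one displacement component, counter-clockwise vertex ordering assumed; illustrative code, not the paper's implementation):

```python
def polygon_area(verts):
    """Signed area of a polygon (positive for counter-clockwise ordering)."""
    n = len(verts)
    a = 0.0
    for i in range(n):
        x1, y1 = verts[i]
        x2, y2 = verts[(i + 1) % n]
        a += x1 * y2 - x2 * y1
    return 0.5 * a


def projected_gradient(verts, vals):
    """Average gradient of a k=1 virtual function from its vertex values:
    (1/|E|) * sum over edges of |e| * (v_i + v_j)/2 * n_e,
    where n_e is the outward unit normal of edge e (the edge integral of a
    function that is linear on each edge is exactly the trapezoidal value)."""
    n = len(verts)
    area = polygon_area(verts)
    gx = gy = 0.0
    for i in range(n):
        x1, y1 = verts[i]
        x2, y2 = verts[(i + 1) % n]
        # outward normal times edge length for a CCW polygon: (y2-y1, x1-x2)
        nx, ny = (y2 - y1), (x1 - x2)
        vavg = 0.5 * (vals[i] + vals[(i + 1) % n])
        gx += vavg * nx
        gy += vavg * ny
    return gx / area, gy / area
```

For any affine field the recovered gradient is exact on every polygon, which is precisely the consistency (patch-test) property that makes the projection computable from the boundary degrees of freedom.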

The construction of the global form a_h(u_h, v_h) is based on the construction of the local forms:

a^E_h(u_h, v_h) := \int_E 2 \mu \, \varepsilon(\Pi^\nabla_E u_h) : \varepsilon(\Pi^\nabla_E v_h) + \int_E \lambda \, \mathrm{div}(\Pi^\nabla_E u_h) \, \mathrm{div}(\Pi^\nabla_E v_h) + S^E \left( u_h - \Pi^\nabla_E u_h , v_h - \Pi^\nabla_E v_h \right) , \quad (21)

where the bilinear form S^E is a suitable stabilization term, with the same scaling properties as the sum of the first and second terms, needed to preserve the coercivity of the system. The first two terms of the right-hand side of Eq. (21) represent the consistency term, which is explicitly computable through the definition of the projector \Pi^\nabla_E. For more details about the construction of the consistency and stabilization terms, see Beirão Da Veiga et al. [4] and Artioli et al. [2].
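To make the structure of Eq. (21) concrete, here is a scalar (Laplace-type) analogue of the k = 1 local VEM stiffness matrix, consistency plus a simple 'dofi-dofi' stabilization, rather than the full elasticity form of the paper. The vertex-average choice for P_0, the unit material coefficient, and the stabilization scaling are all illustrative assumptions:

```python
def local_vem_stiffness(verts):
    """k=1 scalar VEM local stiffness: K = |E| G^T G + tau (I-P)^T (I-P),
    where G maps vertex values to the projected (constant) gradient and
    P maps vertex values to the vertex values of the projected linear field.
    CCW vertex ordering is assumed."""
    n = len(verts)
    # polygon area (shoelace formula)
    area = 0.5 * sum(verts[i][0] * verts[(i + 1) % n][1]
                     - verts[(i + 1) % n][0] * verts[i][1] for i in range(n))
    # G: 2 x n matrix, column j = contribution of vertex j to the avg gradient
    G = [[0.0] * n for _ in range(2)]
    for i in range(n):
        j = (i + 1) % n
        nx = verts[j][1] - verts[i][1]      # outward normal * edge length
        ny = verts[i][0] - verts[j][0]
        for m, coef in ((i, 0.5), (j, 0.5)):
            G[0][m] += coef * nx / area
            G[1][m] += coef * ny / area
    # vertex average fixes the constant part (the P0 choice)
    xc = sum(p[0] for p in verts) / n
    yc = sum(p[1] for p in verts) / n
    # P[i][j]: value at vertex i of the projected linear field of basis fn j
    P = [[1.0 / n + G[0][j] * (verts[i][0] - xc) + G[1][j] * (verts[i][1] - yc)
          for j in range(n)] for i in range(n)]
    tau = area  # simple stabilization scaling (an assumption)
    K = [[area * (G[0][i] * G[0][j] + G[1][i] * G[1][j])
          + tau * sum((float(m == i) - P[m][i]) * (float(m == j) - P[m][j])
                      for m in range(n))
          for j in range(n)] for i in range(n)]
    return K
```

Two properties worth checking numerically: constants lie in the kernel (each row sums to zero), and for an affine field the stabilization vanishes, so the discrete energy equals the exact one, which is the VEM patch test.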
The main advantages of the adoption of the virtual element method are:
- very efficient computation of the stiffness matrix: no Gauss points are used and the matrix is computed to machine precision;
- the possibility of using polygonal elements, meshing each inclusion with a single element;
- robustness to distortion of the elements;
- perfect coupling between virtual elements and finite elements.
In this work we use the lowest-order virtual elements, with degree k = 1, in which the projections of the strain and the stress are constant over the elements. When the virtual element is a triangle of degree k = 1, the element is equivalent to a constant stress triangle (CST) finite element. In order to perform the homogenization of the particle composite as described in Sect. 2.1, which is the result of an averaging process, it is not essential to know exactly how stresses and strains vary over the inclusions. However, in order to check the accuracy of the estimated solution, the authors are working on the implementation of higher order virtual elements.

Numerical simulations
In this section, parametric analyses performed on two-dimensional patterns of random particle materials are provided, using the described computational fast statistical homogenization procedure (FSHP) in conjunction with the VEM. The analyses are performed by varying the material contrast between the constituents, defined as the ratio between the elastic moduli of inclusions, E_i, and matrix, E_m. Two material cases are studied: A, stiff inclusions in a softer matrix, and B, soft inclusions in a stiffer matrix, with increasing and decreasing contrast, respectively.

VEM/FEM versus FEM/FEM discretization
A preliminary comparison is proposed between analyses performed by adopting a virtual element discretization for the inclusions and a finite element discretization for the matrix, versus finite elements for both matrix and inclusions (as stated in Sect. 3, triangular virtual elements of degree 1 are equivalent to CST finite elements). Our purpose is to evaluate the improvement globally provided by FSHP in combination with VEM and, as a first step, to estimate the computational savings gained by the adoption of VEM. Two examples of meshes obtained for a test window of size δ = 15 are reported in Fig. 7. In the first case each inclusion is modelled with a single polygonal virtual element (Fig. 7a), while the matrix is modelled by a standard Delaunay triangular mesh of finite elements. In the second case, standard Delaunay triangular finite elements are used for both matrix and inclusions (Fig. 7b).
By modelling each inclusion with a single polygonal element, the number of degrees of freedom is reduced. For an assumed particle density of 40%, and considering each inclusion modelled by a polygonal element with 20 sides, the number of degrees of freedom (DOFs) is shown in Table 3 for different values of the window size, δ, and the relative number of inclusions. The improvement in terms of computational time obtained by adopting virtual elements for the inclusions is reported in Fig. 8, where the machine time in seconds needed for a single analysis is plotted for increasing window sizes δ.
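The origin of the DOF saving can be illustrated with back-of-the-envelope arithmetic. The per-inclusion node counts below are illustrative assumptions, not the values of Table 3:

```python
def dof_estimate(n_inclusions, n_matrix_nodes,
                 boundary_nodes_per_inclusion=20,
                 internal_nodes_per_triangulated_inclusion=30):
    """Rough 2D DOF counts (2 DOFs per node) for the two discretizations:
    VEM/FEM: one 20-sided polygon per inclusion, no internal nodes;
    FEM/FEM: each inclusion also triangulated, adding internal nodes."""
    vem_fem = 2 * (n_matrix_nodes
                   + n_inclusions * boundary_nodes_per_inclusion)
    fem_fem = vem_fem + 2 * n_inclusions * internal_nodes_per_triangulated_inclusion
    return vem_fem, fem_fem
```

Under these assumptions, a window with 100 inclusions and 5000 matrix nodes needs 14000 DOFs with the VEM/FEM mesh against 20000 with the FEM/FEM one; the gap widens with the window size because the number of inclusions grows with δ².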

Parametric numerical analyses
Homogenization of random particle composite materials requires a very high number of realizations to detect the convergence trend of the overall elastic moduli and the size of the RVE. In particular, for high or low material contrasts it may be necessary to perform many hundreds of simulations considering windows of large size [28]. As mentioned in Sect. 1, this severely compromises the applicability of the procedure. With the FSHP, instead, we can circumvent this obstacle and rapidly perform all the simulations needed for the convergence to the RVE size. In this paragraph the results obtained using FSHP with the in-house computer code formulated and implemented here, which includes the VEM formulation described in Sect. 3, are compared with the results previously obtained by some of the authors [28,38] using a commercial code (COMSOL Multiphysics®). In that case, a standard FEM discretization had been used, with unstructured meshes of quadratic Lagrangian triangular elements.
As an example, two material cases of type A and B, defined above, have been considered: materials A6 and B6, with contrast equal to 6 and 1/6 respectively, in order to refer to the results in [38]. In Fig. 9a, b, the values of the average homogenized bulk modulus, ⟨K⟩, obtained for increasing window size, δ, are reported. In the figures, blue dashed lines refer to the solution of the Neumann BVP, while red solid lines refer to the solution of the Dirichlet BVP. The results of the two approaches are in good agreement; however, it is possible to notice that the results obtained for material B6 are closer to those of the approach in [38] than the ones of material A6. The results obtained for material B6 (soft inclusions in a stiffer matrix) converge faster than the ones provided in [38], highlighting the reliability of the proposed FSHP with VEM. In material A6 (stiff inclusions in a softer matrix), instead, the stiffness is concentrated in the inclusions.
As the inclusions are modelled by a single virtual element of degree one, the stress and the strain are constant over the element, and this causes an underestimation of the solution of the BVP. This aspect is more evident in the results provided by the Neumann BCs. The authors believe that the adoption of higher order virtual elements for the inclusions may provide more accurate results also for materials of type A. In order to exploit the opportunities provided by FSHP, several parametric numerical analyses are performed, with the purpose of extending the results obtained in [28]. Both materials of type A and B with varying contrast are considered, as reported in Table 4.
The statistical procedure described in Sect. 2.1 is applied in order to perform analyses aimed at identifying the appropriate RVE size, δ_RVE, for the materials of type A and B and the corresponding elastic moduli defined through Eq. (3). The constitutive behaviour of the constituents, for both materials of type A and B, is assumed linear elastic and isotropic. The adoption of FSHP allowed us to analyse windows of any size and to perform a high number of simulations in a short time. The results of the parametric analyses performed are reported in the following figures, where blue solid lines always refer to the solution of the Neumann BVP, while red dashed lines refer to the solution of the Dirichlet BVP. Figure 10 provides the average value of the effective bulk modulus, ⟨K⟩, obtained for materials of type A and B (normalized to the convergence value, K_RVE), versus the number of simulations performed, for different values of the window size, δ, under Neumann or Dirichlet boundary conditions. With regard to the average bulk modulus, both the Dirichlet and the Neumann solutions converge to an average value ⟨K_δ⟩, as indicated in Sect. 2.1. The appropriate value of K_RVE is then chosen by taking into account the larger δ_RVE between the Dirichlet and Neumann solutions. The adopted statistical convergence criterion is based on the number of simulations, N, needed to obtain average values of the bulk modulus that fall within the defined tolerance interval (Sect. 2.1) for any window size, δ: Fig. 11 provides this number of simulations versus the window size for the materials of type A and B. As expected, lower values of N are needed as the window size increases, and larger values of N are needed as the material contrast increases. Figure 12 provides the coefficient of variation, C_V(K), versus the window size, δ, for materials of type A and B, under Neumann and Dirichlet boundary conditions. It is worth noting that the trends of the graphs of Figs. 11 and 12 are similar: this confirms that the criterion adopted, based on N, is suitable to capture the effective convergence.
Finally, the results in terms of RVE size, δ_RVE, are synthesized in Fig. 13a, where the values of δ = δ_RVE are plotted versus the different material contrasts considered, and in Fig. 13b, where the number of simulations, N_RVE, needed to define the RVE size is plotted, again versus the material contrast. It can be observed that, as the material contrast departs from the value 1 (the contrast of a homogeneous medium), δ_RVE, as well as the relative N_RVE needed, increases: larger values of δ are needed in order to identify the appropriate RVE. These graphs provide a synthetic response that can be used to estimate the RVE size for any kind of two-phase particle material.

Conclusions
In this work, a fast statistical homogenization procedure (FSHP) has been proposed for the homogenization of particle composites made of disk-shaped inclusions randomly distributed within a base matrix. As reported in earlier works [28,37,38,40], the homogenization of this typology of material requires a high number of simulations. In fact, as the representative volume element (RVE) is an unknown of the problem, hundreds of boundary value problems (BVPs) have to be solved, adopting both Dirichlet and Neumann BCs, for several realizations of the microstructure represented by 'moving test windows' of increasing size (statistical volume elements, SVEs). The estimation of the effective constitutive parameters is obtained when, after performing a limited number of simulations, the results in terms of average elastic moduli do not change within a selected tolerance value. The main limit of the mentioned statistical procedure consists in the heavy computational burden, which makes it inapplicable especially when the contrast between the elastic moduli of the material constituents is very far from the unit value (homogeneous case). The aims of this work are both the automation of the homogenization procedure previously described and the adoption of an innovative computational tool, the virtual element method (VEM), to further reduce the computational burden. The fully automated fast statistical homogenization procedure (FSHP) here developed provides improvements that allow us to perform several simulations at a low computational cost. The analyses were limited to the Cauchy continuum; the extension to other generalized continua is under study. Several material cases have been considered: one with stiff inclusions (referred to in the paper as material A) and the other with soft inclusions (referred to in the paper as material B).
Some final remarks may be drawn:
- All the steps of the procedure-from the simulation of each random realization of the microstructure (SVEs), to the solution of the boundary value problems for the SVEs, up to the evaluation of the final size of the RVE for the homogenization of the random medium-have been automated and integrated into a specifically developed in-house code;
- The adoption of virtual elements (VEM) for modelling the inclusions, in combination with finite elements (FEM) for the matrix, makes it possible both to reduce the computational costs and to analyse larger test windows;
- FSHP allows us to quickly and efficiently perform several parametric analyses, making the adoption of the procedure attractive for engineering applications;
- The statistical convergence criterion adopted, based on the control of the minimum number of simulations, allows us to determine the appropriate RVE size to be used for performing homogenization, even when the contrast varies significantly.
The next step of this research is to improve the accuracy of the results by adopting higher degrees for the VEM. Moreover, the proposed method will be extended to consider higher order continua, such as couple-stress and Cosserat continua, which have been shown to better describe the behaviour of microstructured materials, such as random composites with particles of different shape and orientation [40].