Numerical solution of two-energy-group neutron noise diffusion problems with fine spatial meshes

The paper presents the development of a strategy for the fine-mesh full-core computation of neutron noise in nuclear reactors. Reactor neutron noise refers to fluctuations of the neutron flux induced by stationary perturbations of the properties of the system. Its monitoring and analysis can provide useful insights into reactor operations. The model used in this work relies on the neutron diffusion approximation and requires the solution of both the criticality (eigenvalue) and neutron noise equations. A high-resolution spatial discretization of the equations is important for an accurate evaluation of the neutron noise because of the strong gradients that may arise from the perturbations. Considering the size of a nuclear reactor, the application of a fine mesh generates large systems of equations which can be challenging to solve. Therefore, numerical methods that can solve these kinds of problems efficiently with a reasonable computational effort are investigated. In particular, the power method accelerated with Chebyshev or JFNK-based techniques for the eigenvalue problem, and GMRES with the Symmetric Gauss-Seidel, ILU and SPAI preconditioners for the solution of the linear systems, are evaluated on the computation of neutron noise in the case of localized perturbations in 1-D and 2-D simplified reactor cores and in a 3-D realistic reactor core.


Introduction
Neutron noise in power nuclear reactors consists of fluctuations of the neutron flux. These fluctuations are small in comparison with the main trend of the neutron flux and are induced by perturbations such as vibrations of reactor components, oscillations of coolant temperature or density, etc. The perturbations which act as neutron noise sources can be problematic for the operation of nuclear reactors, and their effects need to be monitored and analyzed. For this purpose one should determine the transfer function of the reactor, which describes the system response to any possible perturbation. The reactor transfer function can also be used for diagnostic purposes, i.e. to identify the neutron noise source from its effect on the measured neutron flux (Williams (1974), Thie (1981), Pazsit and Demaziere (2010)). Consequently, the computation of the transfer function of nuclear reactors is of primary importance.
As for the simulations of the static flux in a nuclear reactor, computational capabilities for reactor neutron noise rely on the neutron transport equation. The solution of this equation is challenging for these kinds of problems because a nuclear reactor core is a very large and heterogeneous system. A typical, simplified approach for the full-core computation of neutron noise is therefore the diffusion approximation.
Most existing neutron diffusion-based simulators for neutron noise applications use finite-difference methods for the spatial discretization of the equations, direct methods for the solution procedure, and relatively coarse representations of the physical system. This approach can nevertheless be inadequate for neutron noise simulations. In fact neutron noise sources are often highly localized in space and induce in their vicinity strong gradients of the neutron flux. For an accurate prediction of this behavior, a very fine spatial discretization is therefore required.
The application of fine computational meshes leads to large systems of algebraic equations, whose solution with direct methods is questionable or even prohibitive in terms of computer memory and running time.
The objective of the work presented in this paper is to investigate a strategy that can be used for the solution of the diffusion-based neutron noise equations over fine meshes with a feasible computational cost. To this end, different numerical methods have been implemented and evaluated on three neutron noise problems based on a 1-D and a 2-D homogeneous reactor, and a 3-D realistic reactor.
The structure of the paper is as follows. Section 2 presents the governing equations of the problem. Section 3 discusses the challenges associated to the numerical solution of the neutron noise equations. Section 4 deals with the numerical methods studied in the current work. Section 5 presents the results for three test cases. In Section 6, some conclusions are drawn.
The spatial discretization is based on the approach of Demaziere (2011a). Accordingly, the nuclear reactor core is described with N discrete volumes (nodes) over which the governing equations are spatially averaged. The diffusion operator is discretized with a finite-difference scheme together with a suitable relationship between the surface-averaged neutron currents and the node-averaged scalar neutron fluxes. The discretization scheme is summarized in Appendix B.

Challenges of solving fine-mesh neutron noise problems
The simulation of neutron noise with the model introduced in Section 2 consists of two steps: first, the solution of Eq. (8) and then the solution of Eq. (10). After the spatial discretization, Eq. (8) can be summarized into the matrix form

A_crit Φ_crit = (1/k) F Φ_crit    (11)

or, after rearrangement,

[A_crit^-1 F] Φ_crit = k Φ_crit    (12)

where A_crit and F are real matrices. The normalization factor k and the vector Φ_crit represent the eigenvalues and the eigenvectors of the matrix [A_crit^-1 F], respectively. Only the largest k and its associated eigenvector are of interest because they correspond to the effective multiplication factor (k_eff) and to the static neutron flux (i.e. the only eigenvector having the same sign throughout the entire system). The standard algorithm for solving this eigenvalue problem is the power iteration or Power Method (PM) (Lewis and Miller (1984)), presented in Alg. (1).
The main advantage of PM is that it guarantees convergence to the largest eigenvalue and its associated eigenvector (the "dominant" or "fundamental" eigenpair) while remaining very simple. At the generic iteration l, the vector of unknowns Φ_crit^(l+1) is obtained from the solution of the linear system

A_crit Φ_crit^(l+1) = (1/k^(l)) F Φ_crit^(l)

where k^(l) and Φ_crit^(l) come from the previous iteration. Thus PM gives rise to a sequence of linear systems. Once the static neutron fluxes for the critical reactor configuration are known, Eq. (10) can be solved. The discretized form of the latter equation can be written as

A_noise δΦ = S_noise    (13)

where A_noise is the complex coefficient matrix derived from the LHS of Eq. (10) and the vector S_noise is the RHS of Eq. (10). The numerical efficiency of PM depends on two factors: its convergence speed and the computational effort required to solve the linear system at each iteration. The convergence speed is related to the spectrum of the iteration matrix A_crit^-1 F. More specifically, it is determined by the Dominance Ratio (DR) of the iteration matrix, i.e. the ratio between the second largest and the largest (dominant) eigenvalue (k_1/k_0); when DR ≈ 1, PM converges slowly. In nuclear reactor applications it often happens that DR is greater than 0.95. Under this condition, PM might be too slow, possibly making the analysis of full-core configurations discretized with fine meshes impossible.
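As a concrete illustration, the basic PM loop of Alg. (1) can be sketched as follows (a minimal sketch using SciPy; the function name, the LU-based inner solve and the simple norm-based eigenvalue update are illustrative choices, not the paper's implementation):

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

def power_method(A, F, tol=1e-10, max_iter=1000):
    """Power iteration for A phi = (1/k) F phi: each outer iteration
    solves one linear system with A (here through a sparse LU
    factorization computed once and reused)."""
    phi = np.ones(A.shape[0])          # flat-flux initial guess
    k = 1.0                            # initial eigenvalue guess
    solve = splu(csc_matrix(A)).solve  # factorize A once
    for _ in range(max_iter):
        src = F @ phi                          # fission source at iteration l
        phi_new = solve(src / k)               # A phi(l+1) = (1/k) F phi(l)
        k_new = k * np.linalg.norm(F @ phi_new) / np.linalg.norm(src)
        phi_new /= np.linalg.norm(phi_new)     # normalize the flux estimate
        if abs(k_new - k) < tol and np.linalg.norm(phi_new - phi) < tol:
            k, phi = k_new, phi_new
            break
        k, phi = k_new, phi_new
    return k, phi
```

In a production solver the inner LU solve would be replaced by the preconditioned iterative solvers discussed later, which is precisely where the sequence-of-linear-systems structure matters.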
For both the criticality and noise problems, the applied discretization algorithm generates sparse systems that, in the case of 3-D configurations with fine meshes, are also large. The solution of these linear systems with direct methods may require a burdensome computational effort for several reasons. First, the inversion or decomposition of the coefficient matrix generates denser matrices (the so-called fill-in phenomenon), imposing high storage requirements. Second, direct methods usually scale poorly with the size of the system in terms of operation counts (Benzi (2002)), increasing the computational time. Besides, the construction of the large coefficient matrices may itself be time-consuming and thus needs special attention.
To summarize, this work aims to solve fine-mesh 3-D full-core neutron noise diffusion problems with a reasonable computational cost. This induces the following challenges:
• Fast construction of the coefficient matrices.
• Acceleration of PM for the eigenvalue calculation.
• Efficient solution of the linear systems.
In this context, a reasonable computational cost is such that the simulations can be performed in no more than a few hours with no use of parallel computing and no more than 10 GB of RAM.

Strategy for solving fine-mesh neutron noise problems
In this section the strategy and methods used to solve numerically the overall neutron noise problem over a fine mesh are introduced. The coefficient matrices and their construction are first discussed. Two methods are considered in order to accelerate PM when applied to Eq. (11), namely the Chebyshev method and a method based on the Jacobian Free Newton Krylov technique. The solution of the linear systems within PM as well as of Eq. (13) for determining the neutron noise, are obtained from the application of GMRES. For the improvement of the performance of the scheme, three preconditioners are combined with GMRES, i.e. Symmetric Gauss-Seidel (SGS), ILU and SPAI.

Construction of the matrices
The matrices A_crit and A_noise that appear in Eqs (11 & 13) are large sparse banded matrices of dimension 2N × 2N, where N is the number of spatial nodes and 2 is the number of energy groups. For the matrix construction the so-called "coordinate format" is used. This format stores the diagonals in three vector-arrays of known length: one containing the values of the nonzero entries, one integer array containing their row indices and another integer array containing their column indices. When all diagonals have been computed, the matrix is assembled and stored in a compressed sparse matrix format. Following this strategy, one operates on vector-arrays of known length during the construction phase, avoiding any dynamic allocation of a compressed sparse matrix that would slow down the process. The matrix F consists strictly of two diagonals, so the storage of the coordinates of its elements is not required. The structure of the matrices is reported in Table (1).
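The construction strategy above can be sketched as follows (a SciPy-based illustration; the `assemble_banded` interface and the per-diagonal value functions are hypothetical, but the flow — fill coordinate arrays of known length per diagonal, then convert once to a compressed format — mirrors the description):

```python
import numpy as np
from scipy.sparse import coo_matrix

def assemble_banded(n, diagonals):
    """Assemble a sparse banded matrix in coordinate (COO) format.
    `diagonals` maps an offset (column index minus row index) to a
    function returning the entry values for the given row indices."""
    rows, cols, vals = [], [], []
    for offset, value in diagonals.items():
        i = np.arange(max(0, -offset), min(n, n - offset))  # valid rows
        rows.append(i)
        cols.append(i + offset)
        vals.append(value(i))
    rows = np.concatenate(rows)
    cols = np.concatenate(cols)
    vals = np.concatenate(vals)
    # Single conversion at the end: no dynamic growth of the CSR structure
    return coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsr()

# 1-D finite-difference Laplacian as a usage example
n = 5
A = assemble_banded(n, {
    0:  lambda i: 2.0 * np.ones_like(i, dtype=float),
    -1: lambda i: -1.0 * np.ones_like(i, dtype=float),
    1:  lambda i: -1.0 * np.ones_like(i, dtype=float),
})
```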

Solution of the large eigenvalue problem
Once the matrices are built, the next step is to solve Eq. (11). As mentioned in Section 3, PM is applied. The method is known to often converge slowly, so the implementation of an acceleration technique is important. In nuclear reactor criticality calculations, PM is often accelerated with the technique of Chebyshev polynomials. In this work the Chebyshev method is compared with an alternative, i.e. a nonlinear acceleration method based on the Jacobian-Free Newton-Krylov (JFNK) algorithm. The latter, being a generic Newton-based method, can achieve quadratic convergence rates. Exploring the performance of this option is of interest since JFNK-based algorithms have attractive features (see the discussion in Section 4.2.2) and their implementation for reactor physics problems is an active research topic. Various JFNK-based algorithms for reactor criticality analysis have been proposed. The version used here has been selected because of its easy adaptation to PM and the satisfactory performance reported in the literature.

Chebyshev acceleration of the PM solver
The utilization of Chebyshev polynomials is a traditional method for accelerating PM (Saad (2011)) and it was first proposed for reactor criticality problems by Ferguson and Derstine (1977). The method of Chebyshev acceleration updates the neutron flux estimate of a PM iteration as a linear combination of the solutions of some previous iterations. The coefficients of the linear combination are determined from the minimization of the estimation error with the Chebyshev polynomials. Various alternatives of the method are available; the one implemented in this work is described by Hebert (1985). After some initial free PM iterations, a sequence of Chebyshev acceleration cycles starts. Within each cycle, the neutron flux estimates are extrapolated from the previous estimates using the calculated coefficients. The method also requires DR estimates, which are updated at the end of each acceleration cycle. A simplified presentation of the algorithm is given in Alg. (2). For completeness, a detailed description of the Chebyshev acceleration algorithm used in this work is presented in Appendix C. Hereinafter, Chebyshev-accelerated PM is referred to as PM-Cheb.

Nonlinear acceleration of the PM solver
A second option is tested for the acceleration of PM. It is based on a JFNK approach and follows the work of Gill and Asmy (2009). The method allows PM to be easily encapsulated into a Newton scheme without the need for any modifications. In addition, Gill and Asmy (2009) showed that it can provide higher performance in multigroup neutron diffusion problems than other Newton-based approaches.
The basis of the method is that PM may be seen as a fixed-point iteration of the form

u^(l+1) = f(u^(l))    (14)

The vector of unknowns u contains the flux vector and the eigenvalue:

u = [Φ_crit, k]^T    (15)

While the ordinary PM updates the flux vector Φ_crit and the eigenvalue separately, in two steps, Eq. (14) considers both components as inseparable parts of the solution vector u. The fixed-point iteration f is denoted as

f(u) = [(1/k) A_crit^-1 F Φ_crit, k(u)]^T    (16)

where k(u) represents the updating procedure for k. A non-linear system is defined by setting

r(u) = f(u) − u = 0    (17)

where r is the vector of the nonlinear residuals. A linearized form of Eq. (17) is obtained using the Newton method:

J(u^(l)) δu^(l) = −r(u^(l))    (18)

where J(u) = ∂r(u)/∂u is the Jacobian matrix and δu^(l) = u^(l+1) − u^(l) is the Newton correction vector. The Newton iteration algorithm is shown in Alg. (3).
A JFNK method is used to avoid the high computational cost due to the construction of the Jacobian matrix J, and is summarized in Alg. (4). The matrix-vector products J v required in the Krylov method are approximated with a finite difference:

J(u) v ≈ [r(u + εv) − r(u)] / ε    (19)

where v is the Krylov vector and ε is a small perturbation parameter. Many options exist for the calculation of ε.
In this work the following formula (Knoll and Keyes (2004)) is used:

ε = (1 / ((2N+1) ||v||_2)) Σ_{i=1}^{2N+1} d |u_i| + d    (20)

where d is the square root of machine roundoff and (2N+1) is the dimension of the linear system. In the current scheme, an inexact JFNK variation is applied. The convergence of the linear system within each Newton iteration l is monitored with the criterion

||r(u^(l)) + J(u^(l)) δu^(l)||_2 ≤ γ ||r(u^(l))||_2    (21)

i.e. the linear residual norm is required to be reduced by the factor γ. The procedure is inexact because Eq. (18) is not solved tightly with a very small γ. Instead, the iteration starts with larger values of γ which are progressively reduced. This strategy aims to avoid the phenomenon of "oversolving", i.e. the negative effect on the convergence of the global solution caused by computing a very accurate Krylov solution while the Newton iterate is still far from convergence.
Discussion of this matter can be found in Shadid et al. (1997) and Tuminaro et al. (2002). In this work, γ is experimentally selected to be constant and equal to 10^-1. However, more sophisticated strategies that vary γ dynamically exist (Eisenstat and Walker (1996)). Gill and Asmy (2009) reported that PM-JFNK converges to an eigenpair other than the dominant one when an initial guess close to that eigenpair is provided. In order to avoid this issue, a few free PM iterations are performed before starting PM-JFNK.
As discussed above, the implementation of this acceleration scheme only requires a JFNK solver in which the non-linear function is evaluated through single PM iterations. Another advantage is that the preconditioner for the original PM can still be used within the evaluations of the non-linear function. For the solution of the linear system of Eq. (18), no preconditioning is included. The development of a preconditioner for this problem is not trivial and is a subject for future research: J is a function of u and changes with the Newton iterations; in addition, J is not explicitly constructed in a Jacobian-free approach, so it is not available. Hereinafter, PM accelerated with JFNK is referred to as PM-JFNK.
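The Jacobian-free product of Eq. (19) is straightforward to sketch (a minimal illustration; the residual function `r` is a user-supplied stand-in and the ε formula follows the Knoll-Keyes-style expression quoted above):

```python
import numpy as np

def jfnk_matvec(r, u, v, d=None):
    """Matrix-free approximation of the Jacobian-vector product
    J(u) v ~ [r(u + eps*v) - r(u)] / eps  (forward difference).
    The perturbation eps is built from the square root of machine
    roundoff d and the current iterate u."""
    if d is None:
        d = np.sqrt(np.finfo(float).eps)   # square root of machine roundoff
    n = u.size
    norm_v = np.linalg.norm(v)
    if norm_v == 0.0:
        return np.zeros_like(u)
    eps = d * (np.sum(np.abs(u)) / (n * norm_v)) + d
    return (r(u + eps * v) - r(u)) / eps
```

Only evaluations of the non-linear function r are needed, so the function can wrap a full PM iteration without the Jacobian ever being formed.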

Outer/inner iteration
The PM and PM-Cheb solvers consist of one level of outer/inner iterations. The term outer refers to the iterative process of updating Φ crit and k and the term inner refers to the iterations for the solution of the linear system.
PM-JFNK consists of two levels of outer/inner iterations. The first level consists of the Newton (outer) iterations needed for the convergence of the solution of the non-linear system, and the Krylov (inner) iterations for the solution of the linear system of Eq. (18) (see (Alg. (4)). The second level consists of outer iterations used for updating Φ crit and k within the non-linear function evaluations f , and inner ones for solving the linear systems within f .
Hereinafter, the terms outer/inner iterations are used for PM, PM-Cheb and the second level of PM-JFNK, and the terms Newton/Krylov correspond to the first level of PM-JFNK.

Convergence criteria
The following convergence criteria are employed for PM (Duderstadt and Hamilton (1976), Hebert (1986), Gill and Asmy (2009)), based on the differences between successive estimates of the eigenvalue and of the flux vector. In the case of slow convergence, the magnitude of the differences between successive iterations may be small even though the error of the solution is still large. For this reason, a residual-based criterion is also used:

||(1/k^(l)) F Φ_crit^(l) − A_crit Φ_crit^(l)||_2 ≤ ε_4 ||(1/k^(l)) F Φ_crit^(l)||_2    (26)

where the subscript 2 indicates the Euclidean norm. Eq. (26) checks the residual error obtained by substituting the current estimate of the solution into the matrix equation to be solved.

Solution of large linear systems
In the current framework, linear systems need to be solved at different stages. Linear systems arise at each iteration of PM because of Eq. (11). If the PM-JFNK scheme is applied, additional ones arise from Eq. (18). Finally, the solution of the linear system of Eq. (13) is required for the calculation of the neutron noise. These linear systems have sparse coefficient matrices, and their size may be large when a fine mesh is used for the spatial discretization of the equations. Iterative linear solvers are then the preferable choice, since their storage requirements and operation counts are less demanding in comparison to direct methods. On the other hand, they are often slow or may even fail to converge, so preconditioning is necessary. The iterative GMRES method is selected and discussed below, together with three preconditioners, namely SGS, ILU and SPAI.

GMRES method
For the solution of large, sparse and non-symmetric linear systems such as the ones involving the matrices A_crit and A_noise, the Krylov-class Generalized Minimal RESidual (GMRES) method (Saad and Schultz (1986)) is selected. The general idea behind GMRES is that a chain of orthonormal vectors is generated to construct a basis of the Krylov subspace

K_j = span{r_0^linear, A r_0^linear, ..., A^(j−1) r_0^linear}    (27)

where r_0^linear = b − A x_0 is the initial linear residual and x_0 is the initial guess of x for the linear system Ax = b. The solution of the linear system is constructed iteratively as a linear combination of the Krylov vectors (Eq. (27)) and can be written as

x_j = x_0 + Σ_{i=0}^{j−1} β_i A^i r_0^linear    (28)

where j is the Krylov iteration index and the scalars β_i are calculated to minimize the residual.
The fact that GMRES only requires the matrix-vector products A^i r_0^linear is advantageous when, for instance, the method is combined with the PM-JFNK algorithm described in Section 4.2.2: the estimation of these products can replace the construction of the Jacobian matrix. In addition, this property allows the implementation of the preconditioners in the form of matrix-vector products, a necessary feature when large linear systems are solved. The cost of one GMRES iteration grows with the number of iterations. Therefore, since this work targets large linear systems, GMRES is restarted after a given number, say n, of iterations, with x_n as the initial guess. For restarted GMRES the notation GMRES(n) is used.
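Restarted, preconditioned GMRES of this kind is available off the shelf; the sketch below uses SciPy's `gmres` with `restart=30` and an ILU factorization wrapped as a preconditioner matvec (the tridiagonal matrix is a small illustrative stand-in, not the paper's diffusion matrices):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, spilu, LinearOperator

n = 200
# Diagonally dominant tridiagonal stand-in for a discretized operator
A = diags([-1.0, 2.1, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete LU factorization exposed as a matvec-only preconditioner,
# matching GMRES's matrix-vector-product interface
ilu = spilu(A)
M = LinearOperator((n, n), matvec=ilu.solve)

# GMRES(30): restart every 30 Krylov iterations
x, info = gmres(A, b, M=M, restart=30, maxiter=1000)
```

`info == 0` signals convergence to the solver's tolerance; in the paper's setting the same pattern would be applied with A_crit or A_noise and the preconditioners discussed next.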

Preconditioning
The purpose of preconditioning (Benzi (2002)) is to improve the spectral properties of the coefficient matrix and thus the performance of the linear solver; a spectrum of eigenvalues clustered around unity often results in rapid convergence. If Ax = b is a generic linear system and P is a non-singular matrix that approximates A^-1, then the left-preconditioned form is

P A x = P b    (29)

Generally, preconditioners can be classified into physics-based and algebraic. The former are problem-dependent and thus very efficient; however, they may require a significant development effort. The latter may be less efficient, but at the same time they are general-purpose and easily applicable, requiring only information contained in A. This work focuses on algebraic preconditioners.

A reasonable initial choice is the SGS preconditioner P_SGS. The coefficient matrix A of the linear system is decomposed as A = L + D + U, where L is a strictly lower triangular matrix, U is a strictly upper triangular matrix and D is a diagonal matrix containing the main diagonal of A. From this splitting an incomplete factorization is derived: A ≈ (D + L) D^-1 (D + U). Based on this factorization, P_SGS is defined as

P_SGS = (D + U)^-1 D (D + L)^-1    (30)

The main advantage of P_SGS is that, having A, the operation f = P_SGS w can be replaced by two triangular linear systems and one matrix-vector multiplication, i.e. (D + L) z = w, c = D z and (D + U) f = c. Such linear systems are solved quickly by forward/backward substitution because of the triangular form of the coefficient matrices. Thus, the fact that GMRES works with matrix-vector products offers a great advantage, because P_SGS can be applied directly to vectors.

The second choice is the Incomplete LU factorization (ILU) preconditioner P_ILU. ILU is based on the approximation A ≈ LU, where L is a lower triangular and U is an upper triangular matrix. Then P_ILU is defined as

P_ILU = (LU)^-1 = U^-1 L^-1    (31)

Its implementation in GMRES has the same advantage that was pointed out for the case of SGS.
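The f = P_SGS w recipe above can be sketched with two sparse triangular solves and a diagonal scaling (a SciPy-based illustration; in a GMRES implementation this function would be wrapped as the preconditioner's matvec):

```python
import numpy as np
from scipy.sparse import tril, triu, csr_matrix
from scipy.sparse.linalg import spsolve_triangular

def sgs_apply(A, w):
    """Apply f = P_SGS w = (D+U)^-1 D (D+L)^-1 w through
    (D+L) z = w,  c = D z,  (D+U) f = c."""
    A = csr_matrix(A)
    d = A.diagonal()
    DL = tril(A, k=0, format="csr")   # D + L (lower triangle incl. diagonal)
    DU = triu(A, k=0, format="csr")   # D + U (upper triangle incl. diagonal)
    z = spsolve_triangular(DL, w, lower=True)    # forward substitution
    c = d * z                                    # diagonal scaling
    return spsolve_triangular(DU, c, lower=False)  # backward substitution
```

No factorization step is needed beyond splitting A, which is what makes SGS an attractive first choice.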
ILU preconditioners usually suffer from the fill-in phenomenon. When large linear systems are solved, as in this case, fill-in may create memory problems, cancelling the advantage that a sparse A offers. However, if part of the fill-in is discarded during the factorization process, simple but still powerful preconditioners can be generated. The no-fill-in ILU factorization, or ILU(0), which preserves the sparsity pattern of the matrix A, is thus selected in this work. If versions other than ILU(0) are desired, their applicability to large systems may become questionable because of the difficulty of parallelization (Benzi (2002)).

The last option is the SPAI (SParse Approximate Inverse) preconditioner P_SPAI developed by Grote and Huckle (1997). The central idea relies on the minimization of a suitable objective function to create an approximation of A^-1. Since the perfect right preconditioner P would satisfy the relationship AP = I, with I the identity matrix, P_SPAI is built by minimizing the Frobenius norm of the difference between I and A P_SPAI:

min ||I − A P_SPAI||_F^2    (32)

The objective function can be decoupled as

||I − A P_SPAI||_F^2 = Σ_{j=1}^{n} ||e_j − A p_j||_2^2    (33)

where e_j is the jth column of the identity matrix and p_j is the jth column of the matrix P_SPAI. Thus minimizing Eq. (32) is equivalent to minimizing a sum of n independent individual least-squares problems, which is useful for parallel implementations. In this work, an adaptive version of SPAI is employed: the minimization for each column is stopped when the 2-norm of the corresponding column of I − A P_SPAI is less than a predefined tolerance ε_SPAI. Therefore, the quality and the construction cost can be controlled by tuning ε_SPAI. For this reason, the notation SPAI(ε_SPAI) is used.
Although the computational cost of constructing the SPAI preconditioner can be high, its performance can be very good. Since the criticality problem of Eq. (11) generates a sequence of linear systems with the same coefficient matrix, an initial investment in an expensive but efficient preconditioner is attractive. Additionally, because GMRES cannot be parallelized easily, part of the burden of the linear system solution may be transferred to an efficiently parallelizable preconditioner. In this work, SPAI is constructed with third-party software (Grote and Hagemann (2006)). Since that version is limited to real arithmetic, this preconditioner is used only for the criticality problem.

Numerical results
The performance of the methods discussed in Section 4 is evaluated with three test cases, respectively associated with a 2-D one-region homogeneous reactor core, a 1-D one-region homogeneous reactor core, and a 3-D fully-heterogeneous Pressurized Water Reactor (PWR) core. In all cases, a localized, fluctuating perturbation is introduced in the system. The first case, the 2-D one-region homogeneous reactor core, is relevant because a semi-analytical solution is available. The criticality and neutron noise solvers are investigated separately in order to highlight their specific needs and features. After a satisfactory fine mesh is found, the different solvers are compared. The second case, the 1-D one-region homogeneous reactor core, is used to compare the behavior of GMRES in the criticality and neutron noise calculations. The third case, the 3-D full reactor core, allows the developed strategy to be studied on a realistic configuration. The analysis also shows how a coarse-mesh approximation of the noise source affects the neutron noise predictions. The criticality solvers are tested with two different sets of convergence criteria: a set of regular criteria where all tolerances (ε_1, ε_2, ε_3, ε_4, see Section 4.2.4) are equal to 10^-6, and a set of tight criteria where all tolerances are equal to 10^-10. A flat flux distribution and k = 1 are taken as the initial guess for all criticality calculations. In the case of PM-JFNK, two free PM iterations are performed at the beginning in order to ensure convergence to the dominant eigenpair.

2-D one-region homogeneous nuclear reactor core
This test case considers a 2-D one-region system near criticality (k ≈ 1). The 2-D configuration is obtained from a cylindrical reactor core by suppressing the axial dimension and taking a circular slice. The diameter of the system is set to 2R = 301 cm. The homogeneous macroscopic cross-sections, the diffusion coefficients and the dynamic nuclear parameters are representative of a PWR core, and are given in Tables (2 & 3). The noise source is a point-like fluctuation of the macroscopic removal cross-section (Σ_r) at the center of the core with a frequency of 1 Hz. The reference semi-analytical criticality and noise solutions have been derived by Demaziere and Andhill (2005).

Criticality problem
The three criticality solvers based on PM, PM-Cheb and PM-JFNK are applied to the case of the 2-D one-region system. For the solution of the linear systems of Eq. (11), GMRES is preconditioned with SGS, ILU(0) and SPAI(0.2) for all PM versions. The spatial domain is discretized according to a relatively coarse Cartesian mesh with 301×301 nodes, where ∆x = 1 cm and ∆y = 1 cm. This computational grid leads to a system of 2N = 140,962 equations (2 being the number of neutron energy groups, see Section 2). The results of the different criticality solvers are in good agreement with the semi-analytical solution. The same value of the dominant eigenvalue, i.e. k = 0.99395, is estimated. As an example, Fig. (1) shows the comparison between the neutron flux distribution calculated with PM-JFNK preconditioned with SPAI(0.2) and the analytical one. The numerical solution reproduces the analytical solution satisfactorily. The significant discrepancy that exists at the boundary of the system is attributed to the approximation of the circular boundary by the Cartesian spatial discretization.
The numerical performances of the three solvers coupled to each of the three preconditioners are reported in Table (4). The tight convergence criteria are applied. GMRES(30) is used with a tolerance empirically set to 10^-9, so that the linear solution is accurate enough to allow the convergence of PM. Given the same preconditioner, PM-Cheb is the fastest option. As regards the preconditioning of GMRES, SPAI(0.2) is the most efficient option. Fig. (2) details the behavior of the preconditioners at the first iteration of the simple PM solver. ILU(0) and SGS also accelerate the calculations significantly, although the gain is smaller. As to the evolution of the residual error with the outer/Newton iterations, the example of the three versions of PM using ILU(0) is depicted in Fig. (3a). The decrease of the residual error is very slow for the free PM, while PM-JFNK converges rapidly. However, PM-JFNK requires a high computational time because of its two levels of outer/inner iterations (Section 4.2.3). Fig. (3b) emphasizes this aspect, showing that the cost of each Newton iteration is high enough to make PM-JFNK slower than PM-Cheb.

Neutron noise problem
After the criticality calculation, the neutron noise solution is obtained with the same 301×301 mesh. Two versions of the solver for the linear system of Eq. (13) are tested, relying on GMRES preconditioned with SGS and ILU(0), respectively. The comparison between the neutron noise predicted by GMRES with ILU(0) and the analytical solution shows a large discrepancy close to the location of the neutron noise source, because of the coarse mesh (see Fig. (4)). More specifically, the maximum relative difference in the amplitude of the fast and thermal neutron noise is close to 25% and 12%, respectively. Regarding the phase, the difference is up to 10% for both the fast and the thermal noise. Therefore, a finer mesh is selected for the criticality and noise calculations. The new mesh has 903×903 nodes, each node being 9 times smaller than in the previous mesh. This corresponds to a system of 2N = 1,268,658 equations (i.e. a factor of 9 larger than for the coarser mesh). The improvement in the prediction of the neutron noise is shown in Fig. (5). The new mesh reproduces more satisfactorily the sharp shape of the neutron noise distribution near the location of the noise source. The differences in the amplitude of the fast and thermal neutron noise are reduced to less than 12% and 4.5%, respectively. Similarly, the differences in phase decrease to less than 2.5%. These findings suggest that, in the vicinity of the noise source, the computation of the neutron noise requires a finer mesh than that of the static neutron flux. The numerical performance of the methods is shown in Fig. (6). ILU(0) leads to a faster convergence of GMRES, which is obtained in 80 min. For the 903×903 mesh and the tight convergence criteria, the criticality calculation using PM-Cheb with ILU(0) takes 37 minutes.
In other words, the solution of the single neutron noise linear system is more time-consuming than the solution of the whole sequence of linear systems generated by the PM iterations. In fact, the linear systems of PM need significantly fewer GMRES iterations than the noise problem. This finding is investigated in the next section.

1-D one-region reactor core: GMRES applied to criticality and neutron noise calculations
As mentioned above, the linear systems in the criticality problem need fewer GMRES iterations than the one associated with the neutron noise calculation. Hence the spectral properties of A_crit are expected to favor convergence more than those of A_noise. The condition number of the matrix A_crit of the 2-D problem is estimated as 4.4697 × 10^3, whereas that of the matrix A_noise is 1.7663 × 10^6, confirming the hypothesis. This large difference is a first explanation, and indicates that GMRES coupled with the same type of preconditioner cannot solve both the criticality and the neutron noise problem with the same effort.
The next step is to study the eigenvalue distribution of A_crit and A_noise preconditioned with ILU, as an example. For this purpose, a 1-D one-region homogeneous reactor core is considered (Demaziere, 2011c), so that the matrices are relatively small and easy to manipulate. The system is an infinite slab with width equal to 2a = 300 cm. The homogeneous macroscopic cross-sections and diffusion coefficients are chosen to be representative of a PWR (Tables (2 & 3)). The noise source is a point-like 1 Hz perturbation of the macroscopic removal cross-section (Σ_r) located at x = −50 cm. A mesh with 301 nodes and ∆x = 1 cm is used, leading to 2N = 602.
The eigenvalue distributions of the criticality matrix A_crit and the noise matrix A_noise, both preconditioned with ILU(0), are shown in Fig. (7a). ILU(0) causes the eigenvalues of A_crit to cluster close to 1, which is the desired situation. On the other hand, the eigenvalues of A_noise are spread away from unity. This explains the poorer behavior of GMRES when applied to the computation of neutron noise.
In order to obtain a more clustered eigenvalue distribution, a more accurate and denser ILU preconditioner is investigated. ILU(0) is compared with another version of ILU in which the fill-in is controlled with a drop tolerance, namely the Crout version of ILU (denoted as ILUC). The following drop tolerances are used: 10^−5, 10^−7 and 10^−10. Fig. (7) shows that reducing the drop tolerance progressively makes the eigenvalue spectrum as compact as that of the criticality computation. Then, the performance of each preconditioner is tested by solving the noise problem. As illustrated in Fig. (8) by the variation of the residual error with the number of iterations, a drop tolerance of 10^−10 for ILUC provides a performance similar to that of ILU(0) in the criticality calculations. However, such a drop tolerance for ILUC is very tight and not feasible for realistic reactor applications because it implies very high fill-in. These findings indicate that, although the noise problems studied in this work are solved with a reasonable computational cost using the selected methods, there is significant potential for further improvements.
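The effect of the drop tolerance on the inner iteration count can be reproduced with any threshold-based ILU. A sketch, assuming scipy (whose spilu is an ILUTP-type factorization rather than the Crout ILUC of the text), counting GMRES inner iterations for a loose and a tight drop tolerance on a small nonsymmetric stand-in matrix:

```python
import numpy as np
from scipy.sparse import diags, eye, kron
from scipy.sparse.linalg import gmres, spilu, LinearOperator

def gmres_with_ilu(A, b, drop_tol):
    """Solve A x = b with GMRES preconditioned by a threshold-based
    incomplete LU; return the solution, the exit flag, and the number
    of inner iterations (counted via the residual-norm callback)."""
    ilu = spilu(A.tocsc(), drop_tol=drop_tol, fill_factor=20)
    M = LinearOperator(A.shape, matvec=ilu.solve)
    res = []
    x, info = gmres(A, b, M=M,
                    callback=lambda pr_norm: res.append(pr_norm),
                    callback_type='pr_norm')
    return x, info, len(res)

# Small 2-D convection-diffusion operator as a nonsymmetric test matrix
m = 20
T = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m))
C = 0.5 * diags([-1.0, 1.0], [-1, 0], shape=(m, m))
A = (kron(eye(m), T) + kron(T, eye(m)) + kron(eye(m), C)).tocsr()
b = np.ones(m * m)

x_loose, info_loose, it_loose = gmres_with_ilu(A, b, drop_tol=1e-1)
x_tight, info_tight, it_tight = gmres_with_ilu(A, b, drop_tol=1e-10)
```

As in Fig. (8), the tighter drop tolerance yields a preconditioner closer to the exact LU factors, so GMRES needs no more iterations than with the loose one, at the price of denser factors.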

3-D heterogeneous PWR core
A 3-D heterogeneous full core near to criticality, with diameter 2R = 344 cm and axial height H = 396.24 cm, is considered. The noise source is a high-frequency localized absorber of variable strength, i.e. a perturbation in the macroscopic thermal absorption cross-section (Σ_a,2) with a frequency of 1 kHz. The perturbation is assumed to be distributed over a volume of 0.8958 cm × 0.8958 cm × 7.62 cm. Two Cartesian meshes are used. The first mesh, denoted as coarse, has a node size of ∆x = ∆y = 2.6875 cm and ∆z = 7.62 cm. With this resolution, the spatial distribution of the neutron noise source is approximated over a larger computational volume. The second mesh, denoted as fine, has a smaller node size of ∆x = ∆y = 0.8958 cm and ∆z = 7.62 cm, so it allows the perturbation to be represented exactly in space. The aim is twofold: first, to test the applicability of the solvers to realistic problems; second, to study the effect of the spatial resolution of the mesh on the modelling of highly localized neutron noise sources.

Criticality problem
The three criticality solvers based on PM, PM-Cheb and PM-JFNK are applied. For the solution of the linear systems of Eq. (11), GMRES is preconditioned with ILU(0) and SPAI(0.4) for PM-Cheb and PM-JFNK, and only with SPAI(0.4) in the case of non-accelerated PM. For this test case a larger SPAI threshold is used (0.4 instead of 0.2) in order to mitigate the computational effort of the SPAI construction. The spatial domain is discretized with the coarse mesh since, as observed in Section 5.1, the criticality calculation does not require a very fine mesh. This discretization leads to a system of 2N = 1,252,328 equations.
The numerical performance of the methods when regular convergence criteria are used is reported in Table (5). GMRES(30) is applied with a tolerance equal to 10^−6 for PM with SPAI and 10^−5 for all the other cases. The GMRES tolerance is selected empirically to provide an inner solution that is accurate enough for the outer iteration. PM-Cheb with SPAI(0.4) is the fastest option. PM-JFNK is slower than PM-Cheb, which agrees with the findings for the 2-D problem. PM without acceleration is not able to converge within 700 outer iterations, confirming its limited capability. PM-JFNK with SPAI(0.4) is slower than PM-JFNK with ILU(0); a possible reason is that SPAI has about 5 times more non-zero entries than ILU(0). Hence, SPAI implies more expensive matrix-vector operations, which may counterbalance the lower number of inner iterations. The results of the calculations performed with tight convergence criteria are reported in Table (6). In this set of simulations GMRES(50) is used with a tolerance of 10^−10. The main finding is that PM-JFNK is the only method converging within 700 outer iterations.
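The outer/inner structure of the non-accelerated PM with preconditioned GMRES inner solves can be sketched on a toy one-group 1-D problem (scipy-based; the splitting into a loss matrix L and a fission matrix F is schematic and not the paper's exact Eq. (11) form):

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import gmres, spilu, LinearOperator

def power_method(L, F, tol=1e-6, max_outer=500):
    """Non-accelerated power method for the eigenproblem L phi = (1/k) F phi,
    with ILU-preconditioned GMRES inner solves. Illustrative sketch of the
    outer/inner iteration structure described in the text."""
    n = L.shape[0]
    ilu = spilu(L.tocsc())
    P = LinearOperator((n, n), matvec=ilu.solve)
    phi, k = np.ones(n), 1.0
    for _ in range(max_outer):
        src = (F @ phi) / k                 # fission source from previous iterate
        phi_new, info = gmres(L, src, M=P)  # inner solve of the loss system
        k_new = k * np.sum(F @ phi_new) / np.sum(F @ phi)
        phi_new = phi_new / np.linalg.norm(phi_new)
        converged = abs(k_new - k) < tol and np.linalg.norm(phi_new - phi) < 100 * tol
        k, phi = k_new, phi_new
        if converged:
            break
    return k, phi

# One-group 1-D slab: L = -D d2/dx2 + Sigma_a, F = nu Sigma_f (illustrative data)
n, h = 49, 1.0                              # 50 cm slab, zero-flux boundaries
Lmat = (1.0 / h**2) * diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) \
       + 0.1 * identity(n)
Fmat = 0.12 * identity(n)
k_demo, phi_demo = power_method(Lmat.tocsr(), Fmat.tocsr())
```

For this homogeneous slab the converged k can be checked against the analytical one-group value νΣ_f / (Σ_a + D B²) with B = π divided by the slab width.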

Neutron noise problem
After the criticality analysis, the computation of the noise follows. In order to study the effect of the mesh, two computations are performed: the first one estimates the neutron noise on the coarse mesh with 128×128×52 nodes; the second one (involving a new criticality calculation) is based on the fine 384×384×52 mesh, which leads to 2N = 11,261,952. As discussed in Section 5.1, the application of GMRES together with the ILU(0) preconditioner provides satisfactory results for the calculation of the neutron noise in the 2-D reactor core. Therefore, the same solver with a tolerance equal to 10^−10 is also used for the solution of the neutron noise linear system in the case of the 3-D realistic reactor core. Figs (10a & 10b) show a comparison of the neutron noise amplitude normalized to the maximum value of the fine-mesh fast neutron noise. There is good agreement between the two solutions except in the vicinity of the noise source, where some discrepancy exists. This is more pronounced in the case of the thermal noise, where the relative difference reaches 80%. These results suggest that a fine mesh is required in the close neighborhood of the noise source. A coarse mesh resolution does not allow an accurate modeling of the spatial distribution of the neutron noise source, and thus leads to a poor evaluation of the spatial variation of the induced noise close to the perturbation. On the other hand, the noise in the regions lying farther away can be predicted accurately even with a coarser mesh. Regarding the phase of the neutron noise, Figs (10c & 10d) illustrate that there is no notable discrepancy between the two meshes. The computational time of the fine case is 2.6 h for the criticality calculation and 43 min for the noise problem, with no more than 10 GB of computer memory, showing that realistic noise problems can be solved with a reasonable computational cost.

Conclusion
In this paper, a strategy for solving realistic fine-mesh neutron noise problems in nuclear reactors is presented. The neutron noise model relies on the diffusion approximation of the neutron transport equation and requires the solution of both the criticality (eigenvalue) and neutron noise equations in the frequency domain. The equations are spatially discretized according to a finite-difference method.
The Power Method is used for the criticality calculation and is accelerated with two alternatives: the Chebyshev polynomials method and a non-linear technique based on the Jacobian-Free Newton-Krylov approach. The restarted GMRES is selected for solving both the linear systems within the iterations of PM and the final linear system representing the neutron noise problem, given a prescribed stationary perturbation. To improve the performance of the algorithm, GMRES is preconditioned with SGS, ILU and SPAI for the criticality part and with SGS and ILU for the neutron noise part of the problem.
The numerical analysis of the developed solvers is carried out for neutron noise examples in 1-D and 2-D homogeneous reactor cores and in a 3-D heterogeneous reactor core. A fine spatial discretization of the systems is applied for an accurate evaluation of the sharp gradients of the neutron noise that arise from the localized character of the perturbation. From the results of the comparative study, the following conclusions are drawn. For the criticality calculation, although the scheme that combines PM-Cheb and GMRES preconditioned with SPAI can be a very efficient option, it does not always converge. In such cases, the solver consisting of PM-JFNK and GMRES preconditioned with ILU or SPAI seems an advisable option because it can reach a converged solution even in realistic reactor applications with very tight convergence criteria. This highlights the potential of JFNK-based techniques for the criticality problem. As regards the choice between ILU and SPAI, the construction of the former is very cheap and fast, which counterbalances the somewhat larger reduction in the number of iterations that SPAI achieves.
GMRES with ILU also shows satisfactory behavior in the calculation of the neutron noise. Nevertheless, the computational effort in the 2-D test case was more demanding for the noise part of the problem than for the criticality part. This is due to the structure of the matrices involved. In particular, the eigenvalues of the neutron noise matrix may be widely spread, negatively affecting the performance of GMRES. Such an outcome suggests that specific preconditioners should be investigated for neutron noise simulations.
Last but not least, it was shown that a fine mesh is required in the vicinity of the noise source to properly reproduce the strong gradients, whereas a coarser mesh is adequate in the regions lying farther away. Further research on numerical schemes that allow the application of non-uniform meshes, finer close to the neutron noise source and coarser elsewhere, could lead to a strong reduction of the computational effort without compromising the accuracy.
Appendix B. Spatial discretization of the diffusion operator

The spatial discretization scheme for Eqs. (8 & 10) is the same as the one used in (Demaziere, 2011a). Accordingly, a nuclear reactor consists of adjacent volumes or nodes. Given a three-dimensional Cartesian coordinate system, the generic node n can be identified by three indices (I,J,K), which are associated with the x-, y- and z-directions, respectively. The equations derived in Section (2) are discretized with respect to these nodes and spatially averaged on each of them. For the sake of simplicity, the possible dependence on frequency is dropped. The notation and convention used throughout this paper are illustrated in Figs (B.11 & B.12).
Being $\Sigma$ a generic macroscopic cross-section and $\phi$ the scalar neutron flux, the quantities averaged over the node $n$, for the $g$-th energy group, are defined as:

$$\bar{\Sigma}_{g,n} = \frac{\int_{V_n} \Sigma_g(\mathbf{r})\,\phi_g(\mathbf{r})\,d\mathbf{r}}{V_n\,\bar{\phi}_{g,n}} \quad (B.1)$$

$$\bar{\phi}_{g,n} = \frac{1}{V_n}\int_{V_n} \phi_g(\mathbf{r})\,d\mathbf{r} \quad (B.2)$$

where $V_n$ represents the volume of the node $n$. Considering that $\mathbf{J}_g(\mathbf{r}) = -D_g(\mathbf{r})\,\nabla\phi_g(\mathbf{r})$, using a spatial discretization scheme based on finite differences, and assuming that the scalar neutron flux in the middle of the nodes is equal to the node-averaged scalar neutron flux (box scheme), the node-averaged streaming term can be approximated as:

$$\frac{1}{V_n}\int_{V_n} \nabla\cdot\left[D_g(\mathbf{r})\,\nabla\phi_g(\mathbf{r})\right]d\mathbf{r} = -\sum_{\aleph=x,y,z} \frac{J^{\aleph}_{g,n} - J^{\aleph}_{g,n-1}}{\Delta\aleph} = -\sum_{\aleph=x,y,z} \left( a^{\aleph}_{g,n}\,\bar{\phi}_{g,n} + b^{\aleph}_{g,n}\,\bar{\phi}_{g,n+1} + c^{\aleph}_{g,n}\,\bar{\phi}_{g,n-1} \right) \quad (B.3)$$

In this equation, $\aleph$ represents the direction x, y, or z, and $\Delta\aleph$ is the node width in the $\aleph$-direction. The subscripts $n-1$ and $n+1$ refer to the nodes on the two sides of node $n$ along the $\aleph$-direction (see Fig. (B.12)).
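As a one-dimensional illustration of Eq. (B.3), the sketch below assembles the discretized operator for −d/dx[D dφ/dx] + Σ_r φ with node-wise coefficients. The harmonic averaging of D at the interfaces and the simple zero-flux boundary closure are assumptions of this sketch, not necessarily the exact coupling coefficients a, b, c of (Demaziere, 2011a).

```python
import numpy as np
from scipy.sparse import diags

def diffusion_matrix_1d(D, sigma_r, dx):
    """Assemble the 1-D finite-difference (box-scheme) operator for
    -d/dx [D dphi/dx] + Sigma_r phi on a uniform mesh of node width dx."""
    D = np.asarray(D, dtype=float)
    n = len(D)
    D_if = 2.0 * D[:-1] * D[1:] / (D[:-1] + D[1:])  # harmonic mean at interfaces
    b = D_if / dx**2                                 # coupling of node n to node n+1
    c = D_if / dx**2                                 # coupling of node n to node n-1
    a = np.full(n, sigma_r, dtype=float)             # removal plus leakage terms
    a[:-1] += b
    a[1:] += c
    a[0] += D[0] / dx**2                             # simple zero-flux closure (assumption)
    a[-1] += D[-1] / dx**2
    return diags([-c, a, -b], [-1, 0, 1], format='csr')

# With uniform D = 1, Sigma_r = 0 and dx = 1 the operator reduces to the
# familiar (-1, 2, -1) tridiagonal Laplacian
A_demo = diffusion_matrix_1d(np.ones(5), 0.0, 1.0).toarray()
# A two-node case with D = (1, 2): harmonic interface coefficient 2*1*2/3 = 4/3
A_two = diffusion_matrix_1d(np.array([1.0, 2.0]), 0.0, 1.0).toarray()
```

The harmonic mean preserves the continuity of the current J across interfaces between nodes with different diffusion coefficients, which is the usual motivation for this choice in box-scheme discretizations.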

Appendix C. Chebyshev acceleration method
Various alternatives of the Chebyshev method are available for the acceleration of PM. The Chebyshev method implemented in this work is described in Hebert (1985) and is based on a two-parameter formula with cyclical updating of the dominance ratio (DR). In particular, the two parameters used to control the acceleration procedure are the initial number N of free PM iterations and the number M of accelerated PM iterations per acceleration cycle. The scheme consists of 3 steps.
In step 1, a set of N free PM iterations is performed (see Alg. (1)). A DR estimate is calculated at the end of the l-th PM iteration as the ratio of successive residual norms, DR^(l+1) = ||R^(l)|| / ||R^(l−1)||, where R^(l) = Φ^(l+1) − Φ^(l). At the end of the N-th iteration, the method checks whether DR^(N+1) > 0.5. If this condition is satisfied, the method sets N* = N and moves to step 2. If not, it sets N = N + 1 and another free PM iteration is performed.
In step 2, an acceleration cycle is performed, consisting of M extrapolated PM iterations, where m = 1, ..., M denotes the extrapolated iterations and the extrapolation factors are derived from Chebyshev polynomials. DR is estimated at the beginning of each cycle and kept constant within the cycle.
In step 3, the DR estimate is updated at the end of the acceleration cycle by applying the error reduction factor E and the theoretical error reduction factor E*. Finally, the method sets N* = N* + M and moves to step 2 in order to start a new acceleration cycle. In this work each cycle is prolonged by doubling the number of iterations, since this was noticed to have a positive effect on the acceleration of the convergence.
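For concreteness, a generic Chebyshev-accelerated power iteration can be sketched as follows. The cosh-based extrapolation factors below are the classical single-DR-estimate variant; the two-parameter scheme of Hebert (1985), with its cyclical DR update via E and E*, differs in the details, so this illustrates the mechanism rather than the exact implemented method.

```python
import numpy as np

def chebyshev_power(A, n_free=5, n_cycles=4, m_cycle=8):
    """Power iteration accelerated with Chebyshev extrapolation.
    Generic sketch: free iterations provide a crude dominance-ratio (DR)
    estimate, then cycles of extrapolated iterations damp the error modes."""
    x = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
    r_prev, dr = None, 0.9
    # Step 1: free power iterations with a DR estimate from residual norms
    for _ in range(n_free):
        lam = x @ (A @ x) / (x @ x)        # Rayleigh-quotient eigenvalue estimate
        y = (A @ x) / lam
        r = np.linalg.norm(y - x)
        if r_prev is not None and r_prev > 0.0:
            dr = r / r_prev
        r_prev, x = r, y
    dr = min(max(dr, 0.5), 0.95)           # keep the crude estimate in a safe range
    gamma = np.arccosh(2.0 / dr - 1.0)
    # Steps 2-3: acceleration cycles of m_cycle extrapolated iterations each
    for _ in range(n_cycles):
        x_old = x.copy()
        for m in range(1, m_cycle + 1):
            lam = x @ (A @ x) / (x @ x)
            y = (A @ x) / lam
            if m == 1:
                alpha, beta = 2.0 / (2.0 - dr), 0.0
            else:
                alpha = (4.0 / dr) * np.cosh((m - 1) * gamma) / np.cosh(m * gamma)
                beta = (1.0 - dr / 2.0) * alpha - 1.0
            x_new = x + alpha * (y - x) + beta * (x - x_old)
            x_old, x = x, x_new
        x = x / np.linalg.norm(x)
    lam = x @ (A @ x) / (x @ x)
    return lam, x

# Demo: dominant eigenpair of a small diagonal matrix with true DR = 0.9
lam_demo, x_demo = chebyshev_power(np.diag([1.0, 0.9, 0.5]))
```

The three-term extrapolation damps all error modes whose ratio to the dominant eigenvalue lies in [0, DR], which is what makes the cycle far more effective than the same number of free PM iterations when DR is close to 1.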