Uncertainty-Aware Procurement of Flexibilities for Electrical Grid Operational Planning

In the power system decarbonization roadmap, novel grid management tools and market mechanisms are fundamental to solving technical problems caused by renewable energy forecast uncertainty. This work proposes a predictive algorithm for the procurement of grid flexibility by the system operator (SO), which combines the SO's flexible assets with active and reactive power short-term flexibility markets. The goal is to reduce the cognitive load of the human operator when analyzing multiple flexibility options and trajectories for the forecasted load/RES, and to create a human-in-the-loop approach for balancing risk, stakes, and cost. This work also formulates the decision problem as a sequence of steps where the operator must decide whether to book flexibility now or wait for the next forecast update (time-to-decide method), considering that the flexibility (availability) price may increase with a shorter notification time. Numerical results obtained for a public MV grid (Oberrhein) show that the time-to-decide method improves a performance indicator related to a cost-loss matrix by up to 22%, compared to booking the flexibility now at a lower price without waiting for a forecast update.


I. INTRODUCTION
In recent years, the integration levels of renewable energy sources (RES) have been steadily increasing, as concern about global warming and energy dependency pushed nations worldwide to set ambitious RES targets. In this context, Holttinen et al. reported RES curtailment and congestion problems in several power systems [1], e.g., the wind energy dispatch-down level in Ireland rising from 5.3% to 11.4%; a 30% increase of RES curtailment in Chile due to grid congestion and inflexible thermal generators; and significant transmission constraints remaining in Germany due to delayed grid reinforcements. A survey from CIGRE showed that RES is expected to have more impact on the need for short-term flexibility (from 15 min to 12 hours) compared to real-time and very short-term time horizons [2].
The literature about modern grid planning methodologies and ongoing revisions of the regulatory frameworks consider the use of active measures (flexibility from local grid resources) to postpone traditional grid investments while accommodating ambitious RES targets at the same time [3]. This paradigm will require the design of flexibility markets for grid-centric services like congestion management and voltage control, which range from voluntary short-term procurement up to mid-/long-term tenders [4], complemented with regulated (or non-market) mechanisms like non-firm connection agreements, dynamic grid tariffs, or bilateral contracts.
Therefore, it is necessary to revisit the traditional power system operating processes and software in control rooms, including human-machine interaction [5]. In particular, the high RES integration levels require a risk-aware decision-making structure where the activation of flexibility can be planned to ensure sufficient capacity to handle the forecasted technical issues. In this sense, it is crucial to provide fast decision-aid to operators and reduce the volume of information, particularly under load and RES forecast uncertainty.

A. Literature Review
The optimal power flow (OPF) method (and its variants) is the standard approach for short-term management of grid flexibility. Given the impact of RES forecast uncertainty on grid operational planning, the risk-based OPF and OPF under forecast uncertainty problems are timely and relevant; in [6], the state-of-the-art methods are categorized as a) risk-neutral and risk-averse two-stage stochastic optimization, b) chance-constrained optimization, c) robust optimization, and d) distributionally robust optimization. For the sake of completeness, we present several OPF-based approaches below.
In stochastic and chance-constrained (CC) optimization, the uncertainty is represented via a probability distribution. Zhang and Li proposed a CC-OPF with a back-mapping approach and a linear approximation of the power flow equations where uncertainty follows a multivariate Gaussian [7]. Roald et al. described a CC-OPF model solved using a randomized optimization technique, representing RES and load forecast uncertainty via scenarios [8]. This work was generalized to the AC OPF problem in [9]. Mezghani et al. proposed a data-driven method based on sparse regression to reduce the number of scenarios required to solve the stochastic AC OPF [10].
Robust optimization does not make any specific assumptions on probability distributions, and the uncertain parameters are assumed to belong to a deterministic uncertainty set. Soares et al. described a two-stage robust AC OPF for a distribution system operator (DSO) contracting market-based DER flexibility [11], where the uncertainty set is defined as a convex hull. Guo et al. considered ambiguity sets for multi-period control policies robust to forecast and sampling errors, using different linearizations of the AC power flow equations [12].
Reinforcement learning (RL) is also emerging as an alternative to traditional mathematical optimization for the OPF problem, when formulated as a sequential Markov decision process [13]. Yet, the integration of forecast uncertainty remains an open challenge for RL, at least for large-scale grids.
The present article follows a different direction from the OPF-based methods and, instead of formulating a mathematical optimization problem, proposes a decision-making sequence where the human operator is directly involved in the interpretation of action-cause relations leveraging sensitivity indices, and whose attitude towards risk is integrated to find the preferred solution. This approach is aligned with the industry's need for simple and easily interpretable control rules and a limited number of actions, such as the MV generation curtailment algorithm developed by EDF R&D for Enedis (French DSO) to deliver day-ahead constraints forecasts and generators' limitation calculation until the real-time set-points are sent to generators [14]. An approach based on sensitivity indices was proposed in [15] to forecast which RES power plants must be curtailed and to what extent, ranking the flexible resources solely based on the sensitivity value.
The present article also formulates the decision-making problem in a time-to-decide fashion where the human operator must decide to i) book flexibility now (and pay an availability price) or ii) wait for the next forecast update and book flexibility later (if necessary). This decision problem has been around in other domains, and one illustrative example is the time-dependent version of the cost-loss ratio as described by Murphy and Ye in [16]. Other examples are: Wanke and Greenbaum, when exploring airspace congestion resolution, formulated a three-stage decision tree to find the optimal time and type of action [17]; Jewson et al., in meteorological forecasts, proposed an extended cost-loss model to decide whether to base a decision on the first forecast or to wait for the second forecast [18]. The same authors also suggested a method of effectively conveying information regarding forecast changes to human decision-makers [19]. This was achieved by introducing different metrics, e.g., mean absolute change and probability of change of size x.
In the energy domain, specifically in terms of energy trading, Tankov and Tinsi proposed stochastic differential equations to describe the evolution of the forecast error as new information becomes available [20]. Bellenbaum et al. designed a two-stage stochastic regional flexibility market mechanism that integrates the trade-off between early procurement at low cost and later procurement at a higher cost but with better forecasts and knowledge about the flexible load [21]. Mühlpfordt et al. established a relation between the price of uncertainty and the total variational distance between the densities of the in-hindsight OPF and CC-OPF solutions [22]. The price of uncertainty can be used to decide between solving technical problems in real-time or in predictive mode.

B. Contribution
Compared to the state-of-the-art discussed above, the main contributions of this article are the following:
• A predictive grid management framework that uses sensitivity indices and risk metrics to rank flexibility options and provide a summarized view to human operators in multi-criteria problems, taking the form of risk vs. cost curves. This approach stands in contrast to OPF-based methods [6] that, despite their mathematical interpretability, do not enable human operators to analyze action-cause relations extracted from sensitivity indices. Furthermore, it facilitates enhanced interaction between operators and the decision-aid tool, specifically by allowing them to rank and explore various flexibility solutions and adjust the risk level based on the decision stakes. This relationship between risk level and stakes has not been previously studied in prior works about stochastic [8], [9] and robust optimization [11], [12]. As opposed to [14], the proposed approach includes information about forecast uncertainty and, conversely to [14], [15], it proposes a multi-criteria decision problem, ranking the flexible resources according to different criteria (instead of selecting resources solely based on the highest sensitivity).
• A time-to-decide formulation for the decision-making problem, which was overlooked in the state-of-the-art formulations [9], [10], [11], [12], [14], [15], and which holds particular relevance in dynamic environments, where frequent forecast updates are necessary for adapting to changing conditions. While [21] introduces a flexibility market mechanism, it overlooks the methodology employed by the system operator to estimate flexibility needs. Thus, the proposed approach is complementary to [21] as it addresses this crucial aspect. Moreover, unlike [22], the proposed approach allows the operator to interpret risk-cost relations and make decisions that do not rely solely on a distance metric. Additionally, this work presents a novel concept called second-level forecasting, which employs a second-level model to forecast the uncertainty (represented by a set of quantiles) of future (uncertainty) forecasts. It was determined through experimentation that an encoder-decoder deep learning model showed higher accuracy than other approaches. This contrasts with [18], which applies a parametric approach inadequate for modeling RES forecast uncertainty.

C. Structure
The rest of this article is organized as follows: Section II describes the conceptual framework; Section III describes how knowledge regarding flexibility modeling is derived for utilization in the decision-aid phase of Section IV; Section V presents the numerical results for the Oberrhein MV grid; the conclusions and future work are discussed in Section VI.

A. Flexibility Market Operation
Based on the reviewed commercial flexibility market platforms [4] and the survey recently conducted by the Joint Research Centre (JRC) with companies like EPEX SPOT and Enedis [23], this work assumes that both long-term and short-term products will probably be requested in future flexibility markets, subject to the specific network needs in each case.
The long-term flexibility products are designed to defer network investment, while the short-term flexibility products target congestion management and the strengthening of network resilience.However, a significant challenge lies in determining the appropriate price caps (or annual flexibility budget), which is comparatively more straightforward regarding long-term products associated with investment deferral [24].
According to the JRC, while the present state of local flexibility markets in Europe does not provide a conclusive outlook regarding their future characteristics, a notable finding from the interviews is the anticipation of a transition towards short-term flexibility markets [23]. The main reasons are [23]: a) enabling smaller assets (e.g., EVs) to participate in the procurement process enhances liquidity, since such assets can accurately forecast their flexibility potential only in the short term; b) improved grid forecasts closer to real-time mitigate the volume risk for network operators. Nevertheless, long-term contracts are expected to persist as a means of securing reliability and integrating flexibility into the long-term expansion of networks.
The focus of this work is on the short-term horizon, drawing inspiration from successful system operator pilots such as the CoordiNet Swedish pilot [25], the sthlmflex project with the NODES platform [23], and Redispatch 2.0 in Germany [26]. Additionally, insights are drawn from market operator initiatives such as the Enera Flexmarkt, operated by EPEX SPOT [23], and OMIE with the IREMEL platform [4]. It is important to note that this methodology extends beyond flexibility market procurement. It can also be applied for defining curtailment signals, particularly for RES with non-firm connection contracts (like Enedis in France [14]).
For this work, the following assumptions were made. Firstly, day-ahead and intraday flexibility markets occur before each energy trading session, which aligns with the flexibility market platforms offered by NODES, GOPACS, and Enera [4]. The market operator collects the bids submitted by FSPs and provides this information to the Transmission System Operator (TSO) and/or DSO. The flexibility needs can be revised in the intraday sessions with updated load and RES forecasts and changes in the flexibility band. Secondly, availability and dispatch payments are used for contracted flexibility. Thirdly, a pay-as-bid pricing method is considered for flexibility payment, widely recognized as the most common payment scheme [4].
The availability price is a mitigation measure to ensure enough flexibility volume in the market [4]. However, it is important to acknowledge that liquidity issues related to flexibility markets can still arise. For instance, the price cap established through the long-term analysis [24] may significantly limit revenue opportunities for Flexibility Service Providers (FSPs) in technical constraints management markets, particularly for those already involved in more lucrative markets such as wholesale energy trading and frequency control. While addressing liquidity problems falls beyond the scope of this work, the proposed methodology offers valuable contributions to mitigate this issue. Firstly, the implementation of second-level forecasting effectively reduces the overall cost associated with flexibility use, as shown in Section V. This approach minimizes expenses and discourages an open-book mentality in the process of flexibility procurement. Secondly, the proposed methodology also serves as a valuable tool to communicate the potential risk of flexibility scarcity to the operator, as illustrated in Section V-E. Furthermore, without loss of generality, this work considers hourly market intervals and three forecasting moments: i) a day-ahead forecast (for D+1) with numerical weather predictions (NWP) generated at 0h00, i.e., between t+24|t and t+48|t; ii) updated NWP data at 12h00, i.e., between t+12|t and t+36|t; iii) a 2-hour-ahead forecast before delivery, which can be used for intraday participation, i.e., t+2|t.
Finally, the study and integration of various TSO-DSO coordination schemes are outside the scope of this work.However, it is important to note that information from both TSOs and DSOs can be effectively integrated into the proposed methodology, as elaborated in the subsequent section.

B. Methodology
Fig. 1 presents the building blocks of the proposed methodology. It starts from a large volume of information, namely the full electrical grid, RES/load uncertainty forecasts, and a set of flexibility options, i.e., dispatchable distributed generation, RES curtailment, demand response (DR), storage, network reconfiguration, on-load tap changers (OLTC), and capacitor banks/reactor shunts (CB/RS). Step by step, all this information is filtered and finally condensed into a risk-cost curve from which the human operator can select a preferred solution. This approach reduces the cognitive load of human operators and offers simplicity when analyzing the multiple flexibility options under uncertainty. Moreover, it enables simultaneous analysis of multiple lines/buses with a high probability of technical problems. The load and RES forecast uncertainty is represented by random vectors (or scenarios) that can be generated by physical or statistical approaches and capture the spatial dependency of forecast errors.
The first step focuses on running a power flow for each scenario and computing, for each node and branch, the probability of having an over-/under-voltage or congestion issue. The operator can set a minimum probability to select and analyze a reduced set of critical buses/lines. For this subset of elements, sensitivity indices (see Section III-A) relating active/reactive power to the lines' currents and nodes' voltages are computed. Then, in a second step, the flexibility options are filtered based on a minimum sensitivity index threshold to identify the most "relevant" options for a specific technical problem. The third step performs a flexibility ranking with the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) method [27] (Section IV-A), considering a set of risk metrics that measure how effective a flexibility option is in solving a technical problem under uncertainty. It can also include probabilistic information (risk metrics) from the TSO or DSO associated with conflicting actions. Finally, the last step focuses on the design of a risk-cost curve (Section IV-B) by combining a subset of flexibility options (e.g., 3 to 5 options) selected by the operator and considering only the non-dominated solutions. The flexibility capacity of these options may also be constrained in the curves due to conflicting actions arising from the activation of flexibility by both DSO and TSO. The risk can be represented by different metrics; without loss of generality, this work uses the probability of having a congestion or under-/over-voltage problem. To obtain a preferred solution, we consider two decision-making paradigms: one based on a maximum risk threshold and the other based on a trade-off value (Section IV), where the risk threshold and trade-off values are conditioned by the stakes level (i.e., risk of cascading failure) and the operator's preferences. Note that this relation between risk and stakes borrows the fundamental idea behind confidence-based decision-making theory [28].
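The scenario-screening step above (one power flow per scenario, then a minimum-probability filter) can be sketched in a few lines; the scenario matrix, the 100% loading limit, and the 0.1 minimum probability below are illustrative assumptions, not values from this work.

```python
import numpy as np

def critical_elements(loading, limit=100.0, min_prob=0.1):
    """Return indices of grid elements whose empirical probability of
    exceeding `limit` across scenarios is at least `min_prob`, plus the
    per-element violation probabilities.

    loading: (n_scenarios, n_elements) array, e.g., line loading in %
    obtained from one power flow per scenario.
    """
    prob = (loading > limit).mean(axis=0)  # empirical violation probability
    return np.where(prob >= min_prob)[0], prob

# toy example: 4 scenarios x 3 lines; line 1 is overloaded in 3 scenarios
loading = np.array([[80.0, 105.0, 90.0],
                    [85.0, 110.0, 95.0],
                    [78.0,  99.0, 88.0],
                    [82.0, 120.0, 101.0]])
idx, prob = critical_elements(loading)  # lines 1 and 2 are "critical"
```

Only the elements returned in `idx` would then proceed to the sensitivity-index and ranking steps.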
The methodology above does not provide information about the best moment to book flexibility, considering that i) forecast uncertainty will decrease with time and ii) the flexibility availability price might increase with a shorter notification time. Thus, a time-to-decide problem is formulated based on the second-level forecasting concept (Section III-D), where the goal is to forecast how the RES and load uncertainty forecasts change over time as new information becomes available. The term second-level forecast comes from the fact that we are, in essence, computing a forecast of forecast uncertainty. For the flexibility price, it is assumed that the SO has a forecast or that price offers with different notification times are available in advance.
The second-level forecast of the conditional quantile $\hat{q}^{\alpha}_{t+k|t}$, (1), generated with information available at time instant t (forecast launch time) for time interval t+k, with nominal proportion α (0 < α < 1) and a forecast update rate of z time intervals, corresponds to the conditional expected value of the quantile $q^{\alpha}_{t+k|t+z}$ of the next forecast (i.e., generated at t+z) for the same time interval t+k:

$$q^{\alpha}_{t+k|t+z} = f\left(q^{\alpha}_{t+k|t}, X^{\alpha}_{t+k|t}\right) + e_{t+k|t} \quad (1)$$

where f is the second-level forecasting model for each conditional quantile α, which uses as covariates i) the quantile forecasts $q^{\alpha}_{t+k|t}$ generated at t and ii) engineered features that explain the level of uncertainty in the power forecast generated at t, together with exogenous variables such as NWP, both denoted by $X^{\alpha}_{t+k|t}$ with l variables, plus a random shock $e_{t+k|t}$.
Fig. 2 depicts the integration of second-level forecasting into the time-to-decide problem, considering: i) a forecast launched at 0h00 (t) for lead-time t+30 (day-ahead), and ii) two second-level forecasts: one for the forecast that will be generated 12 hours later with NWP updated at 12h00 (i.e., $\mathbb{E}[q^{\alpha}_{t+30|t+12} \mid q^{\alpha}_{t+30|t}, X^{\alpha}_{t+30|t}]$), and another for the forecast that will be generated 2 hours before delivery time (i.e., $\mathbb{E}[q^{\alpha}_{t+30|t+28} \mid q^{\alpha}_{t+30|t}, X^{\alpha}_{t+30|t}]$). This leads to three risk-cost curves that inform the operator about the possible outcome of waiting for the next forecast to book flexibility, each one corresponding to a different forecast launch time for the "delivery hour": t+30|t, t+18|t, and t+2|t. In this illustrative example, the preferred option would be to wait for the next forecast, since the curve obtained with the second-level forecast for t+2|t indicates a lower flexibility cost for the same risk level.
Fig. 2. Time-to-decide framework and risk-stakes relation. The integration of second-level forecasts into the time-to-decide problem generates risk-cost curves for a) the forecast launched at 0h00 for lead-time t+30|t, and b) two second-level forecasts, one for the t+18|t forecast (NWP updated at 12h00) and another for the t+2|t forecast. The preferred solution for the human operator is selected from a risk threshold conditioned by the risk-stakes relation.
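The book-now vs. wait trade-off behind Fig. 2 can be reduced to a numeric sketch. All prices, limits, and quantile values below are invented for illustration, and the actual decision in this work uses full risk-cost curves rather than a single upper quantile.

```python
def flexibility_volume(q_hi, limit):
    """Flexibility needed to keep the upper loading quantile below the limit."""
    return max(0.0, q_hi - limit)

limit = 100.0                      # MW thermal limit (illustrative)
price_now, price_later = 5.0, 8.0  # availability prices, EUR/MW (illustrative:
                                   # waiting implies a shorter notification time)

# upper quantile of today's day-ahead forecast vs. the second-level forecast
# of the same quantile after the next NWP update (narrower uncertainty band)
q_hi_now, q_hi_expected_later = 130.0, 112.0

cost_now = price_now * flexibility_volume(q_hi_now, limit)
cost_wait = price_later * flexibility_volume(q_hi_expected_later, limit)
decision = "wait" if cost_wait < cost_now else "book now"
```

Here the narrower expected uncertainty more than compensates for the higher late-booking price, so waiting is preferred, mirroring the illustrative example in the text.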
Finally, it is essential to mention that contingencies were not considered. However, the methodology can be easily extended to include information about the probability of contingencies in the risk modeling and combined with the scenarios.

A. Sensitivity Indices (SI)
1) Analytical Method:
The Ybus compound matrix method [29] was used for the analytical derivation of the node voltage and line current SI. This method uses the complete admittance matrix of the network and takes advantage of its sparsity, achieving computational efficiency surpassing that of Jacobian-based methods. Moreover, it applies to networks with any number of slack buses. For topology reconfiguration, the impedance (Zbus) matrix method was used to obtain the SI (distribution factors) due to its linearity and computational efficiency [30].
2) Machine Learning Proxy: The SI vary with the grid operating conditions, meaning each scenario requires an analytical calculation. To decrease the computational time, a gradient boosting trees (GBT) model, with its fast inference time, is used as a proxy to extract functional knowledge relating the SI and the grid operating conditions. Firstly, a large set of grid operating scenarios is generated with the NORTA (NORmal To Anything) method [31]. Then, the SI are analytically computed for this generated set of grid operating scenarios, and the GBT is fitted over this data using active and reactive nodal power injections as input features. To keep the number of input variables low while leveraging the localized impact of active/reactive power nodal injections, the Spearman rank correlation coefficient was applied to measure the correlation between the time series of nodal power injection and the SI. Only the nodal injections exhibiting a correlation above 0.5 were selected as input features.
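A minimal sketch of this proxy-training procedure (Spearman-based feature selection followed by a GBT fit), assuming scikit-learn/SciPy are available and using a synthetic stand-in for the NORTA-generated scenarios; the scenario generator, sample size, and target relation below are illustrative, not from this work.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# synthetic stand-in for NORTA-generated operating scenarios:
# 500 scenarios x 6 nodal power injections; the target sensitivity index
# depends only on injections 0 and 1 (illustrative localized impact)
X = rng.normal(size=(500, 6))
si = 0.8 * X[:, 0] - 0.7 * X[:, 1] + 0.05 * rng.normal(size=500)

# feature selection: keep only injections with |Spearman rho| above 0.5
rho = np.array([abs(spearmanr(X[:, j], si)[0]) for j in range(6)])
selected = np.where(rho > 0.5)[0]

# fit the GBT proxy on the selected injections only
proxy = GradientBoostingRegressor(random_state=0).fit(X[:, selected], si)
```

In the actual methodology, `si` would come from the analytical Ybus-based calculation and `X` from NORTA sampling; the proxy then replaces the analytical method for all subsequent scenario evaluations.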
The GBT model is then used to derive the SI $\delta^{t+k}_{i\to j,s}$ (2) for the forecasted scenario s, grid element i (i.e., line/bus), and flexible resource j, using as input a matrix of forecasted active ($P^{t+k}_s$) and reactive ($Q^{t+k}_s$) power nodal injections for lead-time t+k. The mean absolute percentage error of the SI estimated with the GBT was less than 1%.
Fig. 3. Causality tree for a congested line i and flexibility options in nodes j (Level 1). Level 2 corresponds to the inverse impact in other grid elements by applying the Level 1 actions.
The analytical method is only used for generating the training set of the GBT proxy, using network operating scenarios generated by the NORTA method. Subsequently, the GBT model is used in the remaining steps of the methodology.

B. Interpretation From Sensitivity Indices
The SI relate the rate of change in nodal voltage/line current as a linear function of active/reactive power modulation, variation in CB/RS elements, or OLTC tap position. This information can assist the human operator in identifying the causal relationship between the technical issue and the flexibility options. Fig. 3 depicts a causality tree divided into two levels (unrelated to the network topology), which is helpful to guide the human operator's analysis to the most suitable flexibility options. In Level 1, the SI represent the influence of each flexibility option on reducing the overload in line i via a downward/upward (active power) action over the current operating point of each bus j. Since the activation of flexibility can have an inverse impact on other grid elements (e.g., cause congestion in another line), the SI in Level 2 represent the negative (inverse) impact on other grid elements, which can be used to constrain the activated flexibility in each node j.
From the SI and scenarios, it is possible to compute the contribution $C_j$ of each grid node's (load or generator) j forecast uncertainty to the forecasted technical problem in grid element i, via the average deviation of a set of S scenarios around the point forecast $\hat{y}^{*}_{t+k|t,j}$ weighted by the SI:

$$C_j = \delta_{j\to i} \cdot \frac{1}{S}\sum_{s=1}^{S}\left|\hat{y}_{t+k|t,j,s} - \hat{y}^{*}_{t+k|t,j}\right| \quad (3)$$

where $\delta_{j\to i}$ is the sensitivity of resource j to grid element i, and $\hat{y}_{t+k|t,j,s}$ is the value of scenario s.
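A sketch of this contribution metric, under the assumption that the deviation is the mean absolute difference between the scenarios and the point forecast and that the magnitude of the SI is used as the weight; the numbers are illustrative.

```python
import numpy as np

def uncertainty_contribution(delta_ji, scenarios_j, point_forecast_j):
    """Contribution C_j of node j's forecast uncertainty to the technical
    problem in element i: average absolute deviation of the S scenarios
    around the point forecast, weighted by the sensitivity index."""
    return abs(delta_ji) * np.mean(np.abs(scenarios_j - point_forecast_j))

# illustrative numbers: 5 scenarios around a 10 MW point forecast,
# sensitivity 0.4 of node j to the congested element i
scenarios = np.array([9.0, 11.0, 10.5, 8.5, 12.0])
c_j = uncertainty_contribution(0.4, scenarios, 10.0)
```

Nodes with larger `c_j` combine high forecast uncertainty with high influence on the problematic element, making them natural candidates for the operator's attention.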

C. Flexibility Modeling
1) Active and Reactive Power Nodal Flexibility:
This comprises an FSP aggregating DER or individual RES power plants, DR, and storage systems per grid node, providing flexibility via upward/downward actions around their operating point. The FSP can use advanced aggregation functions, such as [32], to effectively manage and optimize the heterogeneous DER, considering technical constraints, market dynamics, operational objectives, and different asset owners.
The flexibility value (FV), (4), represents the amount of active/reactive power that is computed per flexible resource j, based on $\delta^{s}_{j\to i}$, to solve a technical problem considering the max/min operational limit $\lambda^{lim}$ (e.g., voltage or current limits) and the forecasted operating conditions $\lambda^{s}_{i}$ for each scenario s, where i denotes the element with a technical issue. The suggested upward ($Flex^{up}_{j}$) and downward ($Flex^{down}_{j}$) flexibility margins in the flexibility market are upper limits for $FV^{s}_{j\to i}$.
As illustrated in Fig. 3, a specific flexibility action may create technical issues in other grid elements with opposite SI signs. Thus, (5) is applied to limit the $FV^{s}_{j\to i}$ value, conditioned by the operational limits of the other grid elements v.
In situations where conflicting actions between TSO and DSO may arise during the activation of flexibility, (5) can be applied to limit the $FV^{s}_{j\to i}$ value. For example, suppose activating a flexible resource within the distribution network leads to technical issues in the transmission network. In that case, the TSO can communicate the parameters of (5) to enable the DSO to compute the corresponding limitation. Alternatively, the TSO can simply communicate the limitation percentage applicable to that specific resource.
It is also possible to reach the lower limit $\lambda^{min}$ for elements with the same SI sign, where the elements are forced to a minimum operational limit (e.g., bus voltage should be above 0.9 p.u.). In this case, (6) is applied.
Moreover, the activation of $FV^{s}_{j\to i}$ also creates a change in the slack bus active and/or reactive power injection (in the opposite direction of the flexibility use), which can lead to technical problems in grid elements with the same SI sign as $\delta^{s}_{j\to i}$. Thus, (7) is applied to limit $FV^{s}_{j\to i}$, avoiding a second-order technical problem due to the slack bus (sb) operating point variation.
where V is the set of line and bus elements, and SB is the set of slack buses in the system. The same constraint as in (6) is applied for the slack bus, but conditioned to grid elements with the opposite SI sign of $\delta^{s}_{j\to i}$ due to the reverse reaction of the slack bus to the flexibility use.
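A minimal sketch of the FV computation and its limitation by other grid elements, with illustrative signs and units (loading in %, sensitivities in %/MW); the full formulation (4)-(7) also covers the same-sign and slack-bus cases omitted here.

```python
def flexibility_value(lam_lim_i, lam_s_i, delta_ji):
    """Eq. (4)-style FV: power change at node j needed to bring element i
    from its forecasted state lam_s_i back to its limit lam_lim_i."""
    return (lam_lim_i - lam_s_i) / delta_ji

def limited_fv(fv, others):
    """Eq. (5)-style check: cap the FV so that no other element v is pushed
    past its own limit; `others` holds tuples (lam_lim_v, lam_s_v, delta_jv)."""
    for lam_lim_v, lam_s_v, delta_jv in others:
        if delta_jv == 0.0:
            continue
        headroom = (lam_lim_v - lam_s_v) / delta_jv
        if abs(headroom) < abs(fv):
            # same direction: cap to the headroom; otherwise no feasible action
            fv = headroom if headroom * fv > 0 else 0.0
    return fv

# line i is 10 points above its 100% limit; 0.5 %/MW sensitivity at node j
fv = flexibility_value(100.0, 110.0, 0.5)          # 20 MW downward action
# element v has the opposite SI sign and tolerates only half of that action
fv_capped = limited_fv(fv, [(100.0, 98.0, -0.2)])
```

The capped value is what would be offered to the operator as the usable flexibility of node j for this scenario.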
2) Active Power Redispatch: A combined upward/downward action between two (or more) flexible resources/nodes, such that the total active power feed-in remains virtually unchanged but the congestion is removed. Note that, currently, redispatch actions are primarily employed by TSOs and are included in our proposed methodology for completeness. The application of such actions in distribution networks is still a subject of ongoing discussion, as highlighted in [33]. However, initiatives such as Redispatch 2.0 and 3.0 in Germany [26] have the potential to extend redispatch actions to distribution networks.
In this case, the individual SI in (4) are replaced by the ones computed by (8) for the n flexibility resources (i.e., $j_1, j_2, \ldots, j_n$) participating in redispatch, where $b^{s}_{j\to i}$ is an indicator function equal to +1 if the action is upward and −1 if downward.
To avoid technical issues in other elements (lines or buses), (5)-(7) are checked for all units participating in redispatch.
3) Reactive Power Flexibility From CB/RS: The FV for shunt elements follows (4) and moves in discrete steps determined by the steps of the CB/RS unit, as formulated in (9). The FV is communicated to the human operator in terms of step changes in shunt elements.
where $stp^{max}_{j}$ is an integer parameter representing the maximum allowed step change in the CB/RS and $Q^{u}_{j}$ is the reactive power per step change in shunt element j.
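A sketch of this discretization, with a hypothetical step size and step limit; the rounding convention is an assumption, as (9) only fixes the bounded integer steps.

```python
def shunt_steps(fv_q, q_per_step, stp_max):
    """Translate a reactive-power FV into an integer number of CB/RS step
    changes, bounded by the maximum allowed step change stp_max."""
    steps = round(fv_q / q_per_step)
    return max(-stp_max, min(stp_max, steps))

# 1.3 Mvar requested, 0.5 Mvar per step, at most 2 steps => 2 steps (1.0 Mvar)
n = shunt_steps(1.3, 0.5, 2)
```

The operator thus sees the action as "+2 steps" rather than a continuous Mvar value.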
4) OLTC: In this case, the FV in (4) will be equal to $tap[\%] \cdot tpos$, where tpos and tap[%] denote the tap position and tap percentage (i.e., percentage of voltage adjustment per one-step change in tap position). Thus, (4) is reformulated to (10), where $\lambda^{s}_{i}$ is the voltage in bus i and scenario s.
Moreover, the goal is to determine the required voltage set-point that will be automatically translated to a tap position by the OLTC, using the relation from (11).
In (11), if the tap position ($tpos^{s}$) remains zero, the voltage will be unchanged and equal to the voltage set-point of OLTC j in the neutral tap position, $V^{ne}_{set\text{-}point,j}$ (i.e., zero tap position). Then, (11) can be revised as (12) using (10). Like the other flexibilities, the FV should be limited by the operating margins of other grid buses using an equation analogous to (5).
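A sketch of translating the required voltage set-point into a bounded tap position in the spirit of (11); the per-tap voltage change and tap range are illustrative assumptions.

```python
def oltc_tap_position(v_target, v_neutral, tap_pu, tpos_max):
    """Tap position realizing the required voltage set-point, given the
    voltage change per tap step (tap_pu, in p.u.) and the neutral-tap
    set-point v_neutral; bounded by the tap range [-tpos_max, tpos_max]."""
    tpos = round((v_target - v_neutral) / tap_pu)
    return max(-tpos_max, min(tpos_max, tpos))

# raise the set-point from 1.000 to 1.025 p.u. with 0.0125 p.u. per tap
tpos = oltc_tap_position(1.025, 1.0, 0.0125, 9)
```

In practice, the SO would send the voltage set-point and let the OLTC controller translate it into this tap position automatically.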
Similarly, the limitation of flexibility capacity can also arise due to conflicting actions between TSO and DSO, as explained earlier.

5) Network Topology:
A combination of switching actions is proposed to change the network topology while ensuring it remains a traceable-connected graph. The candidate switching actions are selected based on their area-of-effectiveness, defined as the minimum path between the two end buses of the switched line using graph theory. This approach avoids simulating all possible switching actions. Therefore, the closing/opening actions are determined as follows: a) the closing action is applied to normally open lines whose area-of-effectiveness includes the congested line; b) the opening action is applied to normally closed lines in the area-of-effectiveness of the selected closing action. It was assumed that the opening action only applies when preceded by a closing action within the same area, to prevent additional technical issues. As a result, the proposed actions can include a single closing action, sequential closing actions, and sequential closing and opening actions. This type of flexibility is considered in the decision-aid phase of Section IV together with the other flexibilities.
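The area-of-effectiveness can be obtained with a plain breadth-first-search shortest path between the two end buses of a normally open line; the toy feeder below is illustrative, not the Oberrhein grid.

```python
from collections import deque

def shortest_path(adj, src, dst):
    """BFS shortest path (list of nodes) in an undirected graph given as an
    adjacency dict; returns None if dst is unreachable."""
    prev, seen, queue = {}, {src}, deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                prev[v] = u
                queue.append(v)
    return None

def area_of_effectiveness(adj, open_line):
    """Nodes on the minimum path between the two end buses of a normally
    open line: closing the line is a candidate action only if the congested
    line lies on this path."""
    return shortest_path(adj, *open_line)

# toy feeder: buses 0-1-2-3 in a chain, plus a normally open tie between 0 and 3
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
area = area_of_effectiveness(adj, (0, 3))
congested = (1, 2)
candidate = set(congested) <= set(area)  # closing the 0-3 tie is a candidate
```

Only switching actions whose area covers the congested line are simulated, which is what keeps the candidate set small.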

D. Second-Level Forecasting
The second-level forecasting concept builds upon a set of computed probabilistic forecasts (set of quantiles) generated with any forecasting algorithm. In this work, we used a GBT model with feature engineering [34], with a truncated generalized Pareto distribution for the distribution's tails [35]. This GBT model was extended with additional input variables for second-level forecasting, namely variables engineered with the quantiles from the first-level forecast, i.e., the forecast for day D+1 with NWP data generated at 0h00 of day D.
A second model was an Encoder-Decoder Artificial Neural Network (ED-ANN) [36]. The Encoder comprises two Long Short-Term Memory (LSTM) network layers. The LSTM has feedback connections and can model temporal or sequential information. The Decoder comprises three internal layers: one LSTM and two fully connected layers with a linear activation function. The Decoder produces the output iteratively, and the hidden state produced at each forecast step generates the next forecast. In the training phase, teacher forcing was used, i.e., the actual first-level forecast values were used as input for training the Decoder.
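The quantile-derived input variables (Groups III and IV, detailed in Section V-C) can be sketched as follows. The function and key names are ours; only the feature definitions follow the text.

```python
# Sketch of the Group III/IV features engineered from a first-level
# probabilistic forecast (a vector of ascending quantiles); names are ours.
import numpy as np

def second_level_features(quantiles, target_idx):
    q = np.asarray(quantiles, dtype=float)
    lo, hi = max(target_idx - 2, 0), min(target_idx + 2, len(q) - 1)
    neigh = q[lo:hi + 1]               # target and two neighbors on each side
    return {
        # Group III: quantiles taken directly from the first-level forecast
        "q05": q[0], "q95": q[-1], "q_target": q[target_idx],
        # Group IV: proxies for the first-level forecast uncertainty
        "diff_1": q[min(target_idx + 1, len(q) - 1)] - q[max(target_idx - 1, 0)],
        "diff_2": q[hi] - q[lo],
        "neigh_mean": float(neigh.mean()),
        "neigh_std": float(neigh.std()),
        "iqr": q[-1] - q[0],
    }

quantiles = np.linspace(0.0, 1.0, 19)    # quantiles 5%..95% (toy values)
features = second_level_features(quantiles, target_idx=9)
```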

A. Flexibility Ranking
After calculating the FV_{j→i}^s (see Section III), the next step is ranking the individual flexibility options according to their effectiveness in solving the technical problem under forecast uncertainty. Firstly, the line loading/bus voltage after flexibility use is computed for each scenario, where λ_i^s and λ̃_i^s are the line loading/bus voltage of element i before and after flexibility activation, respectively.
Secondly, for each scenario s, the severity function from [37] associated with line congestion/voltage violation (see (14)) is applied to the value of λ̃_i^s, where λ_i^% is the loading percentage of line i (an analogous equation is applicable for voltage problems).
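A hedged sketch of these two steps is given below. The exact severity function of [37] is not reproduced here; this illustrative piecewise-linear variant is zero below a 90% loading threshold and grows linearly with the overload, which is enough to preserve the ranking logic, and all numbers are toy values.

```python
# Hedged sketch: an illustrative severity function (not the one from [37])
# and the post-flexibility loading update; numbers are toy values.
def severity(loading_pct, threshold=90.0):
    """Zero below the threshold, linear in the relative overload above it."""
    return max(0.0, (loading_pct - threshold) / threshold)

def post_flex_loading(loading_pct, fv_pct):
    """Line loading after a flexibility relieving fv_pct loading points."""
    return loading_pct - fv_pct

scenario_loading = [105.0, 98.0, 87.0]    # loading [%] per scenario s
fv = 10.0                                 # assumed flexibility value
sev_after = [severity(post_flex_loading(l, fv)) for l in scenario_loading]
```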
The flexibility cost corresponding to FV_{j→i}^s is also computed considering the flexibility bid price. The cost of the system operator assets was computed with the methodology from [38], described in Appendix A.
It is important to underline that the goal here is to rank each flexibility option. Thus, cost and severity are computed for each scenario and flexibility option. Then, risk metrics are computed from the severity and flexibility cost, namely: expected value and value-at-risk (VaR) of the flexibility cost, expected value and VaR of the severity, and probability of over-/under-voltage or congestion. In situations involving conflicting actions between TSO and DSO, an alternative approach to constraining the FV_{j→i}^s in (5) (as explained in Section III-C) is to incorporate risk metrics that quantify the potential impact on the TSO or DSO networks. This can include metrics such as the probability of congestion occurring in an upstream line within the TSO network.
Note that these metrics are computed for all flexibility options described in Section III-C, including network reconfiguration actions (whose impact is estimated with the Z_bus matrix; see Section III-A1). Then, the TOPSIS method is applied to rank the flexibility options (i.e., the set of alternatives), using the risk metrics as criteria. There is no differentiation between flexibility types during the ranking process. For instance, if a network switching action ranks among the top solutions, it will be included in the set of actions used to construct the risk-cost curves discussed in the following subsection.
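The TOPSIS ranking over the risk metrics can be sketched as follows. The criteria values and equal weights are illustrative; the paper does not specify the weighting.

```python
# Sketch of TOPSIS ranking over risk metrics (illustrative criteria values
# and equal weights; the paper's exact weighting is not given).
import numpy as np

def topsis(matrix, weights, benefit):
    """matrix: options x criteria; benefit[j] is True if higher is better."""
    X = np.asarray(matrix, dtype=float)
    V = X / np.linalg.norm(X, axis=0) * np.asarray(weights, dtype=float)
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)   # closeness coefficient, higher = better

# Criteria: E[cost], VaR[cost], E[severity], VaR[severity], P(congestion);
# all are "lower is better", hence benefit=False for every criterion.
matrix = [[100.0, 120.0, 0.10, 0.20, 0.05],
          [ 50.0,  60.0, 0.30, 0.40, 0.20],
          [150.0, 180.0, 0.20, 0.30, 0.10]]
weights = [0.2] * 5
benefit = np.array([False] * 5)
scores = topsis(matrix, weights, benefit)   # rank options by descending score
```

Note that the third option is dominated by the first on every criterion, so its closeness score is necessarily lower.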

B. Risk-Cost Curves
Following the flexibility ranking, the top three to five flexibility options are selected for combination. All possible combinations are considered, including binary actions and actions with discrete and continuous values. Among all these combinations, only the non-dominated ones form the risk-cost curve.
This approach is applied to the uncertainty forecast available at the present time and to the second-level forecasts generated with the methodology described in Section III-D, resulting in multiple risk-cost curves (one for each forecast).
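The non-dominated filtering that turns candidate combinations into a risk-cost curve can be sketched as below; the points are toy values.

```python
# Sketch of the non-dominated (Pareto) filter over candidate flexibility
# combinations; the (risk, cost) points are toy values.
def non_dominated(points):
    """points: iterable of (risk, cost), lower is better in both dimensions;
    returns the Pareto front sorted by increasing risk."""
    pts = list(points)
    keep = []
    for i, (r, c) in enumerate(pts):
        dominated = any(r2 <= r and c2 <= c and (r2 < r or c2 < c)
                        for j, (r2, c2) in enumerate(pts) if j != i)
        if not dominated:
            keep.append((r, c))
    return sorted(keep)

combos = [(0.20, 10), (0.10, 25), (0.10, 40), (0.05, 60), (0.25, 5)]
curve = non_dominated(combos)   # (0.10, 40) is dominated by (0.10, 25)
```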

C. Decision-Making Paradigms
The final step (decision) requires the involvement of a human decision-maker to select a preferred solution from the risk-cost curve(s). Two alternative decision-making paradigms are considered:
1) Maximum Risk-Threshold: Sets a maximum threshold for the risk level, conditioned by the decision stakes. In this work, the stakes are directly linked to the risk of cascading failures in the grid if the technical problem is a congested line. A similar concept can be derived for voltage problems, e.g., propagation effects of over-voltage in the protection system. The cascading risk is computed by simulating (using the Z_bus method, in each scenario s) the impact on all grid lines of tripping the congested line i and computing the expected number of lines tripping. This expected value is normalized between 0 and a maximum pre-defined value for the cascading simulation and re-scaled into a range of integer numbers that is easy for a human operator to analyze (e.g., between 1 and 10).
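The mapping from cascading simulation to the integer stakes scale can be sketched as follows; the normalization cap of 20 trips and the scenario numbers are assumed example values.

```python
# Sketch (with an assumed normalization cap) of the stakes computation: the
# expected number of additional line trips after the congested line trips,
# re-scaled to an integer 1-10 scale for the human operator.
def stakes_from_cascade(trips_per_scenario, probs, max_trips=20):
    expected = sum(t * p for t, p in zip(trips_per_scenario, probs))
    frac = min(expected / max_trips, 1.0)     # normalize to [0, 1]
    return 1 + round(frac * 9)                # map onto the 1..10 scale

# Three scenarios: 12, 8, and 4 extra trips with the given probabilities.
stakes = stakes_from_cascade([12, 8, 4], [0.2, 0.5, 0.3])
```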
It is necessary to find, via interaction with the human decision-maker, a functional relation between stakes (risk of cascading) and the maximum risk threshold, so that stakes condition the risk level in each decision. A simple method to find this functional relation is direct rating (e.g., the decision-maker is asked to state the maximum risk for stakes equal to 10). Still, other methods like direct mid-point can be used [39].
2) Risk-Cost Trade-off: The preferred solution is the one with the minimum equivalent cost, Eqcost (15), obtained by considering a trade-off value μ between risk and cost for each solution p in the curve. The trade-off value can be determined by interaction with the decision-maker, e.g., via indifference judgments [39], and estimated for each stake level, creating a functional relation between trade-off value and stakes. In this case, the higher the stakes, the higher the trade-off value, i.e., the decision-maker is willing to pay more to decrease risk.
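The two paradigms can be sketched over a risk-cost curve as follows; the curve points and μ values are illustrative, with μ playing the role of the elicited trade-off value in Eqcost (15).

```python
# Sketch of the two decision paradigms over a risk-cost curve; the curve
# points and mu values are illustrative assumptions.
def pick_by_threshold(curve, max_risk):
    """Maximum risk-threshold paradigm: cheapest point within the threshold."""
    feasible = [(r, c) for r, c in curve if r <= max_risk]
    return min(feasible, key=lambda p: p[1]) if feasible else None

def pick_by_tradeoff(curve, mu):
    """Risk-cost trade-off paradigm: minimize Eqcost = cost + mu * risk."""
    return min(curve, key=lambda p: p[1] + mu * p[0])

curve = [(5, 60), (10, 25), (20, 10), (25, 5)]         # (risk %, cost EUR)
sol_threshold = pick_by_threshold(curve, max_risk=6)   # -> (5, 60)
sol_tradeoff = pick_by_tradeoff(curve, mu=30)          # 30 EUR per 1% of risk
```

When the risk-cost curve cannot reach the threshold, `pick_by_threshold` returns no solution, which corresponds to the flexibility-scarcity situation discussed in Section V-E.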
V. NUMERICAL RESULTS

A. Case-Study Description
The 20 kV Oberrhein MV network, supplied by two 25 MVA HV/MV substations,1 was used with some modifications to increase RES penetration and create technical problems. The network supplies 141 MV/LV (secondary) substations and 61.86 MW of load (peak power) through four MV feeders. The topology is meshed but operated as a radial grid. It contains 147 consumers, nine WPP, four CHP units, and three storage systems. The secondary substation load measurements were taken from the Iowa Distribution Test System [40], and the wind power measurements are from WPP in France (the location cannot be disclosed for confidentiality reasons). The NWP are from the ECMWF High-Resolution Forecast model. Without loss of generality, the uncertainty forecast was applied only to the WPP, and a perfect forecast was used for the substation load and CHP. The training period is from 2019-04-01 to 2020-03-30, and the testing period is six months, starting on 2020-04-01.
In terms of flexibility, the bid by generation units (i.e., WPP and CHP) was considered to be 30% of the point forecast for the upward/downward directions, while the upward direction for WPP was not considered. As assumed in [41], a shorter notification time can increase the flexibility price. Thus, the following was considered: a 40% price increment for booking flexibility 12 hours later (assuming the present moment as the reference time) and 90% for booking two hours before the delivery time. A sensitivity analysis of these prices is presented in Section V-F. Without loss of generality, it was assumed that availability and activation prices for flexibility are equal.
For DR, we considered a specific time schedule for its availability, and the maximum flexibility is 30% of the electrical energy consumption in a specific hour for the upward/downward directions. In the case of storage units, flexibility was determined by the FSP as a function of battery state-of-charge and depth-of-discharge.
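The case-study bid assumptions can be sketched as follows. The 30% level, the absence of upward wind bids, and the 40%/90% price increments come from the text; the function names are ours.

```python
# Sketch of the case-study flexibility-bid assumptions; the percentages come
# from the text, the function and key names are ours.
def bid_volume(point_forecast_mw, kind, direction, level=0.30):
    """Offered flexibility volume as a share of the point forecast."""
    if kind == "WPP" and direction == "up":
        return 0.0                     # upward wind flexibility not offered
    return level * point_forecast_mw

def availability_price(base_price, booking):
    """Availability price grows as the notification time shrinks."""
    factor = {"now": 1.0, "12h_later": 1.40, "2h_before": 1.90}[booking]
    return base_price * factor

v = bid_volume(10.0, "WPP", "down")          # 3 MW downward wind flexibility
p = availability_price(50.0, "2h_before")    # 90% more expensive than "now"
```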
To evaluate the impact of different human decision-makers (DMs), different curves relating stakes and risk threshold/trade-off are considered in Table I.
The proposed methodology can address both congestion and voltage issues.However, the main focus was on congestion problems.

B. Benchmark Strategies and Evaluation Metrics
For benchmark, the following strategies were considered:

1 https://pandapower.readthedocs.io/en/v2.0.0/networks/mv_oberrhein.html

TABLE I FUNCTIONAL RELATION BETWEEN STAKES AND RISK THRESHOLD/TRADE-OFF VALUE FOR DIFFERENT DM

TABLE II COST-LOSS MATRIX TO EVALUATE THE PERFORMANCE OF THE PREDICTIVE GRID MANAGEMENT PROCESS
Time-to-decide (T2D): Novel concept proposed in this work where the operator decides to i) book flexibility now or ii) wait for the next forecast update and book flexibility later.
Decision-now (DN): The operator, using the information from uncertainty forecasts, decides to book flexibility at the lowest (availability) cost, based on forecasts generated with NWP issued at 0h00.
Deterministic 1 (D1): The operator decides, based on a deterministic forecast, to i) book flexibility now or ii) wait for the next forecast update and book flexibility later.
The probabilistic method from [15].
The overall evaluation of the results considered two evaluation matrices: the classical confusion matrix and the cost-loss matrix (see Table II). The confusion matrix is used to evaluate the performance of each strategy in detecting congestion problems, from which F-score accuracy metrics are computed to measure the balance between precision (i.e., the true positives (TP) divided by everything classified as positive) and recall (i.e., the TP divided by everything that should have been classified as positive).
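The F-scores computed from the confusion matrix can be sketched as follows; F3 (β = 3) weighs recall more heavily than precision, and the TP/FP/FN counts are illustrative.

```python
# Sketch of the F-beta detection scores from the confusion matrix; the
# TP/FP/FN counts below are illustrative values.
def f_beta(tp, fp, fn, beta=1.0):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta ** 2
    return (1.0 + b2) * precision * recall / (b2 * precision + recall)

f1 = f_beta(tp=40, fp=10, fn=20)             # balances precision and recall
f3 = f_beta(tp=40, fp=10, fn=20, beta=3.0)   # recall-oriented
```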
The cost-loss matrix draws inspiration from the threshold-based cost-loss analysis method described in [42] for evaluating weather forecasts. For this problem, the matrix was adapted as follows: flex cost (C) corresponds to the preventive actions cost (i.e., flexibility cost); loss (L) corresponds to the real-time emergency action (i.e., load or generation curtailment) to solve the congestion problem, with a monetary cost corresponding to the value-of-lost-load (VoLL), considered to be 12000 €/MWh [43]. The table can be summarized as the summation of the matrix elements, each weighted by its percentage of occurrence, which is the performance indicator γ.

C. Second-Level Forecasting Performance

1) Input Variables Analysis: The set of input variables is divided into five categories. Group I consists of variables computed from the NWP data (i.e., generated at 0h00 on day D), namely: the wind module and direction at 10 and 100 meters, averaged over the locations of the wind turbines in each wind power plant, and lags and leads (t ± 3) of these variables, as described in [34]. The first three components of the PCA decomposition of the NWP variables across all wind power plants (WPP) are also part of this group. Group II pertains to calendar variables, namely sine and cosine transformations of the hour of the day.
Group III comprises variables directly extracted from the first-level forecasts, namely the 5% and 95% quantiles and the target quantile. Group IV is a proxy for the uncertainty level extracted from the first-level forecasts, namely: the difference between the quantiles one position away from the target quantile (i.e., the previous and following quantiles); the difference between the quantiles two positions away from the target quantile; the average and standard deviation of the set of target and neighboring quantiles (two before and two after the target quantile); and the inter-quantile range.
Group V includes variables computed from a poor man's ensemble [44], given that we have a time horizon of up to 90 hours and 12-hour NWP updates. Using these variables did not yield any improvement in almost all quantiles. In fact, on average, the model performed worse than the best model, i.e., a −0.17% improvement in the second-level forecast for the 12-hour forecast, and −0.03% for the two-hours-ahead second-level forecast. The meteo-risk index proposed in [44], which reflects the spread of the European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble forecasts, was also tested as an input variable. The improvement was below 2.9%, which does not justify the cost associated with this data. Different variants of input variables were tested, as shown in Table III.
The second-level forecasting skill is evaluated with the Mean Squared Error (MSE), considering the difference between the forecasted quantile (q^α_{t+k|t+z}) and the second-level forecast of the same quantile (q̂^α_{t+k|t}). Moreover, a naive benchmark model where

TABLE IV MEAN PERCENTAGE IMPROVEMENT (PER WPP) OVER THE NAIVE MODEL FOR DIFFERENT MODEL VARIATIONS

TABLE V IMPROVEMENT OVER NAIVE MODEL PER WPP
q̂^α_{t+k|t} = q^α_{t+k|t} was used, assuming that there is no variation in the quantile forecasts with an information update. The MSE improvement over the benchmark model was computed for each model. As mentioned in Section II, two second-level forecasts are considered: a second-level forecast for the forecast updated with NWP generated at 12h00, and a second-level forecast for the two-hours-ahead forecast. We used data from nine wind power plants (WPP) for our experiments. Data from April 2019 to March 2020 was used for model training, and from this portion, about 25% of the samples were randomly selected for validation. The testing period spanned from April to September 2020. Note that the input and target quantiles of the training period were produced through k-fold cross-validation.
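The skill score against the naive benchmark can be sketched as below; the arrays are toy values.

```python
# Sketch of the MSE-based improvement over the naive model (which keeps the
# first-level quantile unchanged); the arrays below are toy values.
import numpy as np

def mse(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.mean((a - b) ** 2))

def improvement_over_naive(q_updated, q_first, q_second_level):
    """Percent MSE improvement of the second-level forecast over the naive
    model, with the updated first-level quantile as the target."""
    mse_naive = mse(q_updated, q_first)
    mse_model = mse(q_updated, q_second_level)
    return 100.0 * (mse_naive - mse_model) / mse_naive

imp = improvement_over_naive([1.0, 2.0], [0.0, 0.0], [0.8, 1.6])
```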
Table IV presents the mean improvement (over all WPP, quantiles 5% to 95% in 5% increments) over the naive model for the different model variations from Table III. The best performance is from model variation Var1. This combination of variables is used to compare GBT and ED-ANN.
2) Model Comparison: The results, averaged per WPP and including extreme quantiles (lower than 5% and higher than 95%), are shown in Table V. Starting with the second-level forecast for the 12h00 forecast launch time on day D, we observe that the ED-ANN shows results comparable with the GBT, although the improvement is higher for the former in five WPP.
For the second-level forecast for the two-hours-ahead forecast, the ED-ANN outperforms the GBT model for seven WPP. Both improve on the naive model, showing the potential of the second-level forecasting concept, in particular the ED-ANN architecture.

D. Illustrative Example
This subsection presents an example for line 155 on August 13 at 17h00, demonstrating the potential of the time-to-decide approach in reducing false congestion alarms, since the observed (real) loading of this line was 37.6%. The forecast launched at 0h00 for day D+1 (i.e., t + 42|t) estimated a probability of congestion equal to 39.6%. In contrast, the two second-level forecasts (for D+1 with NWP at 12h00, and for t + 2) show a decrease in the probability, to 23.6% and 16.6% respectively, which might indicate a false congestion alarm.
Fig. 4 illustrates the contribution of the nine WPP (C_j in (3)), depicted through a gradient color, to the false congestion alarm in line 155. In this case, the highest contribution (i.e., 36.7%) is from the WPP at bus 51 (where line 155 is connected), which has a higher forecast absolute error for that time interval (47.26% of rated power), followed by the WPP connected to bus 31, with a contribution of 23.87% and an absolute forecast error of 52.53%. Fig. 5 depicts the line loading's probability density function (PDF) for the three different forecasts, obtained with kernel density estimation over the scenario samples. The point forecast (100.3% of the line loading) and observed value (37.6% of the loading) are also presented.
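The probability-of-congestion and PDF estimates built from the loading scenarios can be sketched as follows; the samples and the fixed bandwidth of 5 loading points are illustrative assumptions.

```python
# Sketch of the congestion probability and a fixed-bandwidth Gaussian KDE
# over loading scenarios; samples and bandwidth are illustrative.
import numpy as np

def congestion_probability(loading_scenarios, limit=100.0):
    s = np.asarray(loading_scenarios, dtype=float)
    return float(np.mean(s > limit))     # share of scenarios over the limit

def gaussian_kde_pdf(samples, grid, bandwidth=5.0):
    s = np.asarray(samples, dtype=float)[:, None]
    g = np.asarray(grid, dtype=float)[None, :]
    k = np.exp(-0.5 * ((g - s) / bandwidth) ** 2)
    return k.sum(axis=0) / (len(samples) * bandwidth * np.sqrt(2.0 * np.pi))

samples = [80, 95, 102, 110, 90, 85, 99, 101, 97, 88]  # loading [%] per scenario
p_congestion = congestion_probability(samples)
pdf = gaussian_kde_pdf(samples, np.linspace(0.0, 200.0, 2001))
```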
The risk vs cost curves for each forecast are depicted in Fig. 6. Using the risk threshold-stakes curve in Table I, and since, in this case, the stakes are equal to 6, the risk threshold is equal to 6% (in DM A) and 15% (in DM B). For DM A, booking flexibility only two hours in advance (t + 2|t) is preferred. Note that the stakes for the two second-level forecasts are equal to 4, which leads to a risk threshold of 6% for DM A. Therefore, the operator should wait for the upcoming forecasts and not book any flexibility. Then, 12 hours later, the operator receives an updated forecast for the same hour (now lead-time t + 30|t) and a second second-level forecast for t + 30|t + 28. Due to the lower total flexibility cost in the second-level forecast for t + 30|t + 28, the operator will postpone the flexibility booking to the last forecast at t + 2|t. The very short-term forecast launched at t + 2|t estimates a near-zero probability of congestion, as depicted in Fig. 7, meaning that no flexibility is necessary for that specific hour. The same procedure could be followed for the risk level of DM B, and it would also reach this point where flexibility is not used. Therefore, the proposed time-to-decide methodology can reduce the flexibility cost and false alarms through the use of second-level forecasts.

E. Practical Issues

1) Flexibility Scarcity: Flexibility scarcity events might occur (e.g., due to low flexibility market liquidity) and can be detected in advance by the proposed method. For instance, on April 14 at 11h00, the observed loading of line 49 was 124.47%. This congestion was detected with the forecast for t + 35|t with a probability of 89.9%. The risk-cost curve for the t + 35|t forecast is depicted in Fig. 8. The set of actions for the lowest reachable risk (i.e., 79.6%) is active power curtailment of 30% in the WPP and CHP unit of bus 29 and the WPP of bus 51, together with 20% in buses 43 and 117. However, these actions are insufficient for solving this technical issue or reaching an acceptable risk threshold for a decision-maker. In this case, the congestion problem will occur, leading to real-time control actions (e.g., RES curtailment) and a monetary loss of 329.76 €.

2) Conflicting Action Between TSO and DSO: In this illustrative example, line 155 on July 5 at 04h00, a solution at the distribution level creates transmission-level problems. The TSO has two possibilities to integrate this information: a) incorporating risk metrics into the flexibility ranking that quantify the potential impact on the TSO network (as explained in Section IV-A); b) imposing a limit on the FV_{j→i}^s value to prevent technical problems in the transmission network (as explained in Section III-C).
To illustrate the first possibility, Table VI presents the modification in the flexibility ranking by incorporating the probability of congestion in the transmission network (referred to as "TSO risk" in the table for simplicity) and the set of top flexible resources. Using this approach, the information from the TSO would be used to influence the flexibility ranking and select the actions with a lower probability of creating technical problems in the upstream network. This implies that flexibility options with higher costs may be placed in the top-ranked positions because of their capability to avoid technical issues in the transmission network. For the second possibility, Fig. 9 depicts the risk-cost curves, as an example, for the t + 28|t forecast, with and without the TSO limitation on FV_{j→i}^s. The curve with TSO limitations shows that, to maintain the same risk threshold (e.g., 5%), a higher cost for flexibility is necessary. As a result, constraints on flexibility due to conflicting TSO-DSO actions could result in expensive solutions.
3) Changes in Flexibility Level: The forecasted congestion of line 155 on July 5 at 05h00 was selected for a sensitivity analysis of the flexibility level (i.e., the maximum flexibility per flexibility resource). Three flexibility levels were considered: 10%, 30% (used in the remainder of this work), and 70% of the RES point forecast or load. The impact of this change on the risk-cost curve for t + 2|t is depicted in Fig. 10. It shows that reducing the flexibility level can prevent the curve from reaching a lower risk level. Moreover, for the same risk level, the total flexibility cost for a 70% flexibility level is lower, since a larger volume of flexibility from top-ranked solutions is available. The result of the T2D strategy for different flexibility levels is presented in Table VII for DM A. At the 10% flexibility level, the risk-cost curve cannot reach the risk threshold defined by the DM, and real-time RES curtailment actions will be necessary to solve the congestion problem. Increasing the flexibility level to 30% and 70%, the congestion will be solved with the "reserved" flexibility. However, the cost of booking flexibility with a 70% flexibility level is lower. In both the 30% and 70% cases, the solutions obtained by the T2D strategy outperform the DN strategy.

F. Overall Evaluation
An overall evaluation was conducted over six months (from April to September 2020). All simulations were run on a cloud-based virtual machine with an AMD EPYC CPU (8 cores at 2.94 GHz), 32 GB RAM, and Microsoft Windows 10 Pro. Table VIII presents the computational time of each step of the proposed method.
To evaluate the congestion detection performance of each strategy, the F1-score and F3-score are reported in Table IX. The T2D approach achieves the highest score, with DM A (using a lower risk threshold) showing higher performance in the F3-score, while DM B outperforms in the F1-score. The deterministic strategies show the lowest score due to a lower rate of TP detection (and a high rate of FN cases). The probabilistic approaches overcome this limitation, and the rate of FP is reduced in the T2D strategy due to the information provided by the second-level forecasts. The benchmark model from [15] is outperformed by T2D in DM A, B, D, and E.
The results of the cost-loss matrix per DM strategy are presented in Figs. 11 and 12. The probabilistic strategies show a lower rate-of-occurrence and total cost in Cell C ("action not taken, event occurred"). This shows their effectiveness in addressing overlooked congestion cases, in contrast to the deterministic strategies. The higher rate-of-occurrence and total cost in the matrix row "action taken" (Cells A+B) are due to more congestion problems being solved, mixed with some false congestion alarms (a "side-effect" of risk-based approaches). Compared to the DN strategy, the T2D strategy shows a lower rate-of-occurrence and total cost for Cell C ("action not taken, event occurred") and Cell A ("action taken, event occurred") since, with the use of second-level forecasting, the human operator may decide to book flexibility now or postpone the decision to the next forecast. In this case, the strategy also benefits from the possibility of detecting overlooked congestion (i.e., reducing the FN cases). Moreover, the T2D strategy can significantly reduce the cost caused by false alarms, which is reflected in a higher rate-of-occurrence in Cell D ("action not taken, event did not occur"). The possibility of selecting the flexibility booking moment with the T2D strategy can reduce the cost significantly, as explained in Section V-D.

The comparison between the three different DMs shows distinct results: DM A, with lower risk thresholds (in comparison to DM B and C), can further reduce the loss of preventive management while paying more for the flexibility than the other two DMs. The trade-off strategy (DM D and E) provides the DM with the ability to manage the balance between risk and cost effectively. For example, DM D and DM E were willing to pay 30 € and 70 €, respectively, to achieve a 1% reduction in risk. This level of control over risk and cost is not possible with the risk-threshold strategy (DM A-C), where the DM can only determine the maximum acceptable level of risk without considering the associated costs. Consequently, this limitation may result in the implementation of costly solutions to meet the risk threshold. In this case, DM D contracts more flexibility to solve the technical problems (higher rate-of-occurrence in Cell B, "action taken, event did not occur") compared to DM B and C, which have higher risk thresholds than DM A. DM E has a higher trade-off value than DM D for each stakes level, which means that it is willing to pay more to decrease the risk. For DM E, this leads to a higher total cost in Cell B ("action taken, event did not occur"), but also to a lower rate-of-occurrence than DM D.

The performance indicator γ was computed for each strategy, and the results are presented in Table IX. These results show that the D1 strategy, which "reserves" flexibility at the moment with the lower flexibility price but higher uncertainty (which leads to more "reserved" flexibility), has a 3.69% better performance than the D2 strategy. The DN strategy shows 80.61%, 76.12%, 74.53%, 81.09%, and 81.73% improvement over the D1 strategy. The performance of the T2D strategy was evaluated with respect to the DN strategy for each DM, where an improvement of 21.08%, 20.72%, 20.45%, 12.78%, and 21.74% was obtained for DM A, DM B, DM C, DM D, and DM E, respectively. These results show the advantage of the proposed T2D strategy, which also outperforms the benchmark model from [15] for all DM A-E. Lastly, it is important to acknowledge that the distinct risk profiles of DMs can result in different performances. Consequently, an area of future research lies in integrating DM preferences into the cost-loss matrix and the corresponding performance metric. This would enable direct comparisons between decision paradigms (e.g., risk-threshold vs trade-off) or DMs.
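The rate-weighted summation defining γ (the summation of the cost-loss matrix elements, each weighted by its percentage of occurrence) can be sketched as below. Only the 12000 €/MWh VoLL comes from the text; the cell rates and per-event costs are illustrative numbers, and under this reading γ behaves as an expected cost (lower is better).

```python
# Sketch of the performance indicator gamma: cost-loss matrix cells weighted
# by rate of occurrence. VoLL comes from the text; other numbers are toy.
VOLL = 12000.0  # EUR/MWh

def gamma(cells):
    """cells: {name: (rate_of_occurrence, average_cost_eur)}."""
    return sum(rate * cost for rate, cost in cells.values())

cells = {
    "A_action_event":       (0.05, 300.0),        # flex cost C, event avoided
    "B_action_no_event":    (0.03, 250.0),        # flex cost C, false alarm
    "C_no_action_event":    (0.01, 0.5 * VOLL),   # loss L: 0.5 MWh curtailed
    "D_no_action_no_event": (0.91, 0.0),
}
g = gamma(cells)
```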
The performance of the proposed T2D strategy was also evaluated for different flexibility price increment percentages as a function of the notification time (the previous results assumed increments of 40% and 90%). The evaluation was conducted through the sensitivity analysis in Table X. In all cases, the T2D strategy consistently outperformed the DN strategy, showing an average improvement of 20.75%. Furthermore, the minor variations observed in the performance indicator γ indicate the robustness of the T2D strategy to changes in flexibility prices with the notification time.

VI. CONCLUSION
This article describes a predictive methodology for flexibility procurement to manage technical constraints under forecast uncertainty, considering the different flexible resources. The main contribution is a complete methodology to guide the human operator along a) the flexibility options available in each hour, ranking them according to their effectiveness in an uncertain context, and b) multiple forecast updates. This methodology was tested in a public MV network, and the results showed that: a) a methodology based on uncertainty forecasts can lead to cost savings when managing grid technical constraints, and

TABLE XI PARAMETERS FOR CALCULATING FLEXIBILITY COST OF SO ASSETS
b) choosing the best moment to reserve flexibility also leads to cost savings in comparison to a strategy that reserves flexibility when it is cheaper. It is essential to note that short-term market-based flexibility procurement still has some challenges at the distribution grid level, namely: a) dealing with the radial topology in MV and LV grids, and geographical restrictions; b) the lack of a mature regulatory framework for new DSO roles and coordination across different markets; and c) upward/downward flexibility price asymmetry. Moreover, the use of the DSO's resources (e.g., OLTC, network topology reconfiguration) should be coordinated with local flexibility market operation, notably in areas with limited (radial) geographical coverage, fewer FSPs, and lower liquidity. Uncoordinated use of such resources may deter participation, or lead to bid and/or settlement distortions.
Topics for future work are: including the look-ahead impact (i.e., > t + k) of activating flexibility at lead-time t + k in the flexibility options ranking; developing new metrics to evaluate decision quality under uncertainty, since the cost-loss matrix used in this work does not integrate the operator's attitude towards risk; and integrating low-probability, high-impact weather-related events forecasted with NWP ensembles, with appropriate decision-aid strategies tailored for managing such events. Moreover, the original concept of second-level forecasting has room for further improvement by exploring other statistical learning models and additional features; it can also be applied to different use cases, such as deciding whether to offer all RES in the day-ahead market or wait for the intraday sessions.

APPENDIX A FLEXIBILITY COST
The cost of the system operator's flexibility assets is computed per switching action as follows [38], where T_T, a_T, and t_OT are the total allowable number of adjustments, the lifetime after the element is adjusted T_T times, and the maintenance period, respectively. The maintenance cost and the capital cost of the element (e.g., OLTC, power line, capacitor bank) are denoted by F_OT and F_s/b. Note that for the OLTC, F_s/b is equal to F_T^OLTC · (a'_T − a_T)/a'_T, where a'_T is the lifetime when the tap is never adjusted and F_T^OLTC is the capital cost of the transformer with OLTC. Table XI presents the parameters of the flexibility cost formula for each element.

Fig. 4. Contribution (C) of RES power plants forecast uncertainty to the forecasted probability of congestion in one line.

Fig. 6. Risk-cost curves for the forecast and the two second-level forecasts launched at 0h00.

Fig. 7. PDF of line loading of forecast launched at t + 2|t.

Fig. 8. Risk-cost curve for a case with flexibility scarcity.

Fig. 9. Risk-cost curves for the cases with and without TSO limitation in the flexibility value (FV).

Fig. 11. Rate-of-occurrences for each cell of the cost-loss matrix.

Fig. 12. Total cost for each cell of the cost-loss matrix.

TABLE III LIST OF INPUT DATA VARIATIONS FOR THE SECOND-LEVEL FORECASTING MODEL

TABLE VI FLEXIBILITY RANKING WITHOUT AND WITH TSO LIMITATION

TABLE VII SENSITIVITY ANALYSIS OF THE T2D STRATEGY FOR DIFFERENT FLEXIBILITY LEVELS

TABLE X SENSITIVITY ANALYSIS FOR DIFFERENT FLEXIBILITY PRICE INCREMENTS (IN %) AS A FUNCTION OF THE NOTIFICATION TIME