
We use a stochastic differential equation (SDE) of the form $dX_t = a(X,t)\,dt + b(X,t)\,dB_t^H$, where $B_t^H$ is a fractional Brownian motion, to model inflationary dynamics. Next, we specify a stochastic differential equation driven by a fractional Brownian motion and a Lévy process and perform several numerical simulations. Almost all other stochastic processes, such as ordinary Brownian motion or Lévy processes, have independent increments; fractional Brownian motion, by contrast, has correlated increments, which makes it suitable for persistent dynamics.

Definition of "fractional Brownian motion (fBm)." Let the Hurst index H be 0

The simulation follows a mean-reverting stochastic differential equation driven by fractional Brownian motion and a Lévy process. We therefore conclude that a fractional Brownian motion combined with a Lévy process provides a better approximation for modelling eurozone inflation dynamics.
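The chapter does not reproduce its simulation code, but a hedged sketch of a mean-reverting SDE driven by fBm increments with an added compound-Poisson jump term might look as follows; the drift form and all parameter values are our assumptions, not the chapter's calibration, and the sketch reuses `fbm_path` from above.

```python
# Euler scheme for a mean-reverting SDE driven by fBm increments plus a
# compound-Poisson (Levy jump) term. Parameters are illustrative assumptions.
import numpy as np

def simulate_inflation(n=288, H=0.7, kappa=0.5, mu=2.0, sigma=0.4,
                       jump_rate=0.3, jump_scale=0.5, T=24.0, seed=1):
    rng = np.random.default_rng(seed)
    dt = T / n
    dB = np.diff(fbm_path(n, H=H, T=T, seed=seed), prepend=0.0)  # fBm increments
    x = np.empty(n + 1)
    x[0] = mu                                    # start at the long-run mean
    for k in range(n):
        n_jumps = rng.poisson(jump_rate * dt)    # compound-Poisson jump count
        jumps = rng.normal(0.0, jump_scale, n_jumps).sum()
        x[k + 1] = x[k] + kappa * (mu - x[k]) * dt + sigma * dB[k] + jumps
    return x
```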

Figure 3 shows the Harmonised Index of Consumer Prices (HICP) of the Eurozone at monthly frequency from 1997 to 2020.

Conclusion

In this regard, a higher inflation target does not eliminate deflationary events any more than today's target level of 2% does. From an economic point of view, however, we argue that a higher inflation target creates greater volatility and consequently undermines the anchoring of inflation expectations. As a result, raising the inflation target is not without risk, given growing uncertainty about inflation expectations and price stability in general.

That said, the Eurozone's stable and low inflation rates depend heavily on the European Central Bank's inflation target. Our model simulation shows that raising the inflation target, as proposed by Blanchard et al., may end in higher volatility and the risk of de-anchored inflation expectations.

The latter can create a strong upward bias in inflation rates beyond the control of the central bank.

In this chapter, we present statistical methods that can be used to combine historical data and scenario estimates to estimate extreme quantiles.

Introduction

Since financial companies have limited historical data available to estimate these extreme quantiles, they often use expert scenario assessments to supplement the historical data and provide a forward-looking view. Such a combination is accepted as an adequate reflection of the past, but it must also look forward, in the sense that expected future losses are taken into account. To estimate a one-in-a-thousand-year loss, one would hope that at least a thousand years of historical data were available.

In reality, however, only between five and ten years of internal data are available, and expert scenario assessments are often used to supplement the historical data and provide a forward-looking view. In this chapter, we outline statistical methods that can be used to estimate VaR from historical data in combination with expert quantile assessments. In the next section, we discuss two approaches, Monte Carlo simulation and the Single Loss Approximation (SLA), that can be used to approximate VaR from known distributions and parameters.

Then, in the third section (Historical data and scenario modelling), we discuss the available data sources and formulate the scenario approach and how scenarios can be created and evaluated by experts. The fourth section (Estimating VaR) compares the estimation approaches, and in the fifth section (Implementation recommendations) some guidelines are given for the implementation of the preferred approach.

Approximating VaR

In this chapter, we concentrate on the estimation of VaR for the total (aggregate) loss distribution and strive to make the approach accessible to a wider audience. Based on implementations carried out for larger banks, we also include some practical guidelines for using the method in practice. The number of Monte Carlo iterations determines the accuracy of the approximation: the more iterations, the more accurate the estimate.
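As a concrete illustration of the Monte Carlo route (our own sketch, with an assumed Poisson frequency and lognormal severity rather than the chapter's data), the 99.9% VaR of the annual total loss can be approximated as follows.

```python
# Monte Carlo approximation of the 99.9% VaR of the total annual loss,
# assuming Poisson(lam) annual frequency and lognormal(mu, sig) severity.
import numpy as np

def mc_var(n_iter=100_000, lam=25.0, mu=10.0, sig=2.0, level=0.999, seed=0):
    rng = np.random.default_rng(seed)
    counts = rng.poisson(lam, n_iter)                  # annual loss counts
    total = np.array([rng.lognormal(mu, sig, k).sum() for k in counts])
    return np.quantile(total, level)                   # empirical VaR

print(f"99.9% VaR (Monte Carlo): {mc_var():,.0f}")
```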

$$C_P^{-1}(1-\gamma) \;\approx\; T^{-1}(1 - \gamma/\lambda), \qquad (1)$$

where $C_P$ denotes the total (compound) loss distribution, $T$ the severity distribution and $\lambda$ the expected annual loss frequency. Eq. (1) states that the $100(1-\gamma)\%$ VaR of the total loss distribution can be approximated by the $100(1-\gamma/\lambda)\%$ VaR of the severity distribution, provided the latter belongs to the subexponential class of distributions. The result is quite remarkable, because a quantile of the total loss distribution can be approximated by a more extreme quantile (if $\lambda > 1$) of the underlying severity distribution. With this in mind, we might consider modelling the core and the tail of the severity distribution separately, as follows.
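Under the same assumed lognormal/Poisson setup as in the Monte Carlo sketch above, the single-loss approximation (1) reduces to a single severity-quantile evaluation, which can be compared against the Monte Carlo estimate.

```python
# Single-loss approximation (1): severity quantile at level 1 - gamma/lambda,
# for the assumed lognormal severity (subexponential, so (1) applies).
import numpy as np
from scipy.stats import lognorm

gamma, lam, mu, sig = 0.001, 25.0, 10.0, 2.0
sla = lognorm.ppf(1 - gamma / lam, s=sig, scale=np.exp(mu))  # T^{-1}(1 - gamma/lam)
print(f"SLA approximation: {sla:,.0f}")  # close to the Monte Carlo VaR above
```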

We use $q$ as a threshold that splices $T$ in such a way that the part below $q$ is the expected part and the part above $q$ the unexpected part of the severity distribution. The distributions in the domain of attraction of the generalized extreme value (GEV) distribution form a broad class that includes most of the distributions of interest to us.
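A minimal sketch of the splicing idea (our construction, with an assumed lognormal sample and a 90% threshold): the body below $q$ is kept empirical and a generalized Pareto distribution is fitted to the exceedances above $q$.

```python
# Splicing sketch: empirical body below threshold q, GPD tail fitted to the
# exceedances above q. Data and threshold level are illustrative assumptions.
import numpy as np
from scipy.stats import genpareto, lognorm

rng = np.random.default_rng(0)
losses = lognorm.rvs(s=2.0, scale=np.exp(10.0), size=5000, random_state=rng)
q = np.quantile(losses, 0.90)                     # splicing threshold
excess = losses[losses > q] - q
xi, loc, sigma = genpareto.fit(excess, floc=0.0)  # tail parameters (xi, sigma)
print(f"threshold q={q:,.0f}, xi={xi:.2f}, sigma={sigma:,.0f}")
```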

Historical data and scenario modelling

Of course, the choice of $c = 100$ may be questionable, because judgement at a 1-in-100-year loss level is likely to be beyond many experts' experience. The oracle would then produce an answer that can be used directly as an approximation of the 99.9% VaR of the total loss distribution. In the light of the above arguments, one must take into account that (a) the SLA only gives an approximation to the VaR we are trying to estimate, and (b) it is very unlikely that experts will have the experience or the information at their disposal to reliably assess a 1-in-1000-year event.

Returning to the oracle's answer in (4), the expert must consider both the true severity distribution and the annual frequency when providing an assessment. To simplify the expert's task, consider the mixed model in (3) discussed in the previous section. Note that the oracle's answer to the question in the previous setting can be stated as $T^{-1}(q_c) = T_u^{-1}\big(1 - \tfrac{1-q_c}{1-b}\big)$, with $b = T(q)$, which suggests a reformulation of the basic question of the 1-in-$c$-year approach in terms of the tail distribution alone. In the rest of the chapter we assume that this question is put to the experts to form their judgement.
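The original formula was garbled in extraction; the splicing identity behind the reconstruction above can be derived as follows.

```latex
% Derivation (our reconstruction) of the spliced-quantile identity.
% For x > q the spliced severity cdf is T(x) = b + (1-b) T_u(x), with b = T(q).
% Solving T(x) = p for p > b gives
\[
  T^{-1}(p) \;=\; T_u^{-1}\!\left(\frac{p-b}{1-b}\right), \qquad p > b,
\]
% so a 1-in-c-year level q_c translates into the tail quantile
\[
  T^{-1}(q_c) \;=\; T_u^{-1}\!\left(1 - \frac{1-q_c}{1-b}\right).
\]
```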

Estimating VaR

  • Naïve approach
  • The GPD approach
  • Venter’s approach
  • GPD and Venter model comparison

The bottom panel shows the VaR estimates obtained using the naïve approach. Note how the distribution of these VaR estimates differs from those obtained using the true underlying severity distribution. Equation (5), which relates the quantiles of the GPD to the scenario assessments, can be solved to obtain estimates $\tilde{\sigma}$ and $\tilde{\xi}$ of the GPD parameters $\sigma$ and $\xi$ based on the scenario evaluations.

With more than three scenario assessments, fitting techniques can be based on (5), which relates the quantiles of the GPD to the scenario assessments. In the second case, the quantiles are provided by the false severity distribution, but the loss data follow the true severity distribution. The boxplots of the VaR estimates are given in Figure 4(b) for Case 1 and Figure 4(c) for Case 2.
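One way to operationalise a fit based on (5) is a least-squares fit of the GPD quantile function to the expert assessments of the excesses over the threshold; the sketch below is our construction, and the assessment levels and values in it are hypothetical.

```python
# Least-squares recovery of GPD parameters (sigma, xi) from expert quantile
# assessments of the exceedances over the threshold q (levels/values assumed).
import numpy as np
from scipy.optimize import least_squares

def gpd_quantile(p, sigma, xi):
    return sigma / xi * ((1.0 - p) ** (-xi) - 1.0)

levels   = np.array([0.90, 0.99, 0.999])     # hypothetical tail levels
assessed = np.array([2.1e6, 9.5e6, 4.0e7])   # hypothetical assessed excesses

def resid(theta):
    sigma, xi = theta
    return np.log(gpd_quantile(levels, sigma, xi)) - np.log(assessed)

fit = least_squares(resid, x0=[1e6, 0.5], bounds=([1e3, 0.01], [1e9, 2.0]))
sigma_hat, xi_hat = fit.x
```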

The behaviour of the GPD approach is as expected, and the box plots correspond to the quantiles provided. Here, the fraction $\epsilon$ expresses the size or range of the possible deviations (or errors) contained in the scenario estimates.

Illustration of VaR estimates obtained from a GPD fit on the oracle quantiles. (a) True Burr distribution, T_Burr(1, 0.6, 2), (b) fitted distribution F_Burr(1.07, 0.56, 2.2) on simulated data, (c) fitted distribution F_Burr(1.01, 0.52, 2.26) on augmented s

Implementation recommendations

The above suggests that, provided sufficient loss data are available, the Venter approach is the best choice in practice. Calculate the ratios $R(7)$, $R(7,20)$, $R(20,100)$ and $R(100)$ of the best-fit distributions obtained above, and then select the best distribution based on these ratios. For the best-fit distribution, give the ratios that deviate significantly from one to the experts for possible reassessment.

In practice, different data sets are used, for example internal, external and mixed, where the latter is scaled. Guideline (vi) can also be repeated on appropriately mixed (scaled) data sets to select the best distribution type.

Some further practical considerations

Conclusion

Appendix A

The generalised Pareto distribution (GPD). The GPD is given by $G_{\xi,\sigma}(x) = 1 - \left(1 + \xi x/\sigma\right)^{-1/\xi}$ for $\xi \neq 0$ and $G_{0,\sigma}(x) = 1 - e^{-x/\sigma}$, with $x \geq 0$ when $\xi \geq 0$ and $0 \leq x \leq -\sigma/\xi$ when $\xi < 0$.

The Burr distribution

  • Model
    • Multiplicative Factor-MSVOL model
    • Additive Factor-MSVOL model
  • Empirical analysis
    • Dataset
    • Bayesian estimation
    • Findings
  • Conclusion
  • Framework of stochastic network model
    • Notations and assumptions
    • VI formulation for different stochastic network models
    • Stochastic network-system optimal (SN-SO) formulation
    • Stochastic travel times under different sources of uncertainty
  • Marginal cost pricing in a stochastic network (SN-MCP) with both supply and demand uncertainty
    • Analysis of SN-MCP
    • Calculation of SN-MCP
  • Risk-based MCP (RSN-MCP) in a stochastic network
    • Analysis of risk-based SN-MCP
  • Formulation of perceived RSN-MCP (PRSN-MCP)
    • Model incorporating the travelers’ perception error
    • Calculation of PRSN-MCP
  • Numerical examples
    • Effect of the VMR on the performance of SN-MCP toll scheme
    • Importance of incorporating supply and demand uncertainty
    • Analysis of the essentiality of incorporating the travelers’ perception error
    • Application to the Sioux Falls network in the PRSN-MCP (SS-SD) case
  • Conclusions
  • The QL and AQL methods
    • The QL method
    • The AQL method
  • Parameter estimation of ARCH(q) model using the QL and AQL methods
    • Parameter estimation of ARCH(q) model using the QL method
    • Parameter estimation of ARCH(q) model using the AQL method
    • Simulation studies for the ARCH(1) model
    • Empirical applications
  • Parameter estimation of GARCH(p,q) model using the QL and AQL methods
    • Parameter estimation of GARCH(p,q) model using the QL method
    • Simulation studies for the GARCH(1,1) model
    • Empirical applications
  • Conclusions
  • Literature review
  • Methodology
    • Short-term return analysis
    • Long-term return analysis
    • Risk-adjusted performance analysis
    • Market trend return analysis
  • The sample
  • Empirical results
    • Short-term return analysis
    • Long-term return analysis
    • Risk-adjusted performance analysis
    • Market trend return analysis
  • Conclusion
    • Bonds
    • Commodities
    • Equities
    • Listed real estate
  • Data and modelling
    • Data
    • Data description and preliminary statistics
    • Volatility spillover modelling
  • Analysis
  • Empirical findings
    • Results of the bivariate models
    • Results of the multivariate models

It describes the variance of the total travel time. Based on the current analysis, we derive the mean and variance of the expected total perceived travel time. The estimate of $\theta$ using the QL method is then a solution of the estimating equation $G_{T^*}(\theta) = 0$ (see [25]).
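As a sketch of the QL idea in the simplest ARCH setting listed in the contents above (our illustration; the estimating function $G_{T^*}$ in the chapter is more general), the Gaussian quasi-log-likelihood of an ARCH(1) model can be minimised directly, its score being the estimating equation.

```python
# Quasi-likelihood (Gaussian QML) estimation sketch for ARCH(1):
# sigma_t^2 = a0 + a1 * y_{t-1}^2; true parameters below are assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
a0_true, a1_true, n = 0.2, 0.5, 2000
y = np.zeros(n)
for t in range(1, n):                            # simulate an ARCH(1) path
    y[t] = np.sqrt(a0_true + a1_true * y[t - 1] ** 2) * rng.standard_normal()

def neg_qll(theta):
    a0, a1 = theta
    s2 = a0 + a1 * y[:-1] ** 2                   # conditional variances
    return 0.5 * np.sum(np.log(s2) + y[1:] ** 2 / s2)

est = minimize(neg_qll, x0=[0.1, 0.1], bounds=[(1e-6, None), (0.0, 0.999)])
a0_hat, a1_hat = est.x
```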

A broad treatment of the Nadaraya-Watson (NW) type kernel estimator can be found in [27]. Abnormal returns are obtained by applying a market model to the S&P 500 Index and the S&P 600 Small Cap Index.
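For reference, the NW estimator itself is short enough to state in code; the Gaussian kernel, bandwidth and test function below are our assumptions.

```python
# Nadaraya-Watson kernel regression sketch:
# m_hat(x0) = sum_i K((x0 - x_i)/h) y_i / sum_i K((x0 - x_i)/h).
import numpy as np

def nw_estimate(x0, x, y, h):
    w = np.exp(-0.5 * ((x0 - x) / h) ** 2)   # Gaussian kernel weights
    return np.sum(w * y) / np.sum(w)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(200)
m_hat = np.array([nw_estimate(x0, x, y, h=0.05) for x0 in x])
```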

This fund's historical buy-and-hold benchmark-adjusted returns are 62% and 83%, respectively, for the S&P 500 Index and the S&P 600 Small Cap Index. At the fund level, when the first version of the model is assessed (i.e. the one with the S&P 500 Index), about half the momentum. The daily excess return of IPO ETFs is sequentially reduced by the excess return of the S&P 500 Index or the S&P 600 Small Cap Index.

The study examines the developed markets of the US, the UK, France, Germany, Australia, Japan, Hong Kong and Singapore and their links to the global stock market and global real estate markets.

Figure 2 shows a network consisting of 14 nodes and 21 directed links. There are two OD pairs: one from node 1 to node 12, and the other from node 1 to node 14.

Figure 4 demonstrates the percentage improvements in the expected total perceived travel time related to Table 2.
