Another suggested remedy is to use a different volatility metric. Random matrices are introduced to show the importance of dimensionality reduction: when the asset universe is large, it can be shown that only a few eigenvalues of the sample covariance matrix dominate the rest. Hence random matrix theory makes a case for using a factor-based model such as APT. Factor models themselves come in multiple varieties: statistical factor models, macroeconomic factor models, fundamental factor models, etc.
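The eigenvalue-dominance point can be illustrated with a small simulation (my own sketch, not from the book): the sample correlation matrix of pure noise has eigenvalues confined to a bulk, while a single common factor produces one eigenvalue that towers over the rest.

```python
import numpy as np

rng = np.random.default_rng(0)
n_assets, n_obs = 100, 250

# Pure noise returns: all eigenvalues stay in the random-matrix bulk.
noise = rng.standard_normal((n_obs, n_assets))
eig_noise = np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False))

# Returns driven by one common "market" factor plus idiosyncratic noise:
# a single dominant eigenvalue emerges.
market = rng.standard_normal((n_obs, 1))
factor_returns = 0.8 * market + 0.6 * rng.standard_normal((n_obs, n_assets))
eig_factor = np.linalg.eigvalsh(np.corrcoef(factor_returns, rowvar=False))

print("largest eigenvalue, pure noise :", round(eig_noise[-1], 2))
print("largest eigenvalue, one factor :", round(eig_factor[-1], 2))
```

The gap between the top eigenvalue and the bulk is what motivates keeping only a few factors and discarding the rest as noise.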
The selection criteria should be such that both estimation error and bias are reduced. Too few factors decrease estimation error but increase bias; too many factors increase estimation error but decrease bias. So, I guess the ideal number and type of factors to choose is more an art than a science. One of the strong reasons for using a factor model is the obvious dimensionality reduction in a large asset-universe allocation problem.
The chapter also mentions various methods for estimating volatility, such as modeling it based on the implied vols of the relevant options, using clustering techniques, using GARCH methods, or using stochastic vol methods. One must remember, though, that most of these methods are relevant in the Q world (sell side) and not the P world (buy side). Robust estimation is not just a fancy way of removing outliers and estimating parameters; it is a fundamentally different way of estimation.
If one is serious about understanding it thoroughly, it is better to skip this chapter, as the content merely pays lip service to the various concepts. Estimation of expected returns and the covariance matrix is subject to estimation error in practice. The portfolio weights change constantly, resulting in considerable portfolio turnover and sub-optimal realization of portfolio returns. What can be done to remedy such a situation? There are two areas where one can improve things. On the estimation side, one can employ estimates that are less sensitive to outliers, such as shrinkage estimators and Bayesian estimators.
On the modeling side, one can constrain portfolio weights, use portfolio resampling, or apply robust or stochastic optimization techniques to specify scenarios or ranges of values for parameters estimated from data, thus incorporating uncertainty into the optimization process itself. The chapter starts off by talking about the practical problems encountered in mean-variance optimization. The paper had so many points clearly stated that I feel like listing them down here.
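Portfolio resampling can be sketched in a few lines (a simplified illustration in the spirit of resampled efficiency, not the book's exact procedure): simulate return histories from the estimated parameters, re-optimize on each draw, and average the resulting weights.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed inputs for illustration: estimated means and covariance of 4 assets.
mu = np.array([0.06, 0.08, 0.10, 0.12])
cov = np.diag([0.04, 0.06, 0.09, 0.16])

def min_variance_weights(cov):
    """Closed-form global minimum-variance weights: w proportional to inv(Sigma) @ 1."""
    w = np.linalg.inv(cov) @ np.ones(len(cov))
    return w / w.sum()

# Resampling loop: simulate, re-estimate, re-optimize, then average weights.
n_draws, n_obs = 200, 120
weights = np.zeros_like(mu)
for _ in range(n_draws):
    sample = rng.multivariate_normal(mu, cov, size=n_obs)
    weights += min_variance_weights(np.cov(sample, rowvar=False))
weights /= n_draws

print("resampled min-variance weights:", np.round(weights, 3))
```

The averaged weights are less jumpy from period to period than weights from a single sample covariance, which is the whole point of the exercise.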
I was always under the impression that all that matters is frontier stability. This is far more illuminating than the numbers that are usually tossed around. Using a few assets, I looked at the estimated and true frontiers in two separate years, one in a bull market and one in a bear market.
Robust Portfolio Optimization & Management : Summary
In the first illustration, the realized frontier is at least OK, but in the second case it shows pathetic results. It is not at all surprising that the frontier works well for minimum-variance portfolios but falls flat for the maximum-return portfolio. Also, one can empirically check that errors in expected returns are about ten times more important than errors in the covariance matrix, and errors in variances are twice as important as errors in covariances.
I have tried the latter but to date have not experimented with shrinking both the mean and the covariance at once. Can one do it? Shrinking the covariance towards a constant-correlation matrix is what I have tried before. The Ledoit and Wolf method shrinks the covariance matrix, and they compare its performance with the sample covariance matrix, a statistical factor model based on principal components, and a factor model.
They find that the shrinkage estimator outperforms the other estimators for a global minimum-variance portfolio. Since expected returns matter so much more for realized frontier performance, it might be better to use some model instead of the sample mean as an input to the mean-variance framework. The Black-Litterman model is one such popular model: it combines views with a prior distribution and gives portfolio allocations for various risk profiles.
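Shrinking towards a constant-correlation target, which is the variant I mention having tried, can be sketched as below. The shrinkage intensity is a fixed assumption here; the Ledoit-Wolf method derives an optimal intensity from the data.

```python
import numpy as np

def shrink_to_constant_correlation(returns, delta=0.5):
    """Shrink the sample covariance towards a constant-correlation target.

    delta is the shrinkage intensity (0 = pure sample covariance,
    1 = pure target). Fixed here for illustration; Ledoit-Wolf
    estimate an optimal delta instead.
    """
    sample = np.cov(returns, rowvar=False)
    std = np.sqrt(np.diag(sample))
    corr = sample / np.outer(std, std)
    n = len(std)
    # The target keeps each asset's variance but replaces every pairwise
    # correlation with the average off-diagonal correlation.
    avg_corr = (corr.sum() - n) / (n * (n - 1))
    target = avg_corr * np.outer(std, std)
    np.fill_diagonal(target, std ** 2)
    return delta * target + (1 - delta) * sample

rng = np.random.default_rng(2)
X = rng.standard_normal((60, 5))   # 60 observations, 5 hypothetical assets
sigma = shrink_to_constant_correlation(X)
print(np.round(sigma, 3))
```

Note that the shrunk matrix keeps the sample variances on the diagonal; only the off-diagonal entries are pulled towards the structured target.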
You have an equation for the market portfolio and an equation for your views. You combine these equations into a generalized linear model and estimate the asset returns. Even if you have views on only a few assets, they will change the expected returns for all the assets. However, I am kind of skeptical about this method as I write; I have never developed a model with views to date. But as things change around me, it looks like I might have to incorporate views into asset allocation strategies.
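The combination step can be written out as a precision-weighted posterior mean. Below is a minimal sketch with a single relative view; all the numbers (tau, the covariance, the view, and its confidence) are hypothetical assumptions, not values from the book.

```python
import numpy as np

tau = 0.05
sigma = np.array([[0.040, 0.010, 0.005],
                  [0.010, 0.090, 0.015],
                  [0.005, 0.015, 0.160]])
pi = np.array([0.05, 0.07, 0.09])   # equilibrium (prior) expected returns

# One relative view: asset 3 will outperform asset 2 by 3%.
P = np.array([[0.0, -1.0, 1.0]])
q = np.array([0.03])
omega = np.array([[0.01]])          # uncertainty (variance) of the view

# Posterior mean: combine the prior precision with the view precision.
ts_inv = np.linalg.inv(tau * sigma)
post_prec = ts_inv + P.T @ np.linalg.inv(omega) @ P
mu_bl = np.linalg.solve(post_prec, ts_inv @ pi + P.T @ np.linalg.inv(omega) @ q)
print("posterior expected returns:", np.round(mu_bl, 4))
```

Even though the view only mentions assets 2 and 3, the covariance matrix propagates it, so all three posterior returns move away from the prior.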
Part III of the book is a list of optimization techniques and reads more like a math book than a finance book. For an optimization newbie, this chapter is not the right place to start; for an optimization expert, it will be too easy to go over. This chapter, then, is targeted at a person who already knows quite a bit about optimization procedures and wants to know different viewpoints.
Like most of the Fabozzi books that attempt mathematical rigor and fail majestically, this part of the book throws up no surprises. Part IV of the book talks about applications of robust estimation and optimization methods.

Takeaway:
Portfolio Selection in Practice

There are usually constraints imposed on the mean-variance framework. This chapter gives a basic classification of the constraints:

- Linear and quadratic constraints
- Long-only constraints
- Turnover constraints
- Holding constraints
- Risk factor constraints
- Benchmark exposure and tracking-error constraints
- General linear and quadratic constraints
- Combinatorial and integer constraints
- Minimum-holding and transaction-size constraints
- Cardinality constraints
- Round-lot constraints

Portfolio optimization problems with minimum-holding constraints, cardinality constraints, or round-lot constraints are NP-complete.
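To make the simpler end of this list concrete, here is a sketch of a long-only, fully invested minimum-variance problem (my own illustration with made-up covariances, not code from the book):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical 3-asset covariance matrix.
cov = np.array([[0.040, 0.006, 0.012],
                [0.006, 0.090, 0.018],
                [0.012, 0.018, 0.160]])

n = len(cov)
res = minimize(
    lambda w: w @ cov @ w,                          # portfolio variance
    x0=np.full(n, 1.0 / n),                         # start from equal weights
    bounds=[(0.0, 1.0)] * n,                        # long-only constraint
    constraints=[{"type": "eq",
                  "fun": lambda w: w.sum() - 1.0}], # fully invested
    method="SLSQP",
)
w = res.x
print("long-only min-variance weights:", np.round(w, 3))
```

Turnover and holding constraints would enter the same way, as additional linear constraints. The cardinality and round-lot variants cannot, because they require integer variables, which is exactly why those problems are NP-complete.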
Classical Asset Pricing

This chapter gives a crash course on simple random walks, arithmetic random walks, geometric random walks, trend-stationary processes, covariance-stationary processes, etc.

Forecasting Expected Risk and Return

The basic problem with the mean-variance framework is that one uses historical estimates of mean returns and covariances as inputs to asset allocation decisions about the future.
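The difference between the arithmetic and geometric random walks from the crash course is easy to see in simulation (parameter values below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n_steps, mu, sigma, p0 = 252, 0.0003, 0.01, 100.0

shocks = rng.standard_normal(n_steps)

# Arithmetic random walk: additive price increments, so the price can go
# negative in principle.
arithmetic = p0 + np.cumsum(mu + sigma * shocks)

# Geometric random walk: the log-price follows the arithmetic walk, so the
# price itself stays strictly positive.
geometric = p0 * np.exp(np.cumsum(mu - 0.5 * sigma**2 + sigma * shocks))

print("final prices:", round(arithmetic[-1], 2), round(geometric[-1], 2))
```

The positivity of the geometric walk is the main reason it, rather than the arithmetic walk, is the standard model for price levels.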
A good forecasting technique should not amplify errors already present in the inputs used in the process, and the forecast should be intuitive.
Equally weighted portfolios offer performance comparable to asset allocation based on the sample mean and sample covariance. Uncertainty in the mean completely plays spoilsport in efficient asset allocation: it is sometimes OK to have estimation error in the covariance, but estimation error in the mean kills the performance of asset allocation. Mean-variance portfolios are not properly diversified; in fact, one can calculate the Zephyr drift score for the portfolios and easily see that there is considerable variation in the portfolio composition. Large amounts of data are necessary for accurately estimating the inputs to the portfolio optimization framework.
There is a trade-off between stationarity of parameter values and estimation error. If you take too long a dataset, you run the risk of non-stationarity of the returns.
If you take too short a dataset, then you run the risk of estimation error. Three frontiers are worth distinguishing:

- True efficient frontier: based on the true (unobserved) returns and covariance matrix.
- Estimated efficient frontier: based on the estimated returns and covariance matrix.
- Actual frontier: based on the portfolios on the estimated efficient frontier, evaluated using the actual returns of the assets.

One can draw these frontiers to get an idea of a basic problem with Markowitz theory: its error-maximization property.
Assets that have large positive errors in returns, large negative errors in standard deviations, and large negative errors in correlations tend to get higher weights than they truly deserve. If you assume a true return vector and covariance matrix for a set of assets and draw the three frontiers, it is easy to observe that minimum-variance portfolios can be estimated more accurately than maximum-return portfolios. Distinguishing between two assets with distributions (m1, sd1) and (m2, sd2) requires a LOT of data, and hence identifying the maximum-return portfolio is difficult.
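The error-maximization property can be demonstrated in a toy simulation (my own sketch): give every asset the same true mean, estimate means from a short sample, and look at the asset the optimizer would chase, the one with the highest estimated mean. Its estimate is systematically too optimistic.

```python
import numpy as np

rng = np.random.default_rng(4)
n_assets, n_obs, n_trials = 20, 60, 500
true_mean, sigma = 0.05, 0.20   # identical true parameters for every asset

gap = 0.0
for _ in range(n_trials):
    returns = rng.normal(true_mean, sigma, size=(n_obs, n_assets))
    est_means = returns.mean(axis=0)
    # The optimizer overweights the asset with the largest estimated mean;
    # by construction, that estimate exceeds the truth by a selection bias.
    gap += est_means.max() - true_mean
gap /= n_trials
print("average optimism of the 'best' asset:", round(gap, 4))
```

Since all assets are identical here, any positive gap is pure estimation noise being "maximized" into the portfolio, which is exactly why the maximum-return end of the estimated frontier is so unreliable.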
Practitioners often add additional constraints to improve diversification and further limit risk. Examples of such constraints are asset, sector, and region portfolio weight limits. Portfolio optimization often takes place in two stages: optimizing weights of asset classes to hold, and optimizing weights of assets within the same asset class.
An example of the former would be choosing the proportions placed in equities versus bonds, while an example of the latter would be choosing the proportions of the stock sub-portfolio placed in stocks X, Y, and Z. Equities and bonds have fundamentally different financial characteristics and have different systematic risk and hence can be viewed as separate asset classes; holding some of the portfolio in each class provides some diversification, and holding various specific assets within each class affords further diversification.
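The two-stage logic composes multiplicatively: each security's final weight is its asset-class weight times its weight within the class. A tiny sketch with hypothetical numbers:

```python
# Stage 1: allocate across asset classes (hypothetical weights).
top_level = {"equities": 0.6, "bonds": 0.4}

# Stage 2: allocate within the equity class across stocks X, Y, Z.
within_equities = {"X": 0.5, "Y": 0.3, "Z": 0.2}

# Final weight of each stock = class weight x within-class weight.
final = {s: top_level["equities"] * w for s, w in within_equities.items()}
print(final)
```

The within-class weights sum to 1, so the stock weights automatically sum to the equity allocation of 0.6, and the same composition applies to however many levels the hierarchy has.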