The Abuse of the Performance Survey

27four’s Head of Manager Research, Claire Rentzke, explains both the power of performance surveys and the consequences of their misuse when funds are selected on short-term rankings.

Investment surveys are in some ways like guns: in the hands of a trained user they are a very valuable tool, but in the wrong hands the results can be disastrous.

In the institutional space the monthly surveys tend to matter most to the asset managers, who are keen to see how they rank relative to their peers, whereas trustees seldom spend much time on them unless spurred on by a service provider (generally the publisher of the survey). Within the retail space, however, the ranking tables that fill a good proportion of the finance pages of the weekend papers often seem to be the starting point for some financial advisors when recommending managers to their clients. Most managers operating in the retail space know the Monday morning phone calls that follow a drop in the weekly ranking table.

It would be naive to say that performance doesn’t matter. Ultimately, when choosing an asset manager, and in particular an active manager, clients want to benefit from the manager’s skill through the performance the manager can generate; that, after all, is why the client pays fees. The problem lies in appointing a manager based on performance alone and on a number one slot in the survey rankings. This results in managers being included in portfolios where their ability to generate long-term outperformance over the market cycle is neither questioned nor understood.

Looking at a longer performance period does not necessarily avoid the trap of not knowing where your manager’s performance comes from and whether it is sustainable. Madoff generated great returns for clients over a number of years, not just a 12-month period, and bull markets, like bear markets, can last longer than three years. If clients know what their manager is offering and what they are paying for, it becomes easier to understand the manager’s particular pattern of performance, the time frame over which the manager should be judged, and ultimately whether the manager has lived up to the mandate and earned the fees. How a manager stacks up in the surveys may be an unfair comparison, because a manager should be judged against a peer group of like mandates, and this can differ vastly from the peer group indicated in the survey. A momentum-driven equity manager would, in the current market, look vastly superior to a dyed-in-the-wool value manager.

This raises the question of whether surveys should be amended so that they compare apples with apples, with all like mandates grouped together. For the survey providers, producing such groupings each month is a nightmare, and it only increases the level of confusion amongst readers. A survey that compares only two products quickly becomes meaningless, and even within the groupings used in the unit trust space, two managers in the same group can be following very different mandates.

So what role do risk statistics fulfil? Is risk-adjusted performance a better indicator of which managers are superior? Unfortunately, again, there is no shortcut here. For instance, the variability and size of returns below the benchmark are far more significant than the variability and size of returns in excess of it. The standard risk measures in fixed income and hedge fund portfolios often fail to capture the true risks in these portfolios. There is no risk measure that captures what happens to the performance of a corporate credit portfolio when the bond market comes under pressure and liquidity dries up overnight, or one that easily shows the concentration risk in a hedge fund. No single risk measure provides the full picture, and because surveys need to be visually pleasing and space is limited, the easily understood and summarised numbers are the ones shown.
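The point about downside variability can be illustrated with a small sketch. The figures below are entirely hypothetical, and the downside-deviation formula shown is just one common way of penalising only below-benchmark returns; it is not a measure taken from any particular survey. Two managers with identical standard deviations of excess return can look very different once only the shortfalls are measured:

```python
import statistics

# Hypothetical monthly excess returns (versus benchmark) for two managers.
# Manager B's return stream is the mirror image of Manager A's, so both
# have exactly the same standard deviation, but B's variability comes
# mostly from returns BELOW the benchmark.
manager_a = [0.02, -0.01, 0.03, -0.02, 0.02, -0.01]
manager_b = [-0.02, 0.01, -0.03, 0.02, -0.02, 0.01]

def downside_deviation(excess_returns):
    """Root-mean-square of the below-benchmark returns (excess < 0)."""
    shortfalls = [min(r, 0.0) for r in excess_returns]
    return (sum(s ** 2 for s in shortfalls) / len(excess_returns)) ** 0.5

for name, rets in [("A", manager_a), ("B", manager_b)]:
    print(name,
          "std dev:", round(statistics.pstdev(rets), 4),
          "downside dev:", round(downside_deviation(rets), 4))
```

On a survey page showing only standard deviation, the two managers would appear equally risky; the downside measure reveals that Manager B loses to the benchmark both more often and by more.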

Performance surveys give us a very good summary of past performance and its past variability, and as such they do have a place, but that place is not in manager selection. Patterns certainly exist, and past ability is one factor in future ability, but not the only one. Just as asset managers don’t pick the shares they buy solely on how those shares have done in the past, we should not pick our managers on past performance alone. We need to understand the process, and then understand what performance and risk characteristics that process is likely to generate through different market cycles. From there we can assess whether or not the manager fits into our investment strategy and our risk budget. Also of great benefit are the surveys that speak to the state of the industry, which are generally produced only annually. These provide a great tool for assessing how the industry has grown, where significant flows have gone, which strategies may be in favour, how healthy the underlying businesses are, and so on.

Month by month the surveys give us a tool to track performance and risk, and if our mandate is in line with the data included in the surveys, they provide independent verification that our performance numbers are consistent with those of other investors. But using surveys as a crutch for manager selection decisions is rear-view investing, with all its disastrous consequences.

Claire Rentzke

May 2015

27four Investment Managers