
Adjusted Return Deviation

What Is Adjusted Return Deviation?

Adjusted return deviation is a concept within portfolio theory that refers to the extent to which an investment's actual returns vary from its expected returns after accounting for specific factors, most commonly risk. Unlike raw return, which simply measures gain or loss, or traditional standard deviation, which measures total volatility, adjusted return deviation seeks to provide a more nuanced view of portfolio performance by normalizing returns for particular influences. This adjustment allows for a more "apples-to-apples" comparison of different investments or strategies during investment analysis, particularly when those investments carry different levels or types of risk.

History and Origin

Performance measurement in finance has long since moved beyond simply looking at raw returns. Early approaches often focused on mean-variance analysis, popularized by Harry Markowitz's Modern Portfolio Theory. However, as financial markets grew in complexity and understanding of investor behavior deepened, the need to adjust returns for various factors became apparent. The development of risk-adjusted performance measures intensified in the latter half of the 20th century. This shift recognized that higher returns often come with higher risk, and that a superior investment is one that achieves strong returns with a commensurate, or even lower, level of risk. Subsequent research has explored how different risk adjustments align with various implied risk attitudes or investor preferences, leading to a more sophisticated understanding of how performance should be evaluated.

Key Takeaways

  • Adjusted return deviation quantifies the dispersion of an investment's returns after accounting for specific modifying factors, primarily risk.
  • It provides a more meaningful comparison of investment performance across assets with differing risk profiles.
  • The concept is foundational to various sophisticated risk-adjusted return metrics used in modern finance.
  • It helps investors understand if a deviation in returns is simply due to inherent risk exposure or indicative of true outperformance or underperformance.

Interpreting the Adjusted Return Deviation

Interpreting adjusted return deviation involves understanding which factors have been accounted for and how the resulting "adjusted" figure provides insight into an investment's quality. A smaller adjusted return deviation generally indicates more consistent performance given the adjustment applied, which is usually for risk. For example, if two portfolios have similar raw returns but one has a significantly lower adjusted return deviation after accounting for risk, the latter achieved its returns more efficiently, with less unexpected volatility relative to the risks taken. This makes the metric important for effective risk management, as it helps pinpoint whether a deviation from the expected return on investment is acceptable within the investment's risk mandate.

Hypothetical Example

Consider two hypothetical investment funds, Fund A and Fund B, both with an average annual return of 10% over five years.

  • Fund A: Invests primarily in stable, large-cap equities.
  • Fund B: Invests in highly speculative small-cap stocks.

While their average returns are identical, their unadjusted return deviations (their volatility) would likely differ significantly because of their underlying investment strategies; Fund B would exhibit much higher volatility.

To calculate the "adjusted return deviation," one might use a risk-adjusted metric like the Sharpe Ratio. For simplicity, let's say the risk-free rate is 2% and both funds generated an average excess return of 8% (10% - 2%).

  • Fund A's standard deviation: 8%
  • Fund B's standard deviation: 20%

Sharpe Ratio (Adjusted Return Deviation proxy):
Fund A: \( \frac{8\%}{8\%} = 1.0 \)
Fund B: \( \frac{8\%}{20\%} = 0.4 \)

In this scenario, after "adjusting" for the deviation (volatility), Fund A demonstrates a much better risk-adjusted result. Even though both funds earned the same average return, Fund A achieved it with far less risk, generating more excess return per unit of volatility taken. The analysis highlights that, for a given level of return, a lower adjusted return deviation is preferable.
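
A brief sketch of this calculation in Python, using the hypothetical figures above (the fund names, 10% average returns, 2% risk-free rate, and 8% and 20% standard deviations are illustrative assumptions from this example, not real data):

```python
# Minimal sketch: the Sharpe Ratio as a proxy for adjusted return deviation.
# All figures are the hypothetical values from the example above.

def sharpe_ratio(avg_return, risk_free_rate, std_dev):
    """Excess return earned per unit of total volatility."""
    return (avg_return - risk_free_rate) / std_dev

RISK_FREE = 0.02  # assumed risk-free rate from the example

funds = {
    "Fund A": {"avg_return": 0.10, "std_dev": 0.08},  # stable large-cap equities
    "Fund B": {"avg_return": 0.10, "std_dev": 0.20},  # speculative small-cap stocks
}

for name, f in funds.items():
    ratio = sharpe_ratio(f["avg_return"], RISK_FREE, f["std_dev"])
    print(f"{name}: Sharpe ratio = {ratio:.2f}")
# Fund A: Sharpe ratio = 1.00
# Fund B: Sharpe ratio = 0.40
```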

Practical Applications

Adjusted return deviation is a cornerstone of modern financial analysis, particularly in evaluating investment managers and products. It helps investors look beyond simple historical gains to understand the efficiency and consistency of those gains relative to assumed or specified factors. For instance, mutual funds and hedge funds often present their performance adjusted for various risks, utilizing metrics like the Sharpe Ratio, which penalizes returns for excessive volatility, or the Sortino Ratio, which focuses specifically on downside risk. Furthermore, regulatory bodies, such as the U.S. Securities and Exchange Commission (SEC), issue guidelines on how investment performance, including adjusted figures, must be presented in marketing materials to ensure transparency and prevent misleading claims. Recent guidance on the SEC Marketing Rule has provided clarity on presenting gross versus net performance and how various portfolio characteristics can be shown, which directly impacts how adjusted return deviations are communicated to potential investors.
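
As an illustration of how those two adjustments differ, the sketch below computes both ratios from a short series of hypothetical monthly returns; the return series and the 0% target rate are assumptions for demonstration only, not a methodology prescribed by any rule or regulator:

```python
import statistics

# Illustrative sketch: the Sharpe Ratio divides excess return by total
# volatility, while the Sortino Ratio divides it by downside deviation only.
# The monthly returns and the 0% target below are hypothetical.

monthly_returns = [0.03, -0.01, 0.04, 0.02, -0.03, 0.05,
                   0.01, -0.02, 0.04, 0.02, 0.03, -0.01]
target = 0.0  # minimum acceptable return per period

mean_return = statistics.mean(monthly_returns)
total_volatility = statistics.stdev(monthly_returns)  # penalizes all swings (Sharpe)

shortfalls = [min(r - target, 0.0) for r in monthly_returns]  # below-target periods only
downside_deviation = (sum(s ** 2 for s in shortfalls) / len(monthly_returns)) ** 0.5  # (Sortino)

print(f"Sharpe:  {(mean_return - target) / total_volatility:.2f}")
print(f"Sortino: {(mean_return - target) / downside_deviation:.2f}")  # higher, since upside swings are not penalized
```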

Limitations and Criticisms

While highly valuable, adjusted return deviation measures are not without their limitations. The primary criticism often revolves around the choice of adjustment factors. If the wrong factors are used, or if the underlying assumptions are flawed, the resulting adjusted deviation can be misleading. For example, traditional measures like standard deviation, often used in these adjustments, assume a normal distribution of returns, which may not hold true for all asset classes, particularly during periods of market stress or for investments with asymmetrical return profiles. As noted by academic research, the limitations of standard deviation as a risk measure can lead to inaccurate conclusions, especially for fixed-income portfolios. Critics also point out that such measures may not fully capture all relevant risks, such as liquidity risk or tail risk, leading to an incomplete picture. Additionally, certain adjustments, such as those that produce alpha (excess return relative to a benchmark), rely on specific models like the Capital Asset Pricing Model (CAPM) and its assumptions about market beta, which may not perfectly reflect real-world market dynamics.
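
For reference, the alpha adjustment mentioned above is usually written in standard CAPM notation (conventional textbook symbols, assumed here rather than defined elsewhere in this article) as:

\( \alpha_p = R_p - \left[ R_f + \beta_p \left( R_m - R_f \right) \right] \)

where \( R_p \) is the portfolio return, \( R_f \) the risk-free rate, \( R_m \) the market return, and \( \beta_p \) the portfolio's beta against the market. Any error in estimating \( \beta_p \), or any failure of the CAPM's assumptions, flows directly into the measured alpha and into any deviation judged against it.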

Adjusted Return Deviation vs. Standard Deviation

Adjusted return deviation and standard deviation are related but distinct concepts. Standard deviation is a statistical measure of the dispersion of a set of data points around its mean. In finance, it quantifies the total volatility or risk of an investment's returns, treating both positive and negative deviations from the average return equally.

Adjusted return deviation, on the other hand, refers to the dispersion of returns that remains after applying a specific filter or adjustment. The adjustment typically accounts for risk, but it can also factor in other elements such as taxes, fees, inflation, or specific market conditions. While standard deviation provides a raw measure of fluctuation, adjusted return deviation seeks to contextualize that fluctuation by considering the "cost" of the risk taken or other relevant modifying factors. For instance, the Sharpe Ratio uses standard deviation in its denominator to adjust returns for the level of risk, thus providing an "adjusted return deviation" that indicates how much excess return was generated per unit of total risk.
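
To make the relationship concrete, the two quantities can be written side by side in standard notation (the symbols below are conventional, not drawn from a specific source):

\( \sigma_p = \sqrt{\frac{1}{N - 1} \sum_{i=1}^{N} \left( r_i - \bar{r} \right)^2} \) and \( \text{Sharpe Ratio} = \frac{R_p - R_f}{\sigma_p} \)

where \( r_i \) are the individual period returns, \( \bar{r} \) is their mean, \( R_p \) is the portfolio's average return, and \( R_f \) is the risk-free rate. The first expression is the raw dispersion; the second contextualizes excess return by that dispersion, which is the sense in which it "adjusts" the deviation.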

FAQs

What is the main purpose of calculating adjusted return deviation?

The main purpose is to evaluate investment performance more accurately by considering factors beyond just raw returns, most commonly risk. It helps investors understand if higher returns are simply a result of taking on more risk, or if an investment truly delivers superior performance relative to its risk profile.

How does adjusted return deviation help in comparing investments?

It provides a normalized basis for comparison. For example, two investments might have the same average return, but if one has a much lower adjusted return deviation (meaning it achieved those returns with less risk), it is generally considered a more efficient investment. This allows for more informed decision-making in portfolio performance assessment.

Is adjusted return deviation the same as a risk-adjusted return?

The terms are closely related and often used interchangeably, but "adjusted return deviation" emphasizes the variability or dispersion of returns after an adjustment has been made. A "risk-adjusted return" is a type of adjusted return that specifically factors in risk. Many metrics that measure risk-adjusted return implicitly provide a form of adjusted return deviation, showing how returns deviate from expectations given the risk.

Are there different ways to calculate adjusted return deviation?

Yes, there are many ways, as it depends on the "adjustment" being made. Common methods involve metrics such as the Sharpe Ratio, Sortino Ratio, or Jensen's Alpha, each adjusting returns for different aspects of risk or market performance. The choice of method depends on the specific aspect of performance or risk that an investor wants to evaluate.