I need to see real growth in metrics like customer acquisition and trading volume before making a deeper commitment. From what I can tell, the news about EDXM will only be positive for Coinbase if it helps to expand the pie for the crypto industry as a whole. EDXM's independent nature would also shield the firm from potential conflicts of interest, though EDXM will need to prove its utility to stay relevant within the crypto space. For now, I'm taking a wait-and-see approach with Coinbase. Meanwhile, the EDX exchange would work to accommodate both private and institutional investors.

People tend to worry too much about these risks because they happen frequently, and not enough about what might happen on the worst days. Risk should be analyzed with stress testing based on long-term and broad market data. The risk manager should concentrate instead on making sure good plans are in place to limit the loss if possible, and to survive the loss if not.

Periodic VaR breaks are expected. The loss distribution typically has fat tails, and there might be more than one break in a short period of time. Moreover, markets may be abnormal and trading may exacerbate losses, and losses taken may not be measured in daily marks, such as lawsuits, loss of employee morale and market confidence, and impairment of brand names. An institution that cannot deal with three times VaR losses as routine events probably will not survive long enough to put a VaR system in place.

Three to ten times VaR is the range for stress testing. Institutions should be confident they have examined all the foreseeable events that will cause losses in this range, and are prepared to survive them. Foreseeable events should not cause losses beyond ten times VaR. If they do, they should be hedged or insured, or the business plan should be changed to avoid them, or VaR should be increased. It is hard to run a business if foreseeable losses are orders of magnitude larger than very large everyday losses.
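The ranges above amount to a simple triage rule. A minimal sketch in Python, using a hypothetical one-day dollar VaR and invented scenario losses:

```python
def classify_loss(loss, daily_var):
    """Bucket a foreseeable scenario loss by its multiple of daily VaR,
    following the rough ranges discussed in the text."""
    multiple = loss / daily_var
    if multiple <= 3:
        return "routine"        # should be survivable as an everyday event
    if multiple <= 10:
        return "stress-test"    # needs an explicit plan to limit and survive it
    return "restructure"        # hedge, insure, change the plan, or raise VaR

var = 1_000_000  # hypothetical one-day VaR in dollars
for scenario in (2_000_000, 7_000_000, 25_000_000):
    print(scenario, classify_loss(scenario, var))
```

The thresholds 3 and 10 are taken directly from the text; where the boundaries fall exactly is, of course, a judgment call for each institution.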

It is hard to plan for these events because they are out of scale with daily experience. Another reason VaR is useful as a metric is its ability to compress the riskiness of a portfolio into a single number, making it comparable across portfolios of different assets. Within any portfolio it is also possible to isolate specific positions that better hedge the portfolio and so reduce the VaR.
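As a sketch of how VaR compresses a return history into one comparable number, here is a minimal historical-simulation VaR in Python; the two portfolios and their parameters are invented for illustration:

```python
import random

def historical_var(returns, confidence=0.99):
    """One-day historical-simulation VaR: the loss level that daily
    losses exceed with probability (1 - confidence)."""
    losses = sorted(-r for r in returns)  # losses as positive numbers, ascending
    idx = min(int(confidence * len(losses)), len(losses) - 1)
    return losses[idx]

random.seed(0)
# two hypothetical portfolios' daily returns (illustrative only)
calm = [random.gauss(0.0, 0.01) for _ in range(1000)]
wild = [random.gauss(0.0, 0.03) for _ in range(1000)]

# a single number per portfolio makes their riskiness directly comparable
print(historical_var(calm), historical_var(wild))
```

Isolating a position's hedging effect is then a matter of recomputing the same number with and without that position.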

Backtesting

Backtesting is the process of determining the accuracy of VaR forecasts vs. actual portfolio profits and losses. A key advantage of VaR over most other measures of risk, such as expected shortfall, is the availability of several backtesting procedures for validating a set of VaR forecasts. Early examples of backtests can be found in Christoffersen, [30] later generalized by Pajhede, [31] which models a "hit-sequence" of losses greater than the VaR and proceeds to test whether these "hits" are independent of one another and occur with the correct probability.
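The hit-sequence idea can be sketched as follows. This covers only the unconditional-coverage part (comparing the observed breach rate to the expected probability with a simple z-score), not the full likelihood-ratio machinery of Christoffersen or Pajhede, and all numbers are illustrative:

```python
import math

def hit_sequence(returns, var_forecasts):
    """1 where the realized loss exceeds that day's VaR forecast, else 0."""
    return [1 if -r > v else 0 for r, v in zip(returns, var_forecasts)]

def coverage_z(hits, p=0.01):
    """z-score of the observed hit rate against the expected breach
    probability p; a large |z| suggests the VaR forecasts are miscalibrated."""
    n, x = len(hits), sum(hits)
    se = math.sqrt(p * (1 - p) / n)
    return (x / n - p) / se

# 250 hypothetical trading days, a constant 2% VaR forecast, three breaches
returns = [0.001] * 247 + [-0.03] * 3
hits = hit_sequence(returns, [0.02] * 250)
print(sum(hits), round(coverage_z(hits), 2))  # 3 hits, z of 0.32: no alarm
```

A full backtest would also check that the hits are independent, i.e. not clustered in time.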

A number of other backtests are available which model the time between hits in the hit-sequence; see Christoffersen and Pelletier, [32] Haas, [33] and Tokpavi et al. Backtest toolboxes are available in Matlab [36] or R, though only the first implements the parametric bootstrap method.

History

The problem of risk measurement is an old one in statistics, economics and finance. Financial risk management has been a concern of regulators and financial executives for a long time as well.

Retrospective analysis has found some VaR-like concepts in this history. But VaR did not emerge as a distinct concept until the late 1980s. The triggering event was the stock market crash of 1987. This was the first major financial crisis in which many academically trained quants were in high enough positions to worry about firm-wide survival. A reconsideration of history led some quants to decide there were recurring crises, about one or two per decade, that overwhelmed the statistical assumptions embedded in models used for trading, investment management and derivative pricing.

These affected many markets at once, including ones that were usually not correlated, and seldom had discernible economic cause or warning, although after-the-fact explanations were plentiful. If these events were excluded, the profits made in between "Black Swans" could be much smaller than the losses suffered in the crisis. Institutions could fail as a result.

It was hoped that "Black Swans" would be preceded by increases in estimated VaR or increased frequency of VaR breaks, in at least some markets. The extent to which this has proven true is controversial. VaR was well established in quantitative trading groups at several financial institutions, notably Bankers Trust, although neither the name nor the definition had been standardized. There was no effort to aggregate VaRs across trading desks.

Since many trading desks already computed risk management VaR, and it was the only common risk measure that could be both defined for all businesses and aggregated without strong assumptions, it was the natural choice for reporting firmwide risk. J.P. Morgan CEO Dennis Weatherstone famously called for a "4:15 report" that combined all firm risk on one page, available within 15 minutes of the market close. Development was most extensive at J.P. Morgan, which published the methodology and gave free access to estimates of the necessary underlying parameters in 1994. This was the first time VaR had been exposed beyond a relatively small group of quants.

In 1997, the U.S. Securities and Exchange Commission ruled that public corporations must disclose quantitative information about their derivatives activity. Major banks and dealers chose to implement the rule by including VaR information in the notes to their financial statements. In the Basel II Accord, VaR is the preferred measure of market risk, and concepts similar to VaR are used in other parts of the accord.

A famous debate between Nassim Taleb and Philippe Jorion set out some of the major points of contention. Taleb claimed VaR: [38]

- Ignored 2,500 years of experience in favor of untested models built by non-traders
- Was charlatanism because it claimed to estimate the risks of rare events, which is impossible
- Gave false confidence
- Would be exploited by traders

In 2008, David Einhorn and Aaron Brown debated VaR in Global Association of Risk Professionals Review. [20] [3] Einhorn compared VaR to "an airbag that works all the time, except when you have a car accident".

He further charged that VaR:

- Led to excessive risk-taking and leverage at financial institutions
- Focused on the manageable risks near the center of the distribution and ignored the tails
- Created an incentive to take "excessive but remote risks"
- Was "potentially catastrophic when its use creates a false sense of security among senior executives and watchdogs"

Fortunately for value investors, their stocks have revived since the correction in September.

Given the recent extreme stock market volatility, especially the divergence in sector performance, and the uncertainty about what the future economy will look like, most investors can't see the forest for the trees. Can value investing still generate strong returns in the future, and why? I believe the answer to that question is yes.

However, after the Covid crisis, I believe it is becoming increasingly important to let go of the old theory of value investing (a statement backed by empirical evidence) and start approaching value investing in a different way, which will be discussed in this article. Take your time to read this article; I believe it is one of the most valuable I have written so far.

The old value investing definition

Value investors actively ferret out stocks they think the stock market is underestimating.

In the past, researchers found that one can outperform the market significantly by buying value stocks. In contrast to risk-based explanations, Lakonishok et al. argue that this premium is behavioral: typical investors tend to overreact to bad news, and value investors generate alpha by buying these stocks at cheap prices.

The historical outperformance of value compared to growth stocks is well visualised in the table below: value stocks outperformed growth stocks by a wide annual margin. Source: Research Affiliates. However, this changed drastically: over the past 13 years, this value strategy produced a negative return.

The recent value underperformance compared to other strategies is also visualized in JP Morgan's research. While the value strategy (red border) consistently generated significant alpha in the past, it has recently been the worst performing strategy in three of the last six years.

This year was the biggest outlier, as value underperformed the best strategy, Momentum, by a staggering margin. Source: JP Morgan's Guide to the Markets.

Reasons for recent value underperformance

The reasons why value stocks have underperformed so significantly in recent years have been widely discussed on sites like Seeking Alpha. Unfortunately, most of these discussions were not evidence-based. In contrast, the work of Arnott et al. is.

First, they refuted several beliefs of the average investor, of which the following two are the most common:

1. Stronger financial performance for growth stocks? Many investors argue that today, more than ever, "technological leaders can drive outsized monopolistic profits, while the old value stocks are choked into irrelevance".

As such, growth stocks' recent outperformance would be fundamentally justified: higher growth deserves a higher multiple. Arnott et al. dispute this.

2. Low interest rates? We are currently operating in an unprecedented monetary environment with near-zero interest rates.

Investors claim that growth stocks are the main beneficiaries of this "free money" and are therefore worth their higher valuation multiples. The Gordon growth model, in contrast, shows that low interest rates should have "a disproportionate valuation impact on longer-duration and lower-yielding assets".

Moreover, Arnott et al. refute this belief as well. So, if financial performance and low interest rates are no sound theory to explain the value underperformance, what is? Revaluation. Arnott et al. decompose the relative performance of growth versus value into components:

- Profitability: most growth stocks are more profitable and exhibit faster growth in sales and profits than most value stocks. Profitability benefits growth relative to value and offsets much, but typically not all, of the benefits from migration.
- Revaluation: the change in relative valuation of growth versus value.

As you can see in the first table of this article, the structural premium, which consists of profitability and migration, was basically flat over this period and thus had no impact on the value underperformance.

The probability of getting a result at least as extreme as the observed z is the p-value. How does the p-value vary across random samples? It turns out it varies much more than you would think. The p-values were calculated for each of those random samples of 20 observations shown in Figure 2. The maximum p-value was near 1 and the minimum near 0. That is a large range when taking 20 random observations from the same population.

The distribution of the p-values is shown in Figure 3.

Figure 3: Distribution of the p-values for Random Samples

The chart looks almost like a uniform distribution. In this case, it is. With continuous data and assuming the null hypothesis is true, the p-values are distributed uniformly between 0 and 1.
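The uniformity claim is easy to check by simulation. The sketch below repeats a two-sided z-test on many samples of 20 drawn from a population where the null hypothesis is true (the population parameters are invented) and bins the resulting p-values into deciles:

```python
import math
import random

def z_test_p(sample, mu0, sigma):
    """Two-sided p-value for H0: mean == mu0 when sigma is known (z-test)."""
    n = len(sample)
    z = abs(sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

random.seed(1)
# many samples of 20 observations from a null population N(100, 15)
pvals = [z_test_p([random.gauss(100, 15) for _ in range(20)], 100, 15)
         for _ in range(2000)]

# under a true null the p-values are uniform on [0, 1]:
# each decile should capture roughly 10% of them
deciles = [sum(lo / 10 <= p < (lo + 1) / 10 for p in pvals) for lo in range(10)]
print(deciles)
```

Each of the ten counts hovers around 200 out of 2000, which is the flat histogram the figure describes.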

Remember, a p-value measures the probability of getting a result that is at least as extreme as the one we have, assuming the null hypothesis is true. It does not measure the probability that the hypothesis is true. Nor does it measure the probability of rejecting the null hypothesis when it is true.

That is what alpha does.

Using Alpha and the p-value Together

So, how do we use these two terms together? Basically, you decide on a value for alpha before collecting data. What probability of wrongly rejecting a true null hypothesis are you comfortable with? You then collect the data and calculate the p-value. If the p-value is greater than alpha, you fail to reject the null hypothesis. If the p-value is less than alpha, you reject the null hypothesis.
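The decision rule can be sketched in a couple of lines of Python; the wording of the returned strings is mine, and the convention that a tie goes to rejection is one common choice:

```python
def decide(p_value, alpha=0.05):
    """Compare the computed p-value to the alpha chosen before the experiment."""
    if p_value <= alpha:
        return "reject the null hypothesis"
    return "fail to reject the null hypothesis"

print(decide(0.02))  # prints "reject the null hypothesis"
print(decide(0.07))  # prints "fail to reject the null hypothesis"
```

The important part is the order of operations: alpha is fixed first, then the data are collected and the p-value compared against it.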

What do you do if the two values are very close? For example, the p-value might fall just barely below alpha. It is your call to make in those cases. You can always choose to collect more data. Note that the confidence interval and the p-value will always lead to the same conclusion. If the p-value is less than alpha, then the confidence interval will not contain the hypothesized mean.

If the p-value is greater than alpha, the confidence interval will contain the hypothesized mean.

Summary

This publication examined how to interpret alpha and the p-value. Alpha, the significance level, is the probability that you will make the mistake of rejecting the null hypothesis when in fact it is true.

The p-value measures the probability of getting a value more extreme than the one you got from the experiment, assuming the null hypothesis is true. The following lists some levels of confidence with their related values of alpha:

- For results with a 90 percent level of confidence, the value of alpha is 1 - 0.90 = 0.10.
- For results with a 95 percent level of confidence, the value of alpha is 1 - 0.95 = 0.05.
- For results with a 99 percent level of confidence, the value of alpha is 1 - 0.99 = 0.01.

Although in theory and practice many numbers can be used for alpha, the most commonly used is 0.05. The reason for this is partly that consensus shows this level is appropriate in many cases and partly that, historically, it has been accepted as the standard. However, there are many situations when a smaller value of alpha should be used.

There is not a single value of alpha that always determines statistical significance. The alpha value gives us the probability of a type I error. Type I errors occur when we reject a null hypothesis that is actually true.

Thus, in the long run, for a test with a level of significance of 0.05, a true null hypothesis will be rejected about one time in every twenty tests.

P-Values

The other number that is part of a test of significance is the p-value. A p-value is also a probability, but it comes from a different source than alpha. Every test statistic has a corresponding probability or p-value. This value is the probability that the observed statistic occurred by chance alone, assuming that the null hypothesis is true.
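That long-run claim about alpha can be checked by simulation. The sketch below runs many z-tests on samples drawn from a population where the null hypothesis is true and reports the fraction of false rejections; the population and sample sizes are invented:

```python
import math
import random

def z_rejects(sample, mu0, sigma, alpha=0.05):
    """True when a two-sided z-test rejects H0: mean == mu0 at level alpha."""
    n = len(sample)
    z = abs(sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    p = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
    return p <= alpha

random.seed(2)
# test repeatedly against a population where the null hypothesis is true
false_alarms = sum(
    z_rejects([random.gauss(0, 1) for _ in range(30)], 0, 1)
    for _ in range(4000)
)
print(false_alarms / 4000)  # hovers near alpha = 0.05
```

The observed Type I error rate lands close to the chosen alpha, which is exactly what the significance level promises in the long run.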

For some cases, we need to know the probability distribution of the population. The smaller the p-value, the more unlikely the observed sample.

Difference Between P-Value and Alpha

To determine if an observed outcome is statistically significant, we compare the values of alpha and the p-value.

There are two possibilities that emerge: either the p-value is less than or equal to alpha, in which case we reject the null hypothesis and call the result statistically significant, or the p-value is greater than alpha, in which case we fail to reject the null hypothesis.