The Thought Refuse

A Virtual Repository for the Mind

Financial Risk Management: The Problem With Applying Statistical Models To A Random System


Having digested more than my fair share of explanations for the market crash, from television to radio to print to the blogosphere, I’ve been saturated by the common theme of corporate greed and corruption.  I don’t buy it as anything other than an unavoidable and secondary component of a free market economy.  This is, of course, beyond obvious.

Add profitable longevity to the equation, and the greed/corruption argument becomes less credible.  If I gave you the option of taking $5,000 right now or $1,000 a year for the next ten years, which would you take?  Profitable longevity is not a difficult concept to understand, yet it seems lost amid the relentless pounding being doled out to anyone remotely associated with the market crash: market traders, business executives, and government officials.

While there are certainly businesses that run counter to profitable longevity, e.g. Enron, this was not the isolated case of a single lending institution engaging in suicidal greed.  The problem spans entire business sectors, including lending institutions and trading firms.  From a logical perspective, it’s difficult to believe that two entire business sectors concurrently and knowingly decided to engage in practices fueled by excess at the cost of self-destruction.

Out of the need to balance profit and longevity within the financial sector following the Great Depression, the concept of stock and bond risk management was born.  By the 1950s, financial risk management had evolved from a concept into a full-fledged theory centered on mathematical diversification.  A decade later, risk management theory adopted stochastic calculus in order to better account for the inherent randomness of stock trading.  Today, financial corporations are littered with quantitative analysts, otherwise known as quants.

The role of a quant is to develop sophisticated mathematical and statistical models designed to simulate stochastic processes and their potential impact on illiquid products.  Because illiquid assets are not continuously repriced by supply-and-demand fluctuations, their valuation depends entirely on these quant models.  Another tool employed by quants is statistical arbitrage, which relies on quantitative data mining to determine the expected value of an asset.  These two instruments are the foundation of the risk management techniques financial corporations employ today.
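To make the second of those instruments concrete, here is a minimal sketch of one classic statistical-arbitrage technique, pairs trading, in Python with entirely made-up price series.  Real quant models are vastly more elaborate, but the core idea of mining historical data for a statistically expected relationship is the same.

    import numpy as np

    def pairs_signal(prices_a, prices_b, lookback=60, entry_z=2.0):
        # Flag a trade when the log-price spread between two historically
        # related assets drifts unusually far from its recent average,
        # on the bet that it will revert.
        spread = np.log(prices_a) - np.log(prices_b)
        window = spread[-lookback:]
        z = (spread[-1] - window.mean()) / window.std(ddof=1)
        if z > entry_z:
            return "short A / long B"   # spread unusually wide
        if z < -entry_z:
            return "long A / short B"   # spread unusually narrow
        return "no trade"

    # Entirely hypothetical, correlated price histories for illustration.
    rng = np.random.default_rng(0)
    common = np.cumsum(rng.normal(0, 0.01, 250))
    prices_a = 100 * np.exp(common + rng.normal(0, 0.005, 250))
    prices_b = 100 * np.exp(common + rng.normal(0, 0.005, 250))
    print(pairs_signal(prices_a, prices_b))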

Stochastic calculus attempts to deal with multiple outcomes: given an array of possible initial values, it maps the possible outcomes for each set of initial conditions.  While it accounts for a wide range of paths, it weighs which paths are more probable and which are more improbable based on the probability of the initial conditions being present (whose sheer number can be exponentially mind-numbing).  Statistical arbitrage, by contrast, relies heavily on gathering statistics over time in order to compute a value based on the expected outcome where multiple outcomes exist.
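As a rough illustration of the stochastic side, the sketch below runs a Monte Carlo simulation of geometric Brownian motion, the textbook stochastic-calculus model of a stock price, and weighs the possible outcomes for a few different initial conditions.  The drift, volatility, and starting prices are assumed purely for the example.

    import numpy as np

    def simulate_gbm(s0, mu=0.05, sigma=0.2, years=1.0, steps=252,
                     n_paths=10000, seed=1):
        # Geometric Brownian motion, dS = mu*S*dt + sigma*S*dW.  Each
        # simulated path is one possible future; the spread of the paths
        # gives the probability weighting of the outcomes.
        rng = np.random.default_rng(seed)
        dt = years / steps
        shocks = rng.normal((mu - 0.5 * sigma**2) * dt,
                            sigma * np.sqrt(dt), (n_paths, steps))
        return s0 * np.exp(np.cumsum(shocks, axis=1))

    # Weigh the outcomes for a few different initial prices.
    for s0 in (50, 100, 150):
        final = simulate_gbm(s0)[:, -1]
        drop = np.mean(final < 0.8 * s0)
        print(f"start {s0}: median end {np.median(final):.1f}, "
              f"P(losing 20% or more) = {drop:.3f}")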

These two systems are attempting to deal with and account for rogue events.  It is in trying to deal with events which have never occurred before that a fundamental flaw becomes apparent.  Both systems, particularly statistical arbitrage, rely almost exclusively on the collection of event data, and the statistical models derive their accuracy from the amount of data collected.  Hence, the shorter the window of time over which data is mined, the less accurate the model becomes.  For either of these models to be bulletproof, it would require an infinite amount of time to aggregate data.
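The finite-data problem is easy to demonstrate.  In the toy example below, an event that truly occurs 0.1% of the time is usually never observed at all in a short data history, so a purely data-driven model would price it at zero; the probability and sample sizes are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(2)
    true_p = 0.001    # a "rogue" event that truly occurs 0.1% of the time
    for n in (100, 1000, 100000):
        estimate = (rng.random(n) < true_p).mean()
        print(f"{n:>6} observations: estimated probability = {estimate:.4f}"
              f" (true = {true_p})")
    # With a short data history the event usually never appears at all,
    # so a purely data-driven model assigns it a probability of zero.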

Here we have two systems employed by the financial sector to determine the improbability of events and conditions which have never occurred.  The problem only gets worse when you consider how the amassed data is represented.

In order to map these probable and improbable events, statistical analysis employs the Gaussian curve.  It is a bell-shaped graph illustrating the probability density of all potential values.  The curve tapers downwards at its edges towards improbable events and peaks over the probable ones.  The width of the curve reflects the weight of the probable events against the improbable ones.  Basically, it charts the normal distribution of events, giving an indication of which events are more probable than others.
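The tail weights fall off astonishingly fast.  A few lines of Python (a standard normal distribution is assumed) show how little probability the curve leaves for extreme moves.

    from math import erf, sqrt

    def two_sided_tail(k):
        # Probability that a normally distributed value lands more than
        # k standard deviations away from its mean.
        return 1 - erf(k / sqrt(2))

    for k in range(1, 6):
        print(f"beyond {k} sigma: {two_sided_tail(k):.1e}")
    # Roughly 3e-1 at 1 sigma, 3e-3 at 3 sigma, 6e-7 at 5 sigma: the curve
    # assigns vanishingly little weight to extreme moves.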

The Gaussian curve is essentially centered on the average of all possible events.  And that is where it fails.  A model cannot properly account for, or assign proper value to, a rogue event, the most improbable of occurrences, when it gives far greater weight to the standard, most probable events.  It fails to capture the impact of heavy swings away from the normal distribution.  A singular, improbable event will hardly move the height or width of the curve against the statistical weight of the multitude of likely ones.
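A small experiment makes the failure visible.  In the sketch below, made-up fat-tailed returns (a Student-t distribution standing in for real market data) are fitted with a Gaussian, and the fitted curve badly underestimates how often a large move actually occurs.

    import numpy as np
    from math import erf, sqrt

    rng = np.random.default_rng(3)
    # Hypothetical fat-tailed "daily returns" (Student-t, 3 degrees of
    # freedom), standing in for real market data.
    returns = rng.standard_t(df=3, size=100_000)

    # Fit a Gaussian to the data, then ask how often a 4-sigma move occurs.
    mu, sigma = returns.mean(), returns.std(ddof=1)
    observed = np.mean(np.abs(returns - mu) > 4 * sigma)
    predicted = 1 - erf(4 / sqrt(2))    # what the fitted bell curve expects
    print(f"Gaussian model predicts {predicted:.1e}, data delivers {observed:.1e}")
    # The fitted curve calls such a move a once-in-16,000-days event; the
    # fat-tailed data produces it far more often.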

To better illustrate this, take 100 middle-class American households and calculate their average income.  Now write the name of every adult in the United States on a slip of paper and put them all in a hat.  Randomly draw one name, take that person’s household income, and recalculate our average.  The overwhelming majority of the time you would have drawn someone close to the median US household income of roughly $40k, and the average would not move appreciably up or down.  You could even have drawn the poorest household in America and the number would hardly have budged.  Now say you draw from the hat again, but this time you happen to pull out one particular household at the very top of the income scale, a roughly 298,128,548-to-1 draw.  Recalculate the average and it swings massively upwards.  An event with a vanishingly small chance of occurring appreciably changes our end value, while the overwhelming majority of possible events would not.
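The same thought experiment runs in a few lines of Python, with entirely hypothetical incomes.

    import numpy as np

    rng = np.random.default_rng(4)
    # 100 hypothetical middle-class households clustered around $40k.
    incomes = rng.normal(40_000, 8_000, 100).clip(min=10_000)
    print(f"baseline average:             ${incomes.mean():>12,.0f}")

    # Draw someone near the median: the average barely moves.
    typical = np.append(incomes, 42_000)
    print(f"plus one typical household:   ${typical.mean():>12,.0f}")

    # Draw someone from the extreme top of the distribution instead.
    extreme = np.append(incomes, 50_000_000)
    print(f"plus one very rich household: ${extreme.mean():>12,.0f}")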

One singular blip on the Gaussian curve can drastically affect our end value, but the curve itself doesn’t properly adjust for that highly improbable, rogue event.  Combine this massive swing due to random events with quant statistical models that are ineffectual at mapping these Gaussian blips, and you can begin to see where the problem might lie.

Let’s throw these inadequate models in bed with profitable longevity, and it becomes a sticky situation.  Financial corporations have a tricky balancing act to perform.  Faced with reams of data pointing at the heart of the Gaussian curve’s normal distribution, coupled with the blind spots of stochastic calculus and statistical arbitrage for events which have never occurred, conclusions have to be drawn as to which path to follow.

Taking the path of improbability would severely limit potential profits, and underperforming profit margins threaten longevity.  Following the road towards the probable gives some assurance of profits and promotes company longevity.  And the decision-making process cannot even account for the rogue events which the statistical models fail to forecast.

The historic rise and fall of the housing market would, at the very least, qualify as a most improbable event.  A case could be made that it was a rogue event incapable of being captured by the quants’ models.  Instead of tossing around the idea that corporate greed was at the center of the economic crisis our country now faces, let’s consider that, at best, the financial sector was presented with data misrepresented by a fundamentally flawed model.  And, at worst, its models were simply unable to account for a never-before-seen chain of events.

Either way, the quant risk management system might be to blame, and it desperately needs to be reexamined to better account for those improbable but catastrophic occurrences.  No amount of money infusion will turn failing statistical financial models into pinpoint-accurate prediction machines, if such a thing is even possible.  Or maybe we should just learn to accept the level of risk that randomness carries with it rather than constantly acquiescing to the fat cat blame game.




