You are currently browsing the monthly archive for October 2008.

After reading “The Twilight of Free-Market Ideology” by Charles Wheelan (lecturer at the Univ. of Chicago and author of Naked Economics) at Yahoo! Finance (Oct. 24), I decided to create a poll. Here are some passages from his article (which closely match my own opinion over here):

[…] When I heard Alan Greenspan’s testimony before Congress last Thursday, I had one immediate thought: This is the beginning of the end for the free-market ideologues (…) According to press reports of the testimony, Greenspan told Congress that he “had put too much faith in the self-correcting power of free markets.” That’s no small statement. In fact, it struck me that if 1989 was the year when no reasonable person could still believe in communism (or any of its government-intensive relatives), then 2008 will go down in history as the year in which the free-market zealots saw their “wall” come crumbling down. […]

So, what’s your opinion?

October 29. Just 3 days ahead of us, almost 80 years ago. Up till now, October 10, 2008 – Black Friday – was the worst. On that day the biggest DJIA point drop in history occurred. A collection of these photos (below) is available at “A Photo Essay on the Great Depression“. Plus, to compare what happened in 1929 versus today, I highly recommend this New York Times graphic comparing severity versus time for several historical crashes.

Dorothea Lange‘s “Migrant Mother,” destitute in a pea picker’s camp, because of the failure of the early pea crop. These people had just sold their tent in order to buy food. Most of the 2,500 people in this camp were destitute. By the end of the decade there were still 4 million migrants on the road. (Source)

The trading floor of the New York Stock Exchange just after the crash of 1929. On Black Tuesday, October 29, the market collapsed. In a single day, sixteen million shares were traded – a record – and thirty billion dollars vanished into thin air. Westinghouse lost two thirds of its September value. DuPont dropped seventy points. The “Era of Get Rich Quick” was over. Jack Dempsey, America’s first millionaire athlete, lost $3 million. Cynical New York hotel clerks asked incoming guests, “You want a room for sleeping or jumping?” (Source). Finally (photo below): Bud Fields and his family. Alabama. 1935 or 1936. Photographer: Walker Evans. (Source)

August 1, 2007

August 10, 2007 (the beginning)

December 20, 2007

January 15, 2008

May 15, 2008

July 10, 2008

September 15, 2008 (post-Lehman)

October 10, 2008 (biggest DJIA point drop in history)

In a short empirical investigation, Reginald Smith (MIT – Sloan School of Management) has come to some interesting complex networks (nodes here are financial stocks) over time, from the beginning of the financial crisis on August 10, 2007, till today. His rather simple econophysics study (draft PDF link) demonstrates that losses in certain markets, in this case the US equity markets, follow a cascade or “epidemic” flow-like model along the correlations of various stocks. His networks show the correlation (similar rise and fall movements) among the stocks in the S&P 500 and NASDAQ-100, using the latest stocks in each index (as of 10/10/2008). The abbreviations are the ticker symbols. Network edges connect stocks (nodes) based on their correlations. More than 500 tickers were used. After the correlation between each pair of stocks was calculated, a distance metric was computed from it (J.C. Gower, Biometrika, 1966). Finally these distances were used to create a minimal spanning tree. For the graphics and animations Reginald used the python-graph module, pydot and Graphviz. Extra details and an F.A.Q. are here, as well as some movies. Nodes are green if the stock’s return (minus dividends) was greater than or equal to -10%, yellow if less than -10% but greater than -25%, and red if less than or equal to -25%.
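For the curious, the construction Smith describes can be sketched in a few lines of Python. This is a minimal, hypothetical example on synthetic random returns (not his data): I use the common econophysics distance d = sqrt(2(1 − ρ)) in place of the Gower (1966) metric he cites, and networkx instead of his python-graph/pydot toolchain; the tickers are made up.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(42)

# Synthetic daily returns for a few hypothetical tickers
# (stand-ins for the S&P 500 / NASDAQ-100 constituents Smith used).
tickers = ["AAA", "BBB", "CCC", "DDD", "EEE"]
returns = rng.normal(0, 0.02, size=(250, len(tickers)))

corr = np.corrcoef(returns, rowvar=False)  # pairwise correlations rho_ij

# Map correlation to a distance: d = sqrt(2 * (1 - rho)),
# so perfectly correlated stocks sit at distance 0.
dist = np.sqrt(2.0 * (1.0 - corr))

G = nx.Graph()
for i, a in enumerate(tickers):
    for j in range(i + 1, len(tickers)):
        G.add_edge(a, tickers[j], weight=dist[i, j])

# The minimal spanning tree over these distances is the
# backbone drawn in the animations.
mst = nx.minimum_spanning_tree(G)
print(sorted(mst.edges()))
```

On real data one would replace the random matrix with the actual return series; the rest of the pipeline is unchanged.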

Regarding the red nodes over time, I now wonder what the probability distribution of vertex connectivity change would be (is it scale-free?!), as well as the characteristic path length L and the clustering coefficient C. It would be quite interesting to know.
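All three quantities are a few calls away in networkx. Here is a sketch on a random stand-in graph (not the actual stock network); note that on a minimal spanning tree C is trivially zero, since trees contain no triangles, so the clustering coefficient only becomes informative if edges are instead kept by a correlation threshold:

```python
import networkx as nx
from collections import Counter

# Stand-in graph; in practice this would be the thresholded
# stock-correlation network rather than the spanning tree.
G = nx.erdos_renyi_graph(n=100, p=0.05, seed=1)
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()

L = nx.average_shortest_path_length(G)   # characteristic path length
C = nx.average_clustering(G)             # clustering coefficient
degree_counts = Counter(d for _, d in G.degree())  # P(k), unnormalized

print(f"L = {L:.2f}, C = {C:.3f}")
print(sorted(degree_counts.items()))
```

Plotting `degree_counts` on log-log axes is the usual quick check for a scale-free tail.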

It’s not every day we see a 40-year ideology collapsing through a dramatic act of contrition. It happened just a few hours ago (check video above), yesterday at the “financial crisis” congressional hearing in Washington (23 Oct. 2008). Moreover, what seems remarkable is that the recognition comes from one of its most universally respected founding fathers and defenders.

Alan Greenspan was the longest-serving chairman in Federal Reserve board history (1987-2006), and during this 18-year period he was perhaps the leading proponent of de-regulation and libertarian capitalism, vividly expressed in his 2007 book “The Age of Turbulence – Adventures in a New World”, advocating above all Adam Smith’s “Invisible Hand”: the idea that markets can regulate themselves. As is known, for his whole adult life the former Fed chairman has been a devotee of the philosophy of Ayn Rand, who celebrated free-market capitalism as the world’s most moral economic order and advocated a strict laissez-faire approach to government regulation of the marketplace. Ironically, he was a regulator who did not believe in regulation at all.

It is quite a remarkable historic moment to see the former Federal Reserve chairman, a lifelong champion of free markets, publicly questioning the philosophy that guided him throughout his years as the world’s most powerful economic policymaker. A philosophy followed and strongly defended by him (along with many others, like Margaret Thatcher and Ronald Reagan) for at least the last 40 years, as he himself acknowledged yesterday. Asked by the congressional committee chairman whether his free-market convictions had pushed him to make wrong decisions, especially his failure to rein in unsafe mortgage lending practices, Greenspan replied that he had indeed found a flaw in his ideology, one that left him very distressed. “In other words, you found that your view of the world, your ideology, was not right?” he was asked:

“Absolutely, precisely“, replied Greenspan. “That’s precisely the reason I was shocked, because I have been going for 40 years or more with very considerable evidence it was working exceptionally well“. And he was surely one of the most influential voices for de-regulation: “There is nothing in Federal regulation that makes it superior to Market regulation”, said Greenspan back in 1994, in one among many of his radical free-market statements.

I presume we all now wonder: where was Greenspan when, back in 2003, one of the most prestigious and legendary financial investors, Warren Buffett, called credit default obligations and derivatives “weapons of financial mass destruction“? Or where was he when Princeton Professor of Economics Paul Krugman – the recent Nobel laureate – said back in 2006 that “If anyone is to blame on the current situation (sub prime) is Mr Greenspan who poopooed warnings about an emerging bubble and did nothing to crack down on irresponsible lending“? Or what did Greenspan himself say just a few days after ENRON collapsed?

People working on complex systems – and financial markets are surely one of them (yes, for the past 4-5 years, including these present turbulent times, I have been working hard in this area as well) – have long known that any systemic structure can collapse if only positive feedbacks are injected into it, creating an auto-catalytic snowball effect, leading among other things to a power-law-like Black Swan. Indeed power-laws are a strikingly powerful signature. This is especially true when we address self-organization (read it, in the present context, as self-regulation). In order to be a truly self-regulated system, financial markets should be embedded with negative feedbacks as well, as I addressed in a post about finance and complex systems one month ago. In fact, in order to emerge as a truly self-organized system, self-interest should constitute just one among many ingredients over the entire financial system, not the sole, isolated one. Self-interest promotes amplification and positive feedback, which is – as I recognize – necessary. Left alone, however, it promotes dramatic snowballing drifts over chaotic regimes, due to its intrinsic amplification. What’s amazing (at least for me) is that Alan Greenspan recognized just that, in a tiny few seconds of his discourse (check video above), pointing to the precise key-word:

[…] I made a mistake in presuming that the self-interest of organizations, specifically banks and others, was such that they were best capable of protecting their own shareholders. […] So the problem here is that something which looked to be a very solid edifice, and indeed a critical pillar to market competition and free markets, did break down, and I think that, as I said, shocked me. I still do not fully understand why it happened, and obviously to the extent that I figure out where it happened and why, I will change my views. If the facts change I will change. […]

As a result, “the whole intellectual edifice” of risk management collapsed, Greenspan said. Regarding his unexpected words yesterday at the congressional hearing, I frankly praise him, at least, for his huge intellectual courage and present honesty. In the end, it seems that during the past 18 years the former Fed chair was nothing more than a simple man driven by his own blind faith in markets, from which he apparently now emerges. Unfortunately, only now, and at a very high price. Meanwhile, as we know, severe consequences are here to stay, as was already evident when Greenspan addressed the House Financial Services Committee in 2003 (video below). Let’s hope that all this will not be forgotten 3 decades from now (though I doubt it – after all, nothing really serious came out of the entitled 3-man dream-team Bush-Sarkozy-Barroso “new global finance order” summit at Camp David last weekend, as expected):

“You have told the American people that you support a trade policy which is selling them out.” – Rep. Bernard Sanders (Independent-Vermont), now a US Senator, dressing down Federal Reserve Chairman Alan Greenspan in front of the House Financial Services Committee on 7/16/03.
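Returning to the feedback argument above: it can be made concrete with a toy iterated map. This is purely illustrative, with made-up parameters and nothing like a real market model – it only shows that positive feedback alone amplifies a deviation without bound, while a matching negative feedback keeps it bounded.

```python
def step(x, gain=0.05, damping=0.0):
    """Toy 'price deviation' update: the gain term is the positive
    (self-interest, amplifying) feedback; damping is the negative,
    stabilizing feedback."""
    return x * (1.0 + gain) - damping * x

x_pos = x_bal = 1.0
for _ in range(100):
    x_pos = step(x_pos)                 # positive feedback only: snowballs
    x_bal = step(x_bal, damping=0.05)   # with negative feedback: bounded

print(x_pos, x_bal)
```

After 100 iterations the undamped deviation has grown by more than two orders of magnitude, while the damped one stays where it started.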


Transition behavior of an Artificial Ant Colony in the presence of a sudden change in its artificial digital image Habitat, between two different Digital Grey Images (face of Einstein and a Map). Created with an Artificial Ant Colony that uses images as Habitats, being sensitive to their gray levels [in, V. Ramos, F. Almeida, “Artificial Ant Colonies in Digital Image Habitats – a mass behavior effect study on Pattern Recognition“, ANTS’00 Conf., Brussels, Belgium, 2000].

After the “Einstein face” is injected as a substrate at t=0, 100 iterations occur. At this point you can recognize the face. Then a new substrate (a new “environmental condition”) is imposed (the Map image). The colony then adapts quickly to this new situation, losing its collective memory of the past contours.

In white, the higher levels of pheromone (an evaporative chemical sugar substance used by swarms for orientation along their trails). It is exactly this artificial evaporation, and the computational ants’ collective group synergy reallocating their pheromone updates at interesting places, that allows for the emergence of adaptation and “perception” of new images. Only some of the 6000 iterations processed are represented. The system does not have any type of hierarchy, and ants communicate only in indirect forms, through the successive alterations they produce on the Habitat. If, however, you inject the Einstein image again as a substrate, the whole ant society will converge to it again, but much faster than the first time, due to the residual memory distributed in the environment.

As a whole, the system is constantly trying to establish a proper compromise between memory (past solutions, via pheromone reinforcement) and novel ones in order to adapt (new conditions on the habitat, through pheromone evaporation). The right compromise enables the system to tackle two contradictory situations: keeping some memory while learning something radically new. Antagonistic features such as exploration and exploitation are tackled this way.
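In case it helps, this memory/novelty compromise boils down to a single update rule of the familiar evaporation-plus-reinforcement form. The sketch below uses made-up parameters and random arrays as stand-in “images” – it is not the exact equations of the ANTS’00 paper, only the mechanism:

```python
import numpy as np

def pheromone_step(tau, deposits, evaporation=0.1):
    """One habitat update: evaporation gradually forgets old contours
    (adaptation), while deposits reinforce the currently interesting
    places (memory)."""
    return (1.0 - evaporation) * tau + deposits

rng = np.random.default_rng(0)
tau = np.zeros((8, 8))            # pheromone field over the image habitat

einstein = rng.random((8, 8))     # stand-in for the first substrate
for _ in range(100):
    tau = pheromone_step(tau, einstein)

# Switch the substrate: the colony progressively loses its
# collective memory of the old image and tracks the new one.
map_image = rng.random((8, 8))
for _ in range(100):
    tau = pheromone_step(tau, map_image)

print(np.corrcoef(tau.ravel(), map_image.ravel())[0, 1])
```

With 10% evaporation per step, after 100 steps the residual weight of the old substrate is (0.9)^100 ≈ 3e-5, so the field ends up almost perfectly aligned with the new image.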

Figure – From top left to bottom right, a sequential data-items clustering task performed by an artificial ant colony. The system is able to cope with unforeseen data items in real-time, that is, as data appears on a continuous basis over a large period of time. Also, as time evolves, spatial entropy decreases.

[] Vitorino Ramos, Ajith Abraham, Swarms on Continuous Data, in CEC´03 – Congress on Evolutionary Computation, IEEE Press, ISBN 078-0378-04-0, pp.1370-1375, Canberra, Australia, 8-12 Dec. 2003.

While extremely important, many Exploratory Data Analysis (EDA) systems are unable to perform classification and visualization on a continuous basis, or to self-organize new data items into the older ones (even more so into new labels, if necessary), which can be crucial in KDD – Knowledge Discovery, Retrieval and Data Mining systems (interactive and online forms of Web applications are just one example). This disadvantage is also present in more recent approaches using Self-Organizing Maps. In the present work, and exploiting past successes in recently proposed Stigmergic Ant Systems, a robust online classifier is presented, which produces class decisions on a continuous data stream, allowing for continuous mappings. Results show that increasingly better results are achieved, as demonstrated by other authors in different areas.

(to obtain the respective PDF file follow link above or visit
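For readers unfamiliar with how such stigmergic clustering systems decide anything at all, the classical pick/drop rule family they build on (Lumer-Faieta style) fits in a few lines. This is a generic sketch with assumed constants k1, k2, not the exact rules of the paper above: an unladen ant tends to pick up isolated items, and a laden ant tends to drop its item where the local similarity density f is high.

```python
def pick_probability(f, k1=0.1):
    """High when the local similarity density f is low:
    isolated data items get picked up."""
    return (k1 / (k1 + f)) ** 2

def drop_probability(f, k2=0.15):
    """High when the local similarity density f is high:
    items get dropped near similar neighbours."""
    return (f / (k2 + f)) ** 2

# An ant senses the local density f in [0, 1] around its position:
for f in (0.0, 0.2, 0.8):
    print(f, round(pick_probability(f), 3), round(drop_probability(f), 3))
```

Iterating these two rules over a population of ants on a grid is what produces the progressive spatial clustering (and the entropy decrease) shown in the figure.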

The Springer book “Swarm Intelligence in Data Mining” (Studies in Computational Intelligence Series, Vol. 34), published in late 2006, is receiving a fair amount of attention – so much so that early this year Tokyo Denki University press (TDU) decided to negotiate with Springer the translation rights and copyrights in order to release it in Japanese. The Japanese version will shortly become available, and I do hope – being one of the scientific editors – that it will receive increasing attention in Japan as well, this being one of the most difficult and extraordinary real-world areas we could work on nowadays within computer science. Multiple Sequence Alignment (MSA) within bio-informatics is just one recent example; financial markets are another. The amount of data CERN’s Large Hadron Collider (LHC) will collect – 100,000 DVDs every year – is yet another. In order to transform data into information, and information into useful and critical knowledge, reliable and robust Data Mining is more than ever needed in our daily life.

Meanwhile, I wonder what the Japanese cover design will be like?! Starting with the title itself, which appears to be pretty hard to translate. According to Yahoo BabelFish the Japanese characters (群れの知性) – derived among other language scripts from Kanji – correspond to the English sentence “Swarm Intelligence“. I wonder if this translation is correct or not, since “swarm”, in itself, is kind of difficult to translate. Some meanings of it point to a spaghetti dish as well, which kind of makes some sense too. Moreover, the technical translation is also difficult. I guess the best person to handle the translation (at least from the list of colleagues around the world I know) is Claus Aranha (IBA Lab., University of Tokyo). Not only has he worked in Japan for several years now, but some of his work focuses on this precise area.

The SIDM book (Swarm Int. in Data Mining) focuses on the hybridization of these two areas. As you may probably know, Data Mining (see also: Knowledge Extraction) refers to a collection of techniques – many of them classical – that aim to tackle large amounts of data in order to perform classification, clustering, sorting, feature selection, search, forecasting, decision, meaningful extraction, association rule discovery, sequential pattern discovery, etc. In recent years however (1985-2000), state-of-the-art Artificial Intelligence such as Evolutionary Computation was also used, since some of its problems could be seen as – or properly translated to – optimization problems (namely, combinatorial ones). The same now happens with Swarm Intelligence, since some of its unique self-organizing distributed features (allowing direct applications over Grid Computing) seem ideal to tackle some of the most complex data mining problems we face today.

For those wanting more, I will leave you with its contents (chapters), a foreword to this book by James Kennedy (one of the founding fathers of PSO – Particle Swarm Optimization – along with Russell C. Eberhart and Yuhui Shi) which I highly recommend (starting with the sentence “Science is a Swarm“!), as well as a more detailed description of it:

Swarm Intelligence (SI) is an innovative distributed intelligent paradigm for solving optimization problems that originally took its inspiration from biological examples of swarming, flocking and herding phenomena in vertebrates. Particle Swarm Optimization (PSO) incorporates swarming behaviors observed in flocks of birds, schools of fish, or swarms of bees, and even human social behavior, from which the idea emerged. Ant Colony Optimization (ACO) deals with artificial systems that are inspired by the foraging behavior of real ants, which are used to solve discrete optimization problems. Historically, the notion of finding useful patterns in data has been given a variety of names, including data mining, knowledge discovery, information extraction, etc. Data Mining is an analytic process designed to explore large amounts of data in search of consistent patterns and/or systematic relationships between variables, and then to validate the findings by applying the detected patterns to new subsets of data. In order to achieve this, data mining uses computational techniques from statistics, machine learning and pattern recognition. Data mining and swarm intelligence may seem not to have many properties in common. However, recent studies suggest that they can be used together for several real-world data mining problems, especially when other methods would be too expensive or difficult to implement. This book deals with the application of swarm intelligence methodologies in data mining. Addressing the various issues of swarm intelligence and data mining using different intelligent approaches is the novelty of this edited volume. This volume comprises 11 chapters, including an introductory chapter giving the fundamental definitions and some important research challenges. Chapters were selected on the basis of fundamental ideas/concepts rather than the thoroughness of techniques deployed.

The eleven chapters are organized as follows. In Chapter 1, Grosan et al. present the biological motivation and some of the theoretical concepts of swarm intelligence with an emphasis on particle swarm optimization and ant colony optimization algorithms. The basic data mining terminologies are explained and linked with some of the past and ongoing works using swarm intelligence techniques. Martens et al. in Chapter 2 introduce a new algorithm for classification, named AntMiner+, based on an artificial ant system with inherent self-organizing capabilities. AntMiner+ differs from the previously proposed AntMiner classification technique in three aspects. Firstly, AntMiner+ uses a MAX-MIN ant system, which is an improved version of the originally proposed ant system, yielding better performing classifiers. Secondly, the complexity of the environment in which the ants operate has substantially decreased. Finally, AntMiner+ leads to fewer and better performing rules. In Chapter 3, Jensen presents a feature selection mechanism based on an ant colony optimization algorithm to determine a minimal feature subset from a problem domain while retaining a suitably high accuracy in representing the original features. The proposed method is applied to two very different challenging tasks, namely web classification and complex systems monitoring. Galea and Shen in the fourth chapter present an ant colony optimization approach for the induction of fuzzy rules. Several ant colony optimization algorithms are run simultaneously, with each focusing on finding descriptive rules for a specific class. The final outcome is a fuzzy rulebase that has been evolved so that individual rules complement each other during the classification process. In the fifth chapter Tsang and Kwong present an ant colony based clustering model for intrusion detection. The proposed model improves existing ant-based clustering algorithms by incorporating some meta-heuristic principles.
To further improve the clustering solution and alleviate the curse of dimensionality in network connection data, four unsupervised feature extraction algorithms are also studied and evaluated. Omran et al. in the sixth chapter present particle swarm optimization algorithms for pattern recognition and image processing problems. First a clustering method that is based on PSO is discussed. The application of the proposed clustering algorithm to the problem of unsupervised classification and segmentation of images is investigated. Then PSO-based approaches that tackle the color image quantization and spectral unmixing problems are discussed.
In the seventh chapter Azzag et al. present a new model for data clustering, which is inspired by the self-assembly behavior of real ants. Real ants can build complex structures by connecting themselves to each other. It is shown in this paper that this behavior can be used to build a hierarchical tree-structured partitioning of the data according to the similarities between those data. The authors have also introduced an incremental version of the artificial ants algorithm. Kazemian et al. in the eighth chapter present a new swarm data clustering method based on Flowers Pollination by Artificial Bees (FPAB). FPAB does not require any parameter settings or any initial information such as the number of classes and the number of partitions on input data. Initially, in FPAB, bees move the pollens and pollinate them. Each pollen will grow in proportion to its garden flowers. Better growing will occur in better conditions. After some iterations, natural selection reduces the pollens and flowers and gardens of the same type of flowers will be formed. The prototypes of each garden are taken as the initial cluster centers for the Fuzzy C-Means algorithm, which is used to reduce obvious misclassification errors. In the next stage, the prototypes of gardens are assumed to be a single flower and FPAB is applied to them again. Palotai et al. in the ninth chapter propose an Alife architecture for news foraging. News foragers on the Internet were evolved by a simple internal selective algorithm: selection concerned the memory components, being finite in size and containing the list of most promising supplies. Foragers received reward for locating not-yet-found news and crawled by using value estimation. Foragers were allowed to multiply if they passed a given productivity threshold.
A particular property of this community is that there is no direct interaction (here, communication) amongst foragers that allowed us to study compartmentalization, assumed to be important for scalability, in a very clear form. Veenhuis and Koppen in the tenth chapter introduce a data clustering algorithm based on species clustering. It combines methods of particle swarm optimization and flock algorithms. A given set of data is interpreted as a multi-species swarm which wants to separate into single-species swarms, i.e., clusters. The data to be clustered are assigned to datoids which form a swarm on a two-dimensional plane. A datoid can be imagined as a bird carrying a piece of data on its back. While swarming, this swarm divides into sub-swarms moving over the plane and consisting of datoids carrying similar data. After swarming, these sub swarms of datoids can be grouped together as clusters. In the last chapter Yang et al. present a clustering ensemble model using ant colony algorithm with validity index and ART neural network. Clusterings are visually formed on the plane by ants walking, picking up or dropping down projected data objects with different probabilities. Adaptive Resonance Theory (ART) is employed to combine the clusterings produced by ant colonies with different moving speeds. We are very much grateful to the authors of this volume and to the reviewers for their tremendous service by critically reviewing the chapters. The editors would like to thank Dr. Thomas Ditzinger (Springer Engineering Inhouse Editor, Studies in Computational Intelligence Series), Professor Janusz Kacprzyk (Editor-in-Chief, Springer Studies in Computational Intelligence Series) and Ms. Heather King (Editorial Assistant, Springer Verlag, Heidelberg) for the editorial assistance and excellent cooperative collaboration to produce this important scientific work. 
We hope that the reader will share our excitement to present this volume on ‘Swarm Intelligence in Data Mining’ and will find it useful.

April, 2006
Ajith Abraham, Chung-Ang University, Seoul, Korea
Crina Grosan, Cluj-Napoca, Babes-Bolyai University, Romania
Vitorino Ramos, IST Technical University of Lisbon, Portugal

Subprime Banking Mess (via YouTube) – John Bird and John Fortune (the Long Johns) brilliantly, and accurately, describing the mindset of the investment banking community in this satirical interview.

Now, from the video above, do you remember the words Structured Investment Vehicle (S.I.V., or Conduits)? No? Okay, let’s now pass to a very brief and relatively more technical approach to it (as you will see, reality transcends fiction) – CNBC via YouTube:

Care for more?

[…] From the ice-age to the dole-age
There is but one concern
I have just discovered: Some girls are bigger than others
Some girls are bigger than others
Some girls’ mothers are bigger than
Other girls’ mothers

The Smiths – “Some Girls Are Bigger Than Others“, (Queen is dead) 1986. Song written by Morrissey and Johnny Marr.

 ________________  §  ________________


Key: “S:” = Show Synset (semantic) relations, “W:” = Show Word (lexical) relations / S: (n) dole (a share of money or food or clothing that has been charitably given) / S: (n) dole, pogy, pogey (money received from the state)

 ________________  §  ________________

(via William Gibson‘s blog)









Many man-made and naturally occurring phenomena (being inherently complex systems), including city sizes, incomes, word frequencies, internet links, social networks and earthquake magnitudes, are distributed according to a power-law distribution. One-over-f (1/f) noise, or pink noise, follows the same characteristic, as does Zipf’s law. Here is a possible long list, collected year after year since the 1910’s up to now – 2008 (which I highly recommend). From vacuum tubes to trading activities in world financial markets. We could even easily find them in Pollock’s paintings (recently here). Back in 2002, I addressed some of his painting features (mostly fractal) regarding the theme “Emergent Aesthetics in Autonomous Collective Systems“. Astonishingly, without having a clue what fractal dimensions would be, Jackson increased his fractal signatures year after year as he got older. Indeed, “Action painting”, as he called it, mainly using his body motion and a bucket, was largely enough.

Financial markets are indeed complex systems, even if they are far from being self-regulated. Soon after this recent black Monday, I made some notes here regarding self-organization and finance, over two weeks or so – their basic features, as I see them. However, what essentially brings me here today is: what happens to their frequency? Are phenomena like the current financial crisis frequent? Well, much of that depends on our knowledge of power-laws. The good news is that we know how many of them will occur in a very large time window; the bad news is that we don’t know when precisely they will occur. As with earthquakes (check this out). Does this impel us to do nothing? Not at all. We can’t do anything about earthquakes (at least for now), except prepare for them; however, we can establish some smart ground rules in order to prevent financial systems from collapsing in turmoil (that is, tune them into the precise physical regime). Power-laws are not only our wake-up call, they are a signature. For good or for worse. It seems that we are all playing, across the planet, a reversed El Farol Bar problem. If that’s somehow true, we should all ask new questions like: how frequently should we go to a bar?! In other words, should we all run to the banks now, asking for our deposits?

Any polynomial relationship that exhibits the property of scale invariance is a power-law. A power-law implies that small occurrences are extremely common, whereas large instances are extremely rare (similarly over maps and cities – if you have time, check out the growth of the city of Berlin over time). Like the large quantity of small dots plus the low frequency of large dots we may find in Jackson Pollock’s paintings. The same goes for Black Swans.
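Power-laws are also easy to play with numerically. A small sketch on synthetic data with hypothetical parameters: draw samples from p(x) ∝ x^(−α) by inverse-transform sampling, then recover the exponent with the standard maximum-likelihood estimator α̂ = 1 + n / Σ ln(x_i/x_min) (the Clauset-Shalizi-Newman form).

```python
import numpy as np

rng = np.random.default_rng(7)
alpha, xmin, n = 2.5, 1.0, 50_000   # assumed exponent, cutoff, sample size

# Inverse-transform sampling from a continuous power law p(x) ~ x^(-alpha)
u = rng.random(n)
x = xmin * (1.0 - u) ** (-1.0 / (alpha - 1.0))

# Maximum-likelihood estimate of the exponent
alpha_hat = 1.0 + n / np.sum(np.log(x / xmin))

# Small events dominate: only a tiny fraction exceed 10 * xmin,
# yet those rare large events carry the "Black Swan" weight.
print(round(alpha_hat, 2), round(np.mean(x > 10 * xmin), 4))
```

The same estimator applied to empirical data (earthquake magnitudes, city sizes, market returns) is the usual first test for the scale-invariant signature discussed above.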

Jackson Pollock in action – As reported by Nature magazine (Sept. 13, 2000), research suggests that the abstract works of artists such as Jackson Pollock are esthetically pleasing because they obey fractal rules similar to those found in the natural world. Pollock was known to have swung his paint back and forth like a pendulum, using a can on the end of a string with a hole punched in it. Researchers (Jensen) have found that if a swinging pendulum is hit with a hammer at just the right frequency (slightly less than the natural rhythm of the pendulum), its motion becomes chaotic and the paint traces out very convincing "fake Pollocks". However, the artist had no idea of fractals or chaotic motion, while the dot distribution over Pollock's paintings indeed follows a power-law.

A Black Swan is a highly improbable event that has three characteristics: it is unpredictable, it has incredible impact, and after it happens we invent a reason for it that makes it seem less probable. For those of you who have not read Taleb’s 2007 book (picture above), wondering what a Black Swan is, or asking yourselves where the name arises from, just take a quick look over here. Nassim started to write his book in 2003 and finished it in 2006. So, in what way does this “funny” distribution relate to financial markets? Well, for many of us now, it is surprising that he wrote this back then:

[…] Globalization creates interlocking fragility, while reducing volatility and giving the appearance of stability. In other words it creates devastating Black Swans. We have never lived before under the threat of a global collapse. Financial Institutions have been merging into a smaller number of very large banks. Almost all banks are interrelated. So the financial ecology is swelling into gigantic, incestuous, bureaucratic banks – when one fails, they all fall. The increased concentration among banks seems to have the effect of making financial crises less likely, but when they happen they are more global in scale and hit us very hard. We have moved from a diversified ecology of small banks, with varied lending policies, to a more homogeneous framework of firms that all resemble one another. True, we now have fewer failures, but when they occur ….I shiver at the thought. […]

Were these words a Black Swan within the Black Swan book itself? Rather not. He continues directly into something we now know and face, in precise context. Please note that this was written in the period 2003-2006:

[…] Banks hire dull people and train them to be even more dull. If they look conservative, it’s only because their loans go bust on rare, very rare occasions. But (…) bankers are not conservative at all. They are just phenomenally skilled at self-deception by burying the possibility of a large, devastating loss under the rug. […] The government-sponsored institution Fannie Mae, when I look at its risks, seems to be sitting on a barrel of dynamite, vulnerable to the slightest hiccup. But not to worry: their large staff of scientists deemed these events “unlikely”. […] 

What about the costs, and the memory of them? Yes, memory is indeed important while playing game-theory-like games, as is much of our daily reality, but in one or two generations it will probably be lost (I hope not), and once again it will all start over:

[…]  the real-estate collapse of the early 1990s in which the now defunct savings and loan industry required a taxpayer-funded bailout of more than half a trillion dollars. The Federal Reserve bank protected them at our expense: when “conservative” bankers make profits, they get the benefits; when they are hurt, we pay the costs.

Should we be surprised? In fact, this is not new. Somehow, the fallacy goes on (as George Monbiot tackled with extreme precision in The Guardian very recently – Sep. 30). Not only were we failing to react to these power-law consequences; many of the economic agents playing within the system's core itself were thinking of something else:

[…] Once again, recall the story of banks hiding explosive risks in their portfolios. It is not a good idea to trust corporations with matters such as rare events because the performance of these executives is not observable on a short-term basis, and they will game the system by showing good performance so they can get their yearly bonus. The Achilles’ heel of capitalism is that if you make corporations compete, it is sometimes the one that is most exposed to the negative Black Swan that will appear to be the most fit for survival.[…] As if we did not have enough problems, banks are now more vulnerable to the Black Swan and the ludic fallacy than ever before with “scientists” among their staff taking care of exposures. The giant J. P. Morgan put the entire world at risk by introducing in the nineties RiskMetrics, a phony method aiming at managing people’s risks, causing the generalized use of the ludic fallacy, and bringing Dr. Johns into power in place of the skeptical Fat Tonys. (a related method called “Value-at-Risk,” which relies on the quantitative measurement of risk, has been spreading.) […]

Starting with the distribution and frequency of these kinds of events (among many others), all of these words were written in the period 2003-2006. Since then, you can follow Taleb's war on "Value at Risk" over here, or here, which I highly recommend.
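The core of that war is easy to demonstrate numerically: a Gaussian-based VaR of the RiskMetrics flavor understates the loss quantile whenever returns are fat-tailed. A hedged sketch, simulating hypothetical Student-t returns (the t distribution and its parameters are illustrative assumptions, not market data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical fat-tailed daily returns: Student-t with 5 degrees of
# freedom, rescaled to roughly 1% daily volatility.
df = 5
returns = rng.standard_t(df, size=100_000) * 0.01 / np.sqrt(df / (df - 2))

# Gaussian (RiskMetrics-style) 99% one-day VaR: mu - 2.326 * sigma,
# where 2.326 is the standard normal 99th-percentile z-score.
mu, sigma = returns.mean(), returns.std()
var_gaussian = -(mu - 2.326 * sigma)

# Empirical 99% VaR: the actual 1st percentile of the simulated returns.
var_empirical = -np.quantile(returns, 0.01)

print(f"Gaussian VaR:  {var_gaussian:.4f}")
print(f"Empirical VaR: {var_empirical:.4f}")
print(f"Worst day:     {-returns.min():.4f}")
```

Under fat tails the empirical loss quantile exceeds the Gaussian estimate, and the single worst day dwarfs both: exactly the exposure that a normality-based risk number hides.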

Meanwhile, apart from what the markets are suffering and what complexity science may teach us, life goes on. Not necessarily as we supposed. As we know, reality many times excels fiction; one single video frame can be worth a thousand words. Right in your neighborhood. As you may see below, the consequences can be much worse than a tornado:

[Video no longer available: "Foreclosure Alley – SoCal Connected", originally posted with vodpod]



