This is a list of articles about chaos theory, complexity theory, and synergetics. If you want to see my real blogs, please go to http://www.0nothing1.blogspot.com/ (in Russian) and http://www.0dirtypurple1.blogspot.com/ (in English; some of my posts from Facebook).
Tuesday, September 10, 2013
Friday, January 25, 2013
Toward A New View of Law and Society: Complexity And Power In The Legal System
Caryn Devins and Stuart Kauffman
Contrary to its aim of promoting justice and equality before the law, in practice the American legal system increasingly favors moneyed and politically influential groups. The capture of Congress by campaign donors and lobbyists, accelerated by the Supreme Court's decision in Citizens United, is one prominent example, but this power dynamic is ubiquitous in political and legal institutions. This favoritism for the powerful can be best understood as deeply intertwined with, and even an inevitable result of, increasing complexity in legal institutions.
Corruption is a dynamic process. There is a symbiotic relationship between the legal system and powerful, regulated interests, which mutually benefit as they grow more complex and all-encompassing. The symbiosis between law and power is fractal in nature and can be found at all levels of hierarchy in the legal system.
First, laws enable new, partially unprestatable, strategy spaces for actors within the system. Creative actors seek adjacent-possible actions within the prevailing legal environment to achieve their desired ends. Naturally, these innovations produce unanticipated behaviors.
The unintended consequences are addressed through the creation of new laws, which again create partially unprestatable strategy spaces that are predictably further manipulated by powerful interests. This can create closed loops of mutually reinforcing laws and actions that are the basis of power structures.
Ironically, this self-defeating cycle ensures both the defeat of lawmakers' intentions and the empowerment of the lawmakers. When regulated entities creatively evade the intent of legislation, it should represent a failure on the part of lawmakers. Instead, it empowers them to draft even more laws to remedy the defects of the old ones.
For example, drug prohibition laws empower police to intervene in drug trafficking networks. Increasing police intervention, however, raises the risk of selling drugs and consequently the price. This attracts more drug dealers and entices them to sell even more drugs. Even worse, prohibition spurs the development of new, dangerous compounds that evade existing laws, as well as more potent, concentrated forms of existing drugs for easier concealment and transport. These new societal problems alarm the community and inspire the passage of even harsher laws.
Police authority (and power derived from it) and prisons flourish with the drug trade in a mutually dependent relationship. As Milton Friedman once quipped, "If you look at the drug war from a purely economic point of view, the role of the government is to protect the drug cartel. That's literally true."
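The loop described above (enforcement raises risk, risk raises price, higher margins draw in sellers, a bigger trade invites more enforcement) can be caricatured as a tiny iterated model. A minimal sketch, with all coefficients invented purely for illustration; nothing below comes from the essay or from any empirical study:

# Toy positive-feedback loop: enforcement -> price -> supply -> enforcement.
# Coefficients are arbitrary illustrative choices, not estimates.

def step(enforcement: float, dealers: float):
    price = 1.0 + 0.8 * enforcement      # risk premium raises street price
    dealers += 0.5 * (price - 1.0)       # high margins attract new sellers
    enforcement += 0.3 * dealers         # a bigger trade invites more policing
    return enforcement, price, dealers

e, d = 0.1, 1.0
for year in range(1, 11):
    e, p, d = step(e, d)
    print(f"year {year:2d}: enforcement={e:7.2f} price={p:6.2f} dealers={d:7.2f}")

# All three variables grow together: regulator and regulated expand in
# lockstep, which is the mutually dependent relationship the essay describes.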
Second, the positive feedback loop between regulator and regulated reinforces itself at a systemic level as vast networks of laws generate increasing legal complexity. This emergent complexity creates its own partially unprestatable strategy spaces that benefit knowledgeable, repeat actors over their less sophisticated counterparts.
During litigation, for example, parties with deep pockets exploit various laws to bury their opponents in discovery and file flurries of pre-trial motions to force dismissal of the suit or a favorable settlement. Large corporations also often prefer complex regulatory schemes because they shut out potential competitors by raising the barriers to entry. While small farmers struggle to comply with extensive FDA, EPA and USDA regulations, for example, large agribusinesses hire armies of attorneys to navigate these regulations. Due to its increasing complexity, legal regulation often empowers the very same entities that it intends to disempower.
Third, this co-evolution of law and action does far more than produce partially unprestatable and, hence, exploitable strategy spaces for regulated entities. Crucially, it enables moneyed interests to influence the substance of laws, their implementation or positions of power within the legal system. The establishment of government institutions in order to regulate economic activity, for example, creates the opportunity for corporate interests to infiltrate regulatory bodies and thus "capture" these institutions.
This capture may be overt and intentional, or arise naturally from the incestuous relationship created by the "revolving door" between industry and regulatory bodies. Either way, a cursory examination of American administrative agencies, regulatory bodies and even presidential cabinets and Congress shows that both Democratic and Republican administrations have been thoroughly infiltrated by industry-sympathetic technocrats. Perhaps this corruption is a feature, not a bug. Money loves power and self-reinforcing loops of legal regulation and their enabled strategy spaces concentrate both.
This model of the evolution of law as a co-evolutionary process challenges the prevailing view that policy makers can control legal outcomes. The idea that we can control outcomes assumes both that the effects of our actions are knowable beforehand by those seeking legal control and that those actions alone cause whatever outcomes are produced. But the legal system exists in an unbounded state space where the possibilities enabled by legal institutions cannot be predicted ahead of time.
Laws that were created for specific reasons can be used for myriad other purposes based on unprestatable societal changes, which then influence the directionality of the laws in richly cross-connected and self-reinforcing feedback loops. As the legal system expands its diversity, specialization and redundancy, increased complexity benefits groups best able to exploit its burgeoning ecological niches.
The language of cause and effect must be replaced with enablement of partially unprestatable strategy spaces that jointly form self-reinforcing power structures.
The use of law to regulate social behavior can radically alter the power structures embedded within society. We should carefully consider the possibility that, as the legal system covers a greater breadth of human conduct, the laws serve as adjacent-possible niches for the benefit of the powerful and to the detriment of the powerless.
Caryn Devins is a third-year student and Law Review member at Duke Law School.
Why has the slow-burning war to redivide the world still not gone global?
I have already written that the war to redivide the world is already under way. A slow-burning one, so far in the phase of a series of local, colonial-style wars. But it is a world war, already almost global in reach, on two continents.
If anyone remembers history, the Second World War did not begin in Europe either, but with Italy's invasion of Abyssinia in 1935 and Japan's invasion of China in 1937.
Why has the war still not escalated into a global nuclear conflict?
Well, first, because of fear of nuclear weapons and of the unacceptable damage of a retaliatory strike. There is as yet no means of reliably destroying an adversary's retaliatory forces or of parrying a retaliatory strike with missile and air defenses. The United States is working hard on weapons to destroy an enemy's second-strike arsenal and on missile defense, but Russian distances keep such disarming weapons from being effective: it is not yet possible to guarantee the destruction of our intercontinental missiles, before they launch, with weapons fired from any point on Earth. Nuclear weapons, as far as is known, have not been deployed in space. Missile defense is still imperfect, and the means of penetrating it still exceed its capabilities.
And second, most importantly, the pre-war geopolitical configuration has not yet taken shape.
In geopolitics the most stable configuration is a triangle of forces: three sides to a potential conflict. Each may go to war with either of the others, but if war breaks out, the winner is the side of the three that stays out of the fight, or enters it last.
When the third side disappears, either merging with one of the others or collapsing from internal conflict, the geopolitical configuration becomes unstable. Of the two remaining sides, the one that prepares first and strikes first wins.
The Second World War went global when two of the three forces merged into one.
Today, too, there are three sides:
1) NATO, together with the countries attached to it through mutual-assistance treaties with the United States without being NATO members (satellites such as Japan or South Korea).
2) China.
3) And Russia, with the powerful nuclear arsenal it inherited from the USSR, quite capable of destroying the United States at the push of a single button.
When only two of the three sides remain, global war will become inevitable.
Two can remain in two ways: either two sides conclude an alliance, or one of the sides collapses.
Whichever of the two remaining sides prepares first will strike. Suddenly. A pretext will be found.
A note: I was once taught what surprise means in a modern nuclear war.
Strategic surprise cannot be achieved. Everyone (the elites, not the people) will know that war is coming. The pre-war struggle will be waged for tactical surprise. If one of the potential adversaries suspects that the enemy is preparing, it will immediately begin mobilization. Under modern conditions mobilization means not only, and not primarily, calling up reservists, but above all dispersing the urban population into the countryside; dispersing arsenals, aircraft, missiles, ships, troop contingents, and everything else; and bringing weapons to combat readiness.
Whoever completes these measures first will push the button. That is tactical surprise, and it will yield a strategic advantage. The winner will be whoever preserves the greater human, industrial, and military potential after the exchange of nuclear strikes.
For now, military alliances are not fixed; they are not even forming. The polygon has not yet collapsed into a "two-gon". Neither China, nor Russia, nor the United States has closed off the possibility of a military alliance with either of the other two powers. What is under way among them is strategic maneuvering, aimed at weakening each player as much as possible (it is in this connection that the United States pursues a policy of breaking Russia apart) and at depriving each of any incentive to ally with another side, while peeling away as many of each side's allies, real and potential, as possible. Meanwhile all three powers are actively building up their strategic potential, hoping to break the triangle by achieving absolute military superiority for one side, though that is hardly attainable.
And so China, the United States, and Russia each keep the door open to an alliance with either other side of the triangle.
As long as this situation holds, the geopolitical balance will not change and there will be no global war. The struggle to redivide the world will be waged through local conflicts, color revolutions, and coups, with the possible subsequent invasion of a chaos-stricken country by one of the powers, to wipe away a child's tear with the help of the winged warriors of good.
Which is exactly what we observe.
And under present conditions it is in Russia's interest for the geopolitical triangle to last as long as possible: to convincingly frighten each side with the possibility of an alliance with another, while concluding no such alliance with anyone.
So far, that seems to be how they are acting.
Time will tell where the road bends.
Thursday, January 10, 2013
Sci-Finance: The Great Cybernetic Experiment, Part 2
Before you start to think this all sounds too far-fetched, let’s connect some of these concepts back to one of the most famous descriptions of the market: a beauty contest.
The Keynesian beauty contest is the view that much of investment is driven by expectations about what other investors think, rather than by expectations about the fundamental profitability of a particular investment. John Maynard Keynes, the most influential economist of the 20th century, believed that investment is volatile because it is determined by the herd-like "animal spirits" of investors. Keynes observed that investment strategies resembled a contest in a London newspaper of his day that featured pictures of a hundred or so young women. The winner of the contest was the newspaper reader who submitted the list of the five women that most closely matched the consensus of all other contest entries. A naïve entrant would rely on his or her own concepts of beauty to establish rankings; but since the prize goes to whoever matches the consensus, a shrewder entrant would instead pick the faces the other entrants seem most likely to choose. Consequently, each contest entrant would try to second-guess the other entrants' reactions, and then sophisticated entrants would attempt to second-guess the other entrants' second-guessing. And so on. Instead of judging the beauty of people, substitute alternative investments. Each potential entrant (investor) now ignores fundamental value (i.e., expected profitability based on expected revenues and costs), instead trying to predict "what the market will do" (e.g., from news, charts, etc.). The results are (a) that investment is extremely volatile because fundamental value becomes irrelevant, and (b) that the most successful investors are either lucky or masters at understanding mob psychology.
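Keynes's contest has a standard formalization in experimental economics, the "p-beauty contest": each player names a number between 0 and 100, and the winner is whoever comes closest to p times the average (p = 2/3 is the classic choice). The iterated second-guessing above is exactly level-k reasoning, sketched below; the game, the parameter p, and the level-0 assumption are standard in the literature, not taken from this article:

# Level-k reasoning in the p-beauty contest (p = 2/3).
# A level-0 player guesses the midpoint, 50; a level-k player best-responds
# to a population assumed to consist of level-(k-1) players.

P = 2 / 3

def level_k_guess(k: int, level0_mean: float = 50.0) -> float:
    """Guess of a level-k player who assumes everyone else is level k-1."""
    guess = level0_mean
    for _ in range(k):
        guess *= P  # best response to the assumed average of the others
    return guess

for k in range(6):
    print(f"level-{k} guess: {level_k_guess(k):6.2f}")

# Guesses descend 50.00, 33.33, 22.22, ... toward the Nash equilibrium at 0:
# once everyone second-guesses everyone, the "fundamental" midpoint becomes
# irrelevant, which is the point of the beauty-contest analogy.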
Peak Prosperity: Daily Digest 1/5
Sci-Finance: The Great Cybernetic Experiment
Right off the bat, the first thing we should recognize is the following: big banking and finance have fully merged with cutting-edge math, science, and technology; that is the very reason those "who wield more power than any potentate in the history of the world" are getting their PhDs from MIT and not your typical business school.
Related:
Complexity and the Emergent Market
Pandora's Black Box
A.I.: The New God of Economics, Banking, and Finance
Peak Prosperity: Daily Digest 12/29
Thursday, December 20, 2012
UNDERSTANDING IS A POOR SUBSTITUTE FOR CONVEXITY (ANTIFRAGILITY)
Nassim Nicholas Taleb
NASSIM NICHOLAS TALEB, essayist and former mathematical trader, is Distinguished Professor of Risk Engineering at NYU's Polytechnic Institute. He is the author of the international bestseller The Black Swan and the recently published Antifragile: Things That Gain from Disorder. (US: Random House; UK: Penguin Press)
Nassim Nicholas Taleb's Edge Bio
Something central, very central, is missing in historical accounts of scientific and technological discovery. The discourse and controversies focus on the role of luck as opposed to teleological programs (from telos, "aim"), that is, ones that rely on pre-set direction from formal science. This is a faux-debate: luck cannot lead to formal research policies; one cannot systematize, formalize, and program randomness. The driver is neither luck nor direction, but must be in the asymmetry (or convexity) of payoffs, a simple mathematical property that has lain hidden from the discourse, and the understanding of which can lead to precise research principles and protocols.
MISSING THE ASYMMETRY
The luck versus knowledge story is as follows. Ironically, we have vastly more evidence for results linked to luck than for those coming from the teleological, outside physics, even after discounting for sensationalism. In some opaque and nonlinear fields, like medicine or engineering, the teleological exceptions are in the minority, such as a small number of designer drugs. This puts us in the contradictory position that we largely got to where we are thanks to undirected chance, yet we build research programs going forward based on direction and narratives. And, what is worse, we are fully conscious of the inconsistency.
The point we will be making here is that logically, neither trial and error nor "chance" and serendipity can be behind the gains in technology and empirical science attributed to them. By definition chance cannot lead to long term gains (it would no longer be chance); trial and error cannot be unconditionally effective: errors cause planes to crash, buildings to collapse, and knowledge to regress.
The beneficial properties have to reside in the type of exposure, that is, the payoff function and not in the "luck" part: there needs to be a significant asymmetry between the gains (as they need to be large) and the errors (small or harmless), and it is from such asymmetry that luck and trial and error can produce results. The general mathematical property of this asymmetry is convexity (which is explained in Figure 1); functions with larger gains than losses are nonlinear-convex and resemble financial options. Critically, convex payoffs benefit from uncertainty and disorder. The nonlinear properties of the payoff function, that is, convexity, allow us to formulate rational and rigorous research policies, and ones that allow the harvesting of randomness.
Figure 1 - More Gain than Pain from a Random Event. The performance curves outward, hence looks "convex". Wherever such asymmetry prevails, we can call the position convex; otherwise we are in a concave position. The implication is that you are harmed much less by an error (or a variation) than you can benefit from it; you would welcome uncertainty in the long run.
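The asymmetry in Figure 1 is Jensen's inequality at work: for a convex payoff F, the average payoff E[F(x)] exceeds the payoff at the average, F(E[x]), and the gap widens with the spread of x. A quick numerical check, using the option-like payoff F(x) = max(x, 0) from footnote iv; the Gaussian distribution is my choice for the sketch:

# Jensen's inequality for the convex payoff F(x) = max(x, 0):
# E[F(x)] >= F(E[x]); the gap (the convexity bias) grows with volatility.
import random

def convexity_bias(sigma: float, n: int = 200_000) -> float:
    xs = [random.gauss(0.0, sigma) for _ in range(n)]
    e_fx = sum(max(x, 0.0) for x in xs) / n   # E[F(x)]
    f_ex = max(sum(xs) / n, 0.0)              # F(E[x]), ~0 for a centered x
    return e_fx - f_ex

random.seed(0)
for sigma in (0.5, 1.0, 2.0, 4.0):
    print(f"sigma = {sigma}: convexity bias ~ {convexity_bias(sigma):.3f}")

# For a centered Gaussian the bias is sigma / sqrt(2*pi), so it scales
# linearly with volatility: more disorder, more gain, never more harm.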
OPAQUE SYSTEMS AND OPTIONALITY
Further, it is in complex systems, ones in which we have little visibility of the chains of cause and consequence, that tinkering, bricolage, or similar variations of trial and error have been shown to vastly outperform the teleological; it is nature's modus operandi. But tinkering needs to be convex; that is imperative. Take the most opaque of all, cooking, which relies entirely on the heuristics of trial and error, as it has not been possible for us to design a dish directly from chemical equations or reverse-engineer a taste from nutritional labels. We take hummus, add an ingredient, say a spice, taste to see if there is an improvement from the complex interaction, and keep the new recipe if we like it or discard the change if we do not. Critically, we have the option, not the obligation, to keep the result, which allows us to retain the upper bound and be unaffected by adverse outcomes.
This "optionality" is what is behind the convexity of research outcomes. An option allows its user to get more upside than downside as he can select among the results what fits him and forget about the rest (he has the option, not the obligation). Hence our understanding of optionality can be extended to research programs — this discussion is motivated by the fact that the author spent most of his adult life as an option trader. If we translate François Jacob's idea into these terms, evolution is a convex function of stressors and errors —genetic mutations come at no cost and are retained only if they are an improvement. So are the ancestral heuristics and rules of thumbs embedded in society; formed like recipes by continuously taking the upper-bound of "what works". But unlike nature where choices are made in an automatic way via survival, human optionality requires the exercise of rational choice to ratchet up to something better than what precedes it —and, alas, humans have mental biases and cultural hindrances that nature doesn't have. Optionality frees us from the straightjacket of direction, predictions, plans, and narratives. (To use a metaphor from information theory, if you are going to a vacation resort offering you more options, you can predict your activities by asking a smaller number of questions ahead of time.)
While getting a better recipe for hummus will not change the world, some results offer abnormally large benefits from discovery; consider penicillin or chemotherapy or potential clean technologies and similar high-impact events ("Black Swans"). The discovery of the first antimicrobial drugs came on the heels of hundreds of systematic (convex) trials in the 1920s by such people as Domagk, whose research program consisted in trying out dyes without much understanding of the biological process behind the results. And unlike an explicit financial option, for which the buyer pays a fee to a seller and which therefore tends to be priced so as to prevent undue profits, the benefits from research are not zero-sum.
THINGS LOVE UNCERTAINTY
What allows us to map a research funding and investment methodology is a collection of mathematical properties that we have known heuristically since at least the 1700s and explicitly since around 1900 (with the results of Johan Jensen and Louis Bachelier). These properties identify the inevitability of gains from convexity and the counterintuitive benefit of uncertainty [ii, iii]. Let us call the "convexity bias" the difference between the results of trial and error in which gains and harm are equal (linear) and one in which gains and harm are asymmetric (to repeat, a convex payoff function). The central and useful properties are that (a) the more convex the payoff function, expressed as the difference between potential benefits and harm, the larger the bias; and (b) the more volatile the environment, the larger the bias. This last property is missed because humans have a propensity to hate uncertainty.
Antifragile is the name this author gave (for lack of a better one) to the broad class of phenomena endowed with such a convexity bias, as they gain from the "disorder cluster": volatility, uncertainty, disturbances, randomness, and stressors. The antifragile is the exact opposite of the fragile, which can be defined as hating disorder. A coffee cup is fragile because it wants tranquility and a low-volatility environment; the antifragile wants the opposite: high volatility increases its welfare. This latter attribute, gaining from uncertainty, favors optionality over the teleological in an opaque system, as it can be shown that the teleological is hurt under increased uncertainty. The point can be made clear with the following. When you inject uncertainty and errors into an airplane ride (the fragile or concave case), the result is worsened, as errors invariably lead to plane delays and increased costs, not counting a potential plane crash. The same goes for bank portfolios and other fragile constructs. But if you inject uncertainty into a convex exposure such as some types of research, the result improves, since uncertainty increases the upside but not the downside. This differential maps the way. The convexity bias, unlike serendipity et al., can be defined, formalized, identified, and even on occasion measured scientifically; it can lead to a formal policy of decision making under uncertainty and can classify strategies based on their ex ante predicted efficiency and projected success, as we will do next with the following seven rules.
Figure 2 - The Antifragility Edge (Convexity Bias). A random simulation shows the difference between (a) a process with convex trial and error (antifragile); (b) a process of pure knowledge devoid of convex tinkering (knowledge based); and (c) a process of nonconvex trial and error, where errors are equal in harm and gains (pure chance). As we can see, there are domains in which rational and convex tinkering dwarfs the effect of pure knowledge [iv].
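Figure 2's comparison can be mimicked in a few lines. Below, convex tinkering keeps only the trials that improve matters (payoff max(x, 0) per trial), pure chance takes trials as they come (payoff x), and pure knowledge earns a small deterministic gain per period; the distribution, the knowledge rate, and the horizon are assumptions invented for this sketch, not Taleb's simulation:

# A rough analogue of Figure 2: cumulative gains under three regimes.
import random

random.seed(1)
TRIALS = 1000
KNOWLEDGE_RATE = 0.05   # assumed steady teleological progress per period

convex = chance = knowledge = 0.0
for _ in range(TRIALS):
    x = random.gauss(0.0, 1.0)   # outcome of one trial
    convex += max(x, 0.0)        # optionality: keep wins, discard losses
    chance += x                  # symmetric trial and error: keep everything
    knowledge += KNOWLEDGE_RATE  # no variance, but no optionality either

print(f"convex tinkering: {convex:8.1f}")
print(f"pure knowledge  : {knowledge:8.1f}")
print(f"pure chance     : {chance:8.1f}")

# Convex tinkering compounds at about 0.4 per trial, pure chance drifts
# around zero, and whether knowledge keeps up depends entirely on the
# assumed rate; with enough volatility, the convexity bias dominates.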
SEVEN RULES OF ANTIFRAGILITY (CONVEXITY) IN RESEARCH
Next I outline the rules. In parentheses are fancier words that link the idea to option theory.
1) Convexity is easier to attain than knowledge (in the technical jargon, the "long-gamma" property): As we saw in Figure 2, under some level of uncertainty, we benefit more from improving the payoff function than from knowledge about what exactly we are looking for. Convexity can be increased by lowering costs per unit of trial (to improve the downside).
2) A "1/N" strategy is almost always best with convex strategies (the dispersion property): following point (1) and reducing the costs per attempt, compensate by multiplying the number of trials and allocating 1/N of the potential investment across N investments, and make N as large as possible. This allows us to minimize the probability of missing rather than maximize profits should one have a win, as the latter teleological strategy lowers the probability of a win. A large exposure to a single trial has lower expected return than a portfolio of small trials.
Further, research payoffs have "fat tails", with results in the "tails" of the distribution dominating the properties; the bulk of the gains come from the rare event, "Black Swan": 1 in 1000 trials can lead to 50% of the total contributions—similar to size of companies (50% of capitalization often comes from 1 in 1000 companies), bestsellers (think Harry Potter), or wealth. And critically we don't know the winner ahead of time.
Figure 3 - Fat Tails: Small Probability, High-Impact Payoffs. The horizontal axis can be the payoff over time, or cross-sectional over many simultaneous trials.
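The 1/N logic is easy to check numerically under the fat-tailed payoffs just described: if one trial in a thousand is a huge winner and the rest pay nothing, a single concentrated bet almost always misses, while spreading the budget over many small trials makes catching at least one winner likely. The 1-in-1000 figure comes from the text; everything else is illustrative:

# 1/N under fat tails: chance of catching at least one rare big win.
P_WIN = 1 / 1000   # one trial in a thousand is a "Black Swan" winner

def p_at_least_one_hit(n_trials: int) -> float:
    return 1.0 - (1.0 - P_WIN) ** n_trials

for n in (1, 10, 100, 1000, 5000):
    print(f"N = {n:5d}: P(at least one winner) = {p_at_least_one_hit(n):.3f}")

# N=1 gives 0.001, N=1000 gives ~0.632, N=5000 gives ~0.993. When the
# upside is explosive and the winner is unknowable in advance, maximizing
# the chance of one hit beats concentrating everything on a single trial.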
3) Serial optionality (the cliquet property). A rigid business plan gets one locked into a preset, invariant policy, like a highway without exits, hence devoid of optionality. One needs the ability to change opportunistically and "reset" the option for a new option, ratcheting up and locking in a higher state. To translate into practical terms, plans need to (1) stay flexible, with frequent ways out, and, counter to intuition, (2) be very short term, in order to properly capture the long term. Mathematically, five sequential one-year options are vastly more valuable than a single five-year option.
This explains why matters such as strategic planning have never borne fruit in empirical reality: planning has the side effect of restricting optionality. It also explains why top-down centralized decisions tend to fail.
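The claim that five sequential one-year options beat one five-year option can be checked on a toy random walk: the "cliquet" resets its strike each year and banks max(S_t - S_{t-1}, 0), while the rigid plan holds a single option to year five. The Gaussian walk and unit volatility are assumptions made for the sketch:

# Serial (resettable) options vs. one long-dated option on a random walk.
import random

random.seed(2)
N_PATHS, YEARS, SIGMA = 100_000, 5, 1.0

cliquet_total = single_total = 0.0
for _ in range(N_PATHS):
    s, cliquet = 0.0, 0.0
    for _ in range(YEARS):
        prev = s
        s += random.gauss(0.0, SIGMA)   # one year of the walk
        cliquet += max(s - prev, 0.0)   # reset the strike every year
    cliquet_total += cliquet
    single_total += max(s, 0.0)         # one five-year option, strike fixed at 0

print(f"five one-year options: {cliquet_total / N_PATHS:.3f}")
print(f"one five-year option : {single_total / N_PATHS:.3f}")

# Expected values are 5*sigma/sqrt(2*pi) vs sigma*sqrt(5)/sqrt(2*pi), so the
# resettable sequence is worth about sqrt(5) ~ 2.24 times the rigid option:
# frequent exits and resets capture volatility that a preset plan wastes.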
4) Nonnarrative research (the optionality property). Technologists in California "harvesting Black Swans" tend to invest in agents rather than in plans and narratives that look good on paper: agents who know how to use the option by opportunistically switching and ratcheting up. Typically people try six or seven technological ventures before reaching their destination. Note the failure of "strategic planning" to compete with convexity.
5) Theory is born from (convex) practice more often than the reverse (the nonteleological property). Textbooks tend to show technology flowing from science, when it is more often the opposite case, dubbed the "lecturing birds on how to fly" effect [v, vi]. In such developments as the industrial revolution (and more generally outside linear domains such as physics), there is very little historical evidence for the contribution of fundamental research compared to that of tinkering by hobbyists [vii]. Figure 2 shows, more technically, how in a random process characterized by "skills" and "luck", and some opacity, antifragility (the convexity bias) can be shown to severely outperform "skills". And convexity is missed in histories of technologies, replaced with ex post narratives.
6) Premium for simplicity (the less-is-more property). It took at least five millennia between the invention of the wheel and the innovation of putting wheels under suitcases. It is sometimes the simplest technologies that are ignored. In practice there is no premium for complexification; in academia there is. Looking for rationalizations, narratives, and theories invites complexity. In an opaque operation, it is impossible to figure out ex ante what knowledge is required to navigate.
7) Better cataloguing of negative results (the via negativa property). Optionality works by negative information, reducing the space of what we do by knowledge of what does not work. For that we need to pay for negative results.
Some of the critics of these ideas over the past two decades have countered that this proposal resembles buying "lottery tickets". Lottery tickets are patently overpriced, reflecting the "long shot bias" by which agents, according to economists, overpay for long odds. This comparison, it turns out, is fallacious, as the effect of the long shot bias is limited to artificial setups: lotteries are sterilized randomness, constructed and sold by humans, with a known upper bound. This author calls such a problem the "ludic fallacy". Research has explosive payoffs, with an unknown upper bound: a "free option", literally. And we have evidence (from the performance of banks) that in the real world, betting against long shots does not pay, which makes research a form of reverse banking [viii].
i Jacob, F., 1977, "Evolution and Tinkering." Science, 196(4295): 1161-1166.
ii Bachelier, L., 1900, Théorie de la spéculation, Gauthier-Villars.
iii Jensen, J.L.W.V., 1906, "Sur les fonctions convexes et les inégalités entre les valeurs moyennes." Acta Mathematica 30.
iv Take F[x] = Max[x, 0], where x is the outcome of trial and error and F is the payoff. By Jensen's inequality, ∫ F(x) p(x) dx ≥ F(∫ x p(x) dx); the difference between the two sides is the convexity bias, which increases with uncertainty.
v Taleb, N., and Douady, R., 2013, "Mathematical Definition and Mapping of (Anti)Fragility," Quantitative Finance.
vi Mokyr, Joel, 2002, The Gifts of Athena: Historical Origins of the Knowledge Economy. Princeton, N.J.: Princeton University Press.
vii Kealey, T., 1996, The Economic Laws of Scientific Research. London: Macmillan.
viii Briys, E., Nock, R., & Magdalou, B., 2012, "Convexity and Conflation Biases as Bregman Divergences: A Note on Taleb's Antifragile."
http://www.edge.org/
Death by Algorithm: West Point Code Shows Which Terrorists Should Disappear First
Paulo Shakarian has an algorithm that might one day help dismantle al-Qaida — or at least one of its lesser affiliates. It’s an algorithm that identifies which people in a terror network really matter, like the mid-level players who connect smaller cells with the larger militant group. Remove those people, either by drone or by capture, and power and authority concentrate in the hands of one man. Remove that man, and you’ve broken the organization.
The U.S. military and intelligence communities like to congratulate themselves whenever they’ve taken out a terrorist leader, whether it’s Osama bin Laden or Abu Mussab al-Zarqawi, the bloodthirsty chief of al-Qaida in Iraq. Shakarian, a professor at West Point’s Network Science Center who served two tours as an intelligence officer in Iraq, saw first-hand just how quickly those militant networks regrew new heads when the old ones were chopped off. It became one of the inspirations for him and his colleagues at West Point to craft an algorithm that could truly target a terror group’s weak points.
“I remember these special forces guys used to brag about how great they were at targeting leaders. And I thought, ‘Oh yeah, targeting leaders of a decentralized organization. Real helpful,’” Shakarian tells Danger Room. Zarqawi’s group, for instance, only grew more lethal after his death. “So I thought: Maybe we shouldn’t be so interested in individual leaders, but in how whole organizations regenerate their leadership.”
These days, American counterterror policy is even more reliant on taking out individual militants. How exactly those individuals are picked for drone elimination is the matter of intense debate and speculation. The White House reportedly maintains a “matrix” of the most dangerous militants. Social-network analysis — the science of determining the connections between people — almost certainly plays a role in where those militants appear on that matrix.
It’s clearly an imperfect process. Hundreds of civilians have been killed in the drone strikes, along with thousands of militants. And while the core of al-Qaida is clearly weakened, Obama administration officials will only talk in the vaguest terms about when the war against the terror group might some day come to an end.
In a paper to be presented later this month before the Academy of Science and Engineering’s International Conference on Social Informatics, Shakarian and his West Point colleagues argue for a new way of using social-network analysis to target militants. Forget going after the leader of an extremist group, they say. At least right away.
“If you arrest that guy, the number of connections everyone else has becomes more similar. They all become leaders. You force that terror group to become more decentralized. You might be making it harder to defeat these organizations,” Shakarian says.
This chart shows how West Point’s “GREEDY_FRAGILE” algorithm renders a network brittle by removing relatively few nodes.
The second illustration depicts a terror network as the algorithm centralizes it, making it easier to break. Photos: West Point
Instead, counterterrorists should work to remove militant lieutenants in such a way that terror leaders actually become more central to their organizations. That’s because a more centralized network is a more fragile one. And a fragile network can ultimately be smashed, once and for all.
The West Point team, which includes professors Devon Callahan, Jeff Nielsen, and Tony Johnson, wrote up a simple (less than 30-line) algorithm in Python they named GREEDY_FRAGILE. It looks for nodes that can be taken out to “maximize network-wide centrality”, in other words, to concentrate connectivity in the terror leader. The professors tested GREEDY_FRAGILE against five data sets: the first is the social network of the al-Qaida members involved in the 1998 bombing of the U.S. embassy in Dar es Salaam; the other four are derived from real-world terror groups, but anonymized for academic use.
“In each of the five real-world terrorist networks that we examined, removal of only 12% of nodes can increase the network-wide centrality between 17% and 45%,” the West Point authors note in their paper. In other words, taking out just a few mid-level players makes the whole organization much, much more fragile.
Interestingly, GREEDY_FRAGILE works even when the exact shape of the network is unknown — or when certain nodes can’t be targeted, for political or intelligence reasons. In other words, it takes into account some real-world complications that counterterrorists might face.
Now, this is just a lab experiment. No actual terrorists were harmed in the writing of this paper. The algorithm only looks at “degree” centrality — the number of ties a node has. It doesn’t examine metrics like network “closeness,” which finds the shortest possible path between two nodes. Nor does it take into account the different roles played by different nodes — financier, propagandist, bomb-maker. That’s why the work is funded by the Army Research Office, which handles the service’s most basic R&D efforts.
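The paper's actual code is not shown in the article, but the greedy idea it describes (repeatedly remove the node whose removal most increases network-wide degree centralization) can be sketched with networkx. Freeman degree centralization is used as the objective here; that choice, the stand-in graph, and all other details below are a reconstruction for illustration, not the West Point implementation:

# A sketch of the GREEDY_FRAGILE idea: greedily remove nodes so that degree
# centrality concentrates in the remaining hub(s), making the network brittle.
import networkx as nx

def degree_centralization(g: nx.Graph) -> float:
    """Freeman degree centralization: 0 for a regular graph, 1 for a star."""
    n = g.number_of_nodes()
    if n < 3:
        return 0.0
    c = nx.degree_centrality(g)
    c_max = max(c.values())
    return sum(c_max - v for v in c.values()) / (n - 2)

def greedy_fragile(g: nx.Graph, budget: int, protected=frozenset()):
    """Remove up to `budget` unprotected nodes, each time choosing the one
    whose removal most increases network-wide centralization."""
    g = g.copy()
    removed = []
    for _ in range(budget):
        candidates = [v for v in g.nodes if v not in protected]
        best = max(candidates,
                   key=lambda v: degree_centralization(
                       g.subgraph(set(g.nodes) - {v})))
        g.remove_node(best)
        removed.append(best)
    return removed, g

g0 = nx.les_miserables_graph()   # stand-in graph; the paper used terror networks
removed, g1 = greedy_fragile(g0, budget=5)
print("removed nodes:", removed)
print("centralization before:", round(degree_centralization(g0), 3))
print("centralization after :", round(degree_centralization(g1), 3))

# The `protected` set mirrors the paper's point that some nodes cannot be
# targeted for political or intelligence reasons; they are simply skipped.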
What’s more, the authors stress that their network-breaking techniques might not be a good fit for every counterterror plan. “It may be desirable to keep certain terrorist or insurgent leaders in place to restrain certain, more radical elements of their organization,” they write.
In fact, the authors strongly hint that they’re not necessarily on board with the Obama administration’s kill-don’t-capture approach to handling terror networks.
“We would like to note that the targeting of individuals in a terrorist or insurgent network does not necessarily mean that they should be killed,” Shakarian and his colleagues write. “In fact, for ‘shaping operations’ as the ones described in this paper, the killing of certain individuals in the network may be counter-productive. This is due to the fact that the capture of individuals who are likely emergent leaders may provide further intelligence on the organization in question.”
That sort of intelligence may suddenly be at a premium again. From the Pentagon chief on down, the U.S. is increasingly worried about al-Qaida’s spread into unfamiliar regions like Mali and its association with new, shadowy militant groups in Libya. GREEDY_FRAGILE, if it works like Shakarian hopes, might show the counterterrorists which militants to target — and which so-called leaders to leave alone. For now.
http://www.wired.com/dangerroom/2012/12/paulos-alogrithm/