Thursday, December 20, 2012

UNDERSTANDING IS A POOR SUBSTITUTE FOR CONVEXITY (ANTIFRAGILITY)

Death by Algorithm: West Point Code Shows Which Terrorists Should Disappear First

By Noah Shachtman

An infamous 1998 al-Qaida press conference, featuring Osama bin Laden (center). Photo: AP

Paulo Shakarian has an algorithm that might one day help dismantle al-Qaida — or at least one of its lesser affiliates. It’s an algorithm that identifies which people in a terror network really matter, like the mid-level players who connect smaller cells with the larger militant group. Remove those people, either by drone or by capture, and it concentrates power and authority in the hands of one man. Remove that man, and you’ve broken the organization.

The U.S. military and intelligence communities like to congratulate themselves whenever they’ve taken out a terrorist leader, whether it’s Osama bin Laden or Abu Mussab al-Zarqawi, the bloodthirsty chief of al-Qaida in Iraq. Shakarian, a professor at West Point’s Network Science Center who served two tours as an intelligence officer in Iraq, saw first-hand just how quickly those militant networks regrew new heads when the old ones were chopped off. It became one of the inspirations for him and his colleagues at West Point to craft an algorithm that could truly target a terror group’s weak points.

“I remember these special forces guys used to brag about how great they were at targeting leaders. And I thought, ‘Oh yeah, targeting leaders of a decentralized organization. Real helpful,’” Shakarian tells Danger Room. Zarqawi’s group, for instance, only grew more lethal after his death. “So I thought: Maybe we shouldn’t be so interested in individual leaders, but in how whole organizations regenerate their leadership.”

These days, American counterterror policy is even more reliant on taking out individual militants. How exactly those individuals are picked for drone elimination is the matter of intense debate and speculation. The White House reportedly maintains a “matrix” of the most dangerous militants. Social-network analysis — the science of determining the connections between people — almost certainly plays a role in where those militants appear on that matrix.

It’s clearly an imperfect process. Hundreds of civilians have been killed in the drone strikes, along with thousands of militants. And while the core of al-Qaida is clearly weakened, Obama administration officials will only talk in the vaguest terms about when the war against the terror group might some day come to an end.

In a paper to be presented later this month before the Academy of Science and Engineering’s International Conference on Social Informatics, Shakarian and his West Point colleagues argue for a new way of using social-network analysis to target militants. Forget going after the leader of an extremist group, they say. At least right away.

“If you arrest that guy, the number of connections everyone else has becomes more similar. They all become leaders. You force that terror group to become more decentralized. You might be making it harder to defeat these organizations,” Shakarian says.

This chart shows how West Point’s “GREEDY_FRAGILE” algorithm renders a network brittle by removing relatively few nodes.

The second illustration depicts a terror network, as the algorithm centralizes it — and makes it easier to break. Photos: West Point

Instead, counterterrorists should work to remove militant lieutenants in such a way that terror leaders actually become more central to their organizations. That’s because a more centralized network is a more fragile one. And a fragile network can ultimately be smashed, once and for all.

The West Point team, which includes professors Devon Callahan, Jeff Nielsen, and Tony Johnson, wrote up a simple (less than 30-line) algorithm in Python they named GREEDY_FRAGILE. It looks for nodes that can be taken out to “maximize network-wide centrality” — concentrate connectivity in the terror leader, in other words. The professors tested GREEDY_FRAGILE against five data sets: the first is the social network of the al-Qaida members involved in the 1998 bombing of the U.S. embassy in Dar es Salaam; the other four are derived from real-world terror groups, but anonymized for academic use.

“In each of the five real-world terrorist networks that we examined, removal of only 12% of nodes can increase the network-wide centrality between 17% and 45%,” the West Point authors note in their paper. In other words, taking out just a few mid-level players makes the whole organization much, much more fragile.
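Although the authors’ code isn’t reproduced in the article, the greedy idea is simple to sketch. The snippet below is an illustration only, assuming Freeman degree centralization as the “network-wide centrality” measure and using networkx for graph handling; it is not the authors’ GREEDY_FRAGILE implementation, and the random graph merely stands in for an anonymized terror network.

```python
import networkx as nx

def degree_centralization(G):
    """Freeman degree centralization: 1.0 for a star (one dominant hub),
    0.0 for a graph in which every node has the same degree."""
    n = G.number_of_nodes()
    if n < 3:
        return 0.0
    degrees = dict(G.degree())
    d_max = max(degrees.values())
    return sum(d_max - d for d in degrees.values()) / ((n - 1) * (n - 2))

def greedy_fragile(G, budget, protected=()):
    """Greedily remove `budget` nodes, each time picking the removal that most
    increases network-wide centralization; `protected` nodes are never removed."""
    H = G.copy()
    removed = []
    for _ in range(budget):
        candidates = [v for v in H.nodes if v not in protected]
        best = max(candidates,
                   key=lambda v: degree_centralization(H.subgraph(set(H) - {v})))
        H.remove_node(best)
        removed.append(best)
    return removed, degree_centralization(H)

# Toy run on a random graph standing in for an anonymized cell network.
G = nx.erdos_renyi_graph(40, 0.1, seed=1)
removed, c = greedy_fragile(G, budget=5)
print("removed:", removed, "centralization after removals:", round(c, 3))
```

Removing a handful of well-chosen mid-level nodes this way drives the centralization up, which is the qualitative effect the paper reports for its five real-world networks.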

Interestingly, GREEDY_FRAGILE works even when the exact shape of the network is unknown — or when certain nodes can’t be targeted, for political or intelligence reasons. In other words, it takes into account some real-world complications that counterterrorists might face.

Now, this is just a lab experiment. No actual terrorists were harmed in the writing of this paper. The algorithm only looks at “degree” centrality — the number of ties a node has. It doesn’t examine metrics like network “closeness,” which finds the shortest possible path between two nodes. Nor does it take into account the different roles played by different nodes — financier, propagandist, bomb-maker. That’s why the work is funded by the Army Research Office, which handles the service’s most basic R&D efforts.
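For readers curious about the distinction, both measures can be computed directly with networkx; this is purely illustrative and has nothing to do with the West Point code.

```python
import networkx as nx

G = nx.krackhardt_kite_graph()       # classic ten-node example network
print(nx.degree_centrality(G))       # ties per node: the measure GREEDY_FRAGILE works with
print(nx.closeness_centrality(G))    # based on shortest-path distances to all other nodes
```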

What’s more, the authors stress that their network-breaking techniques might not be a good fit for every counterterror plan. “It may be desirable to keep certain terrorist or insurgent leaders in place to restrain certain, more radical elements of their organization,” they write.

In fact, the authors strongly hint that they’re not necessarily on board with the Obama administration’s kill-don’t-capture approach to handling terror networks.

“We would like to note that the targeting of individuals in a terrorist or insurgent network does not necessarily mean that they should be killed,” Shakarian and his colleagues write. “In fact, for ‘shaping operations’ as the ones described in this paper, the killing of certain individuals in the network may be counter-productive. This is due to the fact that the capture of individuals who are likely emergent leaders may provide further intelligence on the organization in question.”

That sort of intelligence may suddenly be at a premium again. From the Pentagon chief on down, the U.S. is increasingly worried about al-Qaida’s spread into unfamiliar regions like Mali and its association with new, shadowy militant groups in Libya. GREEDY_FRAGILE, if it works like Shakarian hopes, might show the counterterrorists which militants to target — and which so-called leaders to leave alone. For now.



http://www.wired.com/dangerroom/2012/12/paulos-alogrithm/

Big Brother 101

Could your social networks brand you an enemy of the state?

Hub Bubs
The people seemingly at the center of social networks (the hubs with the most connections) may not be so central after all. The highly connected Diane appears to be the main hub here, but no real damage is done when she leaves the picture, below (one reason you never want to be a Diane at your workplace).

Instant Expert:

By some counts, government snoops are sifting through data from a billion or more phone calls and online messages daily. What might they be looking for?

  • WHO: The National Security Agency and other intelligence groups
  • WHAT: Processing and connecting data from phone calls, e-mails, online postings and financial transactions
  • HOW: Using social-network analysis (the study of how people interact) and data-mining techniques (such as pattern-recognition algorithms) first used for artificial intelligence and consumer marketing
  • WHY: To help uncover the structure of potential terrorist groups (far too secretive and dispersed to locate with traditional detection techniques) and decode their intentions

Continue http://www.popsci.com/scitech/article/2006-08/big-brother-101

Saturday, October 6, 2012

Experiments illuminate how order arises in the cosmos

by Breanna Bishop

Plasmas stream from the top and bottom to form large-scale electromagnetic fields.

(Phys.org)—One of the unsolved mysteries of contemporary science is how highly organized structures can emerge from the random motion of particles. This applies to many situations ranging from astrophysical objects that extend over millions of light years to the birth of life on Earth.

The surprising discovery of self-organized electromagnetic fields in counter-streaming ionized gases (also known as plasmas) will give scientists a new way to explore how order emerges from chaos in the cosmos. This breakthrough finding was published online in the journal Nature Physics on September 30.

"We've created a model for exploring how electromagnetic fields help organize ionized gas or plasma in astrophysical settings, such as in the plasma flows that emerge from young stars," said lead author Nathan Kugland, a in the High Science Group at Lawrence Livermore National Laboratory (LLNL). "These fields help shape the flows, and likely play a supporting role alongside gravity in the formation of solar systems, which can eventually lead to the creation of planets like the Earth."

"This observation was completely unexpected, since the plasmas move so quickly that they should freely stream past each other," explained Hye-Sook Park, team leader and staff physicist at LLNL. Park added that "laser-driven plasma experiments can study the microphysics of plasma interaction and structure formation under controlled conditions."

Studying astrophysics with laboratory experiments can help answer questions about phenomena that are far beyond the reach of direct measurements. This research is being carried out as part of a large collaboration, Astrophysical Collisionless Shock Experiments with Lasers (ACSEL), led by LLNL, Princeton University, Osaka University and Oxford University, with many other universities participating.

More information: "Self-organized Electromagnetic Field Structures in Laser-Produced Counter-Streaming Plasmas," Nature Physics, Sept. 30, 2012, www.nature.com/nphys/journal/vaop/ncurrent/full/nphys2434.html

PHYS.ORG

Friday, March 30, 2012

Entropy put to work in tuning musical instruments

The difference between the new tuning (red) and the standard one (green). Illustration by the author of the study


Haye Hinrichsen, a physicist at the University of Würzburg, has devised a method for tuning pianos based on minimizing a functional related to Shannon entropy (also known as information entropy). The paper has not yet been accepted for publication, but a preprint is available on arXiv.org.

As early as the beginning of the last century it was found that electronic tuning is unsuitable for instruments as complex as the piano: even if every note is tuned exactly, the instrument still sounds out of tune. In 1938 the Journal of the Acoustical Society of America published a paper that explained this dissonance by the lack of consonance among the overtones of different notes.

Piano tuners remove this defect by hand, raising or lowering the pitch of certain notes. In the new work Hinrichsen proposed a way to calculate these adjustments mathematically. The calculation proceeds as follows: the piano is first tuned according to certain formulas, and the Shannon entropy of its spectrum is computed; this quantity characterizes the informational unpredictability of the system.

Then small random perturbations are introduced into the tuning according to a defined procedure, and the entropy of the resulting spectrum is computed. If it is not larger than the previous value, the change is accepted; otherwise the piano is returned to its previous tuning. The procedure stops after several consecutive random perturbations fail to reduce the entropy.
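The accept-a-change-only-if-entropy-does-not-increase loop is easy to sketch. The toy version below is not Hinrichsen's code: the synthetic 12-note spectrum with mild inharmonicity merely stands in for a real measured piano spectrum, and all the parameters are illustrative assumptions.

```python
import numpy as np

def shannon_entropy(power):
    """Shannon entropy of a normalized power spectrum."""
    p = power / power.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def toy_spectrum(offsets_cents, n_partials=5, width=2.0):
    """Stand-in for a measured spectrum: 12 notes around A4, each contributing
    a few slightly inharmonic partials smeared into Gaussian peaks on a grid."""
    notes = np.arange(len(offsets_cents)) - len(offsets_cents) // 2
    f0s = 440.0 * 2.0 ** ((notes + offsets_cents / 100.0) / 12.0)
    grid = np.linspace(100.0, 4000.0, 2000)
    spec = np.zeros_like(grid)
    for f0 in f0s:
        for k in range(1, n_partials + 1):
            fk = k * f0 * np.sqrt(1.0 + 4e-4 * k * k)   # mild inharmonicity
            spec += np.exp(-0.5 * ((grid - fk) / width) ** 2)
    return spec

def entropy_tune(n_notes=12, n_steps=3000, step_cents=1.0, patience=100, seed=0):
    """Random search: perturb one note at a time and keep the change only if
    the spectral entropy does not increase; stop after `patience` failures."""
    rng = np.random.default_rng(seed)
    offsets = np.zeros(n_notes)          # deviation from equal temperament, in cents
    best = shannon_entropy(toy_spectrum(offsets))
    failures = 0
    for _ in range(n_steps):
        if failures >= patience:
            break
        trial = offsets.copy()
        trial[rng.integers(n_notes)] += rng.normal(0.0, step_cents)
        h = shannon_entropy(toy_spectrum(trial))
        if h <= best:
            offsets, best, failures = trial, h, 0
        else:
            failures += 1
    return offsets, best

offsets, h = entropy_tune()
print("offsets (cents):", np.round(offsets, 2), "entropy:", round(h, 4))
```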

Using his method, Hinrichsen obtained a tuning that is very close to a proper piano tuning. According to the researcher, the method could be useful for building electronic tuning systems; in particular, his random search could complement existing systems that analyze the overtones of an individual instrument.

http://lenta.ru/news/2012/03/27/entropy/

Scientists advise evacuating on your own during a fire

Exit speed as a function of the number of people and their tendency to follow the crowd. Illustration by the authors of the study


The scientists found that during an evacuation it is more dangerous to follow the crowd than to try to get out on your own. The paper has not yet been accepted for publication, but a preprint is available on arXiv.org.

In the study the scientists modeled the behavior of a group of people leaving a corridor through a pair of exits. They assumed that their virtual subjects were in conditions of low visibility, the situation faced, for example, by people leaving a smoke-filled hall during a fire.

The researchers ran the model with different numbers of people (from 100 to 10,000), varying a parameter that set an individual's tendency to follow the crowd. At a zero value of the parameter they obtained a set of points in a random-walk regime, while at positive values the people clustered into fairly large groups.

Analysis of the dynamics showed that when people cluster into groups, the average speed at which they leave the corridor drops. According to the researchers, this is due to jams forming at the exits. They also note that their model quite possibly neglects some effects, such as disagreements within a group and conflicts between different groups.
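A bare-bones sketch of this kind of herding model is shown below. It is an illustration only: the corridor geometry, vision radius and update rule are assumptions, and it leaves out the congestion at the exits that the researchers blame for the slowdown, so it captures the clustering behaviour rather than the full result.

```python
import numpy as np

def evacuate(n_agents=100, herding=0.0, n_steps=500, speed=0.3, vision=3.0,
             corridor=(50.0, 10.0), exits=((50.0, 2.5), (50.0, 7.5)),
             exit_radius=1.0, seed=0):
    """Toy low-visibility evacuation: each step an agent mixes its own noisy
    heading with the mean heading of neighbours it can 'see', weighted by the
    herding parameter in [0, 1]. Returns the fraction of agents that escaped."""
    rng = np.random.default_rng(seed)
    L, W = corridor
    exits = np.asarray(exits)
    pos = np.column_stack([rng.uniform(0, L, n_agents), rng.uniform(0, W, n_agents)])
    heading = rng.uniform(0, 2 * np.pi, n_agents)
    escaped = np.zeros(n_agents, dtype=bool)
    for _ in range(n_steps):
        active = np.where(~escaped)[0]
        if active.size == 0:
            break
        private = heading + rng.normal(0, 0.5, n_agents)     # own noisy direction
        for i in active:
            d = np.linalg.norm(pos[active] - pos[i], axis=1)
            nbrs = heading[active][(d > 0) & (d < vision)]   # headings of visible neighbours
            if nbrs.size and herding > 0:
                mean_dir = np.arctan2(np.sin(nbrs).mean(), np.cos(nbrs).mean())
                vx = (1 - herding) * np.cos(private[i]) + herding * np.cos(mean_dir)
                vy = (1 - herding) * np.sin(private[i]) + herding * np.sin(mean_dir)
                heading[i] = np.arctan2(vy, vx)
            else:
                heading[i] = private[i]
        pos[active] += speed * np.column_stack([np.cos(heading[active]),
                                                np.sin(heading[active])])
        pos[:, 0] = np.clip(pos[:, 0], 0, L)                 # walls of the corridor
        pos[:, 1] = np.clip(pos[:, 1], 0, W)
        for ex in exits:                                     # close enough to an exit: out
            escaped |= np.linalg.norm(pos - ex, axis=1) < exit_radius
    return escaped.mean()

# Higher herding produces clusters instead of independent random walkers.
for h in (0.0, 0.5, 0.9):
    print("herding =", h, "-> fraction evacuated:", round(evacuate(herding=h), 2))
```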

Crowd-behavior modeling of this kind is currently a popular line of work for many scientists. For example, in 2009 Physical Review E published a paper by Japanese physicists who devised a more efficient way to evacuate: the process turns out to go better if an obstacle is placed to one side of the door.


http://lenta.ru/news/2012/03/27/crowd/

Monday, February 13, 2012

The Collapse of Chaos, or the wonders of self-organization

(It's not me, it's all the famous French cuisine!)

It so happened that I ate some meatballs with mashed potatoes. These meatballs (I don't know what they are called in French; they were about the size of a small mandarin), having been chewed yesterday evening and digested overnight, this morning, on the way out, had fully kept their original shape and color, and I even suspect their original taste...

Answering Descartes: Beyond Turing

by Stuart Kauffman

http://mitpress.mit.edu/books/chapters/0262297140chap4.pdf

Part 1: The thirst for response, or why people go to social networks

E-xecutive
The thirst for response, or why people go to social networks

What brings people to social networks? An ineradicable, unquenchable, vampiric thirst for response. People need a reaction from those around them, which is why the media of the future will become platforms providing response services. And newspapers will die; nothing can be done about that, says media expert Andrei Miroshnichenko. Read the interview he gave to members of the E-xecutive.ru community. More



Part 2: Rentiers of the 21st century: the owners of the clouds

E-xecutive
Rentiers of the 21st century: the owners of the clouds

Do you know how many databases hold information about you? What do internet companies know about their users? Why is this data stored in the "clouds", to whom might it be offered, and for what purpose? Whose asset is it? Read the second part of media expert Andrei Miroshnichenko's interview with E-xecutive.ru. More


Sunday, February 5, 2012

On The Inadequacy Of The Empiricist Tradition In Western Philosophy

by Stuart Kauffman


I find myself beginning to realize that the philosophy that I studied, from Descartes to Hume to Kant to Russell to logical positivism and the early Wittgenstein, and perhaps the late Wittgenstein of the Investigations, is seriously inadequate.

It starts with Descartes who conceived of his task to be a lone mind who would doubt all that could be doubted to find that which could not be doubted about what that single mind can know about the world. The emphasis is on "knowing."

Then we come to Hume of the Scottish Enlightenment, essaying to understand "Human Understanding." How can we know the world? By sense impressions, welded together in "bundles," in which the "self," or "I," itself disappears as just a bundle of perceptions: roughly, "all I am aware of is a jumble of sequential awareness; I am aware of no 'I'."

Kant seeks the conditions of knowing in the inner conditions of the mind, categories of perception such as space and time. He considers the phenomenal world we can know and behind it the noumenal world we can never know.

Russell brings us sense data such as "red here" and the tone, "A flat now," then sense data statements, "For Kauffman, 'red here' is true," and hopes that his recently developed predicate calculus working on sense data statements will allow philosophers to build a maximally reliable way of knowing the world, constructed out of sense data statements linked by logic, including quantifiers such as "there exists" and "for all."

To early Wittgenstein's famous "Tractatus": "The world is the collection of true facts" about that world.

On to logical positivism: "Only those statements (about the world) are meaningful which are empirically verifiable," which, ironically, drove Western philosophy, yet whose founding statement, just noted, is not itself empirically verifiable.

The "empiricist tradition" sought and seeks to elucidate how we know the world.

What is wrong?

In the beginning, 5 billion years ago, no life existed on the forming planet. Either life started here or arrived from elsewhere. Let's assume the former. As a concrete working hypothesis, let's take collectively autocatalytic sets of polymers, like peptide sets, RNA sets, or DNA sets, all realized experimentally, in some bounding membrane like a liposome. For example, Gonen Ashkenazi has a 9-peptide (small protein) collectively autocatalytic set reproducing happily in his Ben Gurion University lab.

So what?

So existing as a self-reproducing system in a universe that is non-ergodic (not repeating) above the level of atoms, where most complex things will never exist, is the first condition of life. "Knowing" is not yet a condition.

But that protocell typically lived in an environment with toxic and food molecules. By hook or by crook, say by semipermeable membranes, the protocell "discriminated" poison from food and admitted only the latter, thanks to natural selection on evolving protocells.

We now have the rudiments of agency and knowing. The protocell evolved to do something, i.e., discriminate and admit food and block poison. This discrimination required rudimentary "knowing" and hence "semantics", without invoking consciousness.

What the empiricist tradition entirely misses is living existence and agency. Without the existence of the protocell, there is no evolutionary point in knowing. Without agency there is no use in knowing. Suppose, per contra, that the protocell could discriminate poison from food, but could not selectively block the first and admit the second. It would fail natural selection's harsh sieve.

Without being and doing, no knowing could have emerged in evolution. The empiricist tradition misses this central issue, thus is deeply inadequate.

In summary of this first point: Without being and agency, knowing is both pointless and would not arise in evolution.

Not only do we not know what will happen, we often do not even know what can happen.

But the empiricist tradition runs into a still deeper problem. In past posts I have discussed Darwinian preadaptations, whose emergence in evolution we cannot prestate. This has led my colleagues, the senior mathematician Giuseppe Longo and his postdoctoral fellow Mael Montevil, both of the Ecole Polytechnique, Paris, and myself to submit a paper, also posted on arXiv, entitled "No entailing laws, but enablement in the evolution of the biosphere."

This article is radical. It claims that no law entails the evolution of the biosphere. The grounds for this include the fact that we cannot prestate the ever newly emerging relevant variables in evolution that selection reveals; therefore the very phase space of evolution changes in ways we cannot know beforehand, so we can write no laws of motion for the evolving biosphere, nor, lacking knowledge of the boundary conditions, could we integrate those laws of motion even were we to have them.

These deep issues mean that often we not only do not know what will happen, but do not even know what can happen. When we flip a fair coin 10,000 times we do not know how many heads will come up, but we do know all the possible outcomes, so we can construct a probability measure. In evolution we do not even know what can emerge in the Adjacent Possible of the becoming of evolution, so we can construct no probability measure, for we do not know the sample space of all the possibilities. Thus not only do we not know what will happen, we do not even know what can happen.

The empiricist tradition is ignorant of this profound limitation to knowledge "beforehand" as the biosphere "becomes."

Even pragmatism, which seeks to unify knowing and doing, falls prey to this last issue: We often do not even know what can happen. Pragmatism takes no account of this feature of our living world.

Hume famously argued that one cannot deduce "ought" from "is." This is the naturalistic fallacy. But Hume is thinking only of a knowing subject, firmly in the empiricist tradition started by Descartes. Hume ignores agency.

I wrote an entire book, Investigations, attempting to define agency. My try: "A molecular autonomous agent is a self-reproducing system able to do at least one work cycle."

A bacterium swimming up a glucose gradient for food is an agent: it reproduces, and the rotating flagellum is just one of the work cycles the bacterium carries out. All living cells fulfill the above definition.

But once there is agency, ought enters the universe. If the bacterium is to successfully get food, it "ought" to, for example, swim up the sugar gradient. Without attributing consciousness, one cannot have "actings" without "doing them wisely or poorly," hence ought.

In short, the empiricist tradition, in ignoring agency, wishes to block us from "ought," when we cannot have doing without "ought." The root of the issue is "doing" versus merely "happening," a topic in a near future post.

We need to rethink many problems in philosophy to take account of the issues above.


13.7: Cosmos And Culture : NPR

Comments



Vlad Piaskovskiy (0nothing1) wrote:

1

Stuart wrote:

"Without attributing consciousness, one cannot have "actings" without "doing them wisely or poorly," hence ought."

It seems to me that this phrase is very doubtful. It is permissible to say (and I like it!) that living systems possess an innate knowing of what is good and bad for them; but this is not yet consciousness. Consciousness is something more; consciousness involves conflict.

Friday, February 3, 2012, 12:40:52







Vlad Piaskovskiy (0nothing1) wrote:

2

And this conflict seems to be something more than just a competition between two (or more) concurrent stimuli; in that case, at first glance, there would be no need for a particularly complex organization. I think it is the conflict between the immediate response to a stimulus (what Stuart called knowing) and a reaction that takes into account developments in the long run, that is, long-run adaptation ("in the long run" is, of course, relative if we are talking about the first, primitive consciousness). In other words, the basis of the second tendency is the ability to forecast and to model the world (note also that consciousness is not reducible to the victory of the second tendency, but rather to their parity).

But this means that the long-term prognosis can suppress the immediate response to a stimulus, what Stuart called knowing; that is, this tendency works against life itself (as the embodiment of creativity and diversity)!

Friday, February 3, 2012, 12:40:20







Vlad Piaskovskiy (0nothing1) wrote:

3.

Life without death would destroy itself: if death were suddenly to disappear, it would be a disaster, and we ourselves would urgently have to re-invent it! Stuart, you say yourself that the world exists because everything has its constraints; then what constrains your creativity and diversity?

Stuart wrote:

"If our religions from the Axial Age 2500 years ago or so, were sufficient, then what say we to the fact that in WWI, German and British soldiers in trenches 150 meters apart prayed to the SAME GOD OF LOVE. 2000 years after Christ taught us to love, we kill."

Stuart, don't you think this is a sign that your theory fails to account for something, and that besides creativity and diversity God includes the force that constrains (negates) them?

Friday, February 3, 2012, 12:39:33






Vlad Piaskovskiy (0nothing1) wrote:

1

I want to dwell on what knowing might mean, which, according to Stuart, is an inherent property of all living things. There was an excellent article here:
http://www.npr.org/blogs/13.7/2010/03/the_evolution_of_symbolic_lang.html

In a nutshell the essence:
"Most organisms communicate, but humans are unique in communicating via symbolic language. This entails relationships between signifiers (e.g. words) and what's signified (e.g. objects or ideas), where what's special is the construction of a system of relationships among the signifiers themselves, generating a seemingly unlimited web of associations, organized by semantic regularities and constraints, retrieved in narrative form, and enabled by complex memory systems.
Humans are thus a symbolic species: symbols have literally changed the kind of biological organism we are."

there was another article on this topic:
http://www.npr.org/blogs/13.7/2010/03/the_iself_and_our_symbolic_spe.html


Saturday, February 4, 2012, 11:45:02







Vlad Piaskovskiy (0nothing1) wrote:

2

So this view seems not to be true. Homo sapiens is simply the only species capable, through awareness (understanding), of de-symbolizing its perception of the world. Another matter is that, thanks to the development of speech, only humans have symbols that have gained so much autonomy that they have separated from the objects they signify and have themselves become objects.

Think about it: how can a primitive organism know what it should eat? It apparently does not see plants as we see them; it sees food. All its perception consists of such features: food, a partner for mating, danger, and so on. It is a kind of mosaic pattern, and the more advanced the nervous system, the more complete the perception: the mosaic becomes finer, and the picture of the world becomes more adequate.


Saturday, February 4, 2012, 11:43:41







Vlad Piaskovskiy (0nothing1) wrote:

3

Thinking in symbols is typical of some psychiatric patients, and such a condition amounts to a return to a more primitive, archaic form of perception. Interestingly, on the one hand, understanding means the destruction of old, infantile features: instead of "grass" we begin to distinguish different kinds of plants, and eventually realize that every single plant is unique. On the other hand, we also create new, "scientific" features, which reflect reality more accurately (e.g., species, etc.), and these new features have the property of theory destroying facts; Newton's laws are a good example. What I want to say is that, from this point of view, understanding does not imply any new information; rather, we have a kind of transformation of information.


Saturday, February 4, 2012, 11:42:57







Vlad Piaskovskiy (0nothing1) wrote:

4

It is also interesting that symbols, according to Jung, are the content of our collective unconscious; what, then, does "collective" mean in this case? If it is, in particular, the same thing Alva has in mind when he says, "Mind is not inside us; it is rather, the dynamic activity of the whole, embodied, environmentally situated human being," then "the redness of red" really is the same for all of us. On the other hand, if we assume that individual thinking is our internal speech, then it arose as a result of communication. I do not know what that means; it is just my speculation...


Saturday, February 4, 2012, 11:42:21



Wednesday, January 18, 2012

WHAT IS YOUR FAVORITE DEEP, ELEGANT, OR BEAUTIFUL EXPLANATION? - Richard Dawkins

Evolutionary Biologist; Emeritus Professor of the Public Understanding of Science,...

Redundancy Reduction and Pattern Recognition

Deep, elegant, beautiful? Part of what makes a theory elegant is its power to explain much while assuming little. Here, Darwin's natural selection wins hands down. The ratio of the huge amount that it explains (everything about life: its complexity, diversity and illusion of crafted design) divided by the little that it needs to postulate (non-random survival of randomly varying genes through geological time) is gigantic. Never in the field of human comprehension were so many facts explained by assuming so few. Elegant then, and deep—its depths hidden from everybody until as late as the nineteenth century. On the other hand, for some tastes natural selection is too destructive, too wasteful, too cruel to count as beautiful. In any case, coming late to the party as ever, I can count on somebody else choosing Darwin. I'll take his great grandson instead, and come back to Darwin at the end.

Horace Barlow FRS is the youngest grandchild of Sir Horace Darwin, Charles Darwin's youngest child. Now a very active ninety, Barlow is a member of a distinguished lineage of Cambridge neurobiologists. I want to talk about an idea that he published in two papers in 1961, on redundancy reduction and pattern recognition. It is an idea whose ramifications and significance have inspired me throughout my career.

The folklore of neurobiology includes a mythical 'grandmother neurone', which fires only when a very particular image, the face of Jerry Lettvin's grandmother, falls on the retina (Lettvin was a distinguished American neurobiologist who, like Barlow, worked on the frog retina). The point is that Lettvin's grandmother is only one of countless images that a brain is capable of recognising. If there were a specific neurone for everything we can recognise—not just Lettvin's grandmother but lots of other faces, objects, letters of the alphabet, flowers, each one seen from many angles and distances, we would have a combinatorial explosion. If sensory recognition worked on the 'grandmother principle', the number of specific recognition neurones for all possible combinations of nerve impulses would exceed the number of atoms in the universe. Independently, the American psychologist Fred Attneave had calculated that the volume of the brain would have to be measured in cubic light years. Barlow and Attneave independently proposed redundancy reduction as the answer.

Claude Shannon, inventor of Information Theory, coined 'redundancy' as a kind of inverse of information. In English, 'q' is always followed by 'u', so the 'u' can be omitted without loss of information. It is redundant. Wherever redundancy occurs in a message (which is wherever there is nonrandomness), the message can be more economically recoded without loss of information (although with some loss in capacity to correct errors). Barlow suggested that, at every stage in sensory pathways, there are mechanisms tuned to eliminate massive redundancy.

The world at time t is not greatly different from the world at time t-1. Therefore it is not necessary for sensory systems continuously to report the state of the world. They need only signal changes, leaving the brain to assume that everything not reported remains the same. Sensory adaptation is a well-known feature of sensory systems, which does precisely as Barlow prescribed. If a neurone is signalling temperature, for example, the rate of firing is not, as one might naively suppose, proportional to the temperature. Instead, firing rate increases only when there is a change in temperature. It then dies away to a low resting frequency. The same is true of neurones signalling brightness, loudness, pressure and so on. Sensory adaptation achieves huge economies by exploiting the non-randomness in temporal sequence of states of the world.
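As a rough illustration of the economy Barlow describes (purely illustrative, not from Barlow's papers), a change-only encoder reports a sensory value only when it moves past a threshold, and the receiver assumes everything unreported has stayed the same:

```python
import numpy as np

def adaptive_encode(signal, threshold=0.1):
    """Report only changes: emit (index, value) whenever the signal departs from
    the last reported value by more than `threshold` - a crude analogue of
    sensory adaptation."""
    events = [(0, float(signal[0]))]          # the initial state is reported once
    last = signal[0]
    for i, x in enumerate(signal[1:], start=1):
        if abs(x - last) > threshold:
            events.append((i, float(x)))
            last = x
    return events

def adaptive_decode(events, length):
    """Reconstruct the full signal from the change events (zero-order hold)."""
    out = np.empty(length)
    for (i, v), (j, _) in zip(events, events[1:] + [(length, None)]):
        out[i:j] = v
    return out

# A slowly varying 'temperature' trace compresses to a handful of events.
t = np.concatenate([np.full(500, 20.0), np.full(500, 22.0), np.full(500, 21.5)])
events = adaptive_encode(t)
assert np.allclose(adaptive_decode(events, len(t)), t)   # nothing lost here
print(len(t), "samples ->", len(events), "reported events")
```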

What sensory adaptation achieves in the temporal domain, the well-established phenomenon of lateral inhibition does in the spatial domain. If a scene in the world falls on a pixellated screen, such as the back of a digital camera or the retina of an eye, most pixels see the same as their immediate neighbours. The exceptions are those pixels which lie on edges, boundaries. If every retinal cell faithfully reported its light value to the brain, the brain would be bombarded with a massively redundant message. Huge economies can be achieved if most of the impulses reaching the brain come from pixel cells lying along edges in the scene. The brain then assumes uniformity in the spaces between edges.

As Barlow pointed out, this is exactly what lateral inhibition achieves. In the frog retina, for example, every ganglion cell sends signals to the brain, reporting on the light intensity in its particular location on the surface of the retina. But it simultaneously sends inhibitory signals to its immediate neighbours. This means that the only ganglion cells to send strong signals to the brain are those that lie on an edge. Ganglion cells lying in uniform fields of colour (the majority) send few if any impulses to the brain because they, unlike cells on edges, are inhibited by all their neighbours. The spatial redundancy in the signal is eliminated.
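The spatial counterpart can be sketched the same way (again a toy, not a model of real retinal circuitry): each "ganglion cell" outputs its own pixel value minus the average of its four neighbours, so uniform regions cancel and only edges survive.

```python
import numpy as np

def lateral_inhibition(image):
    """Each cell's output is its own intensity minus the mean of its 4 neighbours.
    Uniform regions cancel to about zero; only edges produce strong output."""
    img = image.astype(float)
    padded = np.pad(img, 1, mode="edge")
    neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                  padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    return img - neighbours

# A flat field with a bright square: output is nonzero only along the border.
scene = np.zeros((32, 32))
scene[8:24, 8:24] = 1.0
response = lateral_inhibition(scene)
print("active cells:", np.count_nonzero(np.abs(response) > 0.1), "of", scene.size)
```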

The Barlow analysis can be extended to most of what is now known about sensory neurobiology, including Hubel and Wiesel's famous horizontal and vertical line detector neurones in cats (straight lines are redundant, reconstructable from their ends), and in the movement ('bug') detectors in the frog retina, discovered by the same Jerry Lettvin and his colleagues. Movement represents a non-redundant change in the frog's world. But even movement is redundant if it persists in the same direction at the same speed. Sure enough, Lettvin and colleagues discovered a 'strangeness' neurone in their frogs, which fires only when a moving object does something unexpected, such as speeding up, slowing down, or changing direction. The strangeness neurone is tuned to filter out redundancy of a very high order.

Barlow pointed out that a survey of the sensory filters of a given animal could, in theory, give us a read-out of the redundancies present in the animal's world. They would constitute a kind of description of the statistical properties of that world. Which reminds me, I said I'd return to Darwin. In Unweaving the Rainbow, I suggested that the gene pool of a species is a 'Genetic Book of the Dead', a coded description of the ancestral worlds in which the genes of the species have survived through geological time. Natural selection is an averaging computer, detecting redundancies—repeat patterns—in successive worlds (successive through millions of generations) in which the species has survived (averaged over all members of the sexually reproducing species). Could we take what Barlow did for neurones in sensory systems, and do a parallel analysis for genes in naturally selected gene pools? Now that would be deep, elegant and beautiful.

http://www.edge.org/response-detail/2825/what-is-your-favorite-deep-elegant-or-beautiful-explanation

Wednesday, January 11, 2012

Is The Possible Ontologically Real?

I adduce four lines of evidence for a radical claim: Perhaps the "possible" is ontologically real and the world consists of two realms, "Possibles" and "Actuals."

Empedocles, in ancient Greece, claimed that what was real in the universe was what was Actual. Aristotle flirted with "potentia." Alfred North Whitehead in Process and Reality proposed that Possibles gave rise to Actuals which in turn gave rise to Possibles: P -> A -> P -> A. Few take Whitehead's proposal seriously. Yet maybe he is right.

Read More