Day 043 - Augmented emotional reality

Submitted by Sam on 3 July, 2011 - 00:13

Whilst emotional contagion is an effective means of transmitting information and engendering synchronized emotions in a group, the effects are only beneficial if the emotions being transmitted are appropriately recognized and interpreted. Misreading of emotion is surprisingly common in human social interaction as the spectrum of human emotion manifests itself through facial and bodily indicators so subtle and myriad that they are often only partially observed and partially classified, leading to miscommunication and potentially negative social consequences.

In pursuit of improving our ability to correctly identify these signals (an ability often referred to as emotional intelligence), various technologies are being developed to artificially classify emotional expression with above-human accuracy, aiming to help us better understand each other by removing ambiguity from the interpretation of physically expressed emotions.

Rosalind Picard from the MIT Media Lab has developed a prototype of one such technology to boost our emotional intelligence – a pair of glasses containing a camera connected to a computer which interprets facial expression. The camera sends a feed to software which analyzes faces for the thousands of tiny muscle movements which constitute expressions, interpreting them and relaying them back to the wearer either through earphones or a computer screen. By tracking twenty-four “feature points” on the face, the software analyzes micro-expressions and compares them with its database of six known expression classes – “thinking”, “agreeing”, “concentrating”, “interested”, “confused” and “disagreeing” – correctly identifying them 64% of the time, compared to the human accuracy of 54%. These figures indicate that this model could aid not only people with impaired emotional intelligence (such as those with autism), but could in fact assist the majority of people in sensing the mood of those they are talking to.
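
To make the pipeline concrete, here is a minimal sketch of the kind of classification step described above: reduce the movement of the tracked feature points to a feature vector and label it with the nearest of the six expression classes. The feature extraction and nearest-template rule are illustrative assumptions on my part, not the actual Affectiva algorithm.

```python
# A minimal sketch of expression classification as described above: track a set of
# facial "feature points", summarise their movement over a short window, and label
# the result with the nearest of six expression classes. The feature extraction,
# class templates and nearest-template rule here are illustrative assumptions.
import numpy as np

EXPRESSIONS = ["thinking", "agreeing", "concentrating",
               "interested", "confused", "disagreeing"]
N_POINTS = 24  # number of tracked facial feature points

def movement_features(frames):
    """Reduce a sequence of (N_POINTS, 2) coordinate arrays to one feature vector:
    the mean per-point displacement across the window."""
    frames = np.asarray(frames)                 # shape (T, N_POINTS, 2)
    deltas = np.diff(frames, axis=0)            # frame-to-frame movement
    return np.abs(deltas).mean(axis=0).ravel()  # shape (N_POINTS * 2,)

def classify(frames, templates):
    """Return the expression whose template is closest to the observed movement."""
    f = movement_features(frames)
    distances = {name: np.linalg.norm(f - t) for name, t in templates.items()}
    return min(distances, key=distances.get)

# Toy usage: random "templates" stand in for statistics learned from labelled video.
rng = np.random.default_rng(0)
templates = {name: rng.random(N_POINTS * 2) for name in EXPRESSIONS}
frames = rng.random((30, N_POINTS, 2))          # one second of tracked points at 30 fps
print(classify(frames, templates))
```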

The prototype consequently carries commercial value, and the team behind the glasses have established a company called Affectiva selling their expression-recognition software to companies wishing to measure how people feel about their adverts or products, for instance.

However, the recognition of emotional states from facial expressions is still far from foolproof, and can be subject to the same manipulations and misreadings that affect human interpretation. The goal of complete emotional knowledge and perfect brain-state sharing remains the domain of connectome projects like The Blue Brain Project.

Day 042 - Mirroring brain states

Submitted by Sam on 2 July, 2011 - 01:46

The principle of emotional contagion describes the tendency of emotional states to quickly spread around a group through the mimicry of expressions and behaviour, allowing the rapid dissemination of information through the transference of emotional signals. This system of non-verbal interaction enables rapid communication about risk and reward, and is beneficial to groups of social animals, mediating and facilitating interaction.

The discovery of mirror neurons by a group of Italian neurophysiologists in the 1990s has triggered research which is uncovering the neural mechanisms responsible for this phenomenon, showing how the actions of individual neurons can correlate to emotions and the empathetic understanding of emotions in others.

When a person (or a primate – the original experimental evidence was derived from macaque monkeys) performs a specific action, certain neurons towards the front of their brain fire, commanding or encoding that particular action. A subset of these neurons, the mirror neurons, will also fire when that person sees another person performing the same action, “mirroring” the other person's action by running a neural simulation of it.

A variety of research has corroborated the relationship between mirror neurons and emotional transference. An fMRI study was conducted in which participants were presented with malodorous smells and then with videos showing facial expressions of disgust. The study found that the same sites in the brain were activated by the actual disgusting stimulus as by the observation of faces expressing disgust, showing that merely observing an emotion activates the neural representation of that emotion. 1

The findings of this study illustrate how emotional contagion can function as a primitive mechanism to protect young infants (and young monkeys) from food-poisoning, by transferring the neural correlate of the emotion of disgust through facial expressions alone.

This is a highly effective method of transferring knowledge, goals and values without the need for sophisticated cognitive skills, relying on a “common substrate for feeling” which allows people to simulate one another in order to understand them empathetically, all without language.

  • 1. Wicker, Bruno, Christian Keysers, Jane Plailly, Jean-Pierre Royet, Vittorio Gallese, and Giacomo Rizzolatti. "Both of Us Disgusted in My Insula: The Common Neural Basis of Seeing and Feeling Disgust." Neuron 40.3 (2003): 655-64. Print.

Day 041 - Emotional transference of goals and values

Submitted by Sam on 1 July, 2011 - 00:55

In order for a system to be able to have goals, it must be able to make value judgements. Genetic inheritance can account for some extremely primitive judgements about what is good for the organism and what is bad, but most such distinctions have to be learned. For highly complex organisms in highly dynamic environments, it becomes evolutionarily advantageous (and then necessary) to parse experience into transferable wisdom, allowing useful goals and values to be transmitted from organism to organism without each individual in each generation having to invent them through first-hand experience. Accordingly, in human society, we have established various forms of cultural knowledge, traditions and heritages which can be passed on from one group to another in a variety of forms. The easiest way to encode this information now is to use language, but there are more basic, “general-purpose” mechanisms whereby higher organisms can transfer things like values and goals between each other. Perhaps the expression of recognizable emotions is one of the most effective ways of transmitting ready-made brain-states from one organism to another.

If we learn to associate the visual input of a tiger with the emotion “fear” (perhaps by first learning how “fear” expresses itself through the body-language and facial expressions of peers, and then observing these indications expressed when they also see the tiger), then we can rapidly activate all the agents in our brain associated with “fear” ourselves, thereby bypassing the learn-by-first-hand-experience mode of knowledge acquisition and instantly activating the same groups of agents in our brain that have proven evolutionarily advantageous to be activated in similar situations.

In this way, emotions might be seen to suppress certain features of normal thinking and to activate specific thought-processes associated with each particular emotion. By having a common emotional heritage (reflecting a common neural architecture), we can reliably learn which stimuli should trigger (perhaps sub-consciously) the group of agents associated with emotions such as love and fear, dread and happiness, by seeing when they are expressed by others.

The elemental distinctions of pain and pleasure constitute the primary drivers of emotion, each functioning to suppress all but one goal – remove the pain, or maintain the pleasurable state. Emotions are complexes of pleasure and pain, leading to aggregates like “anger” which can perform a number of inhibitory and excitatory roles, such as disabling slower brain processes, most long-range goals and plans, and high-level reasoning, while activating fast-reacting processes and characteristic gestures and expressions.
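
As a rough illustration of this inhibitory/excitatory switching, the toy sketch below treats an emotion as a set of agents to suppress and a set to activate. The agent names and the simple on/off model are my own illustrative assumptions, not Minsky's actual formalism.

```python
# A toy illustration of the idea above: an emotion acts as a switch-set over mental
# "agents", inhibiting some (slow deliberation, long-range planning) and exciting
# others (fast reactions, expressive gestures). Names and structure are invented.
ACTIVE_BY_DEFAULT = {"long-range-planning", "high-level-reasoning",
                     "fast-reaction", "gesture-control"}

EMOTION_EFFECTS = {
    "anger": {"inhibit": {"long-range-planning", "high-level-reasoning"},
              "excite": {"fast-reaction", "gesture-control"}},
    "calm":  {"inhibit": set(), "excite": {"high-level-reasoning"}},
}

def apply_emotion(emotion, active=ACTIVE_BY_DEFAULT):
    """Return the set of agents left running once the emotion has done its switching."""
    effects = EMOTION_EFFECTS[emotion]
    return (set(active) - effects["inhibit"]) | effects["excite"]

print(apply_emotion("anger"))   # fast reactions and gestures stay on; planning is cut
```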

Animals which can express and 'read' emotions gain an ability to turn on specific tailored ways of thinking when confronted with new problems, activating and suppressing agents in an evolutionarily optimized fashion.

Minsky touches on some of these implications of emotions in his rather sprawling talk at MIT in September 2007, below.

Day 040 - Think the same way with the investment principle

Submitted by Sam on 30 June, 2011 - 00:47

It is extremely difficult to challenge long-held beliefs. The modular, additive mechanism of mind proposed by Minsky offers a compelling argument as to why this is the case, through what he describes as “the investment principle”. This is the tendency for humans to approach new situations and unfamiliar problems using modes of thinking that they have learned perform well in other scenarios, rather than try to confront novel environments with entirely novel ways of thinking. This tendency arises because our oldest ideas have an unfair advantage over ideas which arrive later – we have invested more in them by layering dependent skills on top of them. Older ideas will therefore have a greater number of skills (or ways of thinking, beliefs, conceptions) reliant on them than newer ideas, which will have to compete against this large mass of skills in order to become established as beneficial modes of thinking.
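
One way to picture the “unfair advantage” of older ideas is to count how many later skills depend, directly or indirectly, on each one. The skills and dependencies in the sketch below are invented purely for illustration.

```python
# A small sketch of the "investment" behind an idea, modelled as the number of
# later-learned skills that transitively depend on it. Skills are invented examples.
from collections import defaultdict

# skill -> set of earlier skills/ideas it was built on
DEPENDS_ON = {
    "counting": set(),
    "arithmetic": {"counting"},
    "algebra": {"arithmetic"},
    "calculus": {"algebra"},
    "statistics": {"algebra", "counting"},
}

def investment(idea):
    """Count how many skills rely, directly or indirectly, on `idea`."""
    dependents = defaultdict(set)
    for skill, deps in DEPENDS_ON.items():
        for d in deps:
            dependents[d].add(skill)
    seen, stack = set(), [idea]
    while stack:
        for s in dependents[stack.pop()]:
            if s not in seen:
                seen.add(s)
                stack.append(s)
    return len(seen)

# The oldest idea carries the largest investment, so it is the hardest to displace.
print({idea: investment(idea) for idea in DEPENDS_ON})
```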

Whilst this pattern of thinking minimizes the energy the brain expends by providing it with strategies which work most of the time, thereby preventing the constant reformulation of policies for dealing with the world, the investment principle nevertheless places certain constraints on how open-minded one can ever really be when confronted with entirely new events. Through the hierarchical layering of agents relying on other agents, creating interdependent skill-sets and thought-pathways, we come to apply patterns of past experience to present and future behaviour, reinforcing our longest-held beliefs by continuing to invest time and energy in them and by mobilizing them in novel situations, even when they are not adequately equipped to deal with them.

Evolution provides clear illustrations of how this process of investment can constrain development. Once the pattern of centralizing neural networks in an organism's head became evolutionarily beneficial, a delicate network of anatomical dependencies became established around this structure that could not be drastically disturbed by random mutation without a high chance of deleterious side-effects. Whilst parcelling the brain into the head is excellent for most organisms, some could perhaps be better served by a brain placed in a different location – woodpeckers, for example. So many fundamental structures have been built around the generally reliable and generally efficient principle of “brain goes in the head” that some organisms are stuck with sub-optimal anatomies.

As a generalizing tool, the investment principle works most of the time, but it is critical that we are aware of its effects. As our brains mature, and as our patterns of thought become more and more entrenched, it becomes harder and harder to dislodge them even in the face of highly compelling new ideas. An awareness of the investment principle can help us understand why it is so difficult to challenge working beliefs, and perhaps facilitate introspective change.

Day 039 - Avoiding circularity

Submitted by Sam on 28 June, 2011 - 22:57

Some theories of mind merely reproduce or defer the problems that they try to resolve, creating fallacious arguments which result in infinite regress. A particularly well-known example is the 'homunculus argument' of mind, which arises in half-formed theories of vision. In such theories, the light which falls onto the eye's retina is 'watched' and interpreted by some process (or someone, a 'little man', or homunculus) as if it were a television screen. Such a Cartesian theatre merely defers the question of how decisions are made and sensory input interpreted, resulting in an internal homunculus with its own homunculus to interpret its own television screen, ad infinitum.

Questions like “What caused the universe and why?” and “How can you tell what is good?” are characterized by similar slips into circular reasoning, as they never seem to have a final cause, always yielding to yet another recursion of “what caused that cause?”. Every culture has evolved strategies to resist such paradoxes, helping to prevent minds from endlessly dwelling on these questions and steering us away from endless loops so that more soluble problems can be tackled. Through shame, taboo, awe and mystery, institutions of law, religion and philosophy provide the authority to engender social consensus and defuse such lines of inquiry, offering stand-in responses and ways of thinking that deflect these questions. Whilst this may seem like dogma and indoctrination, it does serve the socially-beneficial function of pushing minds towards productive work rather than wasting time in ultimately fruitless reasoning.

However, in the resonant words of Minsky, it is worth remembering that “one can acquire certainty only by amputating inquiry.”

Day 038 - Why can't we just do what we want?

Submitted by Sam on 27 June, 2011 - 23:08

During moments of internal conflict, when we simultaneously hold competing desires, we often wonder why we can't just tell ourselves what to do. If we are in control of our 'selves', why can't we decide to prioritize some goals to the exclusion of others? When faced with the conflict between the desire to write and the desire to sleep, for instance, I cannot simply and completely override one with the other. Through will-power alone I am unable to completely disable those agents in my conscious mind which alert me to the need to sleep; instead I must resist them with one or more conscious strategies, employing counter-agents as indirect methods to suppress (but never disable) the need-for-sleep agents' nagging effects.

The reason for the mind's lack of control over its own agents is that most of its workings are utterly hidden from itself. Our 'selves' are not conscious of any of the processes which generate their own machinations, but merely of the high-level effects of these processes, much as a computer user is not constantly aware of all of the minute processes through which pressing a key on a keyboard creates a symbol on a screen.

If one agent could seize and maintain control from all of the other competing agents, then the mind could change itself at will. In order to do so, the mind would have to know how all of its agents worked, and give itself the option of managing each agent on an individual basis. Disregarding the computational expense of such a recursive architecture, a single example is enough to convincingly demonstrate how evolutionarily disadvantageous such self-knowledge could be. If we were able to exert full control over our pleasure and reward systems, we would be able to reproduce the sense of success and achievement without the need for any actual accomplishment, and would be able to pursue personal desires to the absolute exclusion of all else, regardless of cost.

Such a mind, with sufficient knowledge of its own workings to switch parts of itself on and off and counteract all of its own plans, would contain not only the ability but the propensity to self-destruct at will. This is borne out by experimental evidence: rats and monkeys given an unlimited supply of intravenous stimulants self-administered them to the point of severe weight loss and death 1. It is for these reasons that we do not know ourselves perfectly, and that the illusion of 'self' is protected from facilitated self-destruction by not knowing how to control its own agents.

  • 1. Wise, Roy A. "Brain Reward Circuitry: Insights from Unsensed Incentives." Neuron 36.2 (2002). Print.

Day 037 - The amnesia of infancy

Submitted by Sam on 26 June, 2011 - 23:12

Describing the action of picking up a cup of tea to drink as a 'simple task' conceals a great complexity of individual sub-tasks, each of which must be mastered and marshalled in the appropriate sequence to allow the completion of the task. Marvin Minsky describes these sub-tasks as agents 1, and categorizes some relatively high-level, conceptually-simple agents involved in picking up a cup of tea as follows:

  • Grasping agents, tasked with holding the cup
  • Balancing agents, tasked with preventing the tea from spilling
  • Thirst agents, which want you to drink the tea
  • Moving agents, tasked with getting the cup to your lips

In Minsky's schema, each agent can be broken down further and further, through chains of hierarchy and interaction, into very small, functionally irreducible parts. Each agent, mindless in itself, interacts with many others in very special ways to produce true intelligence. However, through what Minsky terms “the amnesia of infancy”, we assume that many of our extremely complex abilities (like picking up a cup of tea and drinking it) are both simple and ready-made in our minds, forgetting how long it took us as children to learn the myriad steps which tell us how objects in the real world interact, thereby concealing the vast complexity of these interacting processes.
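
To give the flavour of this hierarchical decomposition, the toy sketch below renders the tea-drinking agents as a tree and “runs” the top-level agent by activating its sub-agents. The particular sub-agents and their depth are invented for illustration and are not taken from The Society of Mind.

```python
# A toy rendering of a Minsky-style agent hierarchy for the tea-drinking task: each
# agent is either a primitive action or a named bundle of sub-agents, and "running"
# the top-level agent just walks the tree. The decomposition below is illustrative.
AGENTS = {
    "drink-tea": ["thirst", "moving", "grasping", "balancing"],
    "grasping":  ["open-hand", "close-fingers"],
    "balancing": ["watch-liquid-level", "adjust-wrist"],
    "moving":    ["reach", "lift", "bring-to-lips"],
    "thirst":    [],             # primitive: simply signals the goal
}

def run(agent, depth=0):
    """Depth-first activation: an agent 'works' by activating its sub-agents."""
    print("  " * depth + agent)
    for sub in AGENTS.get(agent, []):   # agents not listed are primitives
        run(sub, depth + 1)

run("drink-tea")
```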

As a result of this concealment, answers to questions such as “why is a chain more than its various links?” seem obvious to us as adults, because we cannot remember how hard it was to learn the rules of interaction (such as those which prevent two objects from ever being in the same place) which are now second nature to us. We operate under an illusion of simplicity which results from a distancing from what happened during our infancy, when our first abilities were formed. As each skill was mastered, as each agent matured, additional layers were added on top of them, until the foundational layers, the most basic agents, seem so remote to us as adults that we forget we ever had to learn them.

This distancing effect obscures the fact that things like “common sense” and “simple tasks” are in fact wondrously diverse and intricate, composed from “an immense society of hard-earned practical ideas - of multitudes of life-learned rules and exceptions, dispositions and tendencies, balances and checks”.

  • 1. Minsky, Marvin Lee. The Society of Mind. New York: Simon and Schuster, 1986. Print.

Day 036 - The purpose of memory

Submitted by Sam on 25 June, 2011 - 23:41

If the mind is a control system directed towards the purpose of deciding what to do next, then all components of mind, including memory, must in some way support this goal. Pentti Kanerva, currently a Research Affiliate at the Redwood Center for Theoretical Neuroscience, put forward a theory of memory in 1988 1 consonant with this view, stating that the function of memory is to make available information relevant to the current state of the outside world rapidly enough to allow the organism to predict events in the world, including the consequences of its own actions.

The ability to predict the consequences of actions, however fuzzily, has clear evolutionary benefits. The best way to make predictions is to look at the most recent past and to compare current events with previously encountered similar situations. Consequently, there is a clear evolutionary advantage to a system which can retrieve earlier situations and their consequences, and match them to the various modes of sensory stimulus which constitute the organism's 'present'.

In this model of memory, the present situation as represented by the current pattern of sensory input acts as a retrieval cue for memories of earlier events, which are used to predict the next sensory input. In a continual process of retrieval and comparison, the organism's internal model of the world is created and updated, comparing and strengthening memories of sequences of events which accurately predict real-world consequences, and modifying those that do not. The system learns by this corrective process of comparison, encoding and integrating information into a predictive model of the world, aiding the individual in deciding what to do next.
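
A minimal sketch of this retrieve-predict-correct loop is given below: the current sensory pattern cues the most similar stored situation, whose recorded successor serves as the prediction, and mispredictions cause the observed transition to be stored. This is a toy illustration of the loop described above, not an implementation of Kanerva's Sparse Distributed Memory.

```python
# Retrieval-as-prediction: use the current sensory pattern as a cue, recall the most
# similar stored situation, and predict its recorded successor; store corrections.
import numpy as np

class PredictiveMemory:
    def __init__(self):
        self.cues, self.successors = [], []   # parallel lists of binary patterns

    def predict(self, cue):
        """Return the stored successor of the cue most similar to the current input."""
        if not self.cues:
            return None
        distances = [np.count_nonzero(cue != c) for c in self.cues]  # Hamming distance
        return self.successors[int(np.argmin(distances))]

    def observe(self, cue, actual_next):
        """Corrective step: if the prediction was wrong (or absent), store the transition."""
        predicted = self.predict(cue)
        if predicted is None or np.any(predicted != actual_next):
            self.cues.append(cue.copy())
            self.successors.append(actual_next.copy())

# Toy usage on random 32-bit "sensory patterns".
rng = np.random.default_rng(1)
mem = PredictiveMemory()
a, b = rng.integers(0, 2, 32), rng.integers(0, 2, 32)
mem.observe(a, b)                          # experience the transition a -> b once
print(np.array_equal(mem.predict(a), b))   # True: the memory now predicts b after a
```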

  • 1. Kanerva, Pentti. Sparse Distributed Memory. Cambridge, MA: MIT, 1988. Print.

Day 035 - Visual prostheses

Submitted by Sam on 24 June, 2011 - 23:48

Problems with the eye or the optic nerve are the leading causes of new blindness. In such cases, the visual cortex (the part of the brain responsible for processing the visual information) often remains largely intact, presenting the possibility of recovering some level of sight through the integration of an intracortical visual prosthesis.

Attempts to restore sight through electrical stimulation of the brain have a history stretching back to at least 1918, when two German doctors reported that, during an operation to remove bone fragments left by a bullet wound from a patient's skull, the patient described flickering in the right visual field when the left occipital lobe was electrically stimulated.

Using the same principles as those realized by the rat hippocampal prosthesis recently developed by Dr Berger, intracortical visual prostheses are being developed which could conceivably restore vision by reproducing the patterns of neural activity usually driven by sense-data from the eyes, providing the visual cortex with meaningful sensory input which it no longer receives.

The basic concept for the prosthesis involves implanting electrodes into the visual cortex, connected to an implanted computer chip powered and controlled wirelessly. A camera would feed images to a computer system, which would encode them and transfer them to the cortex digitally, with each 'point' in the image represented by an electrical stimulation through the corresponding implanted electrode. These stimulations would create points of perceived white light (phosphenes) in the appropriate places in the visual field.
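
The sketch below shows roughly what this point-by-point encoding amounts to: a camera frame is downsampled to a coarse grid matching the electrode array, and each cell bright enough to cross a threshold is marked for stimulation. The grid size and threshold are assumptions for illustration; a real prosthesis involves far more signal processing.

```python
# A simple sketch of the point-by-point ("bitmap") encoding described above: reduce a
# grayscale camera frame to a coarse grid and pulse the electrodes whose cells are
# bright enough, producing phosphenes at the corresponding spots in the visual field.
import numpy as np

def to_phosphene_map(frame, grid=(16, 16), threshold=0.5):
    """Downsample a grayscale frame (values in [0, 1]) to a boolean stimulation map."""
    h, w = frame.shape
    gh, gw = grid
    # Average brightness over each grid cell (cropping so the frame divides evenly).
    cells = frame[:h - h % gh, :w - w % gw].reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    return cells > threshold   # True = pulse this electrode

frame = np.random.default_rng(2).random((128, 128))   # stand-in for a camera frame
print(to_phosphene_map(frame).sum(), "of 256 electrodes would be pulsed")
```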

Whilst this bitmap approach to artificial sight is both promising and intuitive, it needs substantial alteration and improvement in order to generate clear, highly-resolved colour imagery, as the visual system does not actually work on a point-by-point basis. Rather than encoding images as pixels, the visual cortex interprets them in terms of colours, edges, orientations and so on. Discovering how these dimensions of vision are encoded in the brain, and how they might be mapped by an artificial system, is among the stated goals of Illinois Tech's IntraCortical Visual Prosthesis Project.

Day 034 - Neural prosthesis

Submitted by Sam on 23 June, 2011 - 22:02

In a paper published last week, biomedical engineers from the University of Southern California detailed how they were able to selectively turn rats' memories on and off using a computer chip. The team artificially constructed neuron-to-neuron connections between the rats' normal brain circuitry and a computer circuit designed to duplicate the neural activity associated with encoding memory, ultimately allowing the scientists to turn certain memories on and off with the flip of a switch. As the lead author of the paper puts it: "Flip the switch on, and the rats remember. Flip it off, and the rats forget” 1.

The prototype cortical prosthesis demonstrated by the team was applied to the information-processing areas of two sub-regions of the rats' hippocampi, areas which have previously been identified as being involved in the formation of long-term memory. The researchers taught the rats a task which involved pressing levers in order to release droplets of water as a reward. During the learning process, the team used electrical probes to record the activity of the hippocampus, which converts short-term memory into long-term memory. By pharmacologically blockading the neural interactions between the two areas, the researchers were able to make their trained rats forget their long-term learned behaviour.

The team then integrated their artificial hippocampal system, which had been programmed to duplicate the patterns of electrical activity associated with the interaction of the two areas of study. When activated, the chip delivered electrical pulses which conformed to the normal firing of the hippocampal output region, thereby simulating the memory-encoding function and restoring long-term memory capability in the pharmacologically blockaded rats.
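
At a very high level, the logic is: learn a mapping from input-region firing to output-region firing while the circuit is intact, then use that mapping to drive stimulation when the natural pathway is blocked. The sketch below uses a plain least-squares linear map on simulated spike counts purely as a stand-in; the actual device is built around a nonlinear multi-input multi-output model of the hippocampal sub-regions.

```python
# Heavily simplified stand-in for the prosthesis logic: fit a map from input-region
# activity to output-region activity, then translate new input activity into the
# stimulation pattern the output region "should" have produced.
import numpy as np

rng = np.random.default_rng(3)

# Simulated stand-in for recorded (binned) spike counts while the circuit is intact:
#   X = input-region activity, Y = output-region activity, rows are time bins.
X = rng.poisson(2.0, size=(500, 16)).astype(float)
true_W = rng.random((16, 8))
Y = X @ true_W + rng.normal(0.0, 0.1, size=(500, 8))

# Fit the mapping from input firing to output firing by least squares.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# With the natural pathway blocked, new input activity is translated into the
# stimulation pattern to deliver to the output region.
new_input = rng.poisson(2.0, size=(1, 16)).astype(float)
stimulation_pattern = new_input @ W
print(stimulation_pattern.round(2))
```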

As well as recovering switched-off memories in compromised rats, the researchers were able to show that the device could actually strengthen memory formation and capacity in rats with a normally functioning hippocampus.

The success of the prosthesis in both restoring and enhancing memory makes it a logical candidate for development into a human-viable prosthesis, with the potential to help sufferers of neurodegenerative diseases such as Alzheimer's, or victims of stroke or brain-injury.

  • 1. "USC - Viterbi School of Engineering - Restoring Memory, Repairing Damaged Brains." USC - Viterbi School of Engineering. 17 June 2011. Web. 23 June 2011