I am a mariner of Odysseus with heart of fire but with mind ruthless and clear

Archive for February, 2010 | Monthly archive page

“The Great Silence” -Stephen Hawking & Others Look At Why Life Has Yet to be Discovered Beyond Earth

In astronomy, sci-fi on February 28, 2010 at 10:23 am

“The idea that we are the only intelligent creatures in a cosmos of a hundred billion galaxies is so preposterous that there are very few astronomers today who would take it seriously. It is safest to assume therefore, that they are out there and to consider the manner in which this may impinge upon human society.”

Arthur C. Clarke, author of 2001: A Space Odyssey

One of the greatest philosophical and scientific challenges that currently confronts humanity is the unsolved question of the existence of extraterrestrial intelligence.

The Fermi paradox is the apparent contradiction between high estimates of the probability of the existence of extraterrestrial civilizations and the lack of evidence for or contact with such civilizations.

The 14-billion-year age of the universe, its 130 billion galaxies, and a Milky Way galaxy with some 400 billion stars suggest that, if the Earth is typical, intelligent life should be common. Nobel laureate Enrico Fermi, discussing this observation with colleagues over lunch in 1950, asked, logically: “Where are they?” Why, if advanced extraterrestrial civilizations exist in our Milky Way galaxy, hasn’t evidence such as probes, spacecraft, or radio transmissions been found?

As our technologies become ever more sophisticated and the search for extraterrestrial intelligence continues to fail, the “Great Silence” becomes louder than ever. The seemingly empty cosmos is screaming out to us that something is amiss. Or is it?

Using a computer simulation of our own galaxy, the Milky Way, Rasmus Bjork, a physicist at the Niels Bohr Institute in Copenhagen, proposed an answer to the Fermi paradox: an alien civilization might build interstellar probes and launch them on missions to search for life.

He found, however, that even if the alien ships could hurtle through space at a tenth of the speed of light, or 30,000 km a second (NASA’s current Cassini mission to Saturn is gliding along at 32 km a second), it would take 10 billion years, roughly three-quarters of the age of the universe, to explore a mere four percent of the galaxy.
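The speed figures above can be sanity-checked with a few lines of arithmetic. This is an illustrative back-of-envelope sketch only, not Bjork's actual simulation, which models fleets of probes hopping between stars:

```python
# Back-of-envelope arithmetic for the probe-speed figures quoted above.

C_KM_S = 299_792.458          # speed of light in km/s
probe_speed = 0.1 * C_KM_S    # a tenth of light speed, ~29,979 km/s
cassini_speed = 32.0          # km/s, as quoted in the text

# How many times faster than Cassini would such a probe be?
ratio = probe_speed / cassini_speed
print(f"Probe speed: {probe_speed:,.0f} km/s (~{ratio:,.0f}x Cassini)")

# Even a single crossing of the Milky Way (~100,000 light-years across)
# takes a million years at that speed:
galaxy_diameter_ly = 100_000
crossing_time_years = galaxy_diameter_ly / 0.1   # distance in ly / speed in c
print(f"One galactic crossing: {crossing_time_years:,.0f} years")
```

A single crossing is only the beginning: visiting enough individual stars to survey even a few percent of the galaxy multiplies that figure enormously, which is where the 10-billion-year estimate comes from.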

Like humans, alien civilizations could shorten the time to find extra-terrestrials by picking up television and radio broadcasts that might leak from colonized planets. “Even then,” he reported, “unless they can develop an exotic form of transport that gets them across the galaxy in two weeks it’s still going to take millions of years to find us. There are so many stars in the galaxy that probably life could exist elsewhere, but will we ever get in contact with them? Not in our lifetime.”

The problem of distance is compounded by the fact that timescales that provide a “window of opportunity” for detection or contact might be quite small. Advanced civilizations may periodically arise and fall throughout our galaxy as they do here, on Earth, but this may be such a rare event, relatively speaking, that the odds of two or more such civilizations existing at the same time are low.

In short, there may have been intelligent civilizations in the galaxy before the emergence of intelligence on Earth, and there may be intelligent civilizations after its extinction, but it is possible that human beings are the only intelligent civilization in existence “now.” Here “now” assumes that extraterrestrial intelligences cannot travel to our vicinity at faster-than-light speeds: to detect an intelligence 1,000 light-years distant, that intelligence would need to have been active 1,000 years ago.
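The window-of-opportunity argument above can be illustrated with a toy Monte Carlo: assume civilizations arise at random moments in ten billion years of galactic history and each lasts a million years (both numbers are arbitrary assumptions for illustration), then count how often any two coexist:

```python
# Toy Monte Carlo of the "window of opportunity" idea: how often do two
# finite-lifetime civilizations overlap in time? (Illustrative assumptions,
# not a published model.)
import random

random.seed(42)
GALAXY_AGE = 10_000_000_000   # years of habitable history (assumed)
LIFETIME = 1_000_000          # assumed technological lifetime of a civilization

def overlap(n_civs: int, trials: int = 10_000) -> float:
    """Fraction of trials in which at least two civilizations coexist."""
    hits = 0
    for _ in range(trials):
        starts = sorted(random.uniform(0, GALAXY_AGE) for _ in range(n_civs))
        # Two civilizations coexist if one starts before the previous one ends.
        if any(b - a < LIFETIME for a, b in zip(starts, starts[1:])):
            hits += 1
    return hits / trials

print(overlap(10))   # with ten million-year civilizations, overlap is rare
```

Under these assumptions, even ten civilizations per galactic history coexist less than a few percent of the time, which is the quantitative heart of the "only intelligent civilization in existence now" possibility.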

There is also a possibility that archaeological evidence of past civilizations may be detected through deep space observations — especially if they left behind large artifacts such as Dyson spheres.

Perhaps. But in our search for life and intelligence we have to keep in mind that the Milky Way galaxy is two or three times the age of our Solar System, so there will be societies out there that are millions of years, maybe more, beyond ours. Such societies may have moved beyond biology altogether, inventing intelligent, self-replicating machines, and the first thing we find may be something artificially constructed, if we have the ability to recognize it as such. It may very well be that our greatest discovery will be that the very nature of alien communication prevents us from being able to communicate with it.

In his famous lecture on Life in the Universe, Stephen Hawking asks: “What are the chances that we will encounter some alien form of life, as we explore the galaxy?”

If the argument about the time scale for the appearance of life on Earth is correct, Hawking says “there ought to be many other stars, whose planets have life on them. Some of these stellar systems could have formed 5 billion years before the Earth. So why is the galaxy not crawling with self-designing mechanical or biological life forms?”

Why hasn’t the Earth been visited, and even colonized? Hawking asks. “I discount suggestions that UFO’s contain beings from outer space. I think any visits by aliens, would be much more obvious, and probably also, much more unpleasant.”

Hawking continues: “What is the explanation of why we have not been visited? One possibility is that the argument, about the appearance of life on Earth, is wrong. Maybe the probability of life spontaneously appearing is so low, that Earth is the only planet in the galaxy, or in the observable universe, in which it happened. Another possibility is that there was a reasonable probability of forming self reproducing systems, like cells, but that most of these forms of life did not evolve intelligence.”

We are used to thinking of intelligent life as an inevitable consequence of evolution, Hawking emphasized, but it is more likely that evolution is a random process, with intelligence as only one of a large number of possible outcomes.

Intelligence, Hawking believes, contrary to our human-centric assumptions, may not have any long-term survival value. By comparison, the microbial world will live on even if all other life on Earth is wiped out by our actions. Hawking’s main insight, drawn from the chronology of evolution, is that intelligence was an unlikely development for life on Earth: “It took a very long time, two and a half billion years, to go from single cells to multi-cell beings, which are a necessary precursor to intelligence. This is a good fraction of the total time available, before the Sun blows up. So it would be consistent with the hypothesis, that the probability for life to develop intelligence, is low. In this case, we might expect to find many other life forms in the galaxy, but we are unlikely to find intelligent life.”

Another possibility is that there is a reasonable probability for life to form, and to evolve to intelligent beings, but at some point in their technological development “the system becomes unstable, and the intelligent life destroys itself. This would be a very pessimistic conclusion. I very much hope it isn’t true.”

Hawking prefers another possibility: that there are other forms of intelligent life out there, but that we have been overlooked. If we should pick up signals from alien civilizations, Hawking warns, “we should be wary of answering back, until we have evolved” a bit further. Meeting a more advanced civilization at our present stage, Hawking says, “might be a bit like the original inhabitants of America meeting Columbus. I don’t think they were better off for it.”


Cell-inspired electronics

In computer science on February 26, 2010 at 3:11 pm


A single cell in the human body is approximately 10,000 times more energy-efficient than any nanoscale digital transistor, the fundamental building block of electronic chips. In one second, a cell performs about 10 million energy-consuming chemical reactions, which altogether require about one picowatt (one millionth millionth of a watt) of power.
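The quoted figures imply a fixed energy budget per reaction, which a one-line calculation makes explicit:

```python
# Checking the cell-efficiency numbers quoted above: ~10 million reactions
# per second on ~1 picowatt works a fixed energy budget per reaction.
picowatt = 1e-12        # watts, i.e. joules per second
reactions_per_s = 10e6  # ~10 million chemical reactions per second

energy_per_reaction = picowatt / reactions_per_s
print(f"{energy_per_reaction:.0e} J per reaction")
```

That works out to about 10^-19 joules, only a few tens of units of thermal energy (kT) per reaction at body temperature, which is why the cell is such a demanding benchmark for low-power electronics.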

MIT’s Rahul Sarpeshkar is now applying architectural principles from these ultra-energy-efficient cells to the design of low-power, highly parallel, hybrid analog-digital electronic circuits. Such circuits could one day be used to create ultra-fast supercomputers that predict complex cell responses to drugs. They may also help researchers to design synthetic genetic circuits in cells.

In his new book, Ultra Low Power Bioelectronics (Cambridge University Press, 2010), Sarpeshkar outlines the deep underlying similarities between chemical reactions that occur in a cell and the flow of current through an analog circuit. He discusses how biological cells perform reliable computation with unreliable components and noise (random variations in signals, whether electronic or genetic). Future circuits built with similar design principles can be made robust to electronic noise and unreliable components while remaining highly energy-efficient. Promising applications include image processors in cell phones and brain implants for the blind.

“Circuits are a language for representing and trying to understand almost anything, whether it be networks in biology or cars,” says Sarpeshkar, an associate professor of electrical engineering and computer science. “There’s a unified way of looking at the biological world through circuits that is very powerful.”

Circuit designers already know hundreds of strategies to run analog circuits at low power, amplify signals, and reduce noise, which have helped them design low-power electronics such as mobile phones, mp3 players and laptop computers.

“Here’s a field that has devoted 50 years to studying the design of complex systems,” says Sarpeshkar, referring to electrical engineering. “We can now start to think of biology in the same way.” He hopes that physicists, engineers, biologists and biological engineers will work together to pioneer this new field, which he has dubbed “cytomorphic” (cell-inspired or cell-transforming) electronics.

Finding connections

Sarpeshkar, an electrical engineer with many years of experience in designing low-power and biomedical circuits, has frequently turned his attention to finding and exploiting links between electronics and biology. In 2009, he designed a low-power radio chip that mimics the structure of the human cochlea to separate and process cell phone, Internet, radio and television signals more rapidly and with more energy efficiency than had been believed possible.

That chip, known as the RF (radio frequency) cochlea, is an example of “neuromorphic electronics,” a 20-year-old field founded by Carver Mead, Sarpeshkar’s thesis advisor at Caltech. Neuromorphic circuits mimic biological structures found in the nervous system, such as the cochlea, retina and brain cells.

Sarpeshkar’s expansion from neuromorphic to cytomorphic electronics is based on his analysis of the equations that govern the dynamics of chemical reactions and the flow of electrons through analog circuits. He has found that those equations, which predict the reaction’s (or circuit’s) behavior, are astonishingly similar, even in their noise properties.

Chemical reactions (for example, the formation of water from hydrogen and oxygen) only occur at a reasonable rate if enough energy is available to lower the barriers that prevent them from occurring; a catalyst such as an enzyme can lower such barriers. Similarly, in a transistor, the input voltage supplies the energy that lowers the barrier electrons must cross to flow from the transistor’s source to its drain. Raising the input voltage lowers the barrier and increases the current, just as adding an enzyme to a chemical reaction speeds it up.
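Both sides of the analogy are exponential laws. A minimal numeric sketch using the textbook Arrhenius and subthreshold-transistor forms (standard expressions, not equations taken from Sarpeshkar's book):

```python
# The barrier analogy is exponential in both domains: reaction rates follow
# Arrhenius' law, and subthreshold transistor current follows a Boltzmann
# exponential in gate voltage.
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
q = 1.602176634e-19  # elementary charge, C
T = 300.0            # room temperature, K

def arrhenius_rate(prefactor: float, barrier_joules: float) -> float:
    """Reaction rate ~ A * exp(-Ea / kT)."""
    return prefactor * math.exp(-barrier_joules / (k_B * T))

def subthreshold_current(i0: float, v_gs: float, n: float = 1.0) -> float:
    """Subthreshold drain current ~ I0 * exp(q*Vgs / (n*kT))."""
    return i0 * math.exp(q * v_gs / (n * k_B * T))

# Lowering a chemical barrier by one kT speeds the reaction by e ~ 2.718x,
# exactly as raising the gate voltage by one thermal voltage (~26 mV) does:
vt = k_B * T / q                                   # thermal voltage, ~0.026 V
print(arrhenius_rate(1.0, 0.0) / arrhenius_rate(1.0, k_B * T))
print(subthreshold_current(1.0, vt) / subthreshold_current(1.0, 0.0))
```

The identical factor of e in both ratios is the quantitative core of the similarity Sarpeshkar exploits.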

Essentially, cells may be viewed as circuits that use molecules, ions, proteins and DNA instead of electrons and transistors. That analogy suggests that it should be possible to build electronic chips — what Sarpeshkar calls “cellular chemical computers” — that mimic chemical reactions very efficiently and on a very fast timescale.

One potentially powerful application of such circuits is in modeling genetic networks: the interplay of genes and proteins that controls a cell’s function and fate. In a paper presented at the 2009 IEEE Symposium on Biological Circuits and Systems, Sarpeshkar designed a circuit that allows any genetic network reaction to be simulated on a chip. For example, circuits can simulate the interactions between genes involved in lactose metabolism and the transcription factors that regulate their expression in bacterial cells.
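The kind of genetic-network dynamics such a chip would emulate can be sketched with a standard Hill-repression model, integrated here with simple Euler steps. The gene and all parameters are hypothetical, chosen for illustration; these are not Sarpeshkar's circuit equations:

```python
# A gene repressed by a transcription factor, modeled with a Hill function.
# Production falls as repressor concentration rises; degradation is linear.
def simulate_repressed_gene(repressor: float, steps: int = 10_000,
                            dt: float = 0.01) -> float:
    alpha = 10.0   # maximal production rate (assumed units)
    gamma = 1.0    # degradation rate
    K = 1.0        # repression threshold
    n = 2          # Hill coefficient (cooperativity)
    protein = 0.0
    for _ in range(steps):
        production = alpha / (1.0 + (repressor / K) ** n)
        protein += dt * (production - gamma * protein)
    return protein   # steady state ~ production / gamma

print(simulate_repressed_gene(repressor=0.0))   # fully induced: ~10
print(simulate_repressed_gene(repressor=3.0))   # strongly repressed: ~1
```

An analog chip would evaluate exactly this kind of nonlinear rate law in continuous time, rather than stepping through it numerically as a digital simulation must.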

In the long term, Sarpeshkar plans to develop circuits that mimic interactions within entire cellular genomes, which are important in enabling scientists to understand and treat complex diseases such as cancer and diabetes. Eventually, researchers may be able to use such chips to simulate the entire human body, he believes. Such chips would be much faster than today’s computer simulations, which are highly inefficient at modeling the effects of noise in the large-scale nonlinear circuits within cells.

He is also investigating how circuit design principles can help genetically engineer cells to perform useful functions, for example, the robust and sensitive detection of toxins in the environment.

Sarpeshkar’s focus on modeling cells as analog rather than digital circuits offers a new approach that will expand the frontiers of synthetic biology, says James Collins, professor of biomedical engineering at Boston University. “Rahul has nicely laid a foundation that many of us in synthetic biology will be able to build on,” he says.

Provided by Massachusetts Institute of Technology

Basic quantum computing circuit built

In computer science, physics, science on February 26, 2010 at 3:01 pm

Exerting delicate control over a pair of atoms within a mere seven-millionths-of-a-second window of opportunity, physicists at the University of Wisconsin-Madison created an atomic circuit that may help quantum computing become a reality.

Quantum computing represents a new paradigm in information processing that may complement classical computers. Much of the dizzying rate of increase in traditional computing power has come as transistors shrink and pack more tightly onto chips — a trend that cannot continue indefinitely.

“At some point in time you get to the limit where a single transistor that makes up an electronic circuit is one atom, and then you can no longer predict how the transistor will work with classical methods,” explains UW-Madison physics professor Mark Saffman. “You have to use the physics that describes atoms: quantum mechanics.”

At that point, he says, “you open up completely new possibilities for processing information. There are certain calculational problems… that can be solved exponentially faster on a quantum computer than on any foreseeable classical computer.”

With fellow physics professor Thad Walker, Saffman successfully used neutral atoms to create what is known as a controlled-NOT (CNOT) gate, a basic type of circuit that will be an essential element of any quantum computer. As described in a paper published Jan. 8, the work is the first demonstration of a quantum gate between two uncharged atoms.

The use of neutral atoms rather than charged ions or other materials distinguishes the achievement from previous work. “The current gold standard in experimental quantum computing has been set by trapped ions… People can run small programs now with up to eight ions in traps,” says Saffman.

However, to be useful for computing applications, systems must contain enough quantum bits, or qubits, to be capable of running long programs and handling more complex calculations. An ion-based system presents challenges for scaling up because ions are highly interactive with each other and their environment, making them difficult to control.

“Neutral atoms have the advantage that in their ground state they don’t talk to each other, so you can put more of them in a small region without having them interact with each other and cause problems,” Saffman says. “This is a step forward toward creating larger systems.”

The team used a combination of lasers, extreme cold (a fraction of a degree above absolute zero), and a powerful vacuum to immobilize two rubidium atoms within “optical traps.” They used another laser to excite the atoms to a high-energy state to create the CNOT quantum gate between the two atoms, also achieving a property called entanglement in which the states of the two atoms are linked such that measuring one provides information about the other.
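The gate-plus-entanglement sequence described above can be reproduced with plain matrix arithmetic: a Hadamard rotation on the control qubit followed by a CNOT turns the separable state |00⟩ into the Bell state (|00⟩ + |11⟩)/√2. A minimal sketch of the idealized two-qubit math, not the experimental protocol:

```python
# Two-qubit state amplitudes in basis order |00>, |01>, |10>, |11>.
import math

def hadamard_on_control(state):
    """Apply a Hadamard gate to the first (control) qubit."""
    s = 1 / math.sqrt(2)
    a, b, c, d = state
    return [s * (a + c), s * (b + d), s * (a - c), s * (b - d)]

def cnot(state):
    """Flip the target (second) qubit when the control qubit is |1>."""
    a, b, c, d = state
    return [a, b, d, c]

state = [1.0, 0.0, 0.0, 0.0]        # start in the separable state |00>
bell = cnot(hadamard_on_control(state))
print(bell)                          # [0.707..., 0.0, 0.0, 0.707...]
```

The resulting amplitudes on |00⟩ and |11⟩ only (with nothing on |01⟩ or |10⟩) are exactly the "measuring one tells you about the other" correlation the article describes.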

Writing in the same journal issue, another team also entangled neutral atoms but without the CNOT gate. Creating the gate is advantageous because it allows more control over the states of the atoms, Saffman says, as well as demonstrating a fundamental aspect of an eventual quantum computer.

The Wisconsin group is now working toward arrays of up to 50 atoms to test the feasibility of scaling up their methods. They are also looking for ways to link qubits stored in atoms with qubits stored in light with an eye toward future communication applications, such as “quantum internets.”

Source: http://www.physorg.com/print186333950.html

Bigger, more advanced brains are “slower”

In brain, science on February 26, 2010 at 2:51 pm

Analyzing the neuroanatomical and functional differences that emerged from a detailed comparison of the brains of macaque monkeys, chimpanzees, and humans, scientists found that the dramatic enlargement of the human brain over the course of evolution had, among other consequences, a marked functional slowness in this precious organ of thought.

What visibly sets us apart from the other primate mammals is our exceptionally large brain. Human language and consciousness, the making of tools and works of art, are no longer regarded as the enigmatic faculties of an immaterial soul but as products of the function and communication of our enlarged brains. Exactly what consequences the emergence of the oversized human brain had during evolution, and continues to have today, is clearly a question of decisive importance for human self-knowledge. Hoping to shed some light on this obscure question, an international team of brain researchers decided to study comparatively the anatomical and neurobiological differences between the bulky human brain and the much smaller brains of macaques and chimpanzees. The latter are considered our closest biological relatives.

The conclusions of this important study were published a few days ago in the prestigious American journal Proceedings of the National Academy of Sciences (PNAS). The extensive article is signed by the heads of the research groups that collaborated on the study: Roberto Caminiti (Italy), Hassan Ghaziri (Switzerland), Ralf Galuske (Germany), Patrick Hof (USA), and Giorgio Innocenti (Sweden).

It has long been known that the brains of all higher mammals, that is, of primates (monkeys and humans), show striking similarities in their basic anatomical and functional organization, a fact that supports the hypothesis of a shared evolutionary history in the distant past. Despite these evident anatomical homologies, that is, the common anatomical and functional structures of the primate brain, there is one fundamental difference: size. The human brain, at about 1,350 cubic centimeters, is without doubt the largest brain ever to appear in the course of evolution. Its size is enormous relative to the comparatively small size of the human body.

Indeed, our brain is 2.3 times larger than one would expect from the body-to-brain ratios that prevail in the other primates (for example, chimpanzees), and 3 times larger than the brains of the remaining mammals. According to all the paleontological evidence available today, this striking divergence in the volume of the human brain did not happen all at once, but resulted from a long, and by no means linear, evolutionary enlargement of the brains of early humans.

In this recent study the neuroscientists examined comparatively how the cerebral hemispheres of different primates (macaques, chimpanzees, humans) communicate. They found that the various brain regions within each hemisphere communicate through nerve fibers that differ considerably in both diameter and length, and consequently carry neural information from one hemisphere to the other at very different speeds. The larger the diameter and the shorter the length of the nerve fibers connecting different brain regions, the faster the transfer of information. Conversely, the thinner and longer the nerve fibers connecting regions in opposite hemispheres, the slower the transmission of information through them.
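The trade-off described above is, at bottom, delay = path length / conduction velocity, where velocity grows with fiber diameter. A rough sketch using the standard physiology rule of thumb of about 6 m/s of conduction velocity per micron of diameter for myelinated axons (an assumed textbook scaling, not a figure from this study):

```python
# Rough interhemispheric-delay arithmetic: delay is path length divided by
# conduction velocity, and velocity scales with fiber diameter (assumed
# ~6 m/s per micron, a standard rule of thumb for myelinated axons).
def interhemispheric_delay_ms(path_length_cm: float, diameter_um: float) -> float:
    velocity_m_s = 6.0 * diameter_um            # assumed linear scaling
    return (path_length_cm / 100.0) / velocity_m_s * 1000.0

# Same fiber diameter, longer path in a bigger brain -> longer delay:
print(interhemispheric_delay_ms(10.0, 1.0))     # ~16.7 ms over a 10 cm path
print(interhemispheric_delay_ms(5.0, 1.0))      # ~8.3 ms over a 5 cm path
```

This is exactly the study's point: if fiber diameter stays roughly constant while the brain (and hence the path length) grows, interhemispheric delays must grow with it.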

“The large dimensions of the human brain, and the anatomical and functional asymmetry of its two hemispheres, indicate that the connections between the hemispheres underwent substantial reorganization during primate evolution. The times required for interactions between the hemispheres are a limit, a constraint, that played an enormous role in these processes of reorganization,” the authors write. When they compared the outsized human brain with that of the chimpanzee, they found, surprisingly, that the diameter of the nerve fibers connecting the most extensive brain regions in each hemisphere of the enlarged human brain was roughly the same as that of the corresponding chimpanzee fibers. It is truly striking: the modern human brain has roughly the same interconnections for linking its two hemispheres as those possessed by the early Australopithecines. This amounts to an evolutionary trade-off: the expansion and growing complexity of brain tissue was offset by slower transfer of neural information within the brain. The more a brain grows in size, that is, the more the boundaries of the different brain regions in each hemisphere shift, the narrower the nerve fibers that connect them and hence the slower their mutual communication.

For example, they found that the evolutionarily older motor and sensory regions of the brain communicate much faster than the associative regions responsible for the higher, more abstract mental functions: between the motor cortex and the frontal associative cortex, the transmission speed of neural signals differs by nearly a factor of two.

The rediscovery of “slowness” as a typical feature of the most complex biological systems is a clear challenge for our era, which elevates dizzying rates of growth into its supreme virtue and uncritically promotes speed as an obvious, self-evident value.

Source: Eleftherotypia

Scientists reveal driving force behind evolution

In evolution, science on February 25, 2010 at 4:11 pm

Scientists at the University of Liverpool have provided the first experimental evidence that shows that evolution is driven most powerfully by interactions between species, rather than adaptation to the environment.

The team observed viruses as they evolved over hundreds of generations to infect bacteria. They found that when the bacteria could evolve defences, the viruses evolved at a quicker rate and generated greater diversity, compared to situations where the bacteria were unable to adapt to the viral infection.

The study shows, for the first time, that the American evolutionary biologist Leigh Van Valen was correct in his ‘Red Queen Hypothesis’. The theory, first put forward in the 1970s, was named after a passage in Lewis Carroll’s Through the Looking Glass in which the Red Queen tells Alice, ‘It takes all the running you can do to keep in the same place’. This suggested that species are in a constant race for survival and have to continue to evolve new ways of defending themselves throughout time.

Dr Steve Paterson, from the University’s School of Biosciences, explains: “Historically, it was assumed that most evolution was driven by a need to adapt to the environment or habitat. The Red Queen Hypothesis challenged this by pointing out that actually most evolutionary change will arise from co-evolutionary interactions with other species, not from interactions with the environment.

“This suggested that evolutionary change was created by ‘tit-for-tat’ adaptations by species in constant combat. This theory is widely accepted in the science community, but this is the first time we have been able to show evidence of it in an experiment with living things.”

Dr Michael Brockhurst said: “We used fast-evolving viruses so that we could observe hundreds of generations of evolution. We found that for every viral strategy of attack, the bacteria would adapt to defend itself, which triggered an endless cycle of co-evolutionary change. We compared this with evolution against a fixed target, by disabling the bacteria’s ability to adapt to the virus.

“These experiments showed us that co-evolutionary interactions between species result in more genetically diverse populations, compared to instances where the host was not able to adapt to the parasite. The virus was also able to evolve twice as quickly when the bacteria were allowed to evolve alongside it.”
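The contrast between the two experimental conditions can be caricatured in a few lines: let a parasite chase either a co-evolving host or a fixed host, and count how many distinct parasite types appear. This is a toy model with arbitrary parameters, not the Liverpool protocol:

```python
# Toy Red Queen dynamics: a parasite chasing a co-evolving host explores many
# more genetic types than one facing a fixed host. (Illustrative caricature.)
import random

def evolve(host_coevolves: bool, generations: int = 200, seed: int = 1) -> int:
    rng = random.Random(seed)
    host, parasite = 0, 0
    seen = {parasite}                     # distinct parasite types observed
    for _ in range(generations):
        if parasite == host:              # parasite matches the host...
            if host_coevolves and rng.random() < 0.5:
                host = rng.randint(0, 9)  # ...so the host evolves a new defence
        else:
            parasite = rng.randint(0, 9)  # parasite chases the new host type
            seen.add(parasite)
    return len(seen)

print(evolve(host_coevolves=True))    # co-evolution: many types explored
print(evolve(host_coevolves=False))   # fixed host: parasite settles at once
```

With a fixed host the chase ends immediately and diversity stays at one type; with a co-evolving host the arms race keeps generating new variants, the qualitative pattern the sequencing data showed.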

The team used high-throughput DNA sequencing technology at the Centre for Genomic Research to sequence thousands of virus genomes. The next stage of the research is to understand how co-evolution differs when interacting species help, rather than harm, one another.

The research is published in the journal Nature.

Source: http://www.physorg.com/news186311100.html

Scientists find first physiological evidence of brain’s response to inequality

In brain, psychology on February 25, 2010 at 11:50 am


This sagittal view of the brain shows activity in both the ventromedial prefrontal cortex and the ventral striatum.

Credit: Elizabeth Tricomi, Rutgers University

Specifically, the team found that the reward centers in the human brain respond more strongly when a poor person receives a monetary reward than when a rich person does. The surprising thing? This activity pattern holds true even if the brain being looked at is in the rich person’s head, rather than the poor person’s.

These conclusions, and the functional magnetic resonance imaging (fMRI) studies that led to them, are described in the February 25 issue of the journal Nature.

“This is the latest picture in our gallery of human nature,” says Colin Camerer, the Robert Kirby Professor of Behavioral Economics at Caltech and one of the paper’s coauthors. “It’s an exciting area of research; we now have so many tools with which to study how the brain is reacting.”

It’s long been known that we humans don’t like inequality, especially when it comes to money. Tell two people working the same job that their salaries are different, and there’s going to be trouble, notes John O’Doherty, professor of psychology at Caltech, Thomas N. Mitchell Professor at the Trinity College Institute of Neuroscience, and the principal investigator on the Nature paper.

But what was unknown was just how hardwired that dislike really is. “In this study, we’re starting to get an idea of where this inequality aversion comes from,” he says. “It’s not just the application of a social rule or convention; there’s really something about the basic processing of rewards in the brain that reflects these considerations.”

The brain processes “rewards” (things like food, money, and even pleasant music, which create positive responses in the body) in areas such as the ventromedial prefrontal cortex (VMPFC) and the ventral striatum.

In a series of experiments, former Caltech postdoctoral scholar Elizabeth Tricomi (now an assistant professor of psychology at Rutgers University)—along with O’Doherty, Camerer, and Antonio Rangel, associate professor of economics at Caltech—watched how the VMPFC and ventral striatum reacted in 40 volunteers who were presented with a series of potential money-transfer scenarios while lying in an fMRI machine.

For instance, a participant might be told that he could be given $50 while another person could be given $20; in a second scenario, the same participant might have a potential gain of only $5 and the other person, $50. The fMRI images allowed the researchers to see how each volunteer’s brain responded to each proposed money allocation.

But there was a twist. Before the imaging began, each participant in a pair was randomly assigned to one of two conditions: One participant was given what the researchers called “a large monetary endowment” ($50) at the beginning of the experiment; the other participant started from scratch, with no money in his or her pocket.

As it turned out, the way the volunteers—or, to be more precise, the reward centers in the volunteers’ brains—reacted to the various scenarios depended strongly upon whether they started the experiment with a financial advantage over their peers.

“People who started out poor had a stronger brain reaction to things that gave them money, and essentially no reaction to money going to another person,” Camerer says. “By itself, that wasn’t too surprising.”

What was surprising was the other side of the coin. “In the experiment, people who started out rich had a stronger reaction to other people getting money than to themselves getting money,” Camerer explains. “In other words, their brains liked it when others got money more than they liked it when they themselves got money.”

“We now know that these areas are not just self-interested,” adds O’Doherty. “They don’t exclusively respond to the rewards that one gets as an individual, but also respond to the prospect of other individuals obtaining a reward.”

What was especially interesting about the finding, he says, is that the brain responds “very differently to rewards obtained by others under conditions of disadvantageous inequality versus advantageous inequality. It shows that the basic reward structures in the human brain are sensitive to even subtle differences in social context.”

This, O’Doherty notes, is somewhat contrary to the prevailing views about human nature. “As a psychologist and cognitive neuroscientist who works on reward and motivation, I very much view the brain as a device designed to maximize one’s own self interest,” says O’Doherty. “The fact that these basic brain structures appear to be so readily modulated in response to rewards obtained by others highlights the idea that even the basic reward structures in the human brain are not purely self-oriented.”

Camerer, too, found the results thought provoking. “We economists have a widespread view that most people are basically self-interested, and won’t try to help other people,” he says. “But if that were true, you wouldn’t see these sort of reactions to other people getting money.”

Still, he says, it’s likely that the reactions of the “rich” participants were at least partly motivated by self-interest—or a reduction of their own discomfort. “We think that, for the people who start out rich, seeing another person get money reduces their guilt over having more than the others.”

Having watched the brain react to inequality, O’Doherty says, the next step is to “try to understand how these changes in valuation actually translate into changes in behavior. For example, the person who finds out they’re being paid less than someone else for doing the same job might end up working less hard and being less motivated as a consequence. It will be interesting to try to understand the brain mechanisms that underlie such changes.”

Source: http://www.physorg.com/news186238210.html

MIT Team Offers ‘Snapshot’ of Life in Other Universes

In astronomy on February 25, 2010 at 11:29 am

Modern cosmology theory holds that our universe may be just one in a vast collection of universes known as the multiverse. MIT physicist Alan Guth has suggested that new universes (known as “pocket universes”) are constantly being created, but they cannot be seen from our universe.

In this view, “nature gets a lot of tries — the universe is an experiment that’s repeated over and over again, each time with slightly different physical laws, or even vastly different physical laws,” says MIT physics professor Robert Jaffe.

Some of these universes would collapse instants after forming; in others, the forces between particles would be so weak they could not give rise to atoms or molecules. However, if conditions were suitable, matter would coalesce into galaxies and planets, and if the right elements were present in those worlds, intelligent life could evolve.

Some physicists have theorized that only universes in which the laws of physics are “just so” could support life, and that if things were even a little bit different from our world, intelligent life would be impossible. In that case, our physical laws might be explained “anthropically,” meaning that they are as they are because if they were otherwise, no one would be around to notice them.

Jaffe and his collaborators felt that this proposed anthropic explanation should be subjected to more careful scrutiny, and decided to explore whether universes with different physical laws could support life.

Whether life exists elsewhere in our universe is a longstanding mystery. But for some scientists, there’s another interesting question: could there be life in a universe significantly different from our own?

In work recently featured in a cover story in Scientific American, Jaffe, former MIT postdoc Alejandro Jenkins, and recent MIT graduate Itamar Kimchi showed that universes quite different from ours still have elements similar to carbon, hydrogen, and oxygen, and could therefore evolve life forms quite similar to us. Even when the masses of the elementary particles are dramatically altered, life may find a way.

“You could change them by significant amounts without eliminating the possibility of organic chemistry in the universe,” says Jenkins.

Although bizarre life forms might exist in universes different from ours, Jaffe and his collaborators decided to focus on life based on carbon chemistry. They defined as “congenial to life” those universes in which stable forms of hydrogen, carbon and oxygen would exist.

“If you don’t have a stable entity with the chemistry of hydrogen, you’re not going to have hydrocarbons, or complex carbohydrates, and you’re not going to have life,” says Jaffe. “The same goes for carbon and oxygen. Beyond those three we felt the rest is detail.”

They set out to see what might happen to those elements if they altered the masses of elementary particles called quarks. There are six types of quarks, which are the building blocks of protons and neutrons. The MIT team focused on the “up”, “down” and “strange” quarks, the lightest of the six, which join together to form protons, neutrons and closely related particles called “hyperons.”

In our universe, the down quark is about twice as heavy as the up quark, resulting in neutrons that are 0.1 percent heavier than protons. Jaffe and his colleagues modeled one family of universes in which the down quark was lighter than the up quark, and protons were up to a percent heavier than neutrons. In this scenario, hydrogen would no longer be stable, but its slightly heavier isotopes deuterium or tritium could be. An isotope of carbon known as carbon-14 would also be stable, as would a form of oxygen, so the organic reactions necessary for life would be possible.
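The arithmetic behind this up/down swap can be illustrated with a crude linear toy model of the neutron-proton mass split (illustrative numbers and a simplified formula only; this is not the actual calculation Jaffe's team performed):

```python
# Toy estimate of the neutron-proton mass difference as quark masses vary.
# Assumed illustrative values: m_u ~ 2.2 MeV, m_d ~ 4.7 MeV, and a rough
# electromagnetic contribution that makes the proton slightly heavier.

M_NUCLEON = 939.0   # MeV, approximate nucleon mass
EM_SHIFT = -1.2     # MeV, rough electromagnetic contribution to (n - p)

def neutron_minus_proton(m_up, m_down):
    """Crude linear model: the n - p split tracks (m_d - m_u) plus an EM term."""
    return (m_down - m_up) + EM_SHIFT

# Our universe: the down quark is roughly twice as heavy as the up quark,
# giving a split of order 0.1 percent of the nucleon mass, as in the article.
split = neutron_minus_proton(2.2, 4.7)
print(f"n - p = {split:.2f} MeV ({100 * split / M_NUCLEON:.2f}% of nucleon mass)")

# Jaffe-style variant: swap the masses so the down quark is lighter.
# The split goes negative -- protons heavier than neutrons -- so ordinary
# hydrogen-1 is unstable, though deuterium or tritium could survive.
print(f"swapped: n - p = {neutron_minus_proton(4.7, 2.2):.2f} MeV")
```

In this toy picture, flipping the sign of the quark mass difference is enough to flip which nucleon is stable, which is the qualitative point of the paragraph above.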

The team found a few other congenial universes, including a family where the up and strange quarks have roughly the same mass (in our universe, strange quarks are much heavier and can only be produced in high-energy collisions), while the down quark would be much lighter. In such a universe, atomic nuclei would be made of neutrons and a hyperon called the “sigma minus,” which would replace protons. They published their findings in the journal Physical Review D last year.

Jaffe and his collaborators focused on quarks because they know enough about quark interactions to predict what will happen when their masses change. However, “any attempt to address the problem in a broader context is going to be very difficult,” says Jaffe, because physicists are limited in their ability to predict the consequences of changing most other physical laws and constants.

A group of researchers at Lawrence Berkeley National Laboratory has done related studies examining whether congenial universes could arise even while lacking one of the four fundamental forces of our universe — the weak nuclear force, which enables the reactions that turn neutrons into protons, and vice versa. The researchers showed that tweaking the other three fundamental forces could compensate for the missing weak nuclear force and still allow stable elements to be formed.

That study and the MIT work are different from most other studies in this area in that they examined more than one constant. “Usually people vary one constant and look at the results, which is different than if you vary multiple constants,” says Mark Wise, professor of physics at Caltech, who was not involved in the research. Varying only one constant usually produces an inhospitable universe, which can lead to the erroneous conclusion that any other congenial universes are impossible.

One physical parameter that does appear to be extremely finely tuned is the cosmological constant — a measure of the pressure exerted by empty space, which causes the universe to expand or contract. When the constant is positive, space expands; when it is negative, the universe collapses on itself. In our universe, the cosmological constant is positive but very small — any larger value would cause the universe to expand too rapidly for galaxies to form. However, Wise and his colleagues have shown that it is theoretically possible that changes in primordial cosmological density perturbations could compensate at least for small changes to the value of the cosmological constant.

In the end, there is no way to know for sure what other universes are out there, or what life they may hold. But that will likely not stop physicists from exploring the possibilities, and in the process learning more about our own universe.

Casey Kazan via MIT News Office

Source: DailyGalaxy

New Research Pinpoints Regions of Human Brain Responsible for Intelligence

In evolution, science on February 25, 2010 at 11:25 am

“One of the main findings that really struck us was that there was a distributed system here. Several brain regions, and the connections between them, were what was most important to general intelligence.”

Jan Gläscher, postdoctoral fellow at the California Institute of Technology

The brain regions important for general intelligence are found in several specific places (orange regions shown on the brain on the left). Looking inside the brain reveals the connections between these regions, which are particularly important to general intelligence. In the image on the right, the brain has been made partly transparent. The big orange regions in the right image are connections (like cables) that connect the specific brain regions in the image on the left.

The research team included Ralph Adolphs, the Bren Professor of Psychology and Neuroscience and professor of biology. The Caltech scientists teamed up with researchers at the University of Iowa and USC to examine a uniquely large data set of 241 brain-lesion patients who had all taken IQ tests. The researchers mapped the location of each patient’s lesion and correlated it with that patient’s IQ score, producing a map of the brain regions that influence intelligence.
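This style of analysis — relating the presence of a lesion at each brain location to a behavioral score across patients — can be sketched in a few lines. The sketch below is a generic voxel-based lesion-symptom mapping, not the Caltech team's actual pipeline, and the data shown are invented for illustration:

```python
# Minimal sketch of lesion-symptom mapping: for each voxel, correlate
# lesion status (0/1 across patients) with the patients' IQ scores.
import numpy as np

def lesion_iq_map(lesion_masks, iq_scores):
    """lesion_masks: (n_patients, n_voxels) binary array, 1 = voxel lesioned.
    iq_scores: (n_patients,) one IQ score per patient.
    Returns the per-voxel Pearson correlation between lesion status and IQ."""
    masks = np.asarray(lesion_masks, dtype=float)
    iq = np.asarray(iq_scores, dtype=float)
    # Center both variables, then compute the correlation voxel-wise.
    m_c = masks - masks.mean(axis=0)
    iq_c = iq - iq.mean()
    num = m_c.T @ iq_c
    denom = np.sqrt((m_c ** 2).sum(axis=0) * (iq_c ** 2).sum())
    with np.errstate(invalid="ignore", divide="ignore"):
        r = np.where(denom > 0, num / denom, 0.0)
    return r  # strongly negative r: damage at that voxel tends to lower IQ

# Invented two-voxel example: patients with voxel-0 lesions score lower.
r = lesion_iq_map([[1, 0], [1, 0], [0, 1], [0, 1]], [90, 95, 110, 105])
print(r)  # voxel 0 negative, voxel 1 positive
```

A real analysis would add statistical thresholding and corrections for lesion size and multiple comparisons, but the core idea — a per-location lesion-behavior correlation map — is the one described above.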

“General intelligence, often referred to as Spearman’s g-factor, has been a highly contentious concept,” says Adolphs. “But the basic idea underlying it is undisputed: on average, people’s scores across many different kinds of tests are correlated. Some people just get generally high scores, whereas others get generally low scores. So it is an obvious next question to ask whether such a general ability might depend on specific brain regions.”

The researchers found that, rather than residing in a single structure, general intelligence is determined by a network of regions across both sides of the brain.

“It might have turned out that general intelligence doesn’t depend on specific brain areas at all, and just has to do with how the whole brain functions,” adds Adolphs. “But that’s not what we found. In fact, the particular regions and connections we found are quite in line with an existing theory about intelligence called the ‘parieto-frontal integration theory.’ It says that general intelligence depends on the brain’s ability to integrate—to pull together—several different kinds of processing, such as working memory.”

The researchers say the findings will open the door to further investigations about how the brain, intelligence, and environment all interact.

Casey Kazan via California Institute of Technology