I am a mariner of Odysseus with heart of fire but with mind ruthless and clear

Archive for 2010|Yearly archive page

Voyager near Solar System’s edge

In astronomy, extraterrestrial, technology on December 14, 2010 at 6:37 pm

This marvelous piece of engineering, after 33 years, is still contributing to science!

According to reports from NASA, it will soon reach the edge of the solar system!

See how BBC News reported it, and read a brief history of this legendary spacecraft here!

Next report: interstellar space...

Richard Dawkins Answers Reddit Questions

In Uncategorized on December 9, 2010 at 3:47 pm

What does NASA’s new life-form discovery mean?

In astronomy, evolution, extraterrestrial on December 3, 2010 at 4:37 pm

Scientists’ announcement of a new form of microbe raises questions about extraterrestrial life. An expert explains

By Christopher R. Walker

 


Image: GFAJ-1 grown on arsenic, left, and the Mono Lake research area. Credit: Jodi Switzer Blum/NASA

 

In a much anticipated press conference yesterday afternoon, NASA astrobiologists announced the discovery of an amazing new kind of microbe, one that extends the boundaries of what we may rightly call life. According to the press release, “NASA-funded astrobiology research has changed the fundamental knowledge about what comprises all known life on Earth.” Discovered in Mono Lake, an extremely salty and alkaline body of water near Yosemite National Park in California, the microorganism is the first known specimen to substitute arsenic for phosphorus in its cell components, and the finding has raised questions about what it means for extraterrestrial life.

 

To find out what it really means, we called Robert Shapiro, a professor of chemistry at New York University who has written extensively about life’s origins on Earth and its potential existence in outer space.

What does this mean for the discovery of life in our solar system or universe?

Not much, except that people may need to broaden their perspectives, and that we should be less “Terracentric” as we seek out new forms of life. Mostly, this discovery adds a new extremophile [organism that lives in an extreme environment] to our inventory — it pushes the boundaries out a little farther. The grand prize would be to discover an independent origin of life: life with its very own chemistry. Such a discovery wouldn’t just say that evolution is robust, it would say that life is abundant. But this discovery doesn’t do that: These organisms are not completely different in their chemical makeup from what we already know.

 

From what I can tell, the microbes prefer to live “normally” but may insert arsenic as a substitute for phosphorus when conditions demand it — arsenic can play the same role that phosphorus would play under normal circumstances. This is a great novelty. Arsenic is bigger and heavier than phosphorus, and its compounds are less stable. These organisms would not have done this unless they had no other choice. Just like Dr. Gerald Joyce, who was quoted in the New York Times today, I feel sorry for these creatures. Their living conditions are horrible — their environment would be poisonous to most other life on Earth.

Are there any lessons about where to focus our search for extraterrestrial life?

 

Broader searches are better searches. I always marveled at how parochial the searches were that focused on existing genetic assumptions. Hopefully, these findings will shift attention at NASA from [Jupiter moon] Europa — where life may be more familiar, but trapped under a deep ice cap — to [Saturn moon] Titan — where surface life could exist, but conditions are most hostile to traditional life-forms.

That said, it does reinforce Paul Davies’ “Shadow Biosphere” theory that suggests we may be missing major strains of life right here on Earth — either in places traditionally deemed too hostile to life or maybe even right under our noses. An obvious question, then, would be to ask how alternate forms of life could have escaped our notice all this time. Some argue that carbon life may have evolved from mineral life with no carbon of its own, and one could imagine experiments to test this hypothesis. You could simply introduce a carbon-free broth to a carbon-free environment, for example, and see what grows. Or as some people suggest, there could be benefits to testing radioactive environments.

You mentioned that arsenic is poisonous. Are there any industrial applications of these critters that spring to mind?

 

No, there are no obvious industrial applications. It just shakes up our thinking about what’s possible.

 

So what’s the takeaway, then?

 

It’s an exciting time for risky ideas. Let’s try them. If one in 10 or one in 100 work, wow!

 

Source: salon.com

Infant, 30-Year-Old Black Hole: Youngest Ever Discovered

In astronomy, physics, science on November 16, 2010 at 2:23 pm

It’s estimated that there are millions of unseen black holes in the Milky Way: the ghosts of once-massive stars. This composite image by astronomers using NASA’s Chandra X-ray Observatory shows a supernova within the galaxy M100 that may contain the youngest known black hole in our cosmic neighborhood. The 30-year-old black hole could help scientists better understand how massive stars explode, which ones leave behind black holes or neutron stars, and the number of black holes in our galaxy and others.

The 30-year-old object is a remnant of SN 1979C, a supernova in the galaxy M100 approximately 50 million light years from Earth.

Data from Chandra, NASA’s Swift satellite, the European Space Agency’s XMM-Newton and the German ROSAT observatory revealed a bright source of X-rays that has remained steady during observation from 1995 to 2007. This suggests the object is a black hole being fed either by material falling into it from the supernova or by a binary companion. “If our interpretation is correct, this is the nearest example where the birth of a black hole has been observed,” said Daniel Patnaude of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass., who led the study.

The scientists think SN 1979C, first discovered by an amateur astronomer in 1979, formed when a star about 20 times more massive than the sun collapsed. Many new black holes in the distant universe have previously been detected in the form of gamma-ray bursts (GRBs). However, SN 1979C is different because it is much closer and belongs to a class of supernovas unlikely to be associated with a GRB. Theory predicts that most black holes in the universe should form when the core of a star collapses and a GRB is not produced.

“This may be the first time the common way of making a black hole has been observed,” said co-author Abraham Loeb, also of the Harvard-Smithsonian Center for Astrophysics. “However, it is very difficult to detect this type of black hole birth because decades of X-ray observations are needed to make the case.”

The idea of a black hole with an observed age of only about 30 years is consistent with recent theoretical work. In 2005, a theory was presented that the bright optical light of this supernova was powered by a jet from a black hole that was unable to penetrate the hydrogen envelope of the star to form a GRB. The results seen in the observations of SN 1979C fit this theory very well.

Although the evidence points to a newly formed black hole in SN 1979C, another intriguing possibility is that a young, rapidly spinning neutron star with a powerful wind of high-energy particles could be responsible for the X-ray emission. This would make the object in SN 1979C the youngest and brightest example of such a “pulsar wind nebula” and the youngest known neutron star.

Casey Kazan via JPL/NASA

Quantum computers may be much easier to build than previously thought: study

In Uncategorized on November 9, 2010 at 6:06 pm


Illustration of the error correcting code used to demonstrate robustness to loss errors. Each dot represents a single qubit. The qubits are arranged on a lattice in such a way that the encoded information is robust to losing up to 25 percent of the qubits. Credit: Sean Barrett and Thomas Stace

 

Quantum computers should be much easier to build than previously thought, because they can still work with a large number of faulty or even missing components, according to a study published today in Physical Review Letters. This surprising discovery brings scientists one step closer to designing and building real-life quantum computing systems – devices that could have enormous potential across a wide range of fields, from drug design and electronics to code-breaking.

Scientists have long been fascinated with building computers that work at a quantum level – so small that the parts are made of just single atoms or electrons. Instead of ‘bits’, the building blocks normally used to store electronic information, quantum systems use quantum bits or ‘qubits’, made up of an arrangement of entangled atoms.

Materials behave very differently at this tiny scale compared to what we are used to in our everyday lives – quantum particles, for example, can exist in two places at the same time. “Quantum computers can exploit this weirdness to perform powerful calculations, and in theory, they could be designed to break public key encryption or simulate complex systems much faster than conventional computers,” said Dr Sean Barrett, the lead author of the study, who is a Royal Society University Research Fellow in the Department of Physics at Imperial College London.

The machines have been notoriously hard to build, however, and were thought to be very fragile to errors. In spite of considerable buzz in the field in the last 20 years, useful quantum computers remain elusive.

 

Barrett and his colleague Dr. Thomas Stace, from the University of Queensland in Brisbane, Australia, have now found a way to correct for a particular sort of error, in which the qubits are lost from the computer altogether. They used a system of ‘error-correcting’ code, which involved looking at the context provided by the remaining qubits to decipher the missing information correctly.

“Just as you can often tell what a word says when there are a few missing letters, or you can get the gist of a conversation on a badly-connected phone line, we used this idea in our design for a quantum computer,” said Dr Barrett. They discovered that the computers have a much higher threshold for error than previously thought – up to a quarter of the qubits can be lost – but the computer can still be made to work. “It’s surprising, because you wouldn’t expect that if you lost a quarter of the beads from an abacus that it would still be useful,” he added.
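To make the intuition behind loss-tolerant encoding concrete, here is a minimal classical analogy in Python – not the topological code Barrett and Stace actually use: a single parity bit lets the surviving bits supply the “context” needed to reconstruct one erased bit. The function names and the three-bit example are purely illustrative assumptions.

```python
# Toy classical analogy (NOT the actual quantum error-correcting code):
# one parity bit lets us recover any single erased bit from the survivors.
from functools import reduce
from operator import xor

def encode(data_bits):
    """Append a parity bit so the XOR of the whole codeword is 0."""
    return data_bits + [reduce(xor, data_bits, 0)]

def recover(received):
    """Fill in at most one lost position (marked None) using the rest."""
    missing = [i for i, b in enumerate(received) if b is None]
    if len(missing) > 1:
        raise ValueError("this toy code tolerates only one erasure")
    if missing:
        fixed = list(received)
        # The XOR of every surviving bit equals the lost value, because the
        # XOR of the complete codeword is zero by construction.
        fixed[missing[0]] = reduce(xor, (b for b in received if b is not None), 0)
        return fixed
    return list(received)

codeword = encode([1, 0, 1])   # -> [1, 0, 1, 0]
damaged = [1, None, 1, 0]      # one bit "lost from the computer"
print(recover(damaged))        # -> [1, 0, 1, 0]
```

The real scheme is far more subtle – the qubits sit on a two-dimensional lattice and the redundancy is spread out so that up to about 25 percent of them can go missing – but the basic move is the same: encode with enough structure that the remaining pieces determine what was lost.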

The findings indicate that quantum computers may be much easier to build than previously thought, but as the results are still based on theoretical calculations, the next step is to actually demonstrate these ideas in the lab. Scientists will need to devise a way for scaling the computers to a sufficiently large number of qubits to be viable, says Barrett. At the moment the biggest quantum computers scientists have built are limited to just two or three qubits.

“We are still some way off from knowing what the true potential of a quantum computer might be,” says Barrett. “At the moment quantum computers are good at particular tasks, but we have no idea what these systems could be used for in the future,” he said. “They may not necessarily be better for everything, but we just don’t know. They may be better for very specific things that we find impossible now.”

More information: “Fault tolerant quantum computation with very high threshold for loss errors,” Physical Review Letters, 09 November 2010, DOI: 10.1103/PhysRevLett.105.200502. Link to paper on pre-print server: http://arxiv.org/abs/1005.2456

Provided by Imperial College London

CERN completes transition to lead-ion running at the Large Hadron Collider

In Uncategorized on November 9, 2010 at 3:01 pm

Four days is all it took for the LHC operations team at CERN to complete the transition from protons to lead ions in the LHC. After extracting the final proton beam of 2010 on 4 November, commissioning the lead-ion beam was underway by early afternoon. First collisions were recorded at 00:30 CET on 7 November, and stable running conditions marked the start of physics with heavy ions at 11:20 CET today.

 

“The speed of the transition to lead ions is a sign of the maturity of the LHC,” said Director General Rolf Heuer. “The machine is running like clockwork after just a few months of routine operation.”

Operating the LHC with lead ions – lead atoms stripped of electrons – is completely different from operating the machine with protons. From the source to collisions, operational parameters have to be re-established for the new type of beam. For lead ions, as for protons before them, the procedure started with threading a single beam round the ring in one direction and steadily increasing the number of laps before repeating the process for the other beam. Once circulating beams had been established, they could be accelerated to the full energy of 287 TeV per beam. This energy is much higher than for proton beams because lead ions contain 82 protons. Another period of careful adjustment was needed before lining the beams up for collision, and then finally declaring that nominal data-taking conditions, known at CERN as stable beams, had been established. The three experiments recording data with lead ions – ALICE, ATLAS and CMS – can now look forward to continuous lead-ion running until CERN’s winter technical stop begins on 6 December.
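As a rough check on the quoted beam energy (a back-of-the-envelope sketch, assuming the 3.5 TeV per-beam proton energy the LHC was running with in 2010): the machine fixes the energy gained per unit of electric charge, and a fully stripped lead ion carries 82 proton charges, so

$$E_{\mathrm{Pb\;beam}} \approx Z \times E_{\mathrm{p\;beam}} = 82 \times 3.5\ \mathrm{TeV} \approx 287\ \mathrm{TeV\ per\ beam}.$$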

“It’s been very impressive to see how well the LHC has adapted to lead,” said Jurgen Schukraft, spokesperson of the ALICE experiment. “The ALICE detector has been optimised to record the large number of tracks that emerge from ion collisions and has handled the first collisions very well, so we are all set to explore this new opportunity at LHC.”

“After a very successful proton run, we’re very excited to be moving to this new phase of LHC operation,” said ATLAS spokesperson Fabiola Gianotti. “The ATLAS detector has recorded first spectacular heavy-ion events, and we are eager to study them in detail.”

“We designed CMS as a multi-purpose detector,” said Guido Tonelli, the collaboration’s spokesperson, “and it’s very rewarding to see how well it’s adapting to this new kind of collision. Having data collected by the same detector in proton-proton and heavy-ion modes is a powerful tool to look for unambiguous signatures of new states of matter.”

Lead-ion running opens up an entirely new avenue of exploration for the LHC programme, probing matter as it would have been in the first instants of the Universe’s existence. One of the main objectives for lead-ion running is to produce tiny quantities of such matter, known as quark-gluon plasma, and to study its evolution into the kind of matter that makes up the Universe today. This exploration will shed further light on the properties of the strong interaction, which binds the particles called quarks into bigger objects, such as protons and neutrons.

Following the winter technical stop, operation of the collider will start again with protons in February and physics runs will continue through 2011.

Provided by CERN

Five features Google needs to deliver in Android 2.3

In Uncategorized on November 9, 2010 at 2:57 pm

(from ArsTechnica)

Android 2.3, codenamed Gingerbread, is expected to materialize this month. Little is known about Gingerbread’s features, however, because Google develops the operating system behind closed doors and doesn’t publish a roadmap. This has fueled a lot of speculation among Android enthusiasts.

Google has hinted that 2.3 could bring a user interface refresh that will reduce the need for handset makers to broadly deviate from the standard user experience. Various leaks have suggested that the platform is being overhauled to boost its suitability for tablet devices. Google’s new WebM multimedia format, which uses the VP8 codec, will likely be supported out of the box. It’s also possible that Gingerbread will include some of the music library streaming and synchronization features that the search giant demonstrated this year at the Google I/O conference.

We have some ideas of our own about what Google should be doing. We think that Android’s messaging applications need an overhaul, Google should make a stronger effort to deliver good first-party software, and the home screen could use some better widgets.

1. Fix the Android e-mail client

One area where Android is still disappointingly weak is conventional e-mail. Google’s own Gmail application is nice, but those of us who still use IMAP feel like second-class citizens. I have had all kinds of problems with Android’s mail application and have learned that I simply can’t rely on it to perform as expected. Google has some work to do to catch up with superior third-party mail applications like K-9.

One of my pet peeves is the native mail client’s lack of support for moving messages between folders—a deficiency that makes it impossible for me to use the program for triaging my e-mail. A feature request calling for the ability to move messages between IMAP folders was filed in Android’s official issue tracker in 2008 and was finally marked as implemented in September of this year. I’m going to be deeply disappointed if the fix doesn’t land in Android 2.3.
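For comparison, this is roughly what the missing feature amounts to in plain IMAP terms. The sketch below uses Python's standard imaplib on a desktop, with a hypothetical server, account and folder names; base IMAP has no atomic move command, so a “move” is a copy, a delete flag, and an expunge.

```python
import imaplib

# Hypothetical host, credentials and folder names, for illustration only.
M = imaplib.IMAP4_SSL("imap.example.com")
M.login("user@example.com", "app-password")
M.select("INBOX")

typ, data = M.search(None, '(SUBJECT "invoice")')   # pick messages to triage
for num in data[0].split():
    M.copy(num, "Archive/2010")               # copy into the target folder
    M.store(num, "+FLAGS", "\\Deleted")       # mark the original for removal
M.expunge()                                   # purge the flagged originals

M.close()
M.logout()
```

Third-party clients wrap essentially this sequence behind a simple “move” action; the complaint here is that the stock Android client never exposed it at all.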

Another annoyance is the program’s inability to represent the user’s IMAP folder hierarchy as an actual tree when switching between folders. Instead, I get a massive flat list where each name includes the full path. This is especially obnoxious when I’m trying to get to a deeply nested folder, because the end of the names get truncated, making it impossible to differentiate between individual subfolders. I often have to guess and try multiple times before I find the right folder.
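The folder paths already encode the hierarchy, so rendering a tree is not conceptually hard. Here is a small illustrative sketch (made-up folder names, assuming “/” as the delimiter) of turning the flat list into a tree whose entries show only the leaf name:

```python
# Illustrative only: build a display tree from flat, full-path folder names.
def build_tree(folders, sep="/"):
    tree = {}
    for path in folders:
        node = tree
        for part in path.split(sep):
            node = node.setdefault(part, {})   # descend, creating as needed
    return tree

def print_tree(node, indent=0):
    for name, children in sorted(node.items()):
        print("  " * indent + name)            # leaf name only, no full path
        print_tree(children, indent + 1)

flat = [
    "INBOX",
    "Work/Projects/Android/Reviews",
    "Work/Projects/Android/Tips",
    "Work/Projects/Kernel/Reviews",
]
print_tree(build_tree(flat))
```

In a flat, truncated list the two “Reviews” folders are indistinguishable; in the tree they sit under different parents, which is the whole point.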

2. Deliver good first-party applications

Tight integration of Google’s Web services is arguably one of Android’s major selling points, yet there are still a number of important Google services that are poorly supported on Android. It’s mystifying that the search giant hasn’t built its own native Android applications for Google Docs or Google Reader. In both cases, users are forced to rely on third-party offerings that aren’t particularly compelling. I’ve also been deeply unimpressed with the buggy Google Finance application, which has never worked reliably for me. I’d really like to see those first-party application gaps closed in future versions of the operating system.

3. Unify Android messaging

Another frustration with Android is the lack of cohesion between the various messaging applications. Google Voice, Google Talk, Messaging, and the standard dialer are all little silos that don’t naturally flow together. It’s not always obvious which application the user should open to access the specific features that they want. The fact that the Talk and Voice icons are nearly identical just adds to the confusion. A more streamlined interface that brings all of the features together in a more natural and intuitive way would greatly improve the Android user experience.

4. More flexible home screen with better widgets

We recently reviewed LauncherPro, an excellent third-party Android home screen replacement that offers a lot of really impressive features and a very slick set of custom widgets that were loosely inspired by HTC’s Sense user interface. I happily paid $2.99 for the “Plus” version of LauncherPro just for the great scrolling agenda widget. It also has a really good widget resizing feature and support for a multitude of customization options. It makes the default Android home screen seem quaint or crippled by comparison.

It’s amazing that a single third-party developer can so vastly out-engineer Google at building a quality home-screen experience. I think that Android needs to match LauncherPro’s feature set out of the box in order to be competitive. I’m hoping that the rumored Android user interface overhaul will bring a superior home screen, but if it doesn’t, then I think the folks at Google should seriously consider hiring/acquiring LauncherPro’s prolific and highly talented developer.

5. Support for higher resolution and a real tablet UI

Although hardware vendors like Samsung are adopting Android for their tablet products, the platform is not designed for the tablet form factor. There seem to be conflicting views within Google about Android’s suitability for tablets in light of the manner in which the platform’s compatibility definition and APIs are structured. The early prototypes have largely failed to impress, and some hardware makers like LG have said that they are waiting for future versions of the platform before they will do Android tablets.

Leaks indicate that a new tablet user experience for Android could potentially be introduced in either Gingerbread or the rumored Honeycomb version. We are hoping that it happens sooner rather than later because there seem to be a lot of gadget makers that are ready to deliver the hardware today and simply need better software.

A related issue is the need for native support for higher screen resolutions. Google’s official documentation doesn’t really address resolutions that are higher than WVGA. We’d like to see Google encouraging Android hardware vendors to move towards something like the iPhone’s retina display. There is also a clear need for more netbook-like resolutions on tablet products.

Waiting for Gingerbread

A fresh round of sketchy Internet rumors claims that Gingerbread will start hitting Nexus One handsets in an over-the-air update this week. These rumors are based on a tweet written in Spanish by someone who is thought to be a leading member of the Open Handset Alliance (the fact that he misspells both “Android” and “Alliance” in his LinkedIn profile doesn’t help the credibility of these rumors, though he does appear to have given Android-related presentations at some mobile conferences).

I think it’s likely that the SDK will emerge at some point this month or in December, but I’m a bit skeptical about the claim that the Nexus One update is going to start rolling out this week. Even if they push a test version to a limited number of developer phones, it’s highly unlikely to be the actual final build. Regardless of when it lands, we are looking forward to seeing what new features Google has cooked up.

Where are the Missing Neutron Galaxies of the Early Universe?

In astronomy on August 4, 2010 at 3:16 pm

Ultradense cosmic cannonballs used to tear around the universe, punching through regular galaxies like a bullet through candyfloss, going their own way and heaven help whatever got in their way – and scientists don’t know where they are now.  Luckily this is cosmology, not cinema, or the answer would be “Right behind you!”

Because of the speed of light, staring into space is essentially looking back in time, and scientists have seen ultra-intense galaxies zipping around the first five billion years of existence.  Similar in principle to the intense density of neutron stars (a collapsed star with a core so dense that a single spoonful would weigh 200 billion pounds), these galaxies were a thousand times denser than regular star-scatterings, packing as much mass as the Milky Way into 0.1% of the volume, long before regular galaxies had time to form.
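The “thousand times denser” figure is just the ratio implied by those numbers (a quick check, taking the quoted mass and volume at face value):

$$\rho_{\mathrm{compact}} = \frac{M_{\mathrm{MW}}}{0.001\,V_{\mathrm{MW}}} = 1000 \times \frac{M_{\mathrm{MW}}}{V_{\mathrm{MW}}} = 1000\,\rho_{\mathrm{MW}}.$$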

Scientists suspect that these objects collapsed directly from vast clouds of proto-star material, unlike regular galaxies which form by multiple mergers of smaller galaxies.  But more important than where they came from is finding out where they went.  Three hundred billion stars isn’t the kind of thing you lose down the back of the sofa.

It’s unlikely they merged with other galaxies, since they’d punch through regular star collections like an armor-piercing round with only minimal effects on themselves, and collision with another ultra-dense galaxy would only create an even bigger wildly intense star selection.  Other processes which could hide them, such as building up a diffuse gas cloud or expanding due to stellar detonations, would seem to take longer than the universe has actually had so far.  Despite being awesome.

Space.  Every time we look there’s something cooler.

On the path to quantum computers: Ultra-strong interaction between light and matter realized

In Uncategorized on August 2, 2010 at 3:25 pm


This is an impression of the interaction between a superconducting electrical circuit and a microwave photon. Credit: Dr. A. Marx, Technische Universitaet Muenchen

Researchers around the world are working on the development of quantum computers that will be vastly superior to present-day computers. Here, the strong coupling of quantum bits with light quanta plays a pivotal role. Professor Rudolf Gross, a physicist at the Technische Universitaet Muenchen, Germany, and his team of researchers have now realized an extremely strong interaction between light and matter that may represent a first step in this direction.

The interaction between matter and light represents one of the most fundamental processes in physics. Whether it is a car that heats up like an oven in the summer due to the absorption of light quanta, solar cells that extract electricity from light, or light-emitting diodes that convert electricity into light, we encounter the effects of these processes throughout our daily lives. Understanding the interactions between individual light quanta – photons – and atoms is crucial for the development of a quantum computer.

Physicists from the Technische Universitaet Muenchen (TUM), the Walther-Meissner-Institute for Low Temperature Research of the Bavarian Academy of Sciences (WMI) and the Augsburg University have now, in collaboration with partners from Spain, realized an ultrastrong interaction between microwave photons and the atoms of a nano-structured circuit. The realized interaction is ten times stronger than levels previously achieved for such systems.

The simplest system for investigating the interactions between light and matter is a so-called cavity resonator with exactly one light particle and one atom captured inside (cavity quantum electrodynamics, cavity QED). Yet since the interaction is very weak, these experiments are very elaborate. A much stronger interaction can be obtained with nano-structured circuits in which metals like aluminum become superconducting at temperatures just above absolute zero (circuit QED). Properly configured, the billions of atoms in the merely nanometer-thick conductors behave like a single artificial atom and obey the laws of quantum mechanics. In the simplest case, one obtains a system with two energy states, a so-called quantum bit or qubit.

Coupling these kinds of systems with microwave resonators has opened a rapidly growing new research domain in which the TUM Physics Department, the WMI and the cluster of excellence Nanosystems Initiative Munich (NIM) are leading the field. In contrast to cavity QED systems, the researchers can custom-tailor the circuitry in many areas.


This is an electron microscope image of the superconducting circuit (red: aluminum qubit, grey: niobium resonator, green: silicon substrate). Credit: Thomasz Niemczyk, Technische Universitaet Muenchen

To facilitate the measurements, Professor Gross and his team captured the photon in a special box, a resonator. This consists of a superconducting niobium conducting path that is configured with strongly reflective “mirrors” for microwaves at both ends. In this resonator, the artificial atom made of an aluminum circuit is positioned so that it can optimally interact with the photon. The researchers achieved the ultrastrong interactions by adding another superconducting component into their circuit, a so-called Josephson junction.

The measured interaction strength was up to twelve percent of the resonator frequency. This makes it ten times stronger than the effects previously measurable in circuit QED systems and thousands of times stronger than in a true cavity resonator. However, along with their success the researchers also created a new problem: Up to now, the Jaynes-Cummings theory developed in 1963 was able to describe all observed effects very well. Yet it does not seem to apply to the domain of ultrastrong interactions. “The spectra look like those of a completely new kind of object,” says Professor Gross. “The coupling is so strong that the atom-photon pairs must be viewed as a new unit, a kind of molecule comprising one atom and one photon.”
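For context – this is standard cavity/circuit QED background rather than anything taken from the Munich paper itself – the Jaynes-Cummings model keeps only the energy-conserving (“rotating”) part of the atom-photon coupling, an approximation that is justified when the coupling rate g is much smaller than the resonator frequency:

$$H_{\mathrm{JC}} = \hbar\omega_r\,a^{\dagger}a + \tfrac{1}{2}\hbar\omega_q\,\sigma_z + \hbar g\,\bigl(a^{\dagger}\sigma_- + a\,\sigma_+\bigr), \qquad g \ll \omega_r.$$

With the coupling at roughly twelve percent of the resonator frequency, $g/\omega_r \approx 0.12$, the neglected counter-rotating terms $a^{\dagger}\sigma_+ + a\,\sigma_-$ of the full quantum Rabi Hamiltonian can no longer be dropped, which is why the measured spectra stop fitting the Jaynes-Cummings description.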

Experimental and theoretical physicists will need some time to examine this more closely. However, the new experimental inroads into this domain are already providing researchers with a whole array of new experimental options. The targeted manipulation of such atom-photon pairs could hold the key to quanta-based information processing, the so-called quantum computers that would be vastly superior to today’s computers.

Has Life Spread Virally Through the Universe?

In astronomy, evolution, extraterrestrial on August 2, 2010 at 3:05 pm

Life originated in a nebular cloud, over 10 billion years ago, but may have had multiple origins in multiple locations, including in galaxies older than the Milky Way according to Rudolf Schild of Harvard-Smithsonian Center for Astrophysics and Rhawn Joseph of the Brain Research Laboratory. Multiple origins, they believe, could account for the different domains of life: archae, bacteria, eukaryotes.

The first steps toward life may have been achieved when self-replicating nano-particles, initially comprised of a mixture of carbon, calcium, oxygen, hydrogen, phosphorus, sugars, and other elements and gases, were combined and radiated, forming a nucleus around which a lipid-like permeable membrane was established, and within which DNA bases were laddered together with phosphates and sugars; a process which may have taken billions of years.

DNA-based life, they propose, may be a “cosmic imperative,” such that life can only be achieved upon acquiring a DNA genome. Alternatively, the “Universal Genetic Code” may have won out over inferior codes through natural selection. When the first microbe evolved, it immediately began multiplying and spreading throughout the cosmos via panspermia: carried by solar winds, bolide impacts and comets, through the ejection of living planets prior to a supernova (planets which are then captured by a newly forming solar system), through galactic collisions, and following the exchange of stars between galaxies.

Bacteria, archae, and viruses, act as intergalactic genetic messengers, acquiring genes from and transferring genes to life forms dwelling on other planets. Viruses also serve as gene depositories, storing vast numbers of genes which may be transferred to archae and bacteria depending on cellular needs. The acquisition of these genes from the denizens of other worlds, enables prokaryotes and viruses to immediately adapt to the most extreme environments, including those that might be encountered on other planets.

Whether the universe was created by a Big Bang or is an Eternal Infinite Universe, once life was established it began to evolve. Archae, bacteria, and viruses may have combined and mixed genes, fashioning the first multi-cellular eukaryote, which continued to evolve. Initially, evolution on Earth-like planets was random and dictated by natural selection. Over time, increasingly complex and intelligent species evolved through natural selection, whereas inferior competitors became extinct. However, their genes were copied by archae, bacteria, and viruses. If the first steps toward life in this galaxy began 13.6 billion years ago, then using Earth as an example, intelligent life might have evolved within this galaxy by 9 billion years ago. As life continued to spread throughout the cosmos, and as microbes and viruses were cast from world to world, genes continued to be exchanged via horizontal gene transfer, and copies of genes coding for advanced and complex characteristics were acquired from and transferred to eukaryotes and highly evolved intelligent life.

Eventually descendants of these microbes, viruses, and their vast genetic libraries fell to the newborn Earth. The innumerable genes stored and maintained in the genomes of these viruses, coupled with prokaryote genes and those transferred to eukaryotes, made it possible to biologically modify and terraform the new Earth, and in so doing, some of these genes, now within the eukaryote genome, were activated and expressed, replicating various species which had evolved on other worlds. Genes act on genes, and genes act on the environment, and the altered environment activates and inhibits gene expression, thereby directly influencing the evolution of species.

On Earth, Schild and Joseph conclude, “the progression from simple cell to sentient intelligent being is due to the activation of viral, archae, and bacteria genes acquired from extra-terrestrial life and inserted into the Earthly eukaryote genome. What has been described as a random evolution is in fact the metamorphosis and replication of living creatures which long ago lived on other planets.”

Jason McManus via Journal of Cosmology