I am a mariner of Odysseus with heart of fire but with mind ruthless and clear

Archive for November 2010 | Monthly archive page

Infant, 30-Year-Old Black Hole: Youngest Ever Discovered

In astronomy, physics, science on November 16, 2010 at 2:23 pm

It’s estimated that there are millions of unseen black holes in the Milky Way – the ghosts of once-massive stars. This composite image by astronomers using NASA’s Chandra X-ray Observatory shows a supernova within the galaxy M100 that may contain the youngest known black hole in our cosmic neighborhood. The 30-year-old black hole could help scientists better understand how massive stars explode, which ones leave behind black holes or neutron stars, and the number of black holes in our galaxy and others.

The 30-year-old object is a remnant of SN 1979C, a supernova in the galaxy M100 approximately 50 million light-years from Earth. Data from Chandra, NASA’s Swift satellite, the European Space Agency’s XMM-Newton and the German ROSAT observatory revealed a bright source of X-rays that remained steady across observations from 1995 to 2007. This suggests the object is a black hole being fed either by material falling back into it from the supernova or by a binary companion.

“If our interpretation is correct, this is the nearest example where the birth of a black hole has been observed,” said Daniel Patnaude of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass., who led the study.

The scientists think SN 1979C, first discovered by an amateur astronomer in 1979, formed when a star about 20 times more massive than the sun collapsed. Many new black holes in the distant universe have previously been detected in the form of gamma-ray bursts (GRBs). However, SN 1979C is different because it is much closer and belongs to a class of supernovas unlikely to be associated with a GRB. Theory predicts that most black holes in the universe should form when the core of a star collapses without producing a GRB.

“This may be the first time the common way of making a black hole has been observed,” said co-author Abraham Loeb, also of the Harvard-Smithsonian Center for Astrophysics. “However, it is very difficult to detect this type of black hole birth because decades of X-ray observations are needed to make the case.”

The idea of a black hole with an observed age of only about 30 years is consistent with recent theoretical work. In 2005, a theory was presented that the bright optical light of this supernova was powered by a jet from a black hole that was unable to penetrate the star’s hydrogen envelope to form a GRB. The observations of SN 1979C fit this theory very well.

Although the evidence points to a newly formed black hole in SN 1979C, another intriguing possibility is that a young, rapidly spinning neutron star with a powerful wind of high-energy particles could be responsible for the X-ray emission. That would make the object in SN 1979C the youngest and brightest known example of such a “pulsar wind nebula,” and the youngest known neutron star.

Casey Kazan via JPL/NASA
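As a rough numerical aside to the accretion argument above: astronomers often compare a steady X-ray luminosity against the Eddington luminosity, the approximate ceiling on how brightly an object of a given mass can shine while accreting. The formula is standard, but the 5-solar-mass example below is purely illustrative and not a value taken from the study.

```python
# Eddington luminosity: the rough maximum luminosity an accreting object
# of a given mass can sustain before radiation pressure halts the infall.
# Standard coefficient: ~1.26e38 erg/s per solar mass.
L_EDD_PER_SOLAR_MASS = 1.26e38  # erg/s

def eddington_luminosity(mass_in_solar_masses):
    """Return the Eddington luminosity in erg/s."""
    return L_EDD_PER_SOLAR_MASS * mass_in_solar_masses

# Hypothetical stellar-mass black hole of 5 solar masses (example only):
print(f"{eddington_luminosity(5):.2e} erg/s")  # ~6.3e38 erg/s
```

A source that holds steady near a level like this for over a decade is easier to explain by ongoing accretion onto a compact object than by fading supernova debris, which is the thrust of the argument above.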

Quantum computers may be much easier to build than previously thought: study

In Uncategorized on November 9, 2010 at 6:06 pm

Illustration of the error correcting code used to demonstrate robustness to loss errors. Each dot represents a single qubit. The qubits are arranged on a lattice in such a way that the encoded information is robust to losing up to 25 percent of the qubits. Credit: Sean Barrett and Thomas Stace

 

Quantum computers should be much easier to build than previously thought, because they can still work with a large number of faulty or even missing components, according to a study published today in Physical Review Letters. This surprising discovery brings scientists one step closer to designing and building real-life quantum computing systems – devices that could have enormous potential across a wide range of fields, from drug design and electronics to code-breaking.

Scientists have long been fascinated with building computers that work at a quantum level – so small that the parts are made of just single atoms or electrons. Instead of ‘bits’, the building blocks normally used to store electronic information, quantum systems use quantum bits or ‘qubits’, made up of an arrangement of entangled atoms.

Materials behave very differently at this tiny scale compared to what we are used to in our everyday lives – quantum particles, for example, can exist in two places at the same time. “Quantum computers can exploit this weirdness to perform powerful calculations, and in theory, they could be designed to break public key encryption or simulate complex systems much faster than conventional computers,” said Dr Sean Barrett, the lead author of the study, who is a Royal Society University Research Fellow in the Department of Physics at Imperial College London.
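To make the “two places at once” idea a little more concrete: a qubit’s state is described by two complex amplitudes, one for each classical value, and measurement probabilities are the squared magnitudes of those amplitudes. The snippet below is standard textbook formalism, not code from the study.

```python
import numpy as np

# A qubit's state is a normalized vector of two complex amplitudes:
# |psi> = a|0> + b|1>, with |a|^2 + |b|^2 = 1.
ket0 = np.array([1.0, 0.0], dtype=complex)
ket1 = np.array([0.0, 1.0], dtype=complex)

# An equal superposition: loosely, the qubit is "in both states at once".
psi = (ket0 + ket1) / np.sqrt(2)

# Measurement probabilities are the squared magnitudes of the amplitudes.
probabilities = np.abs(psi) ** 2
print(probabilities)  # [0.5 0.5] -> equal odds of reading 0 or 1
```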

The machines have been notoriously hard to build, however, and were thought to be very fragile to errors. In spite of considerable buzz in the field in the last 20 years, useful quantum computers remain elusive.

 

Barrett and his colleague Dr. Thomas Stace, from the University of Queensland in Brisbane, Australia, have now found a way to correct for a particular sort of error in which qubits are lost from the computer altogether. They used an ‘error-correcting’ code that exploits the context provided by the remaining qubits to reconstruct the missing information.

“Just as you can often tell what a word says when there are a few missing letters, or you can get the gist of a conversation on a badly-connected phone line, we used this idea in our design for a quantum computer,” said Dr Barrett. They discovered that the computers have a much higher threshold for error than previously thought – up to a quarter of the qubits can be lost – but the computer can still be made to work. “It’s surprising, because you wouldn’t expect that if you lost a quarter of the beads from an abacus that it would still be useful,” he added.
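The actual scheme in the paper is a topological quantum error-correcting code, but the intuition carries over from classical erasure codes, where redundancy lets you restore data whose loss you can locate. Here is a toy classical analogy (emphatically not the authors’ construction): a single parity bit is enough to rebuild one symbol whose position is known to be missing.

```python
# Classical analogy only: one parity check recovers one *known* missing
# symbol, much as the remaining qubits let lost qubits be reconstructed.
# (The real scheme is a topological quantum code; this is not it.)

def encode(bits):
    """Append an even-parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def recover(received):
    """Fill in a single erased position (marked None) using parity."""
    missing = [i for i, b in enumerate(received) if b is None]
    if len(missing) != 1:
        raise ValueError("this toy code recovers exactly one erasure")
    known_sum = sum(b for b in received if b is not None)
    fixed = received[:]
    fixed[missing[0]] = known_sum % 2  # restore even parity
    return fixed

codeword = encode([1, 0, 1, 1])   # -> [1, 0, 1, 1, 1]
damaged = [1, None, 1, 1, 1]      # one symbol lost, position known
print(recover(damaged))           # -> [1, 0, 1, 1, 1]
```

The quantum case is far subtler, since qubits cannot simply be copied, which is why a proven tolerance for losing up to a quarter of the qubits is such a striking result.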

The findings indicate that quantum computers may be much easier to build than previously thought, but since the results are still based on theoretical calculations, the next step is to demonstrate these ideas in the lab. Scientists will need to devise a way to scale the computers up to a sufficiently large number of qubits to be viable, says Barrett. At the moment the biggest quantum computers scientists have built are limited to just two or three qubits.

“We are still some way off from knowing what the true potential of a quantum computer might be,” says Barrett. “At the moment quantum computers are good at particular tasks, but we have no idea what these systems could be used for in the future,” he said. “They may not necessarily be better for everything, but we just don’t know. They may be better for very specific things that we find impossible now.”

More information: “Fault tolerant quantum computation with very high threshold for loss errors,” Physical Review Letters, 9 November 2010. DOI: 10.1103/PhysRevLett.105.200502. Preprint: http://arxiv.org/abs/1005.2456

Provided by Imperial College London


CERN completes transition to lead-ion running at the Large Hadron Collider

In Uncategorized on November 9, 2010 at 3:01 pm

Four days is all it took for the LHC operations team at CERN to complete the transition from protons to lead ions in the LHC. After the final proton beam of 2010 was extracted on 4 November, commissioning of the lead-ion beam was under way by early afternoon. First collisions were recorded at 00:30 CET on 7 November, and stable running conditions marked the start of physics with heavy ions at 11:20 CET today.

 

“The speed of the transition to lead ions is a sign of the maturity of the LHC,” said CERN Director General Rolf Heuer. “The machine is running like clockwork after just a few months of routine operation.”

Operating the LHC with lead ions – lead atoms stripped of their electrons – is completely different from operating the machine with protons. From the source to collisions, operational parameters have to be re-established for the new type of beam. For lead ions, as for protons before them, the procedure started with threading a single beam round the ring in one direction and steadily increasing the number of laps before repeating the process for the other beam.

Once circulating beams had been established, they could be accelerated to the full energy of 287 TeV per beam. This energy is much higher than for proton beams because lead ions contain 82 protons. Another period of careful adjustment was needed to line the beams up for collision before finally declaring that nominal data-taking conditions, known at CERN as stable beams, had been established. The three experiments recording data with lead ions – ALICE, ATLAS and CMS – can now look forward to continuous lead-ion running until CERN’s winter technical stop begins on 6 December.
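The 287 TeV figure is straightforward arithmetic rather than new physics: the LHC accelerates particles in proportion to their electric charge, a fully stripped lead nucleus carries 82 proton charges, and the 2010 proton run operated at 3.5 TeV per beam. A quick check (the per-nucleon figure is my own derivation, not from the press release):

```python
# 2010 LHC parameters: 3.5 TeV per unit of charge, and a fully stripped
# lead ion carries the charge of its 82 protons.
PROTON_BEAM_ENERGY_TEV = 3.5
LEAD_Z = 82   # protons (charge units) in a lead nucleus
LEAD_A = 208  # nucleons in lead-208

energy_per_ion = LEAD_Z * PROTON_BEAM_ENERGY_TEV
print(f"Energy per lead ion: {energy_per_ion:.0f} TeV")           # 287 TeV
print(f"Energy per nucleon:  {energy_per_ion / LEAD_A:.2f} TeV")  # ~1.38 TeV
```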

“It’s been very impressive to see how well the LHC has adapted to lead,” said Jurgen Schukraft, spokesperson of the ALICE experiment. “The ALICE detector has been optimised to record the large number of tracks that emerge from ion collisions and has handled the first collisions very well, so we are all set to explore this new opportunity at LHC.”

“After a very successful proton run, we’re very excited to be moving to this new phase of LHC operation,” said ATLAS spokesperson Fabiola Gianotti. “The ATLAS detector has recorded first spectacular heavy-ion events, and we are eager to study them in detail.”

“We designed CMS as a multi-purpose detector,” said Guido Tonelli, the collaboration’s spokesperson, “and it’s very rewarding to see how well it’s adapting to this new kind of collision. Having data collected by the same detector in proton-proton and heavy-ion modes is a powerful tool to look for unambiguous signatures of new states of matter.”

Lead-ion running opens up an entirely new avenue of exploration for the LHC programme, probing matter as it would have been in the first instants of the Universe’s existence. One of the main objectives for lead-ion running is to produce tiny quantities of such matter, known as quark-gluon plasma, and to study its evolution into the kind of matter that makes up the Universe today. This exploration will shed further light on the properties of the strong interaction, which binds particles called quarks into bigger objects such as protons and neutrons.

Following the winter technical stop, operation of the collider will start again with protons in February and physics runs will continue through 2011.

Provided by CERN

Five features Google needs to deliver in Android 2.3

In Uncategorized on November 9, 2010 at 2:57 pm

(from Ars Technica)

Android 2.3, codenamed Gingerbread, is expected to materialize this month. Little is known about Gingerbread’s features, however, because Google develops the operating system behind closed doors and doesn’t publish a roadmap. This has fueled a lot of speculation among Android enthusiasts.

Google has hinted that 2.3 could bring a user interface refresh that will reduce the need for handset makers to broadly deviate from the standard user experience. Various leaks have suggested that the platform is being overhauled to boost its suitability for tablet devices. Google’s new WebM multimedia format, which uses the VP8 codec, will likely be supported out of the box. It’s also possible that Gingerbread will include some of the music library streaming and synchronization features that the search giant demonstrated this year at the Google I/O conference.

We have some ideas of our own about what Google should be doing. We think that Android’s messaging applications need an overhaul, Google should make a stronger effort to deliver good first-party software, and the home screen could use some better widgets.

1. Fix the Android e-mail client

One area where Android is still disappointingly weak is conventional e-mail. Google’s own Gmail application is nice, but those of us who still use IMAP feel like second-class citizens. I have had all kinds of problems with Android’s mail application and have learned that I simply can’t rely on it to perform as expected. Google has some work to do to catch up with superior third-party mail applications like K-9.

One of my pet peeves is the native mail client’s lack of support for moving messages between folders—a deficiency that makes it impossible for me to use the program for triaging my e-mail. A feature request calling for the ability to move messages between IMAP folders was filed in Android’s official issue tracker in 2008 and was finally marked as implemented in September of this year. I’m going to be deeply disappointed if the fix doesn’t land in Android 2.3.
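For context, the IMAP protocol itself has long supported this operation: as of 2010 there was no atomic MOVE command, so clients emulate a move as a COPY followed by flagging the original message deleted and expunging. A minimal sketch using Python’s standard imaplib (the server, credentials, and folder names below are placeholders):

```python
import imaplib

# All server details and folder names here are placeholders.
conn = imaplib.IMAP4_SSL("imap.example.com")
conn.login("user@example.com", "app-password")
conn.select("INBOX")  # open the source folder

msg_set = "1"  # sequence number(s) of the message(s) to move

# IMAP (as of 2010) has no atomic MOVE, so a move is three steps:
conn.copy(msg_set, "Archive")               # 1. copy to the target folder
conn.store(msg_set, "+FLAGS", "\\Deleted")  # 2. flag the original deleted
conn.expunge()                              # 3. purge deleted messages
conn.logout()
```

The complaint, then, is not that the protocol lacks the capability, but that the stock client never exposed it.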

Another annoyance is the program’s inability to represent the user’s IMAP folder hierarchy as an actual tree when switching between folders. Instead, I get a massive flat list in which each name includes the full path. This is especially obnoxious when I’m trying to get to a deeply nested folder, because the ends of the names get truncated, making it impossible to differentiate between individual subfolders. I often have to guess and try multiple times before I find the right folder.
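Rendering that hierarchy properly is a routine data-structure exercise: IMAP’s LIST command returns full folder paths with a delimiter character, and nesting them into a tree takes only a few lines. A sketch in Python, with hypothetical folder names:

```python
# Turning a flat list of delimited IMAP folder paths (as returned by LIST)
# into a nested tree. The folder names are hypothetical.
folders = [
    "INBOX",
    "Work",
    "Work/Projects",
    "Work/Projects/Android",
    "Work/Receipts",
]

def build_tree(paths, delimiter="/"):
    """Nest 'a/b/c'-style paths into a dict-of-dicts tree."""
    tree = {}
    for path in paths:
        node = tree
        for part in path.split(delimiter):
            node = node.setdefault(part, {})  # descend, creating as needed
    return tree

def show(node, depth=0):
    for name, children in sorted(node.items()):
        print("  " * depth + name)  # leaf name only, not the full path
        show(children, depth + 1)

show(build_tree(folders))
```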

2. Deliver good first-party applications

Tight integration of Google’s Web services is arguably one of Android’s major selling points, yet there are still a number of important Google services that are poorly supported on Android. It’s mystifying that the search giant hasn’t built its own native Android applications for Google Docs or Google Reader. In both cases, users are forced to rely on third-party offerings that aren’t particularly compelling. I’ve also been deeply unimpressed with the buggy Google Finance application, which has never worked reliably for me. I’d really like to see those first-party application gaps closed in future versions of the operating system.

3. Unify Android messaging

Another frustration with Android is the lack of cohesion between the various messaging applications. Google Voice, Google Talk, Messaging, and the standard dialer are all little silos that don’t naturally flow together. It’s not always obvious which application the user should open to access the specific features that they want. The fact that the Talk and Voice icons are nearly identical just adds to the confusion. A more streamlined interface that brings all of the features together in a more natural and intuitive way would greatly improve the Android user experience.

4. More flexible home screen with better widgets

We recently reviewed LauncherPro, an excellent third-party Android home screen replacement that offers a lot of really impressive features and a very slick set of custom widgets that were loosely inspired by HTC’s Sense user interface. I happily paid $2.99 for the “Plus” version of LauncherPro just for the great scrolling agenda widget. It also has a really good widget resizing feature and support for a multitude of customization options. It makes the default Android home screen seem quaint or crippled by comparison.

It’s amazing that a single third-party developer can so vastly out-engineer Google at building a quality home-screen experience. I think that Android needs to match LauncherPro’s feature set out of the box in order to be competitive. I’m hoping that the rumored Android user interface overhaul will bring a superior home screen, but if it doesn’t, then I think the folks at Google should seriously consider hiring/acquiring LauncherPro’s prolific and highly talented developer.

5. Support for higher resolution and a real tablet UI

Although hardware vendors like Samsung are adopting Android for their tablet products, the platform is not designed for the tablet form factor. There seem to be conflicting views within Google about Android’s suitability for tablets, given how the platform’s compatibility definition and APIs are structured. The early prototypes have largely failed to impress, and some hardware makers, such as LG, have said that they are waiting for future versions of the platform before they will do Android tablets.

Leaks indicate that a new tablet user experience for Android could potentially be introduced in either Gingerbread or the rumored Honeycomb version. We are hoping that it happens sooner rather than later because there seem to be a lot of gadget makers that are ready to deliver the hardware today and simply need better software.

A related issue is the need for native support for higher screen resolutions. Google’s official documentation doesn’t really address resolutions higher than WVGA. We’d like to see Google encourage Android hardware vendors to move towards something like the iPhone’s Retina display. There is also a clear need for more netbook-like resolutions on tablet products.

Waiting for Gingerbread

A fresh round of sketchy Internet rumors claims that Gingerbread will start hitting Nexus One handsets in an over-the-air update this week. These rumors are based on a tweet written in Spanish by someone who is thought to be a leading member of the Open Handset Alliance (the fact that he misspells both “Android” and “Alliance” in his LinkedIn profile doesn’t help the credibility of these rumors, though he does appear to have given Android-related presentations at some mobile conferences).

I think it’s likely that the SDK will emerge at some point this month or in December, but I’m a bit skeptical of the claim that the Nexus One update will start rolling out this week. Even if Google pushes a test version to a limited number of developer phones, it’s highly unlikely to be the actual final build. Regardless of when it lands, we are looking forward to seeing what new features Google has cooked up.