
Saturday, May 24, 2008

Microsoft to shut down book scanning operations


Microsoft said Friday it is ending its quest to create an online library of the world's books as the technology titan revamps its strategy to battle Internet search king Google.
The Live Search Books and Live Search Academic projects are being cancelled, and the websites will be taken down next week, Microsoft senior vice president of search Satya Nadella said in an online posting.
"This also means that we are winding down our digitization initiatives, including our library scanning and our in-copyright book programs," Nadella wrote.
"Based on our experience, we foresee that the best way for a search engine to make book content available will be by crawling content repositories created by book publishers and libraries."
Microsoft launched its online library projects after Google embarked on an ambitious and controversial campaign to make all written works available free online in digital format.
Microsoft Corp. is abandoning its effort to scan whole libraries and make their contents searchable, a sign it may be getting choosier about the fights it will pick with Google Inc.

The world's largest software maker is under pressure to show it has a coherent strategy for turning around its unprofitable online business after its bid for Yahoo Inc., last valued at $47.5 billion, collapsed this month.

Digitizing books and archiving academic journals no longer fits with the company's plan for its search operation, wrote Satya Nadella, senior vice president of Microsoft's search and advertising group, in a blog post Friday.

Microsoft will take down two separate sites for searching the contents of books and academic journals next week, and Live Search will direct Web surfers looking for books to non-Microsoft sites, the company said.

Nadella said Microsoft will focus on "verticals with high commercial intent."

"We believe the next generation of search is about the development of an underlying, sustainable business model for the search engine, consumer and content partner," Nadella wrote.

At an advertising confab at Microsoft's Redmond, Wash., headquarters this week, he demonstrated a new system that rewards customers with cash rebates for using Live Search to find and buy items on advertisers' sites.

Microsoft entered the book-scanning business in 2005 by contributing material to the Open Content Alliance, an industry group conceived by the Internet Archive and Yahoo. In 2006, it unveiled its competing MSN book search site.

Unlike Google, whose decision to scan books still protected under copyright law has provoked multiple lawsuits, Microsoft stuck to scanning books with the permission of publishers or books that were firmly in the public domain.

The company said it will give publishers digital copies of the 750,000 books and 80 million journal articles it has amassed.

Microsoft's search engine is a distant third behind Google's and Yahoo's, in terms of the number of queries performed each month, despite the company's many attempts to emulate Google's innovative search features and create some of its own.

more........

Microsoft Will Shut Down Book Search Program
Microsoft said Friday that it was ending a project to scan millions of books and scholarly articles and make them available on the Web, a sign that it is retrenching in some areas of Internet search in the face of competition from Google, the industry leader.

The announcement, made on a company blog, comes two days after Microsoft said it would focus its Internet search efforts on certain areas where it sees an opportunity to compete against Google. On Wednesday, Microsoft unveiled a program offering rebates to users who buy items that they find using the company’s search engine.

Some search experts said Microsoft’s decision to end its book-scanning effort suggested that the company, whose search engine has lagged far behind those of Google and Yahoo, was giving up on efforts to be comprehensive.

“It makes you wonder what else is likely to go,” said Danny Sullivan, editor in chief of the blog Search Engine Land. “One of the reasons people turn to Google is that it tries to be a search player in all aspects of search.”

Mr. Sullivan said that the number of people using book search services from Microsoft and Google was relatively small, but it included librarians, researchers and other so-called early adopters who often influence others. These users are now likely to turn to Google with increasing frequency, he said.

Both Microsoft and Google have been scanning older books that have fallen into the public domain, as well as copyright-protected books under agreements with some publishers. Google also scans copyrighted works without permission so it can show short excerpts to searchers, an approach that has drawn fire from publishers.

Microsoft’s decision also leaves the Internet Archive, the nonprofit digital archive that was paid by Microsoft to scan books, looking for new sources of support. Several major libraries said that they had chosen to work with the Internet Archive rather than with Google, because of restrictions Google placed on the use of the new digital files.

“We’re disappointed,” said Brewster Kahle, chairman of the Internet Archive. Mr. Kahle said, however, that his organization recognized that the project, which has been scanning about 1,000 books each day, would not receive corporate support indefinitely. Mr. Kahle said that Microsoft was reducing its support slowly and that the Internet Archive had enough money to keep the project “going for a while.”

“Eventually funding will come from the public sphere,” Mr. Kahle said.

Some libraries that work with the Internet Archive and Microsoft also said they planned to continue their book-scanning projects.

“We certainly expect to go on with this,” said Carole Moore, chief librarian at the University of Toronto. “Corporate sponsors are interested in whatever works for their commercial interests and their shareholders. Long-term preservation is not something you can look to the commercial sector to provide. It is what research libraries have always done.”

Microsoft acknowledged on its blog that commercial considerations played a part in its decision to end the program.

“Given the evolution of the Web and our strategy, we believe the next generation of search is about the development of an underlying, sustainable business model for the search engine, consumer and content partner,” Satya Nadella, Microsoft’s senior vice president for search, portal and advertising, wrote on the blog.

Microsoft said it had digitized 750,000 books and indexed 80 million journal articles.

Google, which works with libraries like the New York Public Library and those at Harvard, Stanford, the University of Michigan and Oxford, said it had scanned more than a million books. It plans to scan 15 million in the next decade. Google makes the books it scans freely available through its search engine but does not allow other search engines to use its database.

“We are extremely committed to Google Book Search, Google Scholar and other initiatives to bring more content online,” said Adam Smith, product management director at Google.

Friday, May 23, 2008

Shuttle's mission to repair Hubble telescope delayed till Oct. 8

An Aug. 28 mission to repair the Hubble Space Telescope has been moved to Oct. 8 because of delays in building fuel tanks needed for the mission, NASA officials announced Thursday.

The shuttle's external tank was redesigned for safer launches after the 2003 Columbia accident. Falling foam from the tank punched a hole in the shuttle's wing, later exposing it to dangerous gases and heat on re-entry.

Shuttle Atlantis is scheduled to make the Hubble trip, with her sister ship Endeavour ready for a rescue mission in case something goes wrong. Unlike missions to the international space station, a shuttle mission to Hubble offers no safe harbor if there's trouble.

If Endeavour isn't tapped for a rescue mission, it will fly its own mission to the space station Nov. 10, a delay from its previous target of Oct. 16. NASA officials also said they would use Atlantis for two additional missions after Hubble, ensuring use of all three orbiters until the program ends.

NASA plans to retire the shuttles in 2010 to make room for its successor, the Constellation program.

In a previous announcement, NASA officials said the most recent delay likely would push back all remaining shuttle missions by about five weeks, although they were confident they could meet the 2010 deadline.

more..........

Hubble Mission Is Moved Back
The National Aeronautics and Space Administration set Oct. 8 as the date for the final mission to the Hubble Space Telescope. A crew of seven aboard the space shuttle Atlantis was to repair and upgrade the 18-year-old telescope at the end of August, but the mission was delayed because more time was needed to build fuel tanks for the shuttle flight and a potential rescue mission. The agency also rescheduled a supply mission to the International Space Station to Nov. 10 from Oct. 16. The shuttle Discovery is scheduled for launching on May 31 to deliver and install the Japanese laboratory Kibo at the space station.

Google Co-Founder Makes Pitch For Unused Airwaves Access

Google Inc. co-founder Larry Page this week made an unprecedented appeal to policy makers for access to unused television airwaves.
Mr. Page traveled to Washington, D.C., Wednesday and Thursday to meet with members of Congress and the Federal Communications Commission. Speaking at a public event Thursday, he said the unused airwaves, dubbed "white space," would hasten the goal of blanketing the country with Internet access.
Mr. Page said current "Wi-Fi" wireless technology that allows Internet connections in many urban areas is less useful in rural parts of the country because its range is limited. Using TV white spaces, "You can really get a lot more range," he said.
Mr. Page singled out the National Association of Broadcasters as Google's main adversary in the battle over TV's unused spectrum. "Part of why I'm here is I just don't want people to be misled by people who have an interest in this to cause the country to do the wrong thing," Mr. Page said. "Should you really be listening to the NAB, which wants to keep the spectrum for its own use?"
Other groups, such as wireless microphone manufacturers and sports leagues, also have expressed concerns about white space devices interfering with their own wireless communications equipment.
The FCC is testing a white space device from Motorola Inc. Other companies, including Microsoft Corp. and Philips Electronics NV, have submitted devices for testing. White space devices have malfunctioned at the FCC on several occasions.
The FCC is remaining mum about whether such devices should be permitted and whether the unused television space should be licensed. The commission is unlikely to make a policy decision on those points until it is presented with a device that is proven to work without interference.
Mr. Page said he is confident a successful device is in the works. "I bet 100% that it will happen. It's just a question of what year it happens in," he said.
During his visit to Washington, Mr. Page met with FCC Chairman Kevin Martin and Commissioner Michael Copps, one of two Democrats on the commission.
Mr. Page also met with three senior members of the House Energy and Commerce Committee -- Chairman John Dingell (D., Mich.), Telecommunications Subcommittee Chairman Edward Markey (D., Mass.) and Rep. Cliff Stearns (R., Fla.), the subcommittee's senior Republican.
During Mr. Page's Thursday presentation, he also said Google is "concerned" about the prospects of a merger between Microsoft and Yahoo Inc. Merger talks between Microsoft and Yahoo broke down earlier this month, but the two companies recently have reopened discussions.
"If Yahoo and Microsoft were to merge, they'd have something like 90% of all the communications market," Mr. Page said. "I think that's a really big risk."
Separately, Google and Yahoo are negotiating an advertising deal in which Yahoo would carry search ads served by Google. Civic groups have criticized that deal as all but monopolizing the Internet advertising market.
Mr. Page responded to that critique. "Obviously, we do have a large advertising share and so on, but we also feel like there are ways in which to structure a deal with Yahoo that would be reasonable from that standpoint," he said.


more.........
Google's White-Space Fixation
Google co-founder Larry Page made a rare trip to Washington this week. No, he wasn't lobbying for net neutrality or being grilled about Internet censorship in China. It was all about the white spaces—and Google's growing fixation with wireless communications.
With opposition mounting, Page came to bolster Google's push to gain public access to these white spaces, slivers of wireless spectrum between the broadcast channels used by TV stations. These slivers were originally designed to prevent interference between over-the-air TV broadcasts. But with TV stations moving to new frequencies under a government-ordered switch to digital broadcasting, some see opportunity in those white spaces.
Google (GOOG) and some odd bedfellows, including Microsoft (MSFT), have urged the Federal Communications Commission (FCC) to turn this spectrum over to the public for free, unlicensed use—much like there are designated slices of the airwaves for Wi-Fi networks set up by homes, businesses, and cities. Until recently, though some broadcasters opposed the idea, it looked as if the technology companies would get their way, and that it was only a matter of time before consumers might be allowed to use white spaces for speedier mobile Internet access.
Arguing for the Status Quo
Not anymore. The first hint of trouble came in late March, when the trade group that represents the U.S. cellular industry urged the FCC to auction off the spectrum to the highest bidder instead. "We believe it's a superior approach," says Joel Farren, spokesman for CTIA-The Wireless Assn. "It's a proven model. It protects service quality for consumers." And, based on the $20 billion raised earlier this year in a federal spectrum auction, "there's a strong demand for licensed spectrum," he argues.
Then there are those who want to leave things just the way they are. And the white spaces are in fact already used for limited purposes. In early May, the country music and sports industries voiced concerns to the FCC that unlicensed devices might interfere with wireless microphones used by musicians and sportscasters during live events. Similarly, GE Healthcare (GE) recently warned "about the potential for harmful interference" to medical equipment that uses white spaces, asking the FCC to delay redeployment of some spectrum until 2010 so hospitals have time to phase out older machines.
The debate is growing more vocal now because the FCC, which has been reviewing the issue for four years, may be inching closer to a decision. Interested parties have contacted the FCC—via letters and personal visits—nearly twice as many times in May as in April.
FCC Report Could Come Within Weeks
In the past year, several prototypes of white-space devices failed FCC tests. But the agency recently lab-tested several newer prototypes made by Motorola (MOT), Philips Electronics, and a startup named Adaptrum, and may begin field testing soon. "The FCC is committed to moving forward on TV white spaces testing," says FCC spokesman Robert Kenny.
The FCC may issue a report on the field tests within weeks. "We expect a rule by late summer," says Brian Peters, spokesperson for Wireless Innovation Alliance, which promotes unlicensed use of the spectrum on behalf of members including Microsoft, Google, Dell (DELL), and Hewlett-Packard (HPQ). But with all the lobbying from the opposition, "it's less certain now" what that rule will look like, says Rebecca Arbogast, principal at Stifel Nicolaus.

Enter Larry Page. During his May 22 speech to the New America Foundation, a think tank where Google CEO Eric Schmidt is chairman-elect, Page used a wireless microphone to downplay interference concerns. "I don't think there's any technical credence to this at all," he said.
Better Broadband Access
Page also argued that unlicensed white spaces offer a way for the U.S. to catch up with the rest of the world in broadband access. For the second year running, the U.S. ranked 15th among the 30 members of the Organization for Economic Cooperation & Development in terms of broadband availability, a recent survey found (BusinessWeek.com, 5/22/08). Today, 10% of Americans still don't have access to DSL or cable broadband, according to consultancy Parks Associates.
Google and others also see white spaces as a way to reignite interest in municipal Wi-Fi networks, many of which are struggling or even being turned off due to financial and service-quality problems. Because the white-space spectrum is more robust, with far greater range than Wi-Fi, networks using those frequencies would require a fourth to a fifth as many transmitters to cover an area, according to Michael Calabrese, vice-president of the New America Foundation; coverage area grows with the square of a transmitter's range, so roughly doubling the range cuts the transmitter count to about a quarter. Thus, network construction would cost less, while the wireless connections would be speedier.
Should white spaces be approved for unlicensed use, Page hinted, Google might even build some networks for cities with its own funds. "We have money to invest," he said. "We'd probably do it if we could do it on a reasonable scale." Google currently operates a Wi-Fi network in Mountain View, Calif., used by 40,000 people.
Google Has Plenty to Gain
Yet Google has hinted at major wireless incursions before, only to hang back on the sidelines. Before the last FCC auction, the Internet search company pushed hard for open-access rules requiring mobile operators to allow more devices and services on their networks. Google vowed to participate in the auction if such rules were adopted, but once they were, the company made what appeared to be just a token bid before withdrawing.
That said, Google does have plenty to gain from open access to white spaces. White spaces could aid adoption of the Android operating system for cell phones, which Google spearheaded last year in hopes that some Android-based devices would connect over that unlicensed spectrum in addition to traditional cellular networks.
And since Google makes most of its money from displaying ads alongside its search results and on other Web pages, more ubiquitous access to the Internet could mean more business. "If we have 10% better (broadband) connectivity in the U.S., it translates into 10% more revenues for us," Page said.

Thursday, May 22, 2008

US researchers have created 'living computers' by genetically altering bacteria.


Three-dimensional computer-rendered E. coli bacteria.
New Meaning For The Term 'Computer Bug': Genetically Altered Bacteria For Data Storage.
US researchers have created 'living computers' by genetically altering bacteria. The findings of the research demonstrate that computing in living cells is feasible, opening the door to a number of applications including data storage and as a tool for manipulating genes for genetic engineering.

A research team from the biology and the mathematics departments of Davidson College, North Carolina and Missouri Western State University, Missouri, USA added genes to Escherichia coli bacteria, creating bacterial computers able to solve a classic mathematical puzzle, known as the burnt pancake problem.

The burnt pancake problem involves a stack of pancakes of different sizes, each of which has a golden side and a burnt side. Each flip reverses both the order and the orientation (i.e., which side of each pancake faces up) of one or several consecutive pancakes. The goal is to sort the stack so that the largest pancake is on the bottom and every pancake is golden side up, in the fewest possible flips.

In this experiment, the researchers used fragments of DNA as the pancakes. They added genes from a different type of bacterium to enable the E. coli to flip the DNA 'pancakes'. They also included a gene that made the bacteria resistant to an antibiotic, but only once the DNA fragments had been flipped into the correct order. The time the bacteria take to become antibiotic-resistant thus reflects the minimum number of flips needed to solve the burnt pancake problem.
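To make the puzzle concrete, here is a minimal brute-force sketch (an illustration, not the researchers' method): each pancake is modeled as a (size, burnt-side-up) pair, a flip may invert any consecutive run as described above, and a breadth-first search finds the fewest flips. The bacteria carry out the same search in DNA, massively in parallel.

```python
from collections import deque

def flip(stack, i, j):
    """Flip the consecutive run stack[i:j]: reverse its order and
    toggle each pancake's burnt-side-up flag."""
    run = tuple((size, not burnt) for size, burnt in reversed(stack[i:j]))
    return stack[:i] + run + stack[j:]

def min_flips(start):
    """Breadth-first search for the fewest flips that sort the stack
    (index 0 is the top) smallest-to-largest, every pancake golden side up."""
    start = tuple(start)
    goal = tuple((size, False) for size, _ in sorted(start))  # smallest on top
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        state, flips = queue.popleft()
        if state == goal:
            return flips
        for i in range(len(state)):
            for j in range(i + 1, len(state) + 1):
                nxt = flip(state, i, j)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, flips + 1))

# Example: three pancakes, two burnt side up, stacked out of order.
print(min_flips([(2, True), (1, False), (3, True)]))
```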

"The system offers several potential advantages over conventional computers" says lead researcher, Karmella Haynes. "A single flask can hold billions of bacteria, each of which could potentially contain several copies of the DNA used for computing. These 'bacterial computers' could act in parallel with each other, meaning that solutions could potentially be reached quicker than with conventional computers, using less space and at a lower cost." In addition to parallelism, bacterial computing also has the potential to utilize repair mechanisms and, of course, can evolve after repeated use.

Wednesday, May 21, 2008

Large Hadron Collider is gearing up to slam protons together at energies that have not yet been studied on Earth.

A visitor snaps a picture of the Large Hadron Collider's underground beamline during an open house in April, which was the last opportunity for the public to see the facility before the scheduled start of operations.
BIG-BANG BATTLE PLAN SET

The schedule is taking shape for the startup of the world's biggest particle-smasher — and for the lawsuit seeking to shut it down.
The plaintiffs in that lawsuit have served the federal government with a summons, and Justice Department lawyers are due to respond by June 24. One of the other parties in the case, Europe’s CERN particle-physics center, is supposed to be served this week in Switzerland, according to Walter Wagner, one of the plaintiffs.
CERN's Large Hadron Collider is gearing up to slam protons together at energies that have not yet been studied on Earth. The peak energy of 14 trillion electron volts approaches levels seen in the first microseconds after the big bang - which is why the collider has been nicknamed the "Big Bang Machine."
Wagner and his co-plaintiff, Luis Sancho, are worried that when the collider reaches full power, it could create black holes or strangelets that would grow and gobble up our planet.
Physicists at CERN and the world's other top-level research facilities have been saying for years that that's mere science-fiction silliness. Nevertheless, Wagner, Sancho and other critics continue to sound the alarm. They want operations at the collider to be put on hold for at least four months, pending further safety reviews that would address the black-hole question and other potential risks.
Among other defendants, the lawsuit names the U.S. Department of Energy, the National Science Foundation and Fermilab in Illinois, the laboratory that is playing the lead U.S. role in the Large Hadron Collider. The Justice Department is handling the federal government's legal response, and Justice spokesman Andrew Ames said he would not comment on the suit until the response is filed next month.
Federal attorneys are likely to focus their defense on relatively narrow legal issues – for example, claiming that the government as well as government-funded scientists have complied with environmental guidelines, or that the LHC project is not subject to U.S. regulations, or that the lawsuit should be thrown out because of technicalities. That’s how Wagner’s challenges to previous particle-collider experiments have been handled.
Although anything can happen (even the sudden eruption of a rogue black hole in the courtroom), I wouldn’t expect the attorneys’ brief to focus on the globe-gobbling question. That element of the controversy will be addressed in a safety report currently being reviewed by CERN and outside experts. The report, which is said to underline and amplify previous conclusions that the LHC is safe, could be released by the end of this month, CERN spokesman James Gillies told me.
The technical report is currently undergoing a final review by CERN’s scientific policy committee as well as outside experts, and Gillies is writing up a version in easier-to-understand language for the benefit of us non-physicists.
First beams in July?

Meanwhile, CERN’s startup schedule is coming into better focus as well: The LHC team is due to start cooling down the last sectors of the collider’s beamline to near absolute zero on Wednesday, with the expectation that cooldown will be complete by mid-June, Gillies said. That would clear the way for a final round of equipment testing, with the first attempt to inject proton beams into the collider “likely to be in the second half of July,” he said.
The exact date would be set four to six weeks in advance – leaving enough time to plan a big media event around the first beam injection. Gillies said the first injection will provide a convenient hook for coverage, including a live BBC broadcast of the turn-on around 9:30 a.m. CET (3:30 a.m. ET) on the appointed day. However, he stressed that the beam injection was just one step in a months-long commissioning process.
“It’s not like launching a space shuttle or anything like that,” Gillies said.
The first low-power proton collisions would come later in the summer or fall, leading up to a VIP ceremony on Oct. 21. The collider won’t reach its full power until next year, after CERN’s winter break. Any legal questions should be resolved by the time the Large Hadron Collider gets anywhere close to post-big-bang energies. At least that’s what the Justice Department and CERN would expect.
Weighing the risks

For his part, Wagner wants to see the safety report first. Despite all the expert claims that the LHC will be safe, the former nuclear health physicist insisted that nothing he's seen so far has absolutely ruled out the black-hole doomsday scenario.
"For all I know, they will come up with some other novel argument that proves this can't happen. We want to see an argument that absolutely proves it ... because otherwise it ends up being [a statement that] 'we have no way of calculating.' And that, to me, is a scary proposition."
I should emphasize here that most scientists, even the ones who think way outside the box, are not scared. Here's how theoretical physicist Michio Kaku, the author of "Physics of the Impossible," put it to me back in February:
"I'm going to sleep well when that machine is turned on, because I know that cosmic rays have more energy than the Large Hadron Collider, and you don't see black holes from outer space. These are microscopic in size, and they don't last long."
Of course, there are always counterarguments, and counter-counterarguments. For a sampling, you can check out LHC Concerns and the BackReaction blog, among many other resources.

Tuesday, May 20, 2008

A fragment of DNA from the Tasmanian tiger has been brought back to life.


Australian scientists extracted genetic material from a 100-year-old museum specimen, and put it into a mouse embryo to study how it worked.
It is the first time DNA of an extinct species has been used in this way, says a University of Melbourne team.
The study, published online by the Public Library of Science (PLoS), suggests the marsupial's genetic biodiversity may not be lost.
Dr Andrew Pask, of the Department of Zoology, who led the research, said it was the first time that DNA from an extinct species had been used to carry out a function in a living organism.
"As more and more species of animals become extinct, we are continuing to lose critical knowledge of gene function and its potential," he said.
"Up until now we have only been able to examine gene sequences from extinct animals. This research was developed to go one step further to examine extinct gene function in a whole organism."

Genetic heritage
The Tasmanian tiger was hunted to extinction in the wild in the early 1900s. The last known specimen died in captivity in 1936, but several museums around the world still hold tissue samples preserved in alcohol.
The University of Melbourne team extracted DNA from some of these specimens, and injected a gene involved in cartilage formation into developing mouse embryos.
The DNA functioned in a similar way to the equivalent gene in mice, giving information about the genetic make-up of the extinct marsupial.
"At a time when extinction rates are increasing at an alarming rate, especially of mammals, this research discovery is critical," said Professor Marilyn Renfree, also of the University of Melbourne's Department of Zoology.
"For those species that have already become extinct, our method shows that access to their genetic biodiversity may not be completely lost."

Frozen Ark
Prof David Rawson, who was not part of the research team, said the work gave a glimpse of an aspect of an organism that we no longer have.
"We only get a glimpse; we only see a tiny part of the whole picture," he said.
Prof Rawson said the DNA came from a species that only recently died out, and for which there are samples preserved in alcohol. Going further back in time will be more difficult, he added.
"To go back to animals and plants that went extinct thousands of years ago, there is less chance to get a sizeable portion of DNA to unravel it," he explained.
"But modern techniques are developing all the time - we can now get information from material we once thought was impossible."
Some researchers think the method could help reveal the function of genes in species such as the Neanderthals or mammoths.
Prof Rawson, of the LIRANS Institute of Research at the University of Bedfordshire, UK, is one of several UK experts involved in the Frozen Ark, a global project to preserve genetic information from a range of threatened species.
Full details of the Australian study are published in the open-access journal PLoS One.

Ministers are to consider plans for a database of electronic information holding details of every phone call and e-mail sent in the UK.


Phone calls database considered

Ministers are to consider plans for a database of electronic information holding details of every phone call and e-mail sent in the UK, it has emerged.
The plans, reported in the Times, are at an early stage and may be included in the draft Communications Bill later this year, the Home Office confirmed.
A Home Office spokesman said the data was a "crucial tool" for protecting national security and preventing crime.
Ministers have not seen the plans, which were drawn up by Home Office officials.
A Home Office spokesman said: "The Communications Data Bill will help ensure that crucial capabilities in the use of communications data for counter-terrorism and investigation of crime continue to be available.

"These powers will continue to be subject to strict safeguards to ensure the right balance between privacy and protecting the public."



The spokesman said changes need to be made to the Regulation of Investigatory Powers Act 2000 "to ensure that public authorities can continue to obtain and have access to communications data essential for counter-terrorism and investigation of crime purposes".
But the Information Commissioner's Office, an independent authority set up to protect personal information, said the database "may well be a step too far" and highlighted the risk of data being lost, traded or stolen.
Assistant information commissioner Jonathan Bamford said: "We are not aware of any justification for the state to hold every UK citizen's phone and internet records. We have real doubts that such a measure can be justified, or is proportionate or desirable.
"Defeating crime and terrorism is of the utmost importance, but we are not aware of any pressing need to justify the government itself holding this sort of data."
'Appalling record'
A number of data protection failures in recent months, including the loss of a CD carrying the personal details of every child benefit claimant, have embarrassed the government.
The plans also prompted concern from political groups.
The shadow home secretary, David Davis, said: "Given [ministers'] appalling record at maintaining the integrity of databases holding people's sensitive data, this could well be more of a threat to our security than a support."
Liberal Democrat home affairs spokesman Chris Huhne called the proposals "an Orwellian step too far".
He said ministers had "taken leave of their senses if they think that this proposal is compatible with a free country and a free people".
"Given the appalling track record of data loss, this state is simply not to be trusted with such private information," said Mr Huhne.


Discovery passes final review for May 31 launch


Discovery Ready For Space Mission
The shuttle Discovery has been given a green light for the May 31 launch, when it will engage in the second of three flights to launch components of the Japan Aerospace Exploration Agency’s Kibo laboratory. The announcement was made on Monday, during a press conference held at NASA’s Kennedy Space Center in Florida, following the Flight Readiness Review.
According to Shuttle Launch Director Mike Leinbach, all preparations are going well. He also added that shuttle work crews will be able to get some time off for the Memorial Day holiday, due to a smooth processing flow of the pre-launch preparations, and return in time for the launch.
Discovery’s STS-124 mission is to install Kibo’s Japanese Pressurized Module (JPM) and its remote manipulator system (RMS) on the International Space Station, following the successful installation of the Japanese Experiment Logistics Module during mission STS-123.
Before any launch, two Flight Readiness Reviews need to be conducted: a program-level review and an executive-level review. The shuttle is preparing for a 14-day flight to the International Space Station, carrying the largest payload so far (the Kibo pressurized module alone weighs 32,000 pounds).
Discovery will also be in charge of delivering new station crew member Greg Chamitoff and bringing Flight Engineer Garrett Reisman back home, after three months aboard the International Space Station, NASA announced.
The STS-124 mission will include three spacewalks, as follows: on day 4, astronauts Ronald J. Garan Jr. and Michael E. Fossum will transfer the Orbiter Boom Sensor back to the shuttle from its temporary location (during the last mission, the Boom Sensor was left at the station for lack of room) and then prepare for the JPM removal from the shuttle’s payload bay.
The second spacewalk will take place two days after the first one. Garan and Fossum will have the mission to install covers and external television equipment on the JPM and remove covers on the RMS, as well as prepare for the flight day 7 relocation of the Japanese Logistics Module.
The third and final spacewalk will be performed by the same astronauts, whose primary mission will be to replace a failed hydrogen tank assembly on the station’s truss with a spare one that has been temporarily stored on one of the station’s external stowage platforms.

more.........

NASA managers today cleared the shuttle Discovery for launch May 31, at 5:02:09 p.m. EDT, on a long-awaited three-spacewalk mission to deliver and attach Japan's huge Kibo laboratory module to the international space station. The decision to proceed came after a lengthy discussion on the health of the station's Soyuz lifeboat after back-to-back re-entry problems that led to rough, off-course landings.

Russian engineers are still assessing what went wrong during the descent of the Soyuz TMA-11 spacecraft April 19 when two of the three modules making up the vehicle failed to separate properly before atmospheric entry. The propulsion module ultimately broke free of the crew section, allowing Yuri Malenchenko, outgoing station commander Peggy Whitson and a South Korean space tourist to complete a steep but otherwise safe landing in Kazakhstan.
It was the second such entry mishap in a row and Russian engineers have launched a major investigation to determine what went wrong and whether the Soyuz TMA-12 spacecraft currently docked to the station is healthy. It is a critical issue because the three-seat Soyuz is the station crew's only way home in the absence of a space shuttle in the event of an emergency that might force an evacuation.
It is a critical issue for NASA as well because the agency plans to rotate U.S. crew members during Discovery's flight, ferrying Gregory Chamitoff to the station to join Expedition 17 commander Sergei Volkov and Oleg Kononenko and bringing Garrett Reisman back to Earth. Another shuttle is not scheduled to visit the space station until November. The Soyuz TMA-12 spacecraft will serve as the station's lifeboat until October when a fresh crew is launched aboard a fresh Soyuz. Current plans call for Volkov, Kononenko and U.S. space tourist Richard Garriott, who will ride the new Soyuz to orbit, to return to Earth aboard the TMA-12 spacecraft Oct. 23.
Going into today's executive-level flight readiness review to set a launch date for Discovery, NASA managers discussed a variety of options, including whether to delay the shuttle flight until Russian engineers get a better idea about the status of the Soyuz currently in orbit.
But the Russian investigation into what went wrong during the Soyuz TMA-11 descent is not expected to be complete until the end of June or later, and a one- to two-week delay for Discovery would not improve the station crew's safety margin.
Bill Gerstenmaier, NASA's chief of space operations, said the odds of a station failure that would force a Soyuz evacuation are low - on the order of 1-in-124 over six months - and that a safe landing would be likely even if similar entry problems occurred.
"If something comes out of the investigation that says the Soyuz is not acceptable as a return vehicle, then we would go take some appropriate action," Gerstenmaier said. "But we haven't seen anything along those lines. For emergency return, Soyuz is OK. ... But the Russians are working through it methodically, trying to identify if there's anything that would invalidate its use as an emergency return vehicle. As long as that doesn't occur, then we proceed with our normal plans. And I don't see anything between now and the 31st that's going to change any of that thinking."
Sources familiar with recent NASA discussions on the Soyuz issue said an assessment of the relative risks of various options played a key role. While the odds of a station problem that would force evacuation are thought to be around 1-in-124 over six months, the overall risk of a catastrophic shuttle failure over the course of Discovery's mission - including all phases of launch, orbital operations and re-entry - is on the order of 1-in-78, according to NASA's latest assessment. Given those relative odds, and the belief that the station is five times more likely to suffer a non-recoverable failure in the absence of a crew to repair it, NASA managers opted to press ahead with an on-schedule launch for Discovery.
"It's a fairly low probability that we'd need to use (the Soyuz) in an emergency case," Gerstenmaier said. "In fact, we analyzed that, we did a probabilistic risk assessment of what the chances were of having to use the Soyuz as a rescue vehicle. It's a low probability we're going to have to use it. But if we use it, we think there's a good probability it'll return the crew and do what it needs to do. So as a parachute or a backup system, it has the reliability that we think we need for a backup system. We have yet to prove it has the reliability that we would use for a nominal return situation."
Discovery is in good shape and on schedule for launch May 31. The only technical problem of any real concern since the shuttle was moved to pad 39A was the failure of a multiplexer-demultiplexer computer system that required a changeout. When the MDM, known as FA2, failed, it caused two of the shuttle's four flight computers to lose synchronization. In flight, that could force a crew to switch to a backup flight system computer, limiting their ability to cope with additional failures. But the MDM was successfully replaced and tested and no concerns about it were raised during today's flight readiness review.
"It's an extremely complicated mission," Gerstenmaier said. "Adding the Kibo module is a big deal for the Japanese. This really brings them up to speed. And adding the Kibo module is not easy. ... We need to be careful we don't assume success and take our eye off of what we're doing. We've got to stay focused. The Soyuz is fine, it will take care of itself, we've got time to work that. That needs to get resolved by the fall. The issue right now in front of us is we need to be 100 percent ready to go fly this flight, we've got to be 100 percent ready to get the Kibo attached. ... We need to work all that activity during the flight and that needs to be our focus."
Commander Mark Kelly, pilot Kenneth Ham, flight engineer Ronald Garan, Karen Nyberg, Michael Fossum, Japanese astronaut Akihiko Hoshide and Chamitoff plan to fly to the Kennedy Space Center on May 28 for the 3 p.m. start of their countdown to launch.

Sunday, May 18, 2008

Electrical engineers have created experimental solar cells spiked with nanowires



Nanowires may boost solar cell efficiency, engineers say


University of California, San Diego electrical engineers have created experimental solar cells spiked with nanowires that could lead to highly efficient thin-film solar cells of the future.
Indium phosphide (InP) nanowires can serve as electron superhighways that carry electrons kicked loose by photons of light directly to the device's electron-attracting electrode - and this scenario could boost thin-film solar cell efficiency by increasing the number of electrons that make it from the light-absorbing polymer to an electrode. By reducing electron-hole recombination, the UC San Diego engineers have demonstrated a way to increase the efficiency with which sunlight can be converted to electricity in thin-film photovoltaics.

Including nanowires in the experimental solar cell increased the "forward bias current" - which is a measure of electrical current - by six to seven orders of magnitude as compared to their polymer-only control device, the engineers found.

"If you provide electrons with a defined pathway to the electrode, you can reduce some of the inefficiencies that currently plague thin-film solar cells made from polymer mixtures. More efficient transport of electrons and holes - collectively known as carriers - is critical for creating more efficient solar cells," said Clint Novotny, the first author of the NanoLetters paper and a recent electrical engineering Ph.D. from UC San Diego's Jacobs School of Engineering. Novotny is now working on solar technologies at BAE Systems.

Simplified Nanowire Growth

The engineers devised a way to grow nanowires directly on the electrode. This advance allowed them to create the electron superhighways that deliver electrons from the polymer-nanowire interface directly to an electrode.

"If nanowires are going to be used massively in photovoltaic devices, then the growth mechanism of nanowires on arbitrary metallic surfaces is an issue of great importance," said co-author Paul Yu, a professor of electrical engineering at UC San Diego's Jacobs School of Engineering. "We contributed one approach to growing nanowires directly on metal."

The UCSD electrical engineers grew their InP nanowires on the metal electrode - indium tin oxide (ITO) - and then covered the nanowire-electrode platform in the organic polymer P3HT, also known as poly(3-hexylthiophene). The researchers say they were the first group to publish work demonstrating growth of nanowires directly on metal electrodes without using specially prepared substrates such as gold nanodrops.

Growing nanowires directly on untreated electrodes is an important step toward the goal of growing nanowires on cheap metal substrates that could serve as foundations for next-generation photovoltaics that conform to curved surfaces like rooftops, cars or other supporting structures, the engineers say.

"By growing nanowires directly on an untreated electrode surface, you can start thinking about incorporating millions or billions of nanowires in a single device. I think this is where the field is eventually going to end up," said Novotny. "But I think we are at least a decade away from this becoming a mainstream technology."

Polymer Solar Cells and Nanowires Meet

As in more traditional organic polymer thin-film solar cells, the polymer material in the experimental system absorbs photons of light. To convert this energy to electricity, each photoexcited electron must split apart from its hole companion at the interface of the polymer and the nanowire - a region known as the p-n junction.

Once the electron and hole split, the electron travels down the nanowire - the electron superhighway - and merges seamlessly with the electron-capturing electrode. This rapid shuttling of electrons from the p-n junction to the electrode could serve to make future photovoltaic devices made with polymers more efficient.

"In effect, we used nanowires to extend an electrode into the polymer material," said co-author Edward Yu, a professor of electrical engineering at UCSD's Jacobs School of Engineering.

While the electrons travel down the nanowires in one direction, the holes travel along the nanowires in the opposite direction - until the nanowire dead ends. At this point, the holes are forced to travel through a thin polymer layer before reaching their electrode.

Today's thin-film polymer photovoltaics do not provide freed electrons with a direct path from the p-n junction to the electrode - a situation which increases recombination between holes and electrons and reduces efficiency in converting sunlight to electricity. In many of today's polymer photovoltaics, interfaces between two different polymers serve as the p-n junction. Some experimental photovoltaic designs do include nanowires or carbon nanotubes, but these wires and tubes are not electrically connected to an electrode. Thus, they do not minimize electron-hole recombination by providing electrons with a direct path from the p-n junction to the electrode the way the new UCSD design does.

Before these kinds of electron superhighways can be incorporated into photovoltaic devices, a series of technical hurdles must be addressed - including the issue of polymer degradation. "The polymers degrade quickly when exposed to air. Researchers around the world are working to improve the properties of organic polymers," said Paul Yu.

As it was a proof-of-concept project, the UCSD engineers did not measure how efficiently the device converted sunlight to electricity. This explains, in part, why the authors refer to the device in their NanoLetters paper as a "photodiode" rather than a "photovoltaic."

Having a more efficient method for getting electrons to their electrode means that researchers can make thin-film polymer solar cells that are a little bit thicker, and this could increase the amount of sunlight that the devices absorb.


Morrow’s three-dimensional arrangement of the magnetic and non-magnetic layers creates a material that exhibits promising magnetic properties for data storage



Morrow will graduate from Rensselaer with a doctorate in physics, applied physics, and astronomy.


Student Innovation Could Improve Data Storage, Magnetic Sensors.




Paul Morrow has come a long way from his days as an elementary school student, pulling apart his mother's cassette player. The talented young physicist has developed two innovations that could vastly improve magnetic data storage and sense extremely low level magnetic fields in everything from ink on counterfeit currency to tissue in the human brain and heart.


First, Morrow developed a nanomaterial that has never before been produced. The nanomaterial is an array of freestanding nanoscale columns composed of alternating layers of magnetic cobalt and non-magnetic copper.

Morrow's three-dimensional arrangement of the magnetic and non-magnetic layers creates a material that exhibits promising magnetic properties for data storage and magnetic field sensing at room temperature. Similar technology is currently in use in hard drives around the world, but those drives use a two-dimensional film design for the layers.

"Because the nanostructure is three-dimensional, it has the potential to vastly expand data storage capability," Morrow said. "A disk with increased data storage density would reduce the size, cost, and power consumption of any electronic device that uses a magnetic hard drive, and a read head sensor based on a small number of these nanocolumns has promise for increasing spatial sensitivity, so that bits that are more closely spaced on the disk can be read. This same concept can be applied to other areas where magnetic sensors are used, such as industrial or medical applications."

Morrow has also developed a microscopic technique to measure the minute magnetic properties of his nanocolumns. Prior to his innovation, no such method existed that was fine-tuned enough to sense the magnetic properties of one or even a small number of freestanding nanostructures.

The technique uses a specialized scanning tunneling microscope (STM) that Morrow built that contains no internal magnetic parts. Most STMs in use today have magnetic parts that make it impossible for them to operate reliably in an external magnetic field, according to Morrow. With his modified non-magnetic STM, Morrow was able to use an electromagnet to control the magnetic behavior of his nanocolumns and measure the magnetic properties of fewer than 10 nanocolumns at one time.


"To date it has been extremely difficult to get an instrument to detect magnetic properties on such a small scale," Morrow said. "With this type of sensitivity, engineers will be able to sense the very low level magnetic properties of a material with sub-micron spatial resolution."

He is currently working to fine-tune the device to detect the properties of just one nanocolumn. His technique could have important implications for the study of other magnetic nanostructures in magnetic sensing applications, including motion sensors for industrial use and detection of the magnetic ink in currency and other secure documents, and could even help detect and further our understanding of the minuscule magnetic fields generated by the human body.

His discoveries have been published in two articles in the journal Nanotechnology.

Morrow proudly originates from the city of Spartanburg, S.C., the only boy in a close family that includes three sisters. His father is a retired chemistry professor at Wofford College, the local liberal arts college that Morrow attended for his bachelor's, and his mother is a master teacher who instructs elementary schoolteachers in improving their teaching methods. "Their love of learning and teaching has inspired me to one day become a teacher myself," Morrow said.


more..............


Paul Morrow Research


Paul Morrow Ph.D. Project



Title: "Contact magneto-resistance measurements of multilayered nanostructures measured by non-magnetic scanning tunneling microscope"


As of May 2006, I am officially a Ph.D. candidate, passing my Candidacy Examination on 4-26-2006. In my main research project, I designed and built a non-magnetic STM to serve as a contact nanoprobe to measure the current-perpendicular-to-plane giant magneto-resistance (CPP-GMR) of multilayered Co/Cu nanocolumns grown by oblique angle thermal evaporation. The STM module I built was designed to be integrated into a pre-existing ultrahigh vacuum (UHV)-STM system, and it is constructed of nonmagnetic materials so it can operate fairly well in an external magnetic field. I have added a small electromagnet to magnetize the sample during measurement; initially, the electromagnet could reach fields of 1.8 kOe, but after rewrapping the magnet wire a little more tightly, and attaching pole extensions to concentrate the field at the center of the gap, it can now get up to nearly 3 kOe.

Here are some pictures of my STM.





Current-in-plane giant magneto-resistance (CIP-GMR) was first reported in 1988; it is characterized by a large drop in the resistance when an external magnetic field is applied to multilayered ferromagnetic/nonmagnetic (F/N) films with the current flowing in the plane of the layers that make up the film. In 1997 it revolutionized the magnetic recording industry by reducing the size of the read head on computer hard drives, and just this year (2007) the scientists who discovered GMR (Albert Fert and Peter Grünberg) received the Nobel Prize in Physics. Later, the CPP geometry became an area of interest, for two reasons. First, the respective scaling lengths for CIP- and CPP-GMR are the mean free path (λ ~ 1-10 nm) and spin diffusion length (l ~ 10-100 nm) of an electron; this implies that devices based on CPP-GMR can be controlled more precisely due to the less stringent tolerances on dimension. Also, CPP-GMR values are generally higher than CIP-GMR, which makes it the more attractive of the two for sensing and switching applications. Wire- or column-shaped nanostructures with multilayered F/N design are preferable for work in CPP-GMR because their reduced lateral dimension increases their resistance, allowing GMR to be observed more easily (especially at room temperature).
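For readers new to the term, the resistance drop is conventionally quantified by the GMR ratio, which compares the device resistance with adjacent ferromagnetic layers magnetized antiparallel versus parallel:

\[
\mathrm{GMR} = \frac{R_{\mathrm{AP}} - R_{\mathrm{P}}}{R_{\mathrm{P}}}
\]

where \(R_{\mathrm{AP}}\) and \(R_{\mathrm{P}}\) are the resistances in the antiparallel and parallel configurations; a larger ratio means a stronger sensing or switching signal.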

Saturday, May 17, 2008

New observations show that if life exists on Mars, it's probably going to be much deeper underground than expected


Life on Mars Theories Take a Hit
New observations show that if life exists on Mars, it is probably much deeper underground than expected.

Scientists have posited that if life exists on Mars presently, it is probably hidden out of view in aquifers beneath the planet's barren surface. Unfortunately, new data collected by NASA's Mars Reconnaissance Orbiter suggests that these aquifers, if they exist, are probably much deeper inside the small ruddy planet than researchers had hoped.

Using the orbiter's SHARAD (Shallow Radar) instrument, scientists have been able to get a very detailed picture of Mars' northern polar icecap and the planet's crust below. Though the data has proven very useful in fleshing out the life cycle of the giant ice cap, it also shows that the Martian lithosphere, or outer crust, is very stiff.

Earth's lithosphere, in contrast, is somewhat soft. A large buildup of ice on Earth in a situation similar to Mars' would actually cause the crust to sag beneath its weight. On Mars, this is not happening. Scientists believe this shows that Mars' lithosphere is quite thick and cold.

Warmed internally by pressure and/or an active core, a planet's lithosphere gradually grows colder toward the outside. That Mars' lithosphere is stiff enough not to sag under the immense weight of its icecap indicates that any warmth generated internally does not venture far from the core, making the outer crust much colder than anticipated. This cold would prevent liquid water from forming anywhere near the surface; should it exist, it would be much deeper down and most likely inaccessible by any easy means.

The Mars Reconnaissance Orbiter's detailed imaging of the icecap did, however, show evidence of a planetary climate cycle. Alternating layers of dusty ice and nearly pure ice are thought to mark intervals of approximately one million years, which coincides with the estimate that the cap itself is roughly four million years old. The climate changes are likely caused by variations in the planet's rotational axis and orbit.

More surface data from the northern polar region should be available very soon, as NASA's Phoenix Mars Lander is slated to set down on the planet's surface in just over a week. The lander will explore the polar region and look for signs of the past existence of water on the surface.

Microsoft: Don't Misunderstand UAC


Microsoft: Don't Misunderstand UAC, Other Vista Features
In its continued attempt to convince business customers to adopt Vista, Microsoft has outlined and tried to explain some of what it calls the OS's most "misunderstood" features in a document posted to -- then mysteriously removed from -- its Web site this week.
In the document, "Five Misunderstood Features in Windows Vista," Microsoft lists what it believes are five features of Vista that "cause confusion" and "slow Windows Vista adoption" for most users. The company identified User Account Control, Image Management, Display Driver Model, Windows Search and 64-bit architecture as features that are flummoxing IT professionals when they install Vista across desktops on a network. It offered tips for how to deal with common problems.
The document was posted to the Web site Friday morning; however, by the afternoon, the link was no longer working. It still came up in a Live Search of the Microsoft Web site, but the link provided there also was inactive.
Microsoft did not immediately respond to a request about the document Friday.
Businesses have been slow to adopt Vista since its enterprise introduction in late November 2006, and by now users have identified the features listed in the document as some of their biggest pain points.
One that has been especially problematic -- and even spoofed in an Apple TV commercial -- is User Account Control (UAC). UAC prevents users without administrative privileges from making unauthorized changes to a PC. But because of its settings, it can prevent even authorized users on the network from being able to access applications and features they should normally have access to. It does this through a series of screen prompts that ask the user to verify privileges, and it may require a user to type in a password to perform a task.
In its document, Microsoft said the feature has gotten a "bad rap" because it's a "set of technologies" dispersed throughout the OS and designed to protect the system in a variety of ways, not just one feature that can be controlled in an isolated way.
Microsoft also designed UAC to "help nudge ISVs towards designing applications that function in Standard User mode," one of two user privilege modes in UAC. The other is Local Administrator.
As it stands now, the prompts interrupt normal workflow, even in some mundane tasks, unless a user is set as Local Administrator. This is because many third-party Windows applications that predate Vista weren't developed to work with UAC's "Standard User" designation, so they default to requiring Local Administrator rights, said Keith Brown, a network administrator for Gwinnett Medical Center in Lawrenceville, Georgia. Gwinnett is a not-for-profit medical network serving more than 700 physicians around the Atlanta area.
If a Standard User asks an application to perform a task that touches a part of the OS that the software says "should not be meddled with," it will prompt the user and require a password to perform that task, he said. This is common, especially when someone tries to install software as a Standard User, Brown said.
"It's an annoyance," he said, which is why most IT administrators will turn off the feature when installing Vista across desktops, which defeats the purpose of Microsoft putting it in to protect the OS in the first place.
One way to get around UAC is to use third-party software, such as Privilege Manager from BeyondTrust, to set user privileges, Brown said. Microsoft even recommended BeyondTrust's product to customers when the company, based in Portsmouth, N.H., came out with Privilege Manager 3.5 last August. That was the first version of the product designed to work with UAC.
John Moyer, CEO of BeyondTrust, said Privilege Manager lets network administrators configure in advance which applications can run or be installed on Vista machines on a network. It assigns the appropriate elevated privileges to Standard Users so they are not prompted even if third-party software does not recognize them as an authorized user of a task. "There is no interruption to the workflow," he said.
Brown said that without Privilege Manager, UAC would probably be turned off for the 30 to 40 Vista desktops his company is testing in its information systems department. He said the incessant prompting from UAC can be turned off from within Vista, but it's extremely time-consuming for the IT department to do that for each user on the network.
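For readers curious how a program can tell which privilege level it actually received, here is a minimal, Windows-only Python sketch using the documented (if deprecated) Shell32 call IsUserAnAdmin; it is an illustration, not anything from Microsoft's document:

    # Ask Shell32 whether the current process holds administrator
    # rights under UAC. Windows-only; returns False elsewhere.
    import ctypes

    def running_elevated() -> bool:
        try:
            return bool(ctypes.windll.shell32.IsUserAnAdmin())
        except AttributeError:   # ctypes.windll exists only on Windows
            return False

    if running_elevated():
        print("Already elevated: this process can perform admin tasks.")
    else:
        print("Standard User token: admin tasks will trigger UAC prompts.")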
Gwinnett Medical Center eventually is planning a broader Vista deployment, but that "won't be this year," Brown added.





Extra............





One wonders, with all of the 'misunderstandings' Vista has generated, whether Microsoft would be better served by pushing it or by promoting XP's SP3, withdrawing XP's kill date and letting end users continue to install it (and OEMs to offer it) until at least after the release of Windows 7. XP didn't have this much trouble a year and a half after its release; it was clearly superior to Win 98. That clear superiority of Vista over XP is utterly missing where speed, productivity, implementation and hardware costs are concerned. We won't even talk about DRM or the other under-the-hood mechanisms that allow MS to turn you off like a light switch in Vista. People who understand the issues clearly want XP more than Vista. And if Microsoft insists on forcing Vista on us, I expect more and more users will turn to Linux. At least with Linux, you expect different, and it costs a lot less. If we can't have XP, we'll take something less expensive and with fewer issues.



Friday, May 16, 2008

Music for Workouts, Motivation and More!



SparkSounds: Music Playlists from SparkPeople


Music is important to most people. It can pick you up when you're feeling down, motivate you when you need a little push, and inspire you to go the extra mile during a workout.

Below you'll find SparkPeople playlists, which were created by our own staff members and experts! By clicking on a playlist below, a small music player will pop up. Keep it playing in the background while you're using the website or watching one of our workout videos. You can also look at the song lists for ideas to add to your own music player! We hope you enjoy these SparkSounds!

SparkGuy's SPARK Mix
These songs SPARK our founder and CEO to keep improving SparkPeople! From SparkGuy, "Warning: some rock songs in this one! :-)"

Coach Dean's Baby Boomer Mix
Some musical "deep thoughts" to soothe a frazzled mind.

Coach Dean's Ballads for Baby Boomers
A short musical tour through some memorable moments (and movements) of the '60s.

Coach Denise's Mellow Mix
Otherwise known as the Jack and John mix...relaxing songs from John Mayer and Jack Johnson

Coach Denise's Rock Mix
Upbeat rock songs!

Coach Nicole's Inspirational Mix
"These are some of the motivational songs I listen to for a quick pick-me-up," says Coach Nicole.

Coach Nicole's Upbeat Mix
These songs came straight from Coach Nicole's iPod. "I use these songs in Spinning all the time!" she says.

SparkPeople Favorites: 80's Fun
Many SparkPeople employees are children of the '80s. Here are over 20 of our favorite upbeat songs from that decade.

SparkPeople Favorites: Adrenaline Mix
These 11 songs will keep you going during an intense workout!

SparkPeople Favorites: Best of Country
Upbeat, serious, and sometimes silly--this playlist features several country music favorites!

SparkPeople Favorites: Relaxing Classical
Listening to classical music while working on the computer can help you relax and avoid outside distractions.



more...........




SparkSounds is a small feature where you can listen to streaming music while on SparkPeople and on your computer. Some playlists are upbeat for motivation and energy, while others are relaxed for concentration or reflection.


Right now it has just a handful of playlists that were created by our experts, members, and staff. In the future we may do more with music on the site, based partly on how much our members like SparkSounds.





The first laser: Twenty-One Discoveries that Changed Science and the World



INVENTION OF THE FIRST LASER


When the first working laser was reported in 1960, it was described as "a solution looking for a problem." But before long the laser's distinctive qualities (its ability to generate an intense, very narrow beam of light of a single wavelength) were being harnessed for science, technology and medicine. Today, lasers are everywhere: from research laboratories at the cutting edge of quantum physics to medical clinics, supermarket checkouts and the telephone network.


Theodore Maiman made the first laser operate on 16 May 1960 at the Hughes Research Laboratory in California, by shining a high-power flash lamp on a ruby rod with silver-coated surfaces. He promptly submitted a short report of the work to the journal Physical Review Letters, but the editors turned it down. Some have thought this was because Physical Review had announced that it was receiving too many papers on masers (the longer-wavelength predecessors of the laser) and that any further papers would be turned down. But Simon Pasternack, who was an editor of Physical Review Letters at the time, has said that he turned down this historic paper because Maiman had just published, in June 1960, an article on the excitation of ruby with light, with an examination of the relaxation times between quantum states, and that the new work seemed to be simply more of the same. Pasternack's reaction perhaps reflects the limited understanding at the time of the nature of lasers and their significance. Eager to get his work into print quickly, Maiman then turned to Nature, usually even more selective than Physical Review Letters, where the paper was better received and published on 6 August.


With official publication of Maiman's first laser under way, the Hughes Research Laboratory made the first public announcement to the news media on 7 July 1960. This created quite a stir, with front-page newspaper discussions of possible death rays, but also some skepticism among scientists, who were not yet able to see the careful and logically complete Nature paper. Another source of doubt came from the fact that Maiman did not report having seen a bright beam of light, which was the expected characteristic of a laser. I myself asked several of the Hughes group whether they had seen a bright beam, which surprisingly they had not. Maiman's experiment was not set up to allow a simple beam to come out of it, but he analyzed the spectrum of light emitted and found a marked narrowing of the range of frequencies that it contained. This was just what had been predicted by the theoretical paper on optical masers (or lasers) by Art Schawlow and myself, and had been seen in the masers that produced the longer-wavelength microwave radiation. This evidence, presented in figure 2 of Maiman's Nature paper, was definite proof of laser action. Shortly afterward, both in Maiman's laboratory at Hughes and in Schawlow's at Bell Laboratories in New Jersey, bright red spots from ruby laser beams hitting the laboratory wall were seen and admired.


Maiman's laser had several aspects not considered in our theoretical paper, nor discussed by others before the ruby demonstration. First, Maiman used a pulsed light source, lasting only a few milliseconds, to excite (or "pump") the ruby. The laser thus produced only a short flash of light rather than a continuous wave, but because substantial energy was released during a short time, it provided much more power than had been envisaged in most of the earlier discussions. Before long, a technique known as "Q switching" was introduced at the Hughes Laboratory, shortening the pulse of laser light still further and increasing the instantaneous power to millions of watts and beyond. Lasers now have powers as high as a million billion (10^15) watts! The high intensity of pulsed laser light allowed a wide range of new types of experiment, and launched the now-burgeoning field of nonlinear optics. Nonlinear interactions between light and matter allow the frequency of light to be doubled or tripled, so, for example, intense infrared light can be frequency-doubled into visible green light.
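The peak-power arithmetic behind those numbers is simple; the pulse figures below are typical illustrative values, not Maiman's own:

    # Peak power of a pulsed laser ~ pulse energy / pulse duration.
    energy_j = 1.0     # joules in one pulse (illustrative)
    pulse_s = 10e-9    # 10 ns, a typical Q-switched pulse length
    print(f"Peak power ~ {energy_j / pulse_s:.0e} W")   # ~1e+08 W from just 1 J

    # Frequency doubling halves the wavelength; the classic example
    # is infrared Nd:YAG light doubled into visible green.
    nd_yag_nm = 1064.0
    print(f"Doubled: {nd_yag_nm / 2:.0f} nm (infrared -> green)")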


I had a busy job in Washington at the time when various groups were trying to make the earliest lasers. But I was also supervising graduate students at Columbia University who were trying to make continuously pumped infrared lasers. Shortly after the ruby laser came out I advised them to stop this work and instead capitalize on the power of the new ruby laser to do an experiment on two-photon excitation of atoms. This was one of the early experiments in nonlinear optics, and two-photon excitation is now widely used to study atoms and molecules.


Lasers work by adding energy to atoms or molecules, so that there are more in a high-energy ("excited") state than in some lower-energy state; this is known as a "population inversion." When this occurs, light waves passing through the material stimulate more radiation from the excited states than they lose by absorption due to atoms or molecules in the lower state. This "stimulated emission" is the basis of masers (whose name stands for "microwave amplification by stimulated emission of radiation") and lasers (the same, but for light instead of microwaves).
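A toy rate-equation sketch (deliberately simplified, ignoring stimulated terms, with made-up units) shows how pumping produces the population inversion:

    # Steady state of dn2/dt = R*n1 - n2/tau with n1 + n2 = 1 gives
    # n2 = R*tau / (1 + R*tau). Inversion means n2 > n1.
    # A toy model for illustration, not a real laser design.
    def steady_state_populations(pump_rate, tau=1.0):
        n2 = pump_rate * tau / (1.0 + pump_rate * tau)
        return n2, 1.0 - n2

    for r in (0.1, 1.0, 10.0):   # pump rate in units of 1/tau
        n2, n1 = steady_state_populations(r)
        status = "inverted" if n2 > n1 else "no inversion"
        print(f"R*tau = {r:>4}: n2 = {n2:.2f}, n1 = {n1:.2f} -> {status}")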


Before Maiman's paper, ruby had been widely used for masers, which produce waves at microwave frequencies, and had also been considered for lasers producing infrared or visible light waves. But the second surprising feature of Maiman's laser, in addition to the pulsed source, was that he was able to empty the lowest-energy ("ground") state of ruby enough that stimulated emission could occur from an excited state to the ground state. This was unexpected. In fact, Schawlow, who had worked on ruby, had publicly commented that transitions involving the ground state of ruby would not be suitable for lasers because the ground state would be difficult to empty adequately. He recommended a different transition in ruby, which was indeed made to work, but only after Maiman's success. Maiman, who had been carefully studying the relaxation times of excited states of ruby, came to the conclusion that the ground state might be sufficiently emptied by a flash lamp to provide laser action, and it worked.


The ruby laser was used in many early spectacular experiments. One amusing example, in 1969, sent a light beam to the Moon, where it was reflected back from a retro-reflector placed on the Moon's surface by astronauts in the U.S. Apollo program. The round-trip travel time of the pulse provided a measurement of the distance to the Moon. Later, ruby laser beams sent out and received by telescopes measured distances to the Moon with a precision of about three centimeters, a great use of the ruby laser's short pulses.
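The ranging arithmetic is pure time-of-flight (round numbers below):

    # Lunar laser ranging: distance = c * round_trip_time / 2.
    c = 299_792_458.0      # m/s, speed of light
    round_trip_s = 2.56    # s, approximate Earth-Moon round trip
    print(f"Distance ~ {c * round_trip_s / 2 / 1000:,.0f} km")   # ~384,000 km

    # A 3 cm one-way precision means timing the pulse to ~0.2 ns:
    print(f"Timing precision needed: {2 * 0.03 / c * 1e9:.1f} ns")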


When the first laser appeared, scientists and engineers were not really prepared for it. Many people said to me, partly as a joke but also as a challenge, that the laser was "a solution looking for a problem." But by bringing together optics and electronics, lasers opened up vast new fields of science and technology. And many different laser types and applications came along quite soon. At IBM's research laboratories in Yorktown Heights, New York, Peter Sorokin and Mirek Stevenson demonstrated two lasers that used techniques similar to Maiman's but with calcium fluoride, instead of ruby, as the lasing substance. Following that, and still in 1960, was the very important helium-neon laser of Ali Javan, William Bennett, and Donald Herriott at Bell Laboratories. This produced continuous radiation at low power but with a very pure frequency and the narrowest possible beam. Then came semiconductor lasers, first made to operate in 1962 by Robert Hall and his associates at the General Electric laboratories in Schenectady, New York. Semiconductor lasers now involve many different materials and forms, can be quite small and inexpensive, and are by far the most common type of laser. They are used, for example, in supermarket bar-code readers, in optical-fiber communications, and in laser pointers.


By now, lasers come in countless varieties. They include the "edible" laser, made as a joke by Schawlow out of flavored gelatin (but not in fact eaten because of the dye that was used to color it), and its companion the "drinkable" laser, made of an alcoholic mixture at Eastman Kodak's laboratories in Rochester, New York. Natural lasers have now been found in astronomical objects; for example, infrared light is amplified by carbon dioxide in the atmospheres of Mars and Venus, excited by solar radiation, and intense radiation from stars stimulates laser action in hydrogen atoms in circumstellar gas clouds. This raises the question: why weren't lasers invented long ago, perhaps by 1930 when all the necessary physics was already understood, at least by some people? What other important phenomena are we blindly missing today?


Maiman's paper is so short, and has so many powerful ramifications, that I believe it might be considered the most important per word of any of the wonderful papers in Nature over the past century. Lasers today produce much higher power densities than were previously possible, more precise measurements of distances, gentle ways of picking up and moving small objects such as individual microorganisms, the lowest temperatures ever achieved, new kinds of electronics and optics, and many billions of dollars' worth of new industries. The U.S. National Academy of Engineering has chosen the combination of lasers and fiber optics, which has revolutionized communications, as one of the twenty most important engineering developments of the twentieth century. Personally, I am particularly pleased with lasers as invaluable medical tools (for example, in laser eye surgery) and as scientific instruments; I use them now to make observations in astronomy. And there are already at least ten Nobel Prize winners whose work was made possible by lasers.


There have been great and good developments since Ted Maiman, probably a bit desperately, mailed off a short paper on what was then a somewhat obscure subject, hoping to get it published quickly in Nature. Fortunately, Nature's editors accepted it, and the rest is history.


more..



The Army's Tactical High Energy Laser Advanced Concept Technology Demonstrator (THEL/ACTD) has successfully demonstrated its ability to detect, track, engage and destroy a Katyusha rocket armed with a live warhead. The rocket in flight was successfully intercepted and destroyed in field testing at the Army's High Energy Laser Systems Test Facility, White Sands Missile Range, N.M.


World's First Ray Gun Shoots Down Missile


TRW, the U.S. Army and the Israel Ministry of Defence (IMoD) have blazed a new trail in the history of defensive warfare by using the Army's Tactical High Energy Laser/Advanced Concept Technology Demonstrator (THEL/ACTD), the world's first high-energy laser weapon system designed for operational use, to shoot down a rocket carrying a live warhead.


The successful intercept and destruction of a Katyusha rocket occurred on June 6 at approximately 3:48 p.m. EDT at the Army's High Energy Laser Systems Test Facility (HELSTF), White Sands Missile Range, New Mexico.


The shoot-down was achieved during a high-power laser tracking test conducted as part of the ongoing THEL/ACTD integration process.


"We've just turned science fiction into reality," said Lt. Gen. John Costello, Commanding General, U.S. Army Space & Missile Defense Command.


"This compelling demonstration of THEL's defensive capabilities proves that directed energy weapon systems have the potential to play a significant role in defending U.S. national security interests worldwide."


"This shoot-down is an exciting and very important development for the people of Israel," said Major General Dr. Isaac Ben-Israel, Director of MAFAT, Israel Ministry of Defence.


"With this success, THEL/ACTD has taken the crucial first step to help protect the communities along our northern border against the kind of devastating rocket attacks we've suffered recently."


"The THEL/ACTD shoot-down is a watershed event for a truly revolutionary weapon," said Tim Hannemann, executive vice president and general manager, TRW Space & Electronics Group, the THEL/ACTD system prime contractor.


"It also provides a very positive opportunity for our customers to consider developing more mobile versions of THEL." Any future THEL developments would benefit from continued testing and performance evaluations of the THEL/ACTD's current subsystems, he added.


For this critical first test of THEL/ACTD's defensive capabilities, an armed Katyusha rocket was fired from a rocket launcher placed at a site in White Sands Missile Range.


Seconds later, the THEL/ACTD, located several miles away at HELSTF, detected the launch with its fire control radar, tracked the streaking rocket with its high-precision pointer-tracker system, then engaged the rocket with its high-energy chemical laser.


Within seconds, the 10-foot-long, 5-inch-diameter rocket exploded.


According to Hannemann, the THEL/ACTD shoot-down represents significant advancements in the maturity of engineering technologies used to design and build deployable directed energy weapon systems.


"In February 1996, as part of the Nautilus laser test program, TRW, the Army and the IMoD used the Mid Infrared Advanced Chemical Laser (MIRACL) and the SeaLite Beam Director installed at HELSTF to intercept and destroy a Katyusha rocket," he said.


"Those tests established high-energy laser lethality against short-range rocket threats, but we had to use a large facility-based laser and beam control system to perform the test." By contrast, he added, THEL/ACTD was designed and produced as a stand-alone defensive weapon system.


Its primary subsystems have been packaged in several transportable, semi-trailer-sized shipping containers, allowing it to be deployed to other test or operational locations.


The U.S. currently has no weapon systems capable of protecting soldiers or military assets involved in regional conflicts against short-range rocket attacks.


Conventional missile-based defense systems, such as the Army's Theater High Altitude Area Defense (THAAD) and Patriot Advanced Capability-3 (PAC-3), are designed to defend against longer range threats such as Scud missiles.


By comparison, tactical directed energy systems such as THEL/ACTD send out "bullets" at the speed of light, allowing them to intercept and destroy "last minute" or low-flying threats such as rockets, mortars or cruise missiles on a very short timeline.
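The speed advantage is easy to put in numbers (the engagement range below is illustrative):

    # Time for a light-speed "bullet" vs. a fast interceptor to cross 5 km.
    c = 299_792_458.0    # m/s, speed of light
    mach2 = 686.0        # m/s, roughly Mach 2 at sea level
    range_m = 5000.0     # illustrative engagement range
    print(f"Laser:       {range_m / c * 1e6:.0f} microseconds")   # ~17 us
    print(f"Interceptor: {range_m / mach2:.1f} seconds")          # ~7.3 s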


"It's pretty hard to run from a laser," said Hannemann.


The THEL/ACTD was designed, developed and produced by a TRW-led team of U.S. and Israeli contractors for the U.S. Army Space & Missile Defense Command, Huntsville, Ala., and the Israel Ministry of Defence.


Requirements for the system have been driven in part by Israel, which needs to protect civilians living in towns and communities along its northern border against rocket attacks by terrorist guerrillas.


TRW has been engaged in laser research and development since the early 1960s. The company produces solid-state lasers for defense and industrial applications, and designs and develops a variety of high-energy chemical lasers for space, ground and airborne missile defense applications.




Senate Adds Millions for ASAT, Space Laser
The Senate Armed Services Committee ended its FY2001 authorizations by boosting spending on military space programs and technologies by $98.2 million, the office of chairman Sen. John Warner (R-Va.) announced last week.


Thursday, May 15, 2008

The newly discovered remains mark the youngest known supernova remnant in the Milky Way



The remnant known as G1.9+0.3 came from the most recent supernova in our galaxy. To determine the age of the stellar explosion, astronomers tracked how quickly the remnant was expanding, by comparing a radio image from 1985 (blue) to an X-ray image taken in 2007 (orange).


Youngest supernova (to us) spied in Milky Way


About 140 years ago, as seen from Earth, a stellar explosion lit up our galaxy with a blinding flash of light, sending out powerful shock waves to boot. Now, astronomers have spotted the youthful remains of the explosion.


The newly discovered remains mark the youngest known supernova remnant in the Milky Way, snagging the record from the previous holder, 330-year-old Cassiopeia A.


Ever since Cass-A was discovered in the 1950s, astronomers had been searching for "missing supernovae" and their remnants. Around the time of the Cass-A discovery, astronomers also realized that two or three supernovae should light up the Milky Way every century, which would leave about 60 supernova remnants younger than 2,000 years old. To date, just 10 such remnants have been confirmed.


And so G1.9+0.3, the new remnant detailed in the June 10 issue of the Astrophysical Journal Letters, is the prize from the astronomers' 50-year galactic hunt.



Supernovae are considered some of the most violent events in the universe. They occur when massive stars at the end of their lives explode with such force that they generate a flash of radiation and shock waves akin to a sonic boom.


Debris thrown outward by the explosion sometimes crashes into surrounding material, resulting in a supernova remnant. This shell of hot gas and high-energy particles glows as X-rays, radio waves and other wavelengths of radiation for thousands of years.


Supernovae and their remnants are critical for creating and distributing the majority of the elements in the universe through the interstellar medium, spreading everything from cobalt to gold to radium to planets, plants, people and far beyond.


Youthful remains


The new supernova remnant discovery involved NASA's Chandra X-ray Observatory and National Radio Astronomy Observatory's Very Large Array (VLA) radio telescope.


By comparing X-ray and radio images from 1985 and 2007, which show the supernova remnant is expanding, the astronomers estimated G1.9+0.3's age. A new VLA image taken this year confirms the age and expansion rate of 35 million mph (56 million kph), which is an unprecedented expansion speed for a supernova remnant.


The astronomers estimate the centenarian is hiding out about 1,000 light-years from the galactic center, or roughly 25,000 light-years from us. A light-year is the distance light travels in one year, or about 6 trillion miles (10 trillion kilometers). So in reality, the explosion occurred about 25,140 years ago and the light reached us 140 years ago.
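The article's figures hang together with simple arithmetic (round numbers as quoted):

    # Age as seen from Earth vs. actual age of the explosion.
    distance_ly = 25_000            # light-years from Earth
    light_arrived = 140             # years ago
    print(f"Explosion occurred ~{distance_ly + light_arrived:,} years ago")

    # Expansion speed as a fraction of the speed of light.
    v_kms = 35e6 * 1.609 / 3600     # 35 million mph -> km/s
    print(f"~{v_kms:,.0f} km/s, about {v_kms / 3e5:.0%} of light speed")

    # Expected young remnants: ~3 supernovae/century over 2,000 years.
    print(f"Expected: ~{3 * 2000 // 100}; confirmed so far: 10")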


However, the bright burst would have been invisible to celestial enthusiasts with only optical telescopes at the time, due to a veil of interstellar gas and dust.


"If not for all the interstellar 'gunk' between us and this object, people would have seen this supernova as a new star in the constellation Sagittarius in the years around 1870 to 1900," said lead researcher Stephen Reynolds, an astrophysicist at North Carolina State University.


G1.9+0.3 most probably originated from a Type Ia supernova, the researchers say, in which a white dwarf star siphons hydrogen from a companion star and thus bulks up its mass. When the white dwarf's mass reaches about 1.4 times that of the sun (the Chandrasekhar limit), the star explodes.




