
Friday, September 7, 2007

MIT probes secret of bone's strength


New research at MIT has revealed for the first time the role of bone's atomistic structure in a toughening mechanism that incorporates two theories previously proposed by researchers eager to understand the secret behind the material's lightweight strength.


Past experimental studies have revealed a number of different mechanisms at different scales of focus, rather than a single theory. The combination mechanism uncovered by the MIT researchers allows for the sacrifice of a small piece of the bone in order to save the whole, helps explain why bone tolerates small cracks, and seems to be adapted specifically to accommodate bone's need for continuous rebuilding from the inside out.


"The newly discovered molecular mechanism unifies controversial attempts of explaining sources of the toughness of bone, because it illustrates that two of the earlier explanations play key roles at the atomistic scale," said the study's author, Esther and Harold E. Edgerton Professor Markus Buehler of MIT's Department of Civil and Environmental Engineering.


"It's quite possible that each scale of bone--from the molecular on up--has its own toughening mechanism," said Buehler. "This hierarchical distribution of toughening may be critical to explaining the intriguing properties of bone and laying the foundation for new materials design that includes the nanostructure as a specific design variable."


Unlike synthetic building materials, which tend to be homogenous throughout, bone is heterogeneous living tissue whose cells undergo constant change. Scientists have classified bone's basic structure into a hierarchy of seven levels of increasing scale. Level 1 bone consists of bone's two primary components: chalk-like hydroxyapatite and collagen fibrils, which are strands of tough, chewy proteins. Level 2 bone comprises a merging of these two into mineralized collagen fibrils that are much stronger than the collagen fibrils alone. The hierarchical structure continues in this way through increasingly larger combinations of the two basic materials until reaching level 7, or whole bone.


Buehler scaled down his model to the atomistic level, to see how the molecules fit together--and equally important for materials scientists and engineers--how and when they break apart. More precisely, he looked at how the chemical bonds within and between molecules respond to force. Last year, he analyzed for the first time the characteristic staggered molecular structure of collagen fibrils, the precursor to level 1 bone.


In his newer research, he studied the molecular structure of the mineralized collagen fibrils that make up level 2 bone, hoping to find the mechanism behind bone's strength, which is considerable for such a lightweight, porous material.


At the molecular level, the mineralized collagen fibrils are made up of strings of alternating collagen molecules and consistently sized hydroxyapatite crystals. These strings are "stacked" together in a staggered fashion such that the crystals appear in stair-step configurations. Weak bonds form between the crystals and molecules in the strings and between the strings.


When pressure is applied to the fabric-like fibrils, some of the weak bonds between the collagen molecules and crystals break, creating small gaps or stretched areas in the fibrils. This stretching spreads the pressure over a broader area, and in effect, protects other, stronger bonds within the collagen molecule itself, which might break outright if all the pressure were focused on them. The stretching also lets the tiny crystals shift position in response to the force, rather than shatter, which would be the likely response of a larger crystal.


Previously, some researchers suggested that the fundamental key to bone's toughness is the "molecular slip" mechanism that allows weak bonds to break and "stretch" the fabric without destroying it. Others have cited the characteristic length of bone's hydroxyapatite crystals (a few nanometers) as an explanation for bone's toughness; the crystals are too small to break easily.


At the atomistic scale, Buehler sees the interplay of both these mechanisms. This suggests that competing explanations may be correct; bone relies on different toughening mechanisms at different scales.


Buehler also discovered something very notable about bone's ability to tolerate gaps in the stretched fibril fabric. These gaps are of the same magnitude--several hundred micrometers--as the basic multicellular units or BMUs associated with bone's remodeling. BMUs are a combination of cells that work together like a small boring tool that eats away old bone at one end and replaces it at the other, forming small crack-like cavities in between as it works its way through the tissue.


Thus, the mechanism responsible for bone's strength at the molecular scale also explains how bone can remain so strong--even though it contains those many tiny cracks required for its renewal.


This could prove very useful information to civil engineers, who have always used materials like steel that gain strength through density. Nature, however, creates strength in bone by taking advantage of the gaps, which themselves are made possible by the material's hierarchical structure.


"Engineers typically over-dimension structures in order to make them robust. Nature creates robustness by hierarchical structures," said Buehler.


This work was funded by a National Science Foundation CAREER award and a grant from the Army Research Office.








Markus Buehler
Photo / Donna Coveney

MIT Professor Markus Buehler has helped reveal why bones are so tough. The object on the screen is a triple helical tropocollagen molecule, a fundamental building block of bone. Next to the molecule are nanosized hydroxyapatite chalk-like crystals. In his work he simulates the behavior of the composite of tropocollagen and hydroxyapatite during deformation.

CONTACT



Elizabeth A. Thomson
MIT News Office
Phone: 617-258-5402
E-mail: thomson@mit.edu


RELATED


Model helps students visualize nanoscale problems - An educational experiment during IAP demonstrated that students can learn to apply sophisticated atomistic modeling techniques to traditional materials research in just a few classes, an advance that could dramatically change the way civil engineers learn to model the mechanical properties of materials. 4/2/2007


Markus Buehler - MIT Department of Civil and Environmental Engineering








ISRO aims for world market


INDIAN SPACE RESEARCH ORGANIZATION (ISRO) on Sunday sent into orbit a rocket carrying the replacement for a communications satellite destroyed last year, raising its hopes of competing for worldwide satellite launch business. The 49-metre rocket carrying the Insat-4CR satellite blasted off from the Sriharikota space station in southern India at 6:21 pm (1301 GMT) after a two-hour delay caused by a technical malfunction. The satellite weighs 2,130 kilograms and is fitted with 12 wideband channels, known as transponders, which allow several video and audio networks to transmit digitally at the same time.


Sunday's launch was critical to India's aim of capturing a segment of the 2.5-billion-dollar heavy satellite launch industry as well as meeting its own growing telecommunications demand. Coming after an earlier failed attempt, this fifth launch of the Geosynchronous Satellite Launch Vehicle (GSLV) was a nerve-racking one.


Its confidence further boosted by Sunday's successful launch, the Indian Space Research Organisation (ISRO) said it was targeting 5-10 per cent of the global satellite launch market. ISRO says it can offer very competitive launch rates, almost 40 per cent cheaper than existing rates; in the small-satellite launch market its rates are about 20 per cent lower.


For years India's geosynchronous satellites have been used to map natural resources and forecast the weather for farmers and the rural poor, but the country has recently moved towards commercial exploitation of space technology.


ISRO earns revenue not only from launching satellites but also from the telecom and broadcast companies that use its transponders.


Bad weather in the days leading up to the launch, a faulty vent valve in the cryogenic stage that delayed the flight by two hours, and the loss of signals for a few seconds after lift-off gave ISRO scientists tense moments over the Rs 300-crore mission's success.


As ISRO progresses towards its moon mission, construction work on the Chandrayaan satellite centre and ground station has begun. ISRO is planning two more launches using the polar satellite launch vehicle (PSLV) in the near future, and it is on course for its target of four rocket launches a year, which should be a substantial revenue earner.

With the successful launch of the geosynchronous satellite INSAT-4CR, India is targeting the global satellite launch business and is confident of winning ample orders to fill its order book, thanks to competitive rates and advanced technology.


More ISRO News Facts


BANGALORE: The Indian communication satellite INSAT-4CR, launched by the Indian Space Research Organisation (ISRO) aboard a Geosynchronous Satellite Launch Vehicle (GSLV-F04) that lifted off from the Satish Dhawan Spaceport at Sriharikota in Andhra Pradesh on Sunday, will be commissioned in a month's time, ISRO Chairman G Madhavan Nair said on Tuesday.


Speaking to reporters on the sidelines of the second meeting of the International Committee on Global Navigation Satellite Systems (ICG) here, he said, ''Tomorrow we will carry out the first orbit rising process and within a month the satellite will be operational.''


The first orbit raising manoeuvre was successfully carried out by firing the 440-newton Liquid Apogee Motor on board the satellite for a duration of 27 minutes, commanding the satellite from the Master Control Facility (MCF) at Hassan in Karnataka, he said.


The satellite had been placed in an orbit with a perigee (nearest point to Earth) of 2,983 km and an apogee (farthest point from Earth) of 30,702 km. The inclination of the orbit with respect to the equatorial plane had been reduced to 11.1 degrees, Nair said.






Virtual schooling growing at K-12 level


Students get their lessons online and communicate with their teachers and each other through chat rooms, e-mail, telephone and instant messaging.


As a seventh-grader, Kelsey-Anne Hizer was getting mostly D's and F's and felt the teachers at her Ocala middle school were not giving her the help she needed. But after switching to a virtual school for eighth grade, Kelsey-Anne is receiving more individual attention and making A's and B's. She's also enthusiastic about learning, even though she has never been in the same room as her teachers.


Kelsey-Anne became part of a growing national trend when she transferred to Orlando-based Florida Virtual School. Students get their lessons online and communicate with their teachers and each other through chat rooms, e-mail, telephone and instant messaging.


"It's more one-on-one than regular school," Kelsey-Anne said. "It's more they're there; they're listening."


Virtual learning is becoming ubiquitous at colleges and universities but remains in its infancy at the elementary and secondary level, where skeptics have questioned its cost and effect on children's socialization.


However, virtual schools are growing fast - at an annual rate of about 25 percent. There are 25 statewide or state-led programs and more than 170 virtual charter schools across the nation, according to the North American Council for Online Learning.


Estimates of elementary and secondary students taking virtual classes range from 500,000 to 1 million nationally compared to total public school enrollment of about 50 million.


Online learning is used as an alternative for summer school and for students who need remedial help, are disabled, are being home-schooled or have been suspended for behavioral problems. It also can help avoid overcrowding in traditional classrooms and provide courses that local schools, often rural or inner-city, do not offer.


Advocates say those niche functions are fine, but that virtual learning has almost unlimited potential. Many envision a blending of virtual and traditional learning.


"We hope that it becomes just another piece of our public schools' day rather than still this thing over here that we're all trying to figure out," said Julie Young, Florida Virtual's president and CEO.


Florida Virtual is one of the nation's oldest and largest online schools, with more than 55,000 students in Florida and around the world, most of them part-time. Its motto is "Any Time, Any Place, Any Path, Any Pace."


Struggling students such as Kelsey-Anne, who suffers from attention deficit disorder, can take more time to finish courses while those who are gifted can go at a faster speed.


Casey Hutcheson, 17, finished English and geometry online in the time it would have taken to complete just one of those courses at his regular high school in Tallahassee.


"I like working by myself because of no distractions, and I can go at my own pace rather than going at the teacher's pace," he said.


For all its potential, virtual schooling has its critics and skeptics.


"There is something to be said for having kids in a social situation learning how to interact in society," said state Rep. Shelley Vana. "I don't think you get that if you're at home."


But virtual students get a different kind of social experience that is just as valuable, said Susan Patrick, president and CEO of the North American Council for Online Learning in Vienna, Va.


"We should socialize them for the world that they live in," she said, suggesting that people spend much of their time interacting via computer these days.


Many policymakers approach virtual learning with dollar signs in their eyes, expecting big savings from schools that do not need buildings, buses and other traditional infrastructure.


"We should not, as stewards of public money, be automatically paying the same or even close to the same amount of money for a virtual school day as we pay for a conventional school day," said Florida Senate Education Committee Chairman Don Gaetz.


Florida Virtual this year is slated to get $6,682 for every full-time equivalent student, just slightly less than the average of $7,306 for all of the state's public schools. Young said her school has expenses that traditional schools do not.


"Our data infrastructure is our building," she said.


Teacher unions have opposed spending public dollars on some virtual schools, mainly those that are privately operated or function as charter schools.


Indiana lawmakers this year refused to fund virtual charter schools. Opponents argued they are unproven and would have siphoned millions of dollars from traditional public schools.


Florida Virtual's Young said she plans to recommend that her state follow the example of Michigan, which passed a requirement that students complete some type of online experience to earn a high school diploma.


If "we do not give them an opportunity to take an online course, we're doing them a tremendous disservice," she said. "It's become the way of the world."





The high-tech savvy




Students at Hamilton's McMaster University can hear the first lecture of the year for introductory psychology this week without going anywhere near a classroom.


In a break with tradition, the course's main lectures will be prerecorded and posted on the Web, available for students to watch when they have a free half hour and an Internet connection.


The online lectures, on topics such as colour perception and sexual motivation, are available only to students and, to ward off procrastination, are posted for a limited time. They include interactive slides, practice quizzes and a search function.


Students can pause or rewind, join chat groups or e-mail questions.


"I want them to think psychology is cool," says the course's creator, Professor Joe Kim.


"The old lecture model, that is what we are used to, but in a large hall it is not the most satisfying experience."


Prof. Kim, 35, has spent long hours this summer redesigning a course which, with upward of 3,000 students, is easily the largest on campus. He's already recorded all 13 of his fall lectures in a studio, with the help of a homemade teleprompter and a sizable support crew.


Such online, on-demand instruction is a far cry from the standard lecture format, and it's the latest development in a long line of changes in the classroom.


Higher education is becoming increasingly high-tech as professors look for ways to engage a generation raised on the Internet and video games.


The burgeoning use of technology also raises questions about the nature of university education, as schools use it to cope with growing class sizes and limited resources.


The only required face-to-face contact in the McMaster course, for instance, comes in tutorial groups of 40 to 45 students, led by third- or fourth-year undergraduates, that meet twice a week.


Is the tech wave a way to enhance education, or simply to make do with less?


"This generation of students, they expect teachers to be more than a talking head," says Veselin Jungic, a math professor at Simon Fraser University in Burnaby.


Prof. Jungic says he has overhauled his teaching style since he came to Canada from his native Bosnia. Along the way, he also has created a superhero - Math Girl - and two animated short films to help explain difficult concepts to his first-year calculus class.


The early morning introductory course is likely the hardest thing his students will face during their fall term, so he created the cartoons - available on YouTube by searching Math Girl - as an unthreatening way to introduce difficult topics, he says.


"At 8:30, you make 500 people laugh. That is a good start. That creates a chance for a teacher to reach those students," he explains.


After showing the cartoon, he spends his lecture discussing the concept and then finishes by playing the cartoon again.


Preliminary studies show that students who watch the cartoons - inspired by a former top student - have a better grasp on the material. "There is something, I call it magic, in that pop culture. It really reaches young people," Prof. Jungic says.


He is working on his third instalment with local artist Lou Crockett, which will focus on pi.


Across the country, professors are experimenting with other approaches to get their messages across.


Many are putting lectures on podcasts for students to listen to at their leisure. A handful have gone one step further than Prof. Kim at McMaster and are using video podcasting, offering all their lectures and course material online without any face-to-face meetings.


Others have introduced "clickers" in the classroom that allow them to poll students as a way of increasing interaction in large lecture halls. At the University of British Columbia, technology is being used to increase student feedback, with the rollout of online professor-evaluation forms.


Julia Christensen Hughes, head of the business department at the University of Guelph and a long-time champion of improving teaching practices, applauds these attempts to enrich the old lecture model.


"The classroom should be a value-added experience. It should not be just a straight transmission of information," she says.


At the same time, she cautions that "technology is not a panacea" and cannot be a substitute for face-to-face contact. "For me, learning is a social activity."


At McMaster, Prof. Kim agrees. In addition to his online lectures, he plans to give optional "live" talks on special topics.


He also plans to spice up his online offerings with hidden features that his students can discover and game shows involving students and tutorial assistants from the course. He's thinking he'll model it after Who Wants to Be a Millionaire.


He says given the number of students who enroll in his course, teaching in person would mean cramming thousands of them into a large lecture hall.


In past years, students in the course went to tutorials and watched a videotape of the lecture. This new format allows more time for discussion in class and forces them to become involved as they listen, he says.


While most of the personal instruction will come from undergraduates, Prof. Kim says his new job is designed with an emphasis on teaching, and the majority of his time will be devoted to this course.


He will be dropping in on tutorials, he says, and holding regular office hours if students prefer to ask questions in person.


Finally, as a professor of psychology, he's hoping to use the new course design to study how this new generation of students learns. The real test, he says, will be whether the combination of interactive, online learning and small group meetings will allow students to take in difficult concepts more easily.


"I'll be watching how they perform," he says.


CALCULUS SUPERHERO


How does a math professor create a cartoon superhero?


Veselin Jungic says he got the idea from watching the way his two sons, now grown, were captivated by characters in pop culture. "These superhero characters, they are able to bring a message to young people," said Prof. Jungic, 52, in English that carries the accent of his native Bosnia.


He started thinking that he could use a superhero to turn learning calculus into a positive experience for his first-year students. One day in 2003, as he handed back mid-term tests to students, he found his inspiration. Only one student got 100 per cent and, when he asked her to stand up, he saw a tiny young woman reluctantly rise in the large lecture hall. "That was my math girl," he said.


Prof. Jungic says he still has lots to learn about being a writer. The cartoons, filled with formulas and complicated concepts, are not likely to make it into a Saturday morning time slot. Still, there have been refinements. The first Math Girl episode, for example, didn't have a villain, but one was created for the sequel at the advice of his students. Prof. Jungic says he also has gained inspiration from old episodes of the Batman television series and predicts Math Girl 3, now in production, will be his best yet.


"I am just warming up,"


Cool tools


The latest crop of university students share many characteristics that professors can use to improve the teaching experience, says University of Guelph professor Julia Christensen Hughes. They are generally tech-savvy, are comfortable communicating online and are more willing to learn through trial and error than their parents.


"When they get a new piece of technology," she said, "they aren't going to look at the manual."


The same goes for the wave of young professors arriving on Canadian campuses. But just because technology is available doesn't mean every learning experience needs lots of bells and whistles, Prof. Christensen Hughes says. Posing questions and encouraging discussion is also a great way to foster different kinds of learning, she says.


"Some students find it very refreshing to go into a course without PowerPoint because it is a break," she said.


The trick, she believes, is to offer students a variety of learning options so that they can pick the style that works for them.


Some of the techniques now being used include:


Course websites: There are lots of developments on this front as websites evolve from a source of general information such as lecture outlines and assignment schedules to interactive hubs with discussion groups, professor blogs and links to journal articles or other sources.


Podcasts: Making lectures available in MP3 format is becoming a popular feature of many courses. The option means students can catch up on a missed class, review material at exam time or go over difficult concepts at their own pace. Many podcasts are also available to the general public. But some scholars worry students will forgo class altogether and just listen online.


Video podcasts: A few professors have gone one step further and created recordings with pictures and text that can be downloaded onto computers, MP3 players or even video-game consoles. They also allow students to review material and go at their own pace, but also provide charts, slides, video and other features for students who are visual learners.





Intel four-socket Xeon 7300 quad-core processor


Santa Clara, Calif.-based Intel Corp. officially announced its new four-socket, quad-core Xeon 7300 Series this week, code-named Tigerton -- just five days before Advanced Micro Devices Inc. (AMD) is to introduce its quad-core processor, code-named Barcelona.


Compared with the company's previous-generation four-socket, dual-core products, the new quad-core Xeon 7300 series processors pack more than twice the performance and more than three times the performance per watt - and at the same price, Intel says. The Xeon 7300 completes Intel's transition to Core microarchitecture, a move that Intel first announced in June 2006.


Intel's consolidation mission
Intel is pushing users to move away from the phased-out single-core processors onto the quad-core platform, saying the Intel Xeon 7300 is designed for server consolidation. It has four times the memory capacity of the previous generation, a four-socket, dual-core platform code-named Tulsa.


"We are not charging a premium for quad-core, so all of our dual-core processor pricing is replaced with quad-core prices," said Kirk Skaugen, Intel vice president and general manager of the Server Products Group. "We've eliminated every reason not to go to quad core. Many [users] have single-core servers that are utilized only 15% to 20%. Now we have a platform with five times the performance of single core, so you can take dozens of underutilized systems, create virtual partitions and increase utilization dramatically."


The more energy-efficient Xeon 7300 series includes frequencies up to 2.93 GHz at 130 watts; several 80-watt processors; as well as a 50-watt version, or 12.5 watts per core, with a frequency of 1.86 GHz for ultradense deployments, such as four-socket blade servers.


It's also possible to upgrade the Xeon 7300 to Intel's next-generation chips. Code-named Dunnington, the 45-nanometer (nm) processor with four or more cores is due out next year, Skaugen said. In mid-2008, Intel plans to ship its Nehalem family of processors, which will include one to eight cores per product. In 2009, Intel plans to introduce its 32-nm manufacturing process.


In addition, the Xeon 7300 includes a new Data Traffic Optimizations feature that enhances data movement between processors, memory and I/O connections, Intel said. While previously an interconnect was shared, each processor will now have its own interconnect, Skaugen said.


The previously announced Intel VT FlexMigration will assist in the seamless upgrade of virtual machines to Intel's next-generation 45-nm Core microarchitecture-based platforms.
VMware Inc. of Palo Alto, Calif., and Intel worked together to optimize VMware ESX Server on the Xeon 7300 for live migration with VMotion between Intel processor families. This means users with Intel Xeon processors can perform live migrations of virtual machines to servers with future-generation Intel processors.


Users won't be able to do live migrations between AMD and Intel-based servers, however.


AMD's two cents on Tigerton
When AMD of Sunnyvale, Calif., releases its first quad-core processor next week, the two-, four- and eight-socket versions of the chip, code-named Barcelona, will surpass the performance of Intel's quad-core Xeon line, the company claims.
"Tigerton has the unfortunate distinction of being near last in a line of a dying architecture based on a front-side bus bottleneck," said Bruce Shaw, director of server and workstation product marketing. "Nowhere are the limitations of a front-side bus architecture more keenly felt than in the high-end multiprocessor server market. So while Intel may publicly 'celebrate' the arrival of Tigerton, it is in fact the final inadequate attempt by Intel to make the front-side bus architecture scale."


AMD's quad-core processor will be on one die, making it the first "native" quad-core processor. Intel's quad-core processors package two dual-core dies together.


"Tigerton is still a dual-core processor design, just as Penryn will be, said AMD's Shaw. "To achieve full-performance scaling on real-world multithreaded workloads, real design work is needed. Packaging dual-cores together into quad cores is insufficient, as Intel itself clearly understands. Why else transition to native quad core in late 2008?"


Intel spokesperson Nick Knupffer said Intel's Xeon quad-core processor performance is the same as if it were on a single piece of silicon. He did not confirm any plans to move to a single die in the future.


"We are interested in end-user performance, and we are proud of the performance we have been delivering," Knupffer said.


Since November 2006, Intel has introduced more than 20 quad-core processors in the server and desktop market segments.


Vendors add servers designed for Xeon 7300
Starting today, servers based on the Xeon 7300 series processors are available from more than 50 system manufacturers, including Dell Inc., Egenera Inc., Fujitsu, Fujitsu Siemens, Hitachi, IBM Corp., NEC Corp., Sun Microsystems Inc., Super Micro Computer Inc., and Unisys Corp.
Today, for example, Hewlett-Packard Co. announced its enhanced lineup of multiprocessor-based server systems based on the Xeon 7300.


The rack-based HP ProLiant DL580 G5 server and the HP ProLiant BL680c G5, HP's first four-processor, quad-core server blade, offer increased performance with double the number of processor cores.


Pricing for the new Intel Xeon quad-core processors depends on the speed, features and volume ordered, and cost ranges from $856 to $2,301 in quantities of 1,000. For additional details on the performance characteristics of the quad-core Intel Xeon 7300 series, visit Intel's Web site.





Russian rocket carrying Japanese satellite crashes




An unmanned Russian rocket carrying a Japanese communications satellite malfunctioned after liftoff today and crashed in Kazakhstan, officials said. Nobody was hurt, but the crash triggered concerns in Kazakhstan about environmental damage from toxic rocket fuel.


The Proton-M rocket failed to put the JCSAT-11 satellite into orbit because of a problem during operation of the second stage, the U.S.-based American-Russian joint venture International Launch Services said.


The rocket failed 139 seconds after its launch from the Russian-rented Baikonur facility in Kazakhstan, and veered from the planned trajectory at an altitude of 46 miles, said Alexander Vorobyov, a spokesman for the Russian space agency Roskosmos.


Parts of the rocket fell to the ground in an uninhabited area about 30 miles southwest of the central Kazakh town of Zhezkazgan, Vorobyov said.


The rocket was carrying more than 220 tons of fuel, including highly toxic heptyl, Kazakh space agency chief Talgat Musabayev said, expressing concern about possible contamination around the crash site, Kazakhstan's Kazinform news agency reported.


Kazakhstan would be fully compensated for environmental damage under existing agreements, Prime Minister Karim Masimov said, according to Russian news agencies.


Under an agreement with Kazakhstan, launches of Proton rockets from Baikonur were automatically suspended until the cause of the crash is determined, Vorobyov said.


He said that was unlikely to affect future launches, but an official at state-controlled Khrunichev State Research and Production Center, which makes Proton rockets, said that would depend on when an official investigative commission delivers its report.


Following an accident in July 2006 involving a different kind of rocket launched from Baikonur, the investigative report came within about six weeks, and Proton launches are scheduled for November and December, Khrunichev spokesman Alexander Bobrenyov said.


Russian and Kazakh media quoted Musabayev as saying the accident was likely caused by the failure of steering mechanisms aboard the rocket, but Bobrenyov said it was too early to make that determination.


Russia has been aggressively trying to expand its presence in the international market for commercial and government satellite and space-industry launches, though its efforts have seen several high-profile failures.


The July 2006 incident involved a Dnepr rocket carrying 18 satellites for various clients that crashed shortly after takeoff from Baikonur, spreading highly toxic fuel over a wide swath of uninhabited territory in Kazakhstan.


The JCSAT-11 satellite, made by U.S.-based Lockheed Martin Commercial Space Systems, was to be used by Japan's JSAT Corp., International Launch Services said in a news release. The heavy-lift Proton, a top income-generator for Russia's space industry, is made by Khrunichev, a partner in International Launch Services.







University of Michigan Astronomers Observe How Neutron Stars Warp Space-Time




Located about 6,500 light-years from Earth, the Crab Nebula is the remnant of a star that ended its life on July 4, 1054 when it exploded as a supernova. After the explosion, the star collapsed into a neutron star, which is located at the center of the nebula. Researchers who study neutron stars are seeking answers to fundamental physics questions. The centers of neutron stars could hold exotic particles or states of matter that are impossible to create in a lab. (Image Credit: NASA and The Hubble Heritage Team)


Neutron stars contain the densest observable matter in the universe. They cram more than a sun's worth of material into a city-sized sphere, meaning a few cups of neutron-star stuff would outweigh Mount Everest. Astronomers use these collapsed stars as natural laboratories to study how tightly matter can be crammed under the most extreme pressures nature can offer.


Einstein's predicted distortion of space-time occurs around neutron stars, University of Michigan astronomers and others have observed. Using European and Japanese/NASA X-ray observatory satellites, teams of researchers have pioneered a groundbreaking technique for determining the properties of these ultradense objects.


The first step in addressing these mysteries is to accurately and precisely measure the diameters and masses of neutron stars. A U-M study is one of two that have recently done just that.


Like neutron stars themselves, the region around these stars is also extreme. The motions of gas in this environment are described by Einstein's general theory of relativity. Scientists are now exploiting general relativity to study neutron stars.


Research on neutron stars by Sudip Bhattacharyya and Tod Strohmayer of NASA's Goddard Space Flight Center bolsters the results reported by U-M research fellow Edward Cackett and assistant professor Jon Miller. Together the results signal that an accessible new method for probing neutron stars has been found.


NASA describes the findings as "a big step forward."


Cackett and Miller used the Japanese/NASA Suzaku X-ray observatory satellite to survey three neutron-star binaries: Serpens X-1, GX 349+2, and 4U 1820-30. The team studied the spectral lines from hot iron atoms that are whirling around in a disk just beyond the neutron stars' surface at 40 percent light speed.


Previous X-ray observatories detected iron lines around neutron stars, but they lacked the sensitivity to measure the shapes of the lines in detail.


Cackett and Miller, along with the Goddard astronomers, were able to determine that the iron line is broadened asymmetrically by the gas's extreme velocity. The line is smeared and distorted because of the Doppler effect and beaming effects predicted by Einstein's special theory of relativity. The warping of space-time by the neutron star's powerful gravity, an effect of Einstein's general theory of relativity, shifts the neutron star's iron line to longer wavelengths.
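
To make the gravitational part of that shift concrete, here is a hedged sketch (not taken from the article) of the standard formula for a non-rotating star: light emitted from radius r around a star of mass M reaches a distant observer stretched to

\[
\frac{\lambda_{\mathrm{obs}}}{\lambda_{\mathrm{emit}}} = \left(1 - \frac{2GM}{r c^{2}}\right)^{-1/2}.
\]

So the more the observed iron line is shifted to long wavelengths for a given mass, the smaller the radius at which the gas must be orbiting. This simple version ignores the Doppler and beaming distortions mentioned above and treats the star as non-rotating.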


The iron line Cackett and Miller observed in Serpens X-1 was nearly identical to the one Bhattacharyya and Strohmayer observed with a different satellite: the European Space Agency's XMM-Newton. In the other star systems, Cackett and Miller observed similarly-skewed iron lines.


"We're seeing the gas whipping around just outside the neutron star's surface," Cackett said. "And since the inner part of the disk obviously can't orbit any closer than the neutron star's surface, these measurements give us a maximum size of the neutron star's diameter. The neutron stars can be no larger than 18 to 20.5 miles across, results that agree with other types of measurements."


Knowing a neutron star's size and mass allows physicists to describe the "stiffness," or "equation of state," of matter packed inside these incredibly dense objects. Besides using these iron lines to test Einstein's general theory of relativity, astronomers can probe conditions in the inner part of a neutron star's accretion disk.


"Now that we've seen this relativistic iron line around three neutron stars, we have established a new technique," Miller said. "It's very difficult to measure the mass and diameter of a neutron star, so we need several techniques to work together to achieve that goal."







Cassini's Upcoming Visit to the Walnut, Iapetus




NASA's Cassini spacecraft is going to make one of its most important flybys of its entire mission this week, zipping past Saturn's moon Iapetus on September 10th, 2007. I'm calling it "most important", not NASA, but trust me, this is a big one. I've got two reasons for this: it's the first time Cassini will get this close to Iapetus, and this moon is one of the strangest objects in the Solar System, with a whole collection of bizarre features.


First let's talk about Iapetus. This is one bizarre moon. Take a look at the picture and you'll see what looks like a seam running across its equator. That's not a seam, but a bizarre mountain range. This ridge is 20 km (12 miles) wide and 13 km (8 miles) high, extending 1,300 km (800 miles) directly along the moon's equator.


It's possible that this ridge was created when the moon was spinning much more quickly than it does today. Or maybe this is some kind of icy material that welled up from within the moon and then solidified on the surface. Or perhaps the moon consumed one of Saturn's rings, piling the material up on its surface along the equator. Whatever the case, it's one of the strangest features in the Solar System.


Second, Iapetus has two completely different coloured hemispheres: one bright as snow and the other dark. The dark material might have come from another of Saturn's moons, or maybe it's organic material that rained down in the past. Perhaps it's material that came out from the middle and hardened. But what is it? You see, this place is mysterious.


Third, it's shaped like a walnut. You can see the strange shape just in the picture. That's not a trick of the camera, the moon really is squashed like that. Like someone tore it in half, and then smashed it back together again. What caused it? How did it stay that way, and not turn back into a sphere?


NASA's Voyager 2 flew past Iapetus on August 22, 1981 at a range of 966,000 km (600,000 miles) and turned up the strange shape and dark/light hemispheres. On December 31st, 2004, Cassini made its first close approach getting within 123,000 km (77,000 miles), and taking the picture I've attached with this story.


Well, on September 10th, 2007, Cassini will fly only 1,200 km (800 miles) above Iapetus and take its highest resolution pictures ever. Finally, I'll get my answers. And probably a few new questions too.










Hole a billion light-years across discovered




Astronomers have found a perplexing and enormous hole in the universe, nearly a billion light years across, empty of both normal matter such as stars, galaxies, and gas, and mysterious dark matter.


"Not only has no one ever found a void this big, but we never even expected to find one this size," said Lawrence Rudnick of the University of Minnesota, USA. Rudnick heads a team of astronomers who report the finding in a paper slated for publication in the Astrophysical Journal.


Astronomers have known for years that, on large scales, the universe has voids largely empty of matter. However, most are much smaller than the one found by Rudnick's team.


Finding is "not normal"


"What we've found is not normal, based on either observational studies or on computer simulations of the large-scale evolution of the universe," team member Liliya R. Williams said.


The astronomers drew their conclusion by studying data from the Very Large Array (VLA) radio telescope, in New Mexico. The data revealed a mysterious fall in the number of galaxies in a region of sky in the constellation Eridanus.


The survey that made the finding imaged roughly 82 per cent of the sky visible from the New Mexico site. It consists of 217,446 individual observations that consumed 2,940 hours of telescope time between 1993 and 1997.


"We already knew there was something different about this spot in the sky," said Rudnick. The region had been dubbed the "WMAP Cold Spot," because it stood out in a map of the Cosmic Microwave Background (CMB) radiation made by the Wilkinson Microwave Anisotopy Probe (WMAP) satellite, launched by U.S. space agency NASA in 2001.


Cold region


The CMB, made up of faint radio waves that are the remnant radiation from the Big Bang, is the earliest "baby picture" available of the universe. Irregularities in the CMB show structures that existed only a few hundred thousand years after the Big Bang.


The WMAP satellite measures temperature differences in the CMB to an accuracy of millionths of a degree. The cold region in Eridanus was discovered in 2004. Astronomers wondered whether the cold spot was intrinsic to the CMB, and thus indicated some structure in the very early Universe, or whether it could be caused by something nearer through which the CMB radiation had to pass on its way to Earth.


Now, finding the dearth of galaxies in that region by studying the VLA data has resolved that question.


"Although our surprising results need independent confirmation, the slightly colder temperature of the CMB in this region appears to be caused by a huge hole devoid of nearly all matter roughly 6-10 billion light-years from Earth," Rudnick said.


CMB radiation gains a small amount of energy when it passes through a region of space populated by matter, said Rudnick. However, when it passes through an empty void, it loses a small amount of energy, so appears cooler.
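
That energy exchange is usually known as the integrated Sachs-Wolfe effect. As a rough sketch (the formula is not spelled out in the article), the fractional temperature change accumulated along a photon's path is commonly written

\[
\frac{\Delta T}{T} \;=\; \frac{2}{c^{2}} \int \frac{\partial \Phi}{\partial t}\, dt,
\]

where \(\Phi\) is the gravitational potential the photon is crossing. In a large void the potential decays while the photon is in transit, the integral comes out negative, and the CMB in that direction looks slightly colder, which is the signature described above.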


The experts are as yet at a loss to explain the anomaly.





Moebius strip riddle solved at last




Loopy logic: A Möbius strip made of a material that changes colour with bending pressure. Places where the strip is most bent have the highest energy density; conversely, places that are flat and unstressed by a fold have the least energy density.



Scientists have cracked a 75-year-old riddle involving the Möbius strip, a mathematical phenomenon that has also become an art icon.

Popularised by the Dutch artist M.C. Escher, a Möbius (or Moebius) strip is made by taking a strip of paper or some other flexible material, twisting one end through 180 degrees, and then taping it to the other end.


This creates a loop that has an intriguing quality - dazzlingly exploited by Escher - in that it only has one side.


Mathematical conundrum


Since 1930, the Möbius strip has been a classic poser for experts in mechanics. The teaser is to resolve the strip algebraically - to explain its unusual shape in the form of an equation.


Now, in a study published in the journal Nature Materials that lyrically praises the strip for its "mathematical beauty," Gert van der Heijden and Eugene Starostin of University College London, in England, present the solution.


What determines the strip's shape is its differing areas of "energy density," say the experts in non-linear dynamics.


"Energy density" means the stored, elastic energy that is contained in the strip as a result of the folding. Places where the strip is most bent have the highest energy density; conversely, places that are flat and unstressed by a fold have the least energy density.


If the width of the strip increases in proportion to its length, the zones of energy density also shift, which in turn alters the shape, according to their equations. A wider strip, for instance, leads to nearly flat, "triangular" regions in the strip, a phenomenon that also happens when paper is crumpled.
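
As a schematic of the "energy density" idea (a simplified stand-in, not the authors' actual functional), the stored elastic energy of a thin strip can be written as an integral of the squared curvature over its surface:

\[
E \;\approx\; \frac{D}{2}\int_{\text{strip}} \kappa^{2}\, \mathrm{d}A,
\]

where D is the bending stiffness and \(\kappa\) the local curvature. The equilibrium shape is the one that minimises E while keeping the band closed with a half-twist, so the energy density concentrates where the strip is most bent, exactly as the caption above describes. The published solution refines this with an energy tailored to developable (unstretchable) strips.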


Not just esoteric


The research may seem esoteric, but van der Heijden and Starostin believe it also has practical applications.
It could help predict points of tearing in fabrics and also be useful for pharmaceutical engineers who model the structure of new drugs.


"One of the classic problems in mechanics is to find the shape assumed by a Möbius strip - the famous band that is closed with a half-twist and which has the intriguing topological property that it only has one side," said mathematician John H. Maddocks in an accompanying commentary that also appeared in Nature Materials.


"This abstract mathematical question, dating back to at least 1930, is also of practical scientific interest as single crystals in the form of a Möbius band have now been grown," said Maddocks, of the Swiss Federal Institute of Technology in Lausanne, who was not involved in the study.


The Möbius strip was named after a German mathematician, August Ferdinand Möbius, who discovered it in 1858. Another German, Johann Benedict Listing, separately discovered it in the same year.





Australian science is among world's best




Impact factor: Australian scientists and academics published over a quarter of a million research papers between 1996 and 2006.


Australian science is punching well above its weight despite research funding below the level of other developed nations, a new analysis reveals.

When it comes to the country's contribution of published research papers, Australia is helping shape the future, or so says a report flagged up by The Australian newspaper. That report argues that Australia ranks in the top 10 nations most published in international academic journals, the mainstay of the scientific community.


Excelling above expectations


"It's gratifying [to see Australia] feature well above what the standard is. We excel in sport above expectations but I think that science research does too, though it's not really recognised in that way," said Michael Barber group executive for information, manufacturing and minerals at government research body CSIRO in Sydney.


In the report - published by Thomson Scientific, an international business information research organisation - the total number of papers published between 1996 and 2006 is surveyed for 13 nations, taking into account the number of times each paper had been cited by others.


The results show how many of the most-cited one per cent of papers worldwide each country produced over that period.


Leading the field was the U.S. with over 50,000 of the top papers and a total of over 2.9 million publications in that period. The runners up were Britain and Germany.


Australia comes out number eight in the list for contribution to the most cited one per cent of papers - sandwiched between Italy and China - with a total of over a quarter of a million papers published during that time.


Australia actually moves up into the top five nations for research when our population size is taken into account, health scientist Simon Chapman of the University of Sydney told The Australian.


Underfunded overachiever


He added that the results have been achieved despite the country spending only 0.12 per cent of its gross domestic product (GDP) compared to the 0.2 per cent average spent by the mostly developed nations that are part of the Organisation for Economic Co-operation and Development.


"What might we achieve if research was taken as seriously as other nations?" said Chapman.


Kurt Lambeck, president of the Australian Academy of Science, based in Canberra, told Cosmos Online the results are interesting but he wondered if Australian scientists are "publishing [significant numbers of] papers in the most relevant fields of science…like nuclear science, engineering and nanotechnology which are currently very important."


CSIRO's Barber said that if we're not, we have to step up to the plate - and "make sure we're encouraging science and its research… The challenge is to take the science-based knowledge we are developing and translate it into social, economic and environmental gains."






Elusive waves observed in Sun's corona




Physicists claim to have solved a perplexing mystery as to why the Sun's atmosphere is much hotter than its surface. The answer may lie in a type of solar plasma wave that had been predicted to exist, but never observed until now.


The Sun's corona is a kind of superheated atmosphere of ionised gas, or plasma, that extends millions of kilometres into space. Researchers have long been puzzled that the corona - at a temperature of millions of degrees - is up to 200 times as hot as the surface.


Difficult to detect


Now, a new study in the U.S. journal Science reports that solar plasma waves called Alfvén waves have been observed for the first time and may partially explain the temperature discrepancy.


Named after a Swedish physicist who postulated their existence in 1942, these plasma waves are like the vibrations that travel along a perturbed rope.
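
For readers who want the textbook version of that rope analogy (not given in the article), the waves travel along magnetic field lines at the Alfvén speed

\[
v_{A} = \frac{B}{\sqrt{\mu_{0}\,\rho}},
\]

where B is the magnetic field strength, \(\rho\) the plasma mass density and \(\mu_{0}\) the permeability of free space. This dependence on B is what later allows the waves to be used as a probe of the coronal magnetic field, as discussed below.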


"Alfvén waves have long been postulated as a possible mechanism to transfer energy out into the corona, but until now they have not been observed." said Steve Tomczyk who headed the research U.S. team at the National Centre for Atmospheric Research (NCAR) in Boulder, Colorado.


Part of the reason the waves have been so difficult to detect is that, unlike sound waves, they are not compressive and they do not alter the heat and brightness of the solar material through which they pass, said Tomczyk. However, these properties are also what may allow them to transport energy without dissipating easily, he said.


Tomczyk's team approached the problem using a new polarimeter at NCAR's High Altitude Observatory to image the surface of the Sun as never before, in narrow bands and using polarised light. The approach has allowed them to detect the characteristic pattern of Alfvén waves travelling across the images.


They found that the waves are ubiquitous in the corona and may therefore be partly responsible for transferring heat to it.


Solar storm detection


Tomczyk argues that it will now be possible to use Alfvén waves to measure the strength and direction of magnetic fields in the corona, valuable as they directly control the ejection of solar matter and solar storms that reach Earth.


"This research provides the first convincing observations of vigorous and ubiquitous magnetic wave activity in the solar corona," commented Tom Bogdan with the U.S. National Oceanic and Atmospheric Administration Space Environment Centre in Boulder, Colorado. "Before this critical breakthrough, most of our inferences about the Sun's magnetic field relied on theoretical models often plagued by uncertainties and ambiguities."


According to Bogdan, the observation of Alfvén waves not only increases our understanding of the complex behaviour of the Sun but will also allow us to predict more accurately the space weather that can knock out power grids or damage orbiting satellites.


"Timely warnings of 'solar tsunamis' will enable high-technology sectors of our global economy including aviation, power grid operators, the satellite industry and commercial space endeavours to secure their assets and operations," he said.





Microfluidic Chambers Advance The Science Of Growing Neurons




Researchers at the University of Illinois have developed a method for culturing mammalian neurons in chambers not much larger than the neurons themselves. The new approach extends the lifespan of the neurons at very low densities, an essential step toward developing a method for studying the growth and behavior of individual brain cells.


The technique is described this month in the Royal Society of Chemistry journal Lab on a Chip.


"This finding will be very positively greeted by the neuroscience community," said Martha Gillette, who is an author on the study and the head of the cell and developmental biology department at Illinois. "This is pushing the limits of what you can do with neurons in culture."


Growing viable mammalian neurons at low density in an artificial environment is no easy task. Using postnatal neurons only adds to the challenge, Gillette said, because these cells are extremely sensitive to environmental conditions.


All neurons rely on a steady supply of proteins and other "trophic factors" present in the extracellular fluid. These factors are secreted by the neurons themselves or by support cells, such as the glia. This is why neurons tend to do best when grown at high density and in the presence of other brain cells. But a dense or complex mixture of cells complicates the task of characterizing the behavior of individual neurons.


One technique for keeping neural cultures alive is to grow the cells in a medium that contains serum, or blood plasma. This increases the viability of cells grown at low density, but it also "contaminates" the culture, making it difficult to determine which substances were produced by the cells and which came from the serum.


Those hoping to understand the cellular origins of trophic factors in the brain would benefit from a technique that allows them to measure the chemical outputs of individual cells. The research team made progress toward this goal by addressing a few key obstacles.


First, the researchers scaled down the size of the fluid-filled chambers used to hold the cells. Chemistry graduate student Matthew Stewart made the small chambers out of a molded gel of polydimethylsiloxane (PDMS). The reduced chamber size also reduced - by several orders of magnitude - the amount of fluid around the cells, said Biotechnology Center director Jonathan Sweedler, an author on the study. This "miniaturization of experimental architectures" will make it easier to identify and measure the substances released by the cells, because these "releasates" are less dilute.


"If you bring the walls in and you make an environment that's cell-sized, the channels now are such that you're constraining the releasates to physiological concentrations, even at the level of a single cell," Sweedler said.


Second, the researchers increased the purity of the material used to form the chambers. Cell and developmental biology graduate student Larry Millet exposed the PDMS to a series of chemical baths to extract impurities that were killing the cells.


Millet also developed a method for gradually perfusing the neurons with serum-free media, a technique that resupplies depleted nutrients and removes cellular waste products. The perfusion technique also allows the researchers to collect and analyze other cellular secretions - a key to identifying the biochemical contributions of individual cells.


"We know there are factors that are communicated in the media between the cells," Millet said. "The question is what are they, and how can we get at those?"


This combination of techniques enabled the research team to grow postnatal primary hippocampal neurons from rats for up to 11 days at extremely low densities. Prior to this work, cultured neurons in closed-channel devices made of untreated, native PDMS remained viable for two days at best.


The cultured neurons also developed more axons and dendrites, the neural tendrils that communicate with other cells, than those grown at low densities with conventional techniques, Gillette said.


"Not only have we increased the cells' viability, we've also increased their ability to differentiate into what looks much more like a mature neuron," she said.


Sweedler noted that the team's successes are the result of a unique collaboration among scientists with very different backgrounds.


"(Materials science and engineering professor) Ralph Nuzzo is one of the pioneers in self-assembled monolayers and surface chemistry," Sweedler said. "Martha Gillette's expertise is in understanding how these neurons grow, and in imaging them. My lab does measurement science on a very small scale. It's almost impossible for any one lab to do all that."


Nuzzo and Sweedler are William H. and Janet Lycan Professors of Chemistry. Gillette is Alumni Professor of Cell and Developmental Biology. All are appointed in the Institute for Genomic Biology. Sweedler and Gillette are affiliates of the Beckman Institute and the Neuroscience Program. Sweedler is a professor in the Bioengineering Program and Gillette in the College of Medicine.











'Lego-block' Galaxies Discovered In Early Universe


The conventional model for galaxy evolution predicts that small galaxies in the early Universe evolved into the massive galaxies of today by coalescing. Nine Lego-like "building block" galaxies initially detected by Hubble likely contributed to the construction of the Universe as we know it. "These are among the lowest mass galaxies ever directly observed in the early Universe" says Nor Pirzkal of the European Space Agency/STScI.


Pirzkal was surprised to find that the galaxies' estimated masses were so small. Hubble's cousin observatory, NASA's Spitzer Space Telescope, was called upon to make precise determinations of their masses. The Spitzer observations confirmed that these galaxies are some of the smallest building blocks of the Universe.


These young galaxies offer important new insights into the Universe's formative years, just one billion years after the Big Bang. Hubble detected sapphire blue stars residing within the nine pristine galaxies. The youthful stars are just a few million years old and are in the process of turning Big Bang elements (hydrogen and helium) into heavier elements. The stars have probably not yet begun to pollute the surrounding space with elemental products forged within their cores.


"While blue light seen by Hubble shows the presence of young stars, it is the absence of infrared light in the sensitive Spitzer images that was conclusive in showing that these are truly young galaxies without an earlier generation of stars," says Sangeeta Malhotra of Arizona State University in Tempe, USA, one of the investigators.


The galaxies were first identified by James Rhoads of Arizona State University, USA, and Chun Xu of the Shanghai Institute of Technical Physics in Shanghai, China. Three of the galaxies appear to be slightly disrupted -- rather than being shaped like rounded blobs, they appear stretched into tadpole-like shapes. This is a sign that they may be interacting and merging with neighbouring galaxies to form larger, cohesive structures.


The galaxies were observed in the Hubble Ultra Deep Field (HUDF) with Hubble's Advanced Camera for Surveys and the Near Infrared Camera and Multi-Object Spectrometer as well as Spitzer's Infrared Array Camera and the European Southern Observatory's Infrared Spectrometer and Array Camera. Seeing and analysing such small galaxies at such a great distance is at the very limit of the capabilities of the most powerful telescopes.


Images taken through different colour filters with the ACS were supplemented with exposures taken through a so-called grism which spreads the different colours emitted by the galaxies into short "trails". The analysis of these trails allows the detection of emission from glowing hydrogen gas, giving both the distance and an estimate of the rate of star formation. These "grism spectra" - taken with Hubble and analysed with software developed at the Space Telescope-European Coordinating Facility in Munich, Germany - can be obtained for objects that are significantly fainter than can be studied spectroscopically with any other current telescope.
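

In essence, the distance measurement rests on comparing the observed wavelength of a known emission line, such as hydrogen's Lyman-alpha line, with its rest wavelength. A minimal sketch in Python, with the observed wavelength chosen as an assumed illustrative value rather than a measurement from the study:

    # Illustrative only: turning an emission-line detection from a grism
    # "trail" into a redshift, which in turn indicates distance.
    LYMAN_ALPHA_REST_NM = 121.567  # rest-frame wavelength of hydrogen's Lyman-alpha line, in nm

    def redshift(observed_nm, rest_nm=LYMAN_ALPHA_REST_NM):
        """Redshift z implied by an observed emission-line wavelength."""
        return observed_nm / rest_nm - 1.0

    observed = 850.0  # nm: an assumed value, roughly where Lyman-alpha lands for these distant galaxies
    print(f"z = {redshift(observed):.2f}")  # about z = 6, i.e. roughly one billion years after the Big Bang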






Large Asteroid Breakup May Have Caused Mass Extinction On Earth 65 Million Years Ago


The impactor believed to have wiped out the dinosaurs and other life forms on Earth some 65 million years ago has been traced back to a breakup event in the main asteroid belt. A joint U.S.-Czech team from Southwest Research Institute (SwRI) and Charles University in Prague suggests that the parent object of asteroid (298) Baptistina disrupted when it was hit by another large asteroid, creating numerous large fragments that would later create the Chicxulub crater on the Yucatan Peninsula as well as the prominent Tycho crater found on the Moon.


The team of researchers, including Dr. William Bottke (SwRI), Dr. David Vokrouhlicky (Charles University, Prague) and Dr. David Nesvorny (SwRI), combined observations with several different numerical simulations to investigate the Baptistina disruption event and its aftermath. A particular focus of their work was how Baptistina fragments affected the Earth and Moon.


At approximately 170 kilometers in diameter and having characteristics similar to carbonaceous chondrite meteorites, the Baptistina parent body resided in the innermost region of the asteroid belt when it was hit by another asteroid estimated to be 60 kilometers in diameter. This catastrophic impact produced what is now known as the Baptistina asteroid family, a cluster of asteroid fragments with similar orbits. According to the team's modeling work, this family originally included approximately 300 bodies larger than 10 kilometers and 140,000 bodies larger than 1 kilometer.


Once created, the newly formed fragments' orbits began to slowly evolve due to thermal forces produced when they absorbed sunlight and re-radiated the energy away as heat. According to Bottke, "By carefully modeling these effects and the distance traveled by different-sized fragments from the location of the original collision, we determined that the Baptistina breakup took place 160 million years ago, give or take 20 million years."


The gradual spreading of the family caused many fragments to drift into a nearby "dynamical superhighway" where they could escape the main asteroid belt and be delivered to orbits that cross Earth's path. The team's computations suggest that about 20 percent of the surviving multi-kilometer-sized fragments in the Baptistina family were lost in this fashion, and that about 2 percent of those objects went on to strike the Earth, producing a pronounced increase in the number of large asteroids hitting our planet.
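

Combining the figures quoted above gives a rough sense of scale. The sketch below applies the 20 percent escape fraction and the 2 percent Earth-impact fraction to the modelled initial population of roughly 140,000 kilometer-scale fragments; treating that initial census as the surviving population is a simplification (collisions grind fragments down over time), so the result is an order-of-magnitude illustration, not a number from the paper.

    # Illustrative arithmetic only, combining the fractions quoted in the article.
    family_members_over_1km = 140_000  # initial fragments larger than 1 km, from the team's modelling
    escape_fraction = 0.20             # fraction drifting onto escape routes out of the asteroid belt
    earth_strike_fraction = 0.02       # fraction of escapees that go on to hit Earth

    impactors = family_members_over_1km * escape_fraction * earth_strike_fraction
    print(f"~{impactors:.0f} kilometer-scale Baptistina fragments striking Earth")  # ~560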


Support for these conclusions comes from the impact history of the Earth and Moon, both of which show evidence of a two-fold increase in the formation rate of large craters over the last 100 to 150 million years. As described by Nesvorny, "The Baptistina bombardment produced a prolonged surge in the impact flux that peaked roughly 100 million years ago. This matches up pretty well with what is known about the impact record."


Bottke adds, "We are in the tail end of this shower now. Our simulations suggest that about 20 percent of the present-day, near-Earth asteroid population can be traced back to the Baptistina family."


The team then investigated the origins of the 180-kilometer-diameter Chicxulub crater, which has been strongly linked to the extinction of the dinosaurs 65 million years ago. Studies of sediment samples and a meteorite from this time period indicate that the Chicxulub impactor had a carbonaceous chondrite composition much like the well-known primitive meteorite Murchison. This composition is enough to rule out many potential impactors but not those from the Baptistina family. Using this information in their simulations, the team found a 90 percent probability that the object that formed the Chicxulub crater was a refugee from the Baptistina family.


These simulations also showed there was a 70 percent probability that the lunar crater Tycho, an 85-kilometer crater that formed 108 million years ago, was also produced by a large Baptistina fragment. Tycho is notable for its large size, young age and its prominent rays that extend as far as 1,500 kilometers across the Moon. Vokrouhlicky says, "The probability is smaller than in the case of the Chicxulub crater because nothing is yet known about the nature of the Tycho impactor."


This study demonstrates that the collisional and dynamical evolution of the main asteroid belt may have significant implications for understanding the geological and biological history of Earth.


As Bottke says, "It is likely that more breakup events in the asteroid belt are connected in some fashion to events on the Earth, Moon and other planets. The hunt is on!"


The article, "An asteroid breakup 160 Myr ago as the probable source of the K/T impactor," was published in the Sept. 6 issue of Nature.


The NASA Origins of Solar Systems, Planetary Geology and Geophysics, and Near-Earth Objects Observations programs funded Bottke's and Nesvorny's research; Vokrouhlicky was funded by the Grant Agency of the Czech Republic.






Networks Create 'Instant World Telescope'


For the first time, a CSIRO radio telescope has been linked to others in China and Europe in real-time, demonstrating the power of high-speed global networks and effectively creating a telescope almost as big as the Earth.


A CSIRO telescope near Coonabarabran NSW was recently used simultaneously with one near Shanghai, China, and five in Europe to observe a distant galaxy called 3C273.


"This is the first time we've been able to instantaneously connect telescopes half a world apart," Dr Tasso Tzioumis, VLBI operations and development manager at CSIRO's Australia Telescope National Facility said.


"It's a fantastic technical achievement, and a tribute to the ability of the network providers to work together."


Data from the telescopes was streamed around the world at a rate of 256 Mb per second - about ten times faster than the fastest broadband speeds available to Australian households - to a research centre in Europe, where it was processed with a special-purpose digital processor.


The results were then transmitted to Xi'an, China, where they were watched live by experts in advanced networking at the 24th APAN (Asia-Pacific Advanced Network) Meeting.


From Australia to Europe, the CSIRO data travelled on a dedicated 1 Gb per second link set up by the Australian, Canadian and Dutch national research and education networks, AARNet, CANARIE and SURFnet respectively.


Within Australia, the experiment used the 1 Gb per second networks that now connect CSIRO's NSW observatories to Sydney and beyond. The links, installed in 2006, were funded by CSIRO and provided by AARNet, Australia's Academic and Research Network.


The telescope-linking technique, VLBI (very long baseline interferometry), used to take weeks or months to deliver results.


"We used to record data on tapes or disks at each telescope, along with time signals from atomic clocks. The tapes or disks would then be shipped to a central processing facility to be combined," Dr Tzioumis said.


"The more widely separated the telescopes, the more finely detailed the observations can be. The diameter of the Earth is 12 750 km and the two most widely separated telescopes in our experiment were 12 304 km apart, in a straight line," Dr Tzioumis said.


The institutions that took part in the experiment are all collaborators in the EXPReS project (Express Production Real-time e-VLBI Service), which is coordinated by the Joint Institute for VLBI in Europe (JIVE) in The Netherlands.
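

Dr Tzioumis's point about telescope separation comes down to a simple relation: the finest angular detail an interferometer can resolve is roughly the observing wavelength divided by the distance between the telescopes. The sketch below assumes a 13-centimetre (about 2.3 GHz) observing wavelength, which is an illustrative assumption; the article does not state the band used.

    # Illustrative only: why baseline length matters for VLBI.
    # Angular resolution (in radians) is approximately wavelength / baseline.
    import math

    ARCSEC_PER_RADIAN = 180.0 / math.pi * 3600.0

    def resolution_mas(wavelength_m, baseline_m):
        """Approximate finest resolvable angle of an interferometer, in milliarcseconds."""
        return wavelength_m / baseline_m * ARCSEC_PER_RADIAN * 1000.0

    baseline = 12_304_000.0  # metres: the longest telescope separation in the experiment
    wavelength = 0.13        # metres: an assumed ~2.3 GHz observing band (not stated in the article)

    print(f"~{resolution_mas(wavelength, baseline):.1f} milliarcseconds")  # ~2.2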






Specific Neurons Involved In Memory Formation Identified


In a remarkable new study, scientists at The Scripps Research Institute have unlocked one of the secrets of how memory is formed. Working with a unique breed of transgenic mice, the new study has shown for the first time that the same neurons activated during fear conditioning are, in fact, reactivated during memory retrieval. The findings could potentially be used to uncover precisely how drugs such as antidepressants work in the brain, allowing clinicians to more accurately evaluate various treatment options.


"Our study provides the answers to some basic questions," said Mark Mayford, whose laboratory conducted the groundbreaking study. "We show that when you learn, and when you recall what you've learned, you reactivate the same neurons used during the original experience. While some studies have shown which region of the brain is active during learning and recall, we've now shown this at the level of individual neurons."


The new results suggest that the affected neurons underwent stable synaptic changes, giving them the capacity to be reactivated by the conditioned stimulus for at least three days. The study concluded that the reactivated neurons were likely a component of a stable engram, or memory trace, for conditioned fear.


Memories are presumably stored in subgroups of neurons that are activated in response to various sensory experiences, the study said. Previously, some encoding of memories in complex neuronal networks had been identified with electrophysiological recordings, and similar approaches have identified neurons with firing properties temporally linked to various aspects of learned task performance.


But, Mayford noted, this is like knowing only that a computer is turned on. The new study shows precisely which circuits are active during a specific memory formation.


"We found neurons in the basolateral amygdala that were activated during fear conditioning and were reactivated during memory retrieval," Mayford said. "The number of reactivated neurons correlated with the behavioral expression of that fear memory in the mice themselves, which indicates a stable correlation between these neurons and memory."


The basolateral amygdala is the part of the brain believed to be responsible for memories involving emotional arousal.


An innovative mouse


The new study utilized a unique transgenic mouse (TetTag mouse) that enabled scientists to genetically tag individual neurons activated during a given time frame. The tag can be used for the direct comparison of neuronal activity at two distinct and widely spaced points in time.


(The name TetTag comes from the technology itself; it combines elements of the tetracycline-transactivator system. Gene expression is controlled in these transgenic mice through exposure to tetracycline or derivatives such as doxycycline.)


This novel technology can be used with free roaming mice, enabling the scientists to record and measure the correlation between neuronal activity and any behavioral expression of a specific memory. Moreover, the study noted, the technology requires only basic laboratory equipment, which is generally available to most researchers.


"The TetTag mouse allows us to put genes into neurons that have been activated by an environmental stimulus," Mayford said. "Basically, we can put any gene we want into those neurons activated by fear, and this gives us genetic control over very specific circuits in the brain."


The reason fear and anxiety were used as the activating experience, Mayford said, is that fear is an ancient and fundamental emotion. "We know that mice feel fear; we don't know if they feel joy. In the wild, you can survive without joy, but you don't live very long without fear."


The ability to genetically manipulate these activated neurons should allow a better and more precise understanding of the underlying molecular mechanisms of memory encoding within a particular neuronal network.


This might one day translate into a clinical advantage for treating patients suffering from disorders such as depression, Mayford said.


"Antidepressants don't work the same in every individual," he said, "so our genetic tagging technique could potentially help clinicians evaluate treatment by showing how an individual's brain works at two different times during treatment, revealing where and how the drug is affecting specific neurons."


The new study was published in the August 31, 2007, edition of the journal Science. Other authors of the Science study, "Localization of a Stable Neural Correlate of Associative Memory," were Leon G. Reijmers, Brian L. Perkins, and Naoki Matsuo of The Scripps Research Institute.






Migrating Squid Drove Evolution Of Sonar In Whales And Dolphins, Researchers Argue


Behind the sailor's lore of fearsome battles between sperm whale and giant squid lies a deep question of evolution: How did these leviathans develop the underwater sonar needed to chase and catch squid in the inky depths?


Now, two evolutionary biologists at the University of California, Berkeley, claim that, just as bats developed sonar to chase flying insects through the darkness, dolphins and other toothed whales also developed sonar to chase schools of squid swimming at night at the surface.


Because squid migrate to deeper, darker waters during the day, however, toothed whales eventually perfected an exquisite echolocation system that allows them to follow the squid down to that "refrigerator in the deep, where food is available day or night, 24/7," said evolutionary biologist David Lindberg, UC Berkeley professor of integrative biology and coauthor of a new paper on the evolution of echolocation in toothed whales published online July 23 in advance of its publication in the European journal Lethaia.


"When the early toothed whales began to cross the open ocean, they found this incredibly rich source of food surfacing around them every night, bumping into them," said Lindberg, former director and now a curator in UC Berkeley's Museum of Paleontology. "This set the stage for the evolution of the more sophisticated biosonar system that their descendants use today to hunt squids at depth."


Lindberg and coauthor Nick Pyenson, a graduate student in the UC Berkeley Department of Integrative Biology and at the Museum of Paleontology, reconstructed this scenario after looking at both whale evolution and the evolution of cephalopods like squid and nautiloids - relatives of today's chambered nautilus - and relating this to the biology of living whales and cephalopods.


All toothed whales, or odontocetes, echolocate. The baleen whales, which sieve krill from the ocean and have no teeth, do not. The largest of the toothed whales, the sperm whale, grows up to 60 feet long and dives to 3,000 meters - nearly two miles - in search of squid. Though poorly known because they live entirely in the deep ocean, the many species of the beaked whale dive nearly as deep. Belugas and narwhals descend beyond 1,000 meters, while members of the dolphin family - porpoises, killer whales and pilot whales, for example - all can dive below the 200-meter mark where sunlight is reduced to darkness.


According to Pyenson, who focuses on the evolution of whales, the first whales entered the ocean from land about 45 million years ago, and apparently did not echolocate. Their fossil skeletons do not have the scooped forehead of today's echolocating whales, which cups a fatty melon-shaped ball that is thought to act as a lens to focus clicking noises.


Skulls with the first hints of a concave forehead and potential sound-generating bone structures arose about 32 million years ago, Pyenson said, by which time whales presumably had spread throughout the oceans. Whales had developed underwater hearing by about 40 million years ago.


According to Lindberg, whale biologists had various theories about echolocation, including that whales developed this biosonar soon after entering the water as a way to find food in turbid rivers and estuaries. The evolution of toothed whales, however, indicates otherwise. Whales first occupied the ocean, and only later invaded rivers. Other experts have proposed that development of echolocation coincided with global cooling around 33.5 million years ago, though a mechanism was not specified.


The most convincing explanation, that echolocation allowed whales to more efficiently find food in the darkness of the deep ocean, ignores the question of evolution.


"How did the whales know there was a large supply of food down in the dark?" asked Lindberg, noting that cephalopods are the most abundant and high-energy resource in the ocean, eaten by 90 percent of all toothed whales. "What were the intermediate evolutionary steps that got whales down there?"


Lindberg, a specialist in the evolution of marine mollusks, noted that cephalopods have migrated up and down on a daily "diel" cycle for at least 150 million years. At the time whales developed biosonar, nautiloids dominated the oceans. Lindberg and Pyenson propose that whales first found it possible to track these hard-shelled creatures in surface waters at night by bouncing sounds off of them, an advantage over whales that relied only on moonlight or starlight.


This would have enabled whales to follow the cephalopods as they migrated downwards into the darkness during the day. Today, the largest number of squid hang out during the day at about 500 meters below the surface, though some go twice as deep. During the night, however, nearly half the squid are within 150 meters of the surface.


Over the millennia, the number of cephalopod species in general, and of shelled cephalopods in particular, fell as the number of whale species boomed, possibly because of predation by whales. Then, about 10 million years ago, the whales seem to have driven the nautiloids out of the open ocean into protected reefs. Lindberg said that the decline in nautiloid diversity would have forced whales to perfect their sonar to hunt soft-bodied, migrating squid, such as the Teuthida, which in the open ocean are typically two feet long or bigger and range up to the 40-foot-long giant squid.


"Whales didn't need to have a very sophisticated sonar system to follow the nautiloids, they could just home in on the hard part," Lindberg said. Only later, he added, did they "develop a complex system with finer resolution to detect and capture soft-bodied squid."


"Whales, like bats, developed a sensory system for seeing with sound, and every single toothed whale echolocates in a different way, just like how different bat species echolocate in different ways," Pyenson said. Whales also partition the water column, specializing in harvesting squid at specific depths, just as bats partition the tree canopy and preferentially hunt insects at specific heights.


Lindberg noted that whales and bats are strong examples of convergent evolution to take advantage of unexploited food resources: nocturnal insects, in the case of non-migrating insectivorous bats, and nocturnal cephalopods, in the case of whales. And just as predominately migrating fruit bats do not echolocate, so filter-feeding baleen whales that depend on dense seasonal resources lack biosonar.


Lindberg and Pyenson used existing data on whales and cephalopods to reach their conclusions, drawing upon aspects of tectonics, paleontology, physiology, ecology, anatomy and biophysics. In the same way, "thinking from an evolutionary perspective about existing data from biology, paleontology and ecology could answer questions about the origin of echolocation in bats, shrews and other animals," Lindberg said.


The work was supported in part by the Remington Kellogg Fund and the Doris O. and Samuel P. Welles Research Fund of the UC Berkeley Museum of Paleontology and by a graduate research fellowship from the National Science Foundation.





