Tuesday, October 2, 2007
Traditionally, social scientists have been quite hesitant to acknowledge a role for genes in explaining economic behavior. But a study by David Cesarini, a Ph.D. student in MIT's Department of Economics, and by colleagues in Sweden indicates that there is a genetic component to people's perception of what is fair and what is unfair.
The paper, published in the Oct. 1 advance online issue of the Proceedings of the National Academy of Sciences, looked at the ultimatum game, in which a proposer makes an offer to a responder on how to divide a sum of money. This offer is an ultimatum; if the responder rejects it, both parties receive nothing.
Because rejections in the game entail a zero payoff for both parties, theories of narrow self-interest predict that any positive amount will be accepted by a responder. The intriguing finding in the laboratory is that responders routinely reject free money, presumably in order to punish proposers for offers perceived as unfair.
To study genetic influence in the game, Cesarini and colleagues took the unusual step of recruiting twins from the Swedish Twin Registry, and had them play the game under controlled circumstances. Because identical twins share virtually all of their genes while fraternal twins share on average only half, the researchers were able to detect genetic influences by comparing the similarity with which identical and fraternal twins played the game.
The researchers' findings suggest that genetic influences account for as much as 40 percent of the variation in how people respond to unfair offers. In other words, identical twins were more likely to play with the same strategy than fraternal twins.
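The twin-comparison logic can be sketched numerically. Under the classical behavioral-genetics argument, heritability is roughly twice the difference between identical-twin and fraternal-twin correlations (Falconer's formula). The correlations below are illustrative placeholders chosen to yield a 40 percent estimate, not figures taken from the paper:

```python
def falconer_h2(r_mz, r_dz):
    """Falconer's formula: heritability is about twice the gap
    between identical (MZ) and fraternal (DZ) twin correlations,
    since MZ twins share ~100% of genes and DZ twins ~50%."""
    return 2.0 * (r_mz - r_dz)

# Illustrative correlations (not the study's actual numbers):
h2 = falconer_h2(r_mz=0.39, r_dz=0.19)
print(f"estimated heritability: {h2:.2f}")  # about 0.40
```

The formula attributes any excess similarity of identical over fraternal twins to their extra shared genes; shared upbringing is assumed to affect both twin types equally.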
"Compared to common environmental influences such as upbringing, genetic influences appear to be a much more important source of variation in how people play the game," Cesarini said.
"This raises the intriguing possibility that many of our preferences and personal economic choices are subject to substantial genetic influence," said lead author Bjorn Wallace of the Stockholm School of Economics, who conceived of the study.
Other members of the research team include Paul Lichtenstein of the Swedish Twin Registry and senior author Magnus Johannesson of the Stockholm School of Economics.
The research was funded by the Jan Wallander and Tom Hedelius Foundation and the Swedish Research Council.
The new energy research partnership builds on the existing Ford-MIT Alliance. With the establishment of this partnership, Ford becomes the inaugural Sustaining Member of the MIT Energy Initiative (MITEI), which was formally established in November 2006 to address global energy issues.
"Ford and MIT have a long and productive history of working together to meet critical national needs through research," said MIT President Susan Hockfield. "This expansion of the Ford-MIT Alliance will pair innovators at Ford and MIT to help meet the world's energy challenges. We are excited about Ford's support for the MIT Energy Initiative and its key role as the Initiative's first Sustaining Member."
As the inaugural Sustaining Member, Ford Motor Company will have the initial seat on the MITEI Executive Committee, which is responsible for the overall strategic direction of the Initiative. This five-year collaboration will support Ford's research program to develop new technologies and will include Ford's sponsorship of two named MITEI fellowships - the Ford Alliance Energy Fellows.
"Energy-related issues pose some of society's greatest challenges," said Sue Cischke, Ford's senior vice president - Sustainability, Environment and Safety Engineering. "We are delighted to work with MIT toward sustainable solutions."
"Ford and MIT have a long history of innovating together," said Dr. Gerhard Schmidt, vice president, Ford Research and Advanced Engineering. "Focusing our combined efforts on energy challenges is crucial and timely."
The partnership will also support MITEI's energy research "seed fund" to promote the development of a broad range of novel, innovative energy technologies and concepts from innovators across the Institute.
"The development of new transportation technologies is critical for meeting the world's energy needs," noted Professor Ernest Moniz, director of MITEI. "As the first mover for the automotive technologies of the 20th century, Ford Motor Company transformed the world. This research collaboration is designed to support Ford's commitment to providing similar transformational technologies for a new century."
MITEI is an Institute-wide initiative designed to help transform the global energy system to meet the challenges of the future. The MIT Energy Initiative includes research, education, campus energy management and outreach activities; this interdisciplinary approach covers all areas of energy supply and demand, security and environmental impact. For more information, please visit web.mit.edu/mitei/.
About Ford Motor Company
Ford Motor Company, a global automotive industry leader based in Dearborn, Mich., manufactures or distributes automobiles in 200 markets across six continents. With about 260,000 employees and about 100 plants worldwide, the company's core and affiliated automotive brands include Ford, Jaguar, Land Rover, Lincoln, Mercury, Volvo and Mazda. The company provides financial services through Ford Motor Credit Company. For more information regarding Ford's products, please visit www.fordvehicles.com.
Randolph, who most recently served as senior associate dean for student life, was installed in his new position during a ceremony at Kresge Auditorium attended by hundreds of members of the MIT community. Speakers at the service included President Susan Hockfield and the Rev. Peter J. Gomes of the Memorial Church of Harvard University.
As chaplain to the Institute, Randolph will be charged with working alongside the members of the Board of Chaplains, who represent many religious traditions, in fostering interfaith discourse and educating the MIT community about the history and role of religions around the world. His portfolio includes coordinating pastoral response in times of crisis at the Institute, raising the profile of religious life at MIT, and leading reflection on issues of social justice and core values.
In an interview with Tech Talk, Randolph said he sees his new role as particularly relevant at a time when religion is a dominant force in global events.
"At this time of a clash of cultures, it is clear that religion has become the point of the sword," he said. "My job will be to help knit together the fabric of faiths that already transcend our community."
Randolph said MIT's preeminence in science and technology means religion and matters of faith have a comparatively lower profile on campus but nonetheless thrive in their own right. MIT is home to 15 chaplains of different faiths and more than 35 student religious organizations, he said. In addition, a large number of interfaith activities can be found at the Institute; one recent example Randolph cited involved Muslim and Jewish students breaking fast together after sundown on Yom Kippur.
Since coming to MIT in 1979, Randolph has worked extensively with MIT's religious communities. He said he sees his new role as chaplain to the Institute partly as an extension of what he has already been doing, but with added responsibilities such as resource development.
Randolph credited Dean for Student Life Larry Benedict with having the vision that made it possible for the Institute chaplain position to be created. But Benedict said Randolph's position was really the result of a vision laid out more than 50 years ago by former MIT President James Killian.
Killian felt strongly that MIT should pay more attention to spiritual life and to the place of religion in human history and contemporary society. Accordingly, Killian pushed for the building of both Kresge Auditorium and the adjacent chapel, both of which were intended to buoy spiritual life on campus. Plans for a chaplain were also in the works, but were put on hold when Killian left MIT to become special assistant for science and technology to President Eisenhower.
Benedict said the need for an Institute-wide chaplain is just as pressing today as it was in 1950s Cold War America--arguably more so.
"This is a milestone in the history of MIT that will be seen that way for generations to come," Benedict said.
"We really need a chaplain of the Institute to be a voice for justice, integrity and ethical conduct on campus," Benedict said. "At the same time, fostering interfaith dialogue becomes a major priority with an increasingly diverse population, with internationalization and with diverse religious groups on campus. This is especially true at a time when there is so much strife and stress in the world among and within various religions and sects."
The ADP's online databank, www.airlinedataproject.mit.edu, gives comparisons of the largest U.S. carriers on scores of different cost, revenue and productivity measures. The resource will let users compare 15 U.S. airlines on a wide variety of measures, including fleet utilization, labor costs, cash flow and profitability.
The project allows researchers to confirm--and in some cases dispel--conventional wisdom about the airline industry by presenting information in historical context.
"The Airline Data Project will serve as an excellent data source for research and analysis not only for MIT students and faculty, but for airline executives, analysts, labor leaders and industry observers," said Peter P. Belobaba, program manager for the Global Airline Industry Program. "It is a natural extension of our ongoing work and supports our goal of developing a body of knowledge for understanding the development, growth and competitive factors that affect this industry."
The ADP was created in conjunction with the MIT Airline Industry Consortium, and with support from the Industry Studies Program of the Alfred P. Sloan Foundation. All of the data on the site is based on company filings with the U.S. Department of Transportation and the Securities and Exchange Commission.
"The airline industry is at its most critical crossroads since deregulation, and the information on this site tells hundreds of different stories that will bear that out," said ADP developer and manager William Swelbar, a research engineer at MIT and one of the aviation industry's most highly regarded economic analysts.
"Restructuring in the airline industry is not complete, despite extraordinary changes over the past six years," Swelbar said. "With new competition from foreign carriers through 'open skies' agreements and continued prospects for mergers and consolidation, the coming years could bring even more change."
Data on the ADP web site will be updated regularly, and additional charts and analysis will be added over time.
It may seem like science fiction, but some downright respectable scientists are gathered around radio antennae, waiting for messages from alien worlds.
And while other scientists scoff at stories of space aliens crash-landing in Roswell, New Mexico, still more are serious about alien television and radio broadcasts being beamed across the galaxy.
"We think that the extraterrestrials are out there. In fact, personally, I think the galaxy probably has a lot of intelligent critters out there," said Seth Shostak of the SETI (Search for Extraterrestrial Intelligence) Institute. "And we're trying to find them by eavesdropping on their radio traffic."
The SETI Institute is home to the most ambitious alien-contact project, tuning in to 28 million separate frequencies. Scientists searching for extraterrestrial intelligence scan the static of the heavens for a signal, something that might amount to a message that says: "We're out here!"
Unlike actor Charlie Sheen's on-screen excitement at the discovery of an alien radio signal in the 1996 movie "The Arrival," in real life it's likely to be a computer that finds the signal.
"This is not like in the movies, where Charlie Sheen can sit next to the telescope with a bunch of loudspeakers and suddenly he hears (a sound) coming over the loudspeakers," Shostak says. "The computers are in fact monitoring the receiver, monitoring the 28 million channels."
Close to a breakthrough
But even Shostak has had his Charlie Sheen moment.
"There was a signal that we picked up in Australia that was passing all the tests for a few hours, and everybody was getting a little bit excited about that," he recalls. "I remember that I couldn't sit down. I just kept pacing around."
In the end, it was a false alarm, perhaps a stray satellite signal. Could it have been alien contact? No one knows. But about a dozen other scientific searches continue.
"If there are two or three places in our solar system that have life, life is rampant. Life is not a miracle. It's a statistic, something that happens all the time," Shostak explains. "So if you're getting a lot of life hooked up in the galaxy, maybe some of it's smart enough to build a radio transmitter."
While Shostak searches for alien radio broadcasters, he doesn't ever expect to have a personal encounter. The closest ones are light years away and even aliens have to obey the laws of physics, he says.
"If you want to make that trip in 10 years or less, the amount of energy your rocket's going to burn up is the amount of energy the United States uses in a century," Shostak says.
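Shostak's comparison can be sanity-checked with relativistic kinetic energy. The sketch below assumes a 1,000-tonne craft, a trip of about 4.2 light-years (roughly the distance to the nearest stars), and U.S. energy consumption of about 1e20 joules per year; all three numbers are illustrative assumptions, not figures from the article:

```python
import math

C = 299_792_458.0        # speed of light, m/s
LY_M = 9.4607e15         # metres per light-year
M_SHIP = 1.0e6           # assumed ship mass: 1,000 tonnes (illustrative)
US_J_PER_YEAR = 1.0e20   # rough U.S. annual energy use, joules (assumption)

def trip_energy_joules(distance_ly, years):
    """Relativistic kinetic energy to coast the given distance in the
    given Earth-frame time; ignores fuel mass, acceleration and braking."""
    v = distance_ly * LY_M / (years * 365.25 * 24 * 3600)
    beta = v / C
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return (gamma - 1.0) * M_SHIP * C * C

energy = trip_energy_joules(4.2, 10.0)
print(f"trip energy: {energy:.1e} J")
print(f"centuries of U.S. energy use: {energy / (100 * US_J_PER_YEAR):.2f}")
```

Under these assumptions the kinetic energy alone lands within a factor of two of a century of U.S. consumption, which is the order of magnitude Shostak describes.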
If Shostak is right, there may well be extraterrestrials out there. Someday there may even be interplanetary chat. But forget that close encounter, arrival and contact stuff. Shostak says it's just not fuel-efficient.
Scientists taking a peek under the clouds of ammonia gas that blanket Jupiter have found that the giant planet has wet and dry areas like the deserts and tropics on Earth.
When the unmanned Galileo spacecraft's atmospheric probe plunged through the outer layers of Jupiter's bottomless atmosphere on December 7, 1995, scientists expected to detect lots of water. Instead, they found dryness.
But now, new data from telescopes on Earth and on Galileo show other areas on Jupiter with clouds of water and perhaps even rain.
"We had suspected that the probe landed in the Sahara Desert of Jupiter," Andrew Ingersoll, a planetary science professor at the California Institute of Technology, told reporters Thursday at NASA's Jet Propulsion Laboratory.
Now that additional analysis has revealed moisture surrounding such dry spots, Ingersoll happily proclaimed: "Jupiter is wet."
But although Jupiter's weather may be more Earth-like than first believed, the planet lacks a solid surface, making it "highly unlikely" it could sustain life, Ingersoll said.
Robert Carlson, an investigator for Galileo's Near Infrared Mapping Spectrometer, showed a water map of a South America-sized expanse. It included bone-dry areas with 1 percent humidity, akin to Death Valley in California, and other places so wet, "It's either going to rain or is raining right now."
Astronomers hope to learn more about the way the oceans and atmosphere formed on Earth by studying the weather on Jupiter.
Tobias Owen, a University of Hawaii planetary scientist, explained that the abundance of elements found in Jupiter's atmosphere suggests it was seeded by comets.
"We think the same bombardment ... also brought the same important elements to Earth," he said.
NASA also released Galileo's images of Jupiter's very thin auroras, which glow in a narrow ring around the poles like the Northern Lights and Southern Lights above Earth's poles.
Auroras occur when electrically charged particles crash into Jupiter's atmosphere, but "where these charged particles come from is a mystery," Ingersoll said.
Galileo, launched in 1989 aboard a space shuttle, is more than halfway through a two-year orbital tour of Jupiter and its four major moons: Io, Europa, Callisto and Ganymede.
Carbon dioxide did not cause the end of the last ice age, a new study in Science suggests, contrary to past inferences from ice core records.
"There has been this continual reference to the correspondence between CO2 and climate change as reflected in ice core records as justification for the role of CO2 in climate change," said USC geologist Lowell Stott, lead author of the study, slated for advance online publication Sept. 27 in Science Express.
"You can no longer argue that CO2 alone caused the end of the ice ages."
Deep-sea temperatures warmed about 1,300 years before the tropical surface ocean and well before the rise in atmospheric CO2, the study found. The finding suggests the rise in greenhouse gas was likely a result of warming and may have accelerated the meltdown -- but was not its main cause.
The study does not question the fact that CO2 plays a key role in climate.
"I don't want anyone to leave thinking that this is evidence that CO2 doesn't affect climate," Stott cautioned. "It does, but the important point is that CO2 is not the beginning and end of climate change."
While an increase in atmospheric CO2 and the end of the ice ages occurred at roughly the same time, scientists have debated whether CO2 caused the warming or was released later by an already warming sea.
The best estimate from other studies of when CO2 began to rise is no earlier than 18,000 years ago. Yet this study shows that the deep sea, which reflects oceanic temperature trends, started warming about 19,000 years ago.
"What this means is that a lot of energy went into the ocean long before the rise in atmospheric CO2," Stott said.
But where did this energy come from? Evidence pointed southward.
Water's salinity and temperature are properties that can be used to trace its origin -- and the warming deep water appeared to come from the Antarctic Ocean, the scientists wrote.
This water then was transported northward over 1,000 years via well-known deep-sea currents, a conclusion supported by carbon-dating evidence.
In addition, the researchers noted that deep-sea temperature increases coincided with the retreat of Antarctic sea ice, both occurring 19,000 years ago, before the northern hemisphere's ice retreat began.
Finally, Stott and colleagues found a correlation between melting Antarctic sea ice and increased springtime solar radiation over Antarctica, suggesting this might be the energy source.
As the sun pumped in heat, the warming accelerated because of sea-ice albedo feedbacks, in which retreating ice exposes ocean water that reflects less light and absorbs more heat, much like a dark T-shirt on a hot day.
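The feedback in that analogy reduces to a simple relationship: absorbed energy is (1 - albedo) times incoming sunlight, and the mixed-surface albedo falls as the ice fraction falls. The toy calculation below uses illustrative values, not numbers from the study:

```python
S = 240.0  # global-mean incoming solar flux, W/m^2 (illustrative)

def absorbed(ice_fraction, a_ice=0.6, a_water=0.1):
    """Absorbed solar energy over a mixed ice/water surface.
    Ice reflects more sunlight than open water, so as ice
    retreats the area-weighted albedo drops and absorption rises."""
    albedo = ice_fraction * a_ice + (1.0 - ice_fraction) * a_water
    return (1.0 - albedo) * S

print(absorbed(1.0))  # fully ice-covered: less energy absorbed
print(absorbed(0.0))  # open water: more energy absorbed
```

Each increment of melting raises absorption, which drives further melting: the positive feedback the "dark T-shirt" comparison is gesturing at.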
In addition, the authors' model showed how changed ocean conditions may have been responsible for the release of CO2 from the ocean into the atmosphere, also accelerating the warming.
The link between the sun and ice age cycles is not new. The theory of Milankovitch cycles states that periodic changes in Earth's orbit cause increased summertime sun radiation in the northern hemisphere, which controls ice size.
However, this study suggests that the pace-keeper of ice sheet growth and retreat lies in the southern hemisphere's spring rather than the northern hemisphere's summer.
The conclusions also underscore the importance of regional climate dynamics, Stott said. "Here is an example of how a regional climate response translated into a global climate change," he explained.
Stott and colleagues arrived at their results by studying a unique sediment core from the western Pacific composed of fossilized surface-dwelling (planktonic) and bottom-dwelling (benthic) organisms.
These organisms -- foraminifera -- incorporate different isotopes of oxygen from ocean water into their calcite shells, depending on the temperature. By measuring the change in these isotopes in shells of different ages, it is possible to reconstruct how the deep and surface ocean temperatures changed through time.
If CO2 caused the warming, one would expect surface temperatures to increase before deep-sea temperatures, since the heat slowly would spread from top to bottom. Instead, carbon-dating showed that the water used by the bottom-dwelling organisms began warming about 1,300 years before the water used by surface-dwelling ones, suggesting that the warming spread bottom-up instead.
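As a rough illustration of how such reconstructions work, one classic calcite-water paleotemperature calibration (in the style of Epstein's equation; the coefficients here are commonly quoted approximations, used for illustration only) converts the oxygen-isotope offset between the fossil shell and seawater into a temperature:

```python
def calcite_temp_c(delta_shell, delta_water):
    """Approximate Epstein-style calcite-water paleotemperature
    equation: temperature in deg C from the difference between the
    shell's oxygen-isotope value and that of the ambient seawater
    (both in per-mil). Coefficients are illustrative approximations."""
    d = delta_shell - delta_water
    return 16.5 - 4.3 * d + 0.14 * d * d

# Shells grown in warmer water take up relatively less of the
# heavy isotope, so a lower shell value implies a warmer ocean:
print(calcite_temp_c(0.0, 0.0))   # baseline
print(calcite_temp_c(-1.0, 0.0))  # isotopically lighter shell -> warmer
```

Applying this to benthic shells gives deep-water temperatures and to planktonic shells surface temperatures, which is what lets the two warming histories be compared.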
"The climate dynamic is much more complex than simply saying that CO2 rises and the temperature warms," Stott said. The complexities "have to be understood in order to appreciate how the climate system has changed in the past and how it will change in the future."
Stott's collaborators were Axel Timmermann of the University of Hawaii and Robert Thunell of the University of South Carolina. Stott was supported by the National Science Foundation and Timmermann by the International Pacific Research Center.
Stott is an expert in paleoclimatology and was a reviewer for the Intergovernmental Panel on Climate Change. He also recently co-authored a paper in Geophysical Research Letters tracing a 900-year history of monsoon variability in India.
The study, which analyzed isotopes in cave stalagmites, found correlations between recorded famines and monsoon failures, and found that some past monsoon failures appear to have lasted much longer than those that occurred during recorded history. The ongoing research is aimed at shedding light on the monsoon's poorly understood but vital role in Earth's climate.
NASA plans to issue Mars news and weather updates on the Internet and take World Wide Web browsers along for rides in a Mars rover once the Pathfinder spacecraft arrives at the Red Planet.
The probe is to be launched December 2 and land July 4, 1997.
"Every day on the Internet, we're going to post the weather report on Mars -- a little different than Earth -- and there will be a virtual presence on Mars, so everybody in America and for that matter around the world can participate," NASA Administrator Daniel Goldin said Wednesday.
Web users will be able to see what the Mars rover sees as it ambles along the surface and scrutinizes rocks. Expect a 20- to 40-minute lag, though, for the time it takes the signals to reach Earth.
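That lag is simply light-travel time. A quick sketch (the bracketing distances of roughly 0.52 and 2.52 AU are approximate figures for the closest and farthest Earth-Mars separations, used here as assumptions):

```python
AU_M = 1.495978707e11  # metres per astronomical unit
C = 299_792_458.0      # speed of light, m/s

def one_way_light_minutes(distance_au):
    """One-way signal delay in minutes for a given Earth-Mars
    distance in astronomical units."""
    return distance_au * AU_M / C / 60.0

# Approximate Earth-Mars range:
print(f"closest:  {one_way_light_minutes(0.52):.1f} min one way")
print(f"farthest: {one_way_light_minutes(2.52):.1f} min one way")
```

A command-and-response round trip doubles these figures, which is why rover operations build in delays of tens of minutes rather than driving in real time.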
Mars Pathfinder will be the first spacecraft to land on Mars since NASA's twin Viking landers in 1976. Back then, there was no way to share such wonders with so many people. Even with the more recent planetary probes, there's never been anything like this.
"I would definitely term this the first planetary mission in the full-blown Internet era," said NASA spokesman Douglas Isbell. "It's vicarious exploration."
Cold and colder
If all goes well, the rover, named Sojourner, will study Martian rocks and soil for at least a week, possibly months, with scientists and Internet browsers following along.
As for the Martian weather forecast, make it cold and colder. At its equator, Mars is a brisk minus-70 degrees Fahrenheit and gets colder the closer one gets to the poles.
"I would hope that every newspaper would show the weather in Timbuktu -- and why not on Mars, too?" asked Matthew Golombek, project scientist for the Mars Pathfinder. "It's a little chilly, but a nice place to be."
In addition to Pathfinder, NASA plans to launch a Mars orbiter called the Mars Global Surveyor on November 6. It will take 10 months for the spacecraft to reach its destination. Once there, it will map the planet from a circular orbit for two years.
The color images will be posted on the Internet within a day or two.
Unlike Pioneers 10 and 11 and Voyagers 1 and 2, all launched in the 1970s, neither of the Mars probes will carry messages from Earthlings.
NASA is gathering signatures, however, to put on one or two CD-ROMs that will be attached to the Cassini probe, to be launched next year to Saturn. So far, some 500,000 signatures have been collected, Isbell said.
Defective protein in sperm could be responsible for many cases
Scientists in Hong Kong and China have identified for the first time a protein in sperm from humans and from mice that could be responsible for many unexplained cases of male infertility.
Defective versions of the protein, called epithelial ion channel, have previously been reported to be responsible for female infertility.
Writing in the latest issue of the Proceedings of the National Academy of Sciences journal, the researchers said they detected the protein in sperm samples from mice and human subjects.
"(The protein) is involved in the transport of bicarbonate, which is required for sperm activation in order to fertilize the egg. If you have a defect in this (protein), then fertilization capacity of the sperm will be impaired or reduced," Chan Hsiao Chang, physiology professor at the Chinese University in Hong Kong, said in a telephone interview on Thursday.
Experiments showed that sperm taken from mutant mice with defective versions of the protein had far lower fertility than sperm taken from normal mice, the researchers said.
The discovery would help doctors more accurately diagnose and explain many cases of male infertility that have so far gone unexplained.
"For many people, they are infertile, but they don't know why, so diagnosis would be the immediate advantage," Chan said.
Between 8 percent and 12 percent of couples with women of childbearing age - or between 50 and 80 million people - are infertile globally, according to the World Health Organization.
Half of infertile couples fail to reproduce because of problems with male fertility.
Management of Male Infertility
While 85% of couples are able to conceive after one year of unprotected intercourse, approximately 15% of couples are unable to initiate a pregnancy without some form of assistance or therapy. These couples are said to have "primary infertility." In approximately one-third of these couples, a male factor appears to be singularly responsible, and in an additional 20% both a male and a female factor can be identified. Therefore, a male factor is at least partly responsible for difficulties in conception in roughly 50% of these couples.
It has been shown that the longer a couple remains subfertile, the worse their chance for an effective cure. In addition, many couples experience significant apprehension and anxiety after only a few months of failure to conceive. For these reasons, unduly prolonged unprotected intercourse should not be advocated before workup of the male is instituted. Although it has often been recommended that clinical evaluation be delayed until 12 months of unprotected intercourse has passed, we believe that the initial screening of the male should be considered whenever the patient presents with the chief complaint of infertility. This initial evaluation, however, should be rapid, noninvasive, and cost-effective. The most important part of the management of male infertility is the correct diagnosis. The use of standard techniques for evaluating medical problems in general, such as complete history, physical examination, and laboratory tests is essential for this purpose.
A detailed history should address the duration of the couple's infertility, and also previous pregnancies with the present or previous partners. In addition, previous difficulty in achieving conception and any previous evaluation and treatment should be documented (Table 1).
(a) Sexual Habits
One of the most common problems encountered in this patient population is either too-frequent or too-infrequent intercourse. Often, neither the husband nor the wife understands the menstrual cycle. They do not realize that the optimal time for intercourse is midcycle and that the most effective frequency of intercourse is every 48 hours. This is based on the fact that sperm survival in normal cervical mucus and within the cervical crypts is approximately 2 days. Thus, this frequency will assure viable spermatozoa concurrently in the 24-hour period during which the egg will be within the fallopian tube and capable of being fertilized.
It is also important to discuss coital techniques with the husband, e.g., the use of lubricants or the frequency of masturbation that can deplete the sperm "reserve." Many lubricants have been tested for in vitro effects on sperm motility.1 Commonly used substances, such as K-Y Jelly, Lubifax, Surgilube, Keri Lotion, petroleum jelly, and saliva result in a deterioration of motility. Others, such as raw egg white, vegetable oil, and the Replens douche, do not impair in vitro motility. Astroglide, a water-soluble, inert vaginal lubricant, contains no petroleum ingredients or detergents that may be toxic to sperm; however, with increasing concentration, there is impairment of sperm motility equivalent to that found with K-Y jelly.
(b) Childhood Illnesses
A history of specific childhood illnesses and disorders may be an important finding in the evaluation of the infertile male. For example, it has been shown that in the male born with a unilaterally undescended testis, regardless of the time of orchiopexy, overall semen quality is considerably less than that found in normal men. Approximately 30% of men with unilateral cryptorchidism and 50% with bilateral cryptorchidism have sperm densities below 12-20 million/mL.2 Despite this impairment in semen parameters, the majority of men with a history of one undescended testis are able to initiate a pregnancy without difficulty.
Testicular trauma or torsion of the testes should be noted, since both can result in atrophic testes. Approximately 30% of men with history of testicular torsion will have abnormal results on semen analysis.3
A history of postpubertal mumps orchitis is also important. Mumps does not appear to affect the testes when experienced prepubertally. However, after the age of 11 or 12, unilateral mumps orchitis is seen in 30% of males affected and bilateral orchitis in approximately 10%.4 Furthermore, the testicular damage can be quite severe and should be readily appreciated on physical examination, since the involved gonads will be markedly atrophic.
Patients who have had operative correction (Y-V plasty) of their bladder neck during childhood often suffer from retrograde ejaculation due to ablation of the internal sphincter. Bladder neck reconstruction at the time of ureteral reimplantation surgery was common in the early 1960s; this patient population has now entered an age group when pregnancy will most likely be attempted. Retrograde ejaculation should be suspected in the man who gives a history of bladder surgery and whose ejaculate volume is less than 1 cc, severely oligospermic, and abnormally alkaline. The correct diagnosis can be made by finding large numbers of sperm in the postejaculate urine. Children born with congenital anomalies involving the male reproductive system, such as bladder exstrophy/epispadias, can also exhibit abnormalities of ejaculation because of difficulties with both intromission and ejaculation. Spermatogenesis is usually normal; however, the ejaculatory ducts may be obstructed, or retrograde ejaculation may occur. Also, a history of herniorrhaphy suggests the possibility of iatrogenic vasal injury.
(c) Exogenous Agents That Interfere With Spermatogenesis
The history should also include a detailed inquiry into exposure to environmental toxins and medications that may interfere with spermatogenesis, either directly or through alterations in the endocrine system. For agents such as heat, ionizing radiation, heavy metals, and some organic solvents, there are many studies that support these associations. Recent publications have also reported the effect of specific pesticides (e.g., dibromochloropropane) on gonadal function.5 Furthermore, reversibility has been substantiated when the oligospermic patient has been removed from this toxic environment.6 However, once azoospermia has occurred, return to a normal pre-exposure state is highly unlikely.
Medications, such as sulfasalazine and cimetidine, or ingestants, such as caffeine, nicotine, alcohol, or marijuana, have also been implicated as gonadotoxic agents. Withdrawal from these substances should enable return of normal spermatogenesis if they are acting adversely. Also, calcium ion channel blockers may interfere with sperm membrane function and fertilization ability.7
The use of androgenic steroids by athletes is a potentially significant cause of infertility in both adults and adolescents, and the problem is becoming more commonplace.8 The incidence of steroid abuse has been reported to be as high as 30%-75% among professional athletes or body builders. Androgenic steroids exert their deleterious effect by depressing gonadotropin secretion and interfering with normal spermatogenesis. Consequently, if a person is taking any of these medications at the time of initial interview, the medication should be stopped and the patient's semen reevaluated at a later date.
Elevated temperatures, as in the routine use of saunas and hot tubs, may interfere with spermatogenesis.9
(d) Surgical History
Retroperitoneal Lymph Node Dissection: Approximately 75% of all testicular cancer patients will retain the potential for fertility.10 Retroperitoneal lymph node dissection can involve excision of portions of the sympathetic chain necessary for ejaculation. Some patients will retain seminal emission, but many will have retrograde ejaculation or lose the ability to emit semen altogether.
Prostatectomy: Patients who have had transurethral or open prostatectomy also have a high incidence of retrograde ejaculation. This incidence is reported to range from 40%-90%.
2. Physical Examination (Table 2)
Physical examination of the infertile man should include a generalized and complete evaluation. Any factor that affects overall health can theoretically be responsible for abnormalities in sperm production. For that reason, the physical examination should be thorough, with emphasis placed on the genitalia.
(a) Body Habitus
If the patient appears to be inadequately virilized (androgen-deficient), as evidenced by decreased body hair, gynecomastia, eunuchoid proportions, etc., the diagnosis of delayed maturation due to an endocrine abnormality should be considered and evaluated.
(b) Penis
Penile curvature or angulation should be assessed, as should the location of the urethral meatus (i.e., for the presence of hypospadias). Such abnormalities can result in improper placement of the ejaculate within the vaginal vault.
(c) Scrotal Contents
The scrotal contents should also be carefully palpated with the patient standing. Testicular size and consistency must be noted, and the volume of each testis estimated either with an orchidometer or by measuring the length and width of the testis to the nearest millimeter. It has been shown that a decrease in testicular size is often associated with impaired spermatogenesis. Standard values of testicular size have been recorded for the normal population.11 These data document that in the normospermic male, the length of the testis should be greater than 4 cm and the volume greater than 20 mL.
Examination of the peritesticular area is also essential. Epididymal induration, irregularity, and cystic changes should be noted, as should the presence or absence of the vas deferens and any nodularity along its course. Certainly, engorgement of the pampiniform plexus should be identified, since a varicocele can cause abnormalities of gonadal function.12 Ideally, the patient should be examined in a warm room after standing for several minutes. Palpation for asymmetry of the spermatic cords, followed by a Valsalva maneuver with re-palpation of the spermatic cords, should be routinely performed. An "impulse" can often be felt with the increase in intra-abdominal pressure.
(d) Digital Rectal Examination (DRE)
DRE is necessary to assess prostatic size, as well as to rule out prostatic and/or seminal vesicular induration, masses, or cysts.
Disorder is harder on health than ills such as diabetes, arthritis, experts say...
Depression is more damaging to everyday health than chronic diseases such as angina, arthritis, asthma and diabetes, researchers said on Friday.
And if people are ill with other conditions, depression makes them worse, the researchers found.
"We report the largest population-based worldwide study to our knowledge that explores the effect of depression in comparison with four other chronic diseases on health state," the researchers wrote in the Lancet medical journal.
Somnath Chatterji of the World Health Organisation, who led the study, said researchers calculated the impact of different conditions by asking people questions about their capacities to function in everyday situations - such as moving around, seeing things at a distance and remembering information.
The researchers assigned a number between 0 and 100 reflecting a person's relative health score.
"Our main findings show that depression impairs health state to a substantially greater degree than the other diseases," the researchers wrote.
The team used World Health Organisation data collected from 60 countries and more than 240,000 people, showing that on average between 9 percent and 23 percent of participants had depression in addition to one or more of four other chronic diseases - asthma, angina, arthritis and diabetes.
The most disabling combination was diabetes and depression, the researchers said.
"If you live for one year with diabetes and depression together you are living the equivalent of 60 percent of full health," Chatterji said in a telephone interview.
The findings show the need to provide better treatment for depression because it has such a big impact on people with chronic illnesses, Chatterji added.
"What tends to happen is a health provider doesn't look for anything else but the chronic illness," he said.
"What we are saying is, these people will also be depressed and if you don't manage the depression you can't improve a person's health because depression is actually worsening it."
Adobe Systems Inc. released new software for its popular Flash Player on Sunday that promises to bring the quality of live video on cellular phones closer to that of video on computers.
Adobe, whose software made possible the rapid rise of pioneering online video site YouTube, said Nokia and NTT DoCoMo Inc. would use its new Flash Lite 3 in their new cell phones.
Adobe said more than 300 million mobile devices equipped with previous versions of Flash had already been shipped and it expected more than a billion Flash-enabled devices to be available by 2010.
Adobe's Flash software is installed on about 98 percent of all personal computers and is used by virtually all popular online video sites, largely because it works independently of the device on which the video is displayed.
Gary Kovacs, in charge of marketing at Adobe's mobile unit, called Flash Lite 3 "the most significant advance we've made in mobile" and said it brought Adobe closer to being able to release software versions for mobile and desktop simultaneously.
"It's probably a few years away. We'll do it over the next couple to three years," he told Reuters.
Nokia's 3.4 million-strong mobile software development group, Forum Nokia, said it would launch a new development community on Monday to help Flash developers and designers.
Nokia, the world's biggest mobile telephone maker, last month announced a major new push into multimedia, including video, music and gaming, seeking to challenge Apple Inc.'s dominance in portable entertainment.
The head of Forum Nokia, Lee Epting, said in a statement: "Flash Lite 3 will enable us to deliver richer content to our customers, such as videos and animated ring tones."
Adobe, also known for its Acrobat document management and Photoshop software, said earlier this month that its profit more than doubled last quarter on strong sales of new products and as it makes inroads into mobile, video and office worker markets.
Puffy debris disks around three nearby stars could harbor Pluto-sized planets-to-be, a new computer model suggests.
The "planet embryos" are predicted to orbit three young, nearby stars, located within about 60 light years or less of our solar system. AU Microscopii and Beta Pictoris are both estimated to be about 12 million years old, while a third star, Fomalhaut, is aged at 200 million years old.
If confirmed, the objects would represent the first evidence of a never-before-observed stage of early planet formation. Another team recently spotted "space lint" around a nearby star that pointed to an even earlier phase of planet building, in which baseball-sized clumps of interstellar dust grains collide.
The new finding will be detailed in an upcoming issue of the Monthly Notices of the Royal Astronomical Society.
Using NASA's Hubble Space Telescope, the researchers measured the vertical thickness of so-called circumstellar debris disks around the stars, and then used a computer model to calculate the size of planets growing within them.
The thickness of a debris disk depends on the size of objects orbiting inside it. The ring of dust thins as the star system ages, but if enough dust has clumped together to form an embryonic planet, it knocks the other dust grains into eccentric orbits. Over time, this can puff up what was a razor-thin disk.
The new model the researchers created predicts how large the bodies in a disk must be to puff it up to a certain thickness. The results suggest that each of the three stars studied is harboring a Pluto-sized embryonic planet.
"Even though [the disks] are pretty thin, they turn out to be thick enough that we think there's something in there puffing them up," said study team member Alice Quillen of the University of Rochester in New York.
At least one of the stars is thought to contain at least one other planet in addition to the circling Pluto-sized planet. The circumstellar disk of Fomalhaut contains a void that scientists think is being cleared out by a Neptune-sized world. The researchers think the embryonic planets predicted by their model are too small to clear gaps like this in the disk.
"If you think of water flowing over pebbles, if the pebbles are very small at the bottom of the water, it doesn't make a good ripple," Quillen told Space.com.
All of the embryonic planets predicted to exist in the three systems are located far away from their parent stars. AU Microscopii's budding planet is estimated to lie about 30 AU from its star, or about the same distance that Pluto is from our sun. One AU is equal to the distance between Earth and the sun. The embryonic planets of Beta Pictoris and Fomalhaut are thought to lie even farther, at 100 and 133 AU, respectively.
It is the large distances separating the planet embryos from their stars that have drawn the most criticism from colleagues, Quillen said. Many find it hard to believe that any planet, even a diminutive Pluto-sized one, could form so far out.
According to the standard theory of how our solar system formed, Pluto formed much closer to the sun but was then knocked out to its current orbit due to instability in the inner solar system. However, there are objects in our solar system located even farther from the sun that are difficult to explain by this theory. Sedna, for example, is about three-fourths the size of Pluto and lies about three times farther from the sun.
Mordecai-Marc Mac Low, an astrophysicist at the American Museum of Natural History in New York City who was not involved in the study, said the new model should be viewed as a plausibility argument for the presence of Pluto-sized objects rather than proof of their existence.
"The work presented here shows that Pluto-sized objects stirring disks are consistent with the observed disk thicknesses and other properties," Mac Low said.
James Graham, an astronomer at the University of California, Berkeley, who was also not involved in the study, expressed a similar sentiment. "This calculation is making a bold extrapolation," Graham said in an e-mail interview. "It's a bit like describing an elephant given a single cell from that animal. With enough knowledge, this is possible - if you know enough about microbiology and genetics and could read the DNA in the cell and in principle envision the entire creature."
One commonly used framework for troubleshooting that helps structure your response to a network problem is the International Organization for Standardization (ISO) Open Systems Interconnection (OSI) model. If you've worked with networking devices for any period of time, you are likely already familiar with OSI. It's the framework that encapsulates much of modern networking, and most network protocols live somewhere within its seven layers. Where you may not have used it before is as a troubleshooting guide for triaging an unknown problem on the network.
Figure 4.1: The OSI model is an excellent mental framework to assist the troubleshooter in identifying network problems.
Without going into too much detail on the history and use of the model, let's take a look at how you can extend the OSI model into a framework for problem isolation. Figure 4.1 shows the seven layers in the OSI model and some issues that typically occur at each layer. Let's discuss each of the layers in turn, from the bottom up:
At the Physical layer, problems typically involve some break in the physical connectivity that makes up the network. Broken network connections, cabling and connector issues, and hardware problems that inhibit the movement of electricity from device to device typically indicate a problem at this layer.
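As a minimal sketch of checking for problems at this layer, the snippet below reads interface operational state from the Linux sysfs tree. The `base` parameter is exposed purely so the function can be pointed at a test directory; on a real host the default `/sys/class/net` path applies (Linux-only, an assumption of this sketch):

```python
from pathlib import Path

def link_states(base: str = "/sys/class/net") -> dict:
    """Map each network interface name to its operational state
    ("up", "down", etc.) as reported by the kernel via sysfs."""
    states = {}
    for iface in Path(base).iterdir():
        operstate = iface / "operstate"
        if operstate.exists():
            states[iface.name] = operstate.read_text().strip()
    return states
```

Any interface reporting "down" (or missing entirely) points at a Physical-layer fault, such as a cable, connector, or NIC problem, before any higher-layer debugging is worthwhile.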
At the Data Link layer, we move away from purely electrical problems and into the configuration of the interface itself. Data Link problems often involve the Address Resolution Protocol (ARP) failing to correctly relate IP addresses to Media Access Control (MAC) addresses. Speed and duplex mismatches between network devices and excessive hardware errors on the interface are other common culprits. An incorrectly configured interface within the device operating system (OS) or, for wireless connections, radio interference can also cause problems at the Data Link layer.
At the Network layer, we begin experiencing problems with network traversal. Network layer problems typically occur when packets cannot make their way from source to destination. This may stem from incorrect IP addressing or duplicate IP addresses on the network. Routing failures, ICMP errors, and protocol errors can also surface here. In extreme cases, an external attack can spike error levels on network devices and cause problems identified at the Network layer.
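One symptom called out above, duplicate IP addresses, can be detected mechanically from neighbor-table data. In the sketch below, the `(ip, mac)` pairs are assumed to come from something like parsed `ip neigh` or `arp -a` output; the parsing step itself is not shown:

```python
from collections import defaultdict

def find_duplicate_ips(neighbors):
    """Given (ip, mac) pairs, return the IPs claimed by more than one
    MAC address - a classic sign of an address conflict (or, in the
    worst case, ARP spoofing)."""
    macs_by_ip = defaultdict(set)
    for ip, mac in neighbors:
        macs_by_ip[ip].add(mac)
    return {ip: macs for ip, macs in macs_by_ip.items() if len(macs) > 1}
```

An empty result doesn't prove the network is clean, of course; it only means no conflict was visible in this snapshot of the neighbor table.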
At the Transport layer, we isolate problems that typically occur with TCP or UDP traffic. These may involve excessive retransmissions or packet fragmentation, either of which can cause network performance to degrade or collapse entirely. Problems at this layer can be difficult to track down because, unlike the lower layers, they often don't involve a complete loss of connectivity. Additionally, Transport layer problems often involve the blocking of traffic at the level of individual TCP or UDP ports. If you've ever been able to ping a server but could not connect via a known port, you've seen a Transport layer problem.
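The "ping works but the port doesn't" symptom can be probed directly. A minimal sketch for plain TCP (the host and port you'd pass in are, naturally, specific to your own environment):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.
    If ICMP ping to the host works but this fails, suspect the
    Transport layer: a firewall rule, a closed port, or a service
    that simply isn't listening."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

This check deliberately says nothing about *why* a port is closed; it only narrows the search to the Transport layer and above.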
The Session, Presentation, and Application layers are often lumped together because more recent interpretations of the OSI model tend to blur the lines between these three layers. Troubleshooting at these layers involves problems in the applications that rely on the network.
These problems can involve DNS, NetBIOS, or other name-resolution failures; application issues on the host OS; or high-level protocol failures and misconfigurations. Examples of these high-level protocols are HTTP, SMTP, FTP, and others that typically "use the network" rather than "run the network." Additionally, specialised external attacks such as "man-in-the-middle" attacks can occur at these layers.
Network problems can and do occur at any layer in the model. And because the model is so widely understood by network administrators, it becomes a good measuring stick for communicating those problems between triaging administrators. If you've ever worked with another administrator who uses language like, "This looks like a Layer 4 problem," you can immediately understand the general area (the Transport layer) in which the problem may be occurring.
You'll often hear seasoned network administrators refer to problems by their layer number. For example, "that's at layer 3" can mean an IP connectivity problem, while "layer 4" might point to a blocked or closed network port. Network administrators jokingly refer to problems that occur within a system, rather than the network itself, as those "at layer 7."
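This shorthand is easy to encode. The table below is simply the standard layer-number-to-name mapping; nothing in it is vendor- or environment-specific:

```python
# Standard OSI layer numbering, as used in admin shorthand.
OSI_LAYERS = {
    1: "Physical", 2: "Data Link", 3: "Network", 4: "Transport",
    5: "Session", 6: "Presentation", 7: "Application",
}

def layer_name(n: int) -> str:
    """Translate 'that's at layer N' shorthand into the layer name."""
    return OSI_LAYERS.get(n, "unknown")
```

So a "layer 4 problem" resolves to the Transport layer, exactly as in the example above.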
Let's talk about three different ways in which you can progress through this model during a typical problem isolation activity.
Three different approaches
Network administrators who use OSI as a troubleshooting framework typically navigate the model in one of three ways: Bottom-Up, Top-Down, and Divide-and-Conquer. Depending on how the problem manifests and on their experience level, they may choose one method over another; each approach has its utility for a different type of problem. Let's look at each.
The Bottom-Up approach simply means that administrators start at the bottom of the OSI model and work their way up through the various layers, ruling out potential root causes as they go. An administrator using the Bottom-Up approach will typically start by looking at Physical layer issues, determine whether a break in network connectivity has occurred, then work up through network interface configurations and error rates, and continue through IP and TCP/UDP errors such as routing, fragmentation, and blocked ports before looking at the individual applications experiencing the problem.
This approach works best in situations in which the network is fully down or experiencing numerous low-level errors. It also works best when the problem is particularly complex. In complex problems, the faulting application often does not provide enough debugging data to the administrator to give insight as to the problem. Thus, a network-focused approach works best.
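The Bottom-Up walk can be sketched as an ordered list of per-layer probes. The check functions below are stand-in lambdas; in practice each would wrap a real diagnostic (link state, interface error counters, a ping, a port test):

```python
def bottom_up(checks):
    """Run per-layer health checks in ascending OSI order and return
    the first (layer, description) that fails, or None if all pass."""
    for layer, description, check in sorted(checks, key=lambda c: c[0]):
        if not check():
            return layer, description
    return None

# Stand-in probes; replace each lambda with a real diagnostic.
checks = [
    (1, "physical link is up",      lambda: True),
    (2, "interface errors are low", lambda: True),
    (3, "gateway responds to ping", lambda: False),
    (4, "service port accepts TCP", lambda: True),
]
```

With these stub values, `bottom_up(checks)` stops at the Network layer and reports the failed ping. A Top-Down pass is the same loop with the sort reversed, which is why the two approaches share so much tooling in practice.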
The Top-Down approach is the reverse of the Bottom-Up approach in that the administrator starts at the top of the OSI model first, looking at the faulted application and attempting to track down why that application is faulted. This model works best when the network is in a known-good state and a new application or application reconfiguration is being completed on the network. The administrator can start by ensuring the application is properly configured, then work downward to ensure that full IP connectivity and appropriate ports are open for proper functionality of the application. Once all upper-level issues are resolved, a back-check on the network can be done to validate its proper functionality. As said earlier, this approach is typically used when the network itself is believed to be functioning correctly but a new network application is being introduced or an existing one is being reconfigured or repurposed.
The Divide-and-Conquer approach is a fancy name for the "gut feeling" approach. It is typically used by seasoned administrators who have a good internal understanding of the network and the problems it can face. The Divide-and-Conquer approach relies on an innate feeling for where the problem may lie: start with that layer of the OSI model first, and work outward from that location. This approach can also be used for trivial issues that the administrator has seen before.
However, this approach has the downfall of often being non-scientific enough to properly diagnose a difficult problem. If the problem is complex in nature, the Divide-and-Conquer approach may not be structured enough to track down the issue.
Figure 4.2: Depending on the type of problem, a Bottom-Up, Top-Down, or Divide-and-Conquer approach may be best for isolating the root cause of the problem.
No matter which approach you use, until you develop that "gut instinct" for your network and its unique characteristics, you should follow a structured troubleshooting method. Although utilising a structured method can increase the time needed to resolve a problem, it tracks down the root cause without missing key items, rather than merely "band-aiding" the symptoms.
The 10-year partnership, known as the Novartis-MIT Center for Continuous Manufacturing, will work to develop new technologies that could replace the conventional batch-based system in the pharmaceuticals industry - which often includes many interruptions and work at separate sites - with continuous manufacturing processes from start to finish.
The Novartis-MIT Center for Continuous Manufacturing combines the industrial expertise of Novartis with MIT's leadership in scientific and technological innovation. Novartis will invest USD 65 million in research activities at MIT over the next 10 years.
"This partnership demonstrates our commitment to lead not only in discovering innovative treatments for patients but also in improving manufacturing processes, which are critical to ensuring a high-quality, efficient and reliable supply of medicines to patients. Our collaboration with MIT, a worldwide leader in developing cutting edge technologies, holds the promise to achieve a quantum leap in the production of pharmaceuticals, a field which has received rather little attention in the past," said Dr. Daniel Vasella, Chairman and CEO of Novartis.
Novartis and MIT expect the technologies created in this collaboration will benefit patients and healthcare providers through a positive impact on supply availability and the quality of medicines. These technologies will also seek to reduce the environmental impact of manufacturing activities.
"The Novartis-MIT Center for Continuous Manufacturing has the potential to revolutionize drug development and production," said Susan Hockfield, MIT President. "We are delighted to collaborate with Novartis to help improve the way that drugs are manufactured so that patients have quicker and more reliable access to the medications they need. The new educational opportunities that this program will provide for our students make this partnership even more exciting."
The pharmaceutical industry currently relies on batch-based manufacturing that has been common practice for many years, even though other industries have moved to continuous manufacturing.
In this often time-consuming process, pharmaceutical active ingredients are synthesized in a chemical manufacturing plant. These ingredients are then shipped to a manufacturing facility, often at another site, where they are converted through defined processes into giant batches of pills, liquid or cream. With multiple interruptions, including transport to separate locations, each batch may take weeks to produce. In addition, manufacturing design and scale-up for a new drug are very costly and time-consuming.
Expected benefits of continuous manufacturing include accelerating the introduction of new drugs by designing production processes earlier; using smaller production facilities, with lower building and capital costs; minimizing waste, energy consumption and raw material use; monitoring quality assurance on a continuous basis instead of post-production batch-based testing; and enhancing process reliability and flexibility to respond to market needs.
The initial research of the Novartis-MIT Center for Continuous Manufacturing will be conducted primarily through Ph.D. programs at MIT laboratories, and then transferred to Novartis for further development to industrial-scale projects.
The partners expect the Center's work to involve seven to ten MIT faculty members, as well as students, postdoctoral fellows and staff scientists. Novartis will commit its manufacturing and R&D resources and will pilot new manufacturing processes with one of its pharmaceutical products.
The Massachusetts Institute of Technology, a co-educational privately endowed research university, is dedicated to advancing knowledge and educating students in science, technology, and other areas of scholarship to serve the nation and world. The Institute has more than 900 faculty and 10,000 undergraduate and graduate students.
MIT's commitment to innovation has led to a host of scientific breakthroughs and technological advances. Achievements include the first chemical synthesis of penicillin and vitamin A, the development of inertial guidance systems, modern technologies for artificial limbs, and the magnetic core memory that led to the development of digital computers. Sixty-three alumni, faculty, researchers and staff have won Nobel Prizes.
Novartis AG is a world leader in offering medicines to protect health, cure disease and improve well-being. Our goal is to discover, develop and successfully market innovative products to treat patients, ease suffering and enhance the quality of life. We are strengthening our medicine-based portfolio, which is focused on strategic growth platforms in innovation-driven pharmaceuticals, high-quality and low-cost generics, human vaccines and leading self-medication OTC brands. Novartis is the only company with leadership positions in these areas. In 2006, the Group's businesses achieved net sales of USD 37.0 billion and net income of USD 7.2 billion. Approximately USD 5.4 billion was invested in R&D. Headquartered in Basel, Switzerland, Novartis Group companies employ more than 100,000 associates and operate in over 140 countries around the world.
Professor Edward H. Adelson of brain and cognitive sciences will hold a five-year John and Dorothy Wilson Professorship. John J. Wilson (S.B. 1929, S.M. 1930), a life member of the MIT Corporation since 1958, established the professorship.
Assistant Professor Sandy Alexandre of the literature section in the School of Humanities, Arts, and Social Sciences will hold the three-year Class of 1948 Career Development Professorship, established by the Class in celebration of its 40th reunion.
Assistant Professor Markus J. Buehler of civil and environmental engineering will hold an Esther and Harold E. Edgerton Career Development Professorship for a three-year term. The Edgerton professorships were established in 1973 by the MIT Corporation to honor the Edgertons.
Associate Professor Ian Condry of the foreign languages and literatures section in the School of Humanities, Arts, and Social Sciences will be the next holder of the three-year Mitsui Career Development Professorship. The Mitsui Group established the Mitsui Chairs in 1980 to encourage cultural and technological exchange between the United States and Japan.
Assistant Professor Alexander D'Hooghe will be the next holder of the Class of 1922 Career Development Professorship for a three-year term.
Professor Jesus A. del Alamo of electrical engineering and computer science will hold the five-year Donner Professorship, established with a grant from the Donner Foundation in 1945.
Assistant Professor Vivek Goyal of electrical engineering and computer science will hold an Esther and Harold E. Edgerton Career Development Professorship for a three-year term. The Edgerton professorships were established in 1973 by the MIT Corporation to honor the Edgertons.
Assistant Professor Michael Hemann of biology will hold the three-year Latham Family Career Development Professorship, established by Allen Latham Jr. (S.B. 1930) and his wife, Ruth.
Associate Professor Dina Katabi of electrical engineering and computer science will hold the Class of 1947 Career Development Professorship for a three-year term.
Assistant Professor Manolis Kellis of electrical engineering and computer science is the next holder of the three-year Van Tassel Career Development Professorship. Van Tassel, a member of the Class of 1925, established the chair in 1986.
Assistant Professor Katherine C. Kellogg of MIT Sloan will hold the Class of 1954 Career Development Professorship for a three-year term. The Class established this chair in celebration of its 40th reunion.
Assistant Professor Michael T. Laub of biology will be the Whitehead Career Development Professor for a three-year term.
Professor Richard Locke of MIT Sloan was selected as a Class of 1960 Fellow for a two-year term; he won a Class of 1960 Innovation in Education Award.
Associate Professor John Maeda of the MIT Media Lab was selected as a Class of 1960 Fellow for a two-year term; he also won a Class of 1960 Innovation in Education Award.
Professor Dianne Newman of biology will hold a five-year John and Dorothy Wilson Professorship, established in the 1960s by John J. Wilson (S.B. 1929, S.M. 1930), a life member of the MIT Corporation since 1958.
Professor Jonas Peters of chemistry will hold the W.M. Keck Professorship of Energy for a five-year term. The Keck Foundation established the professorship.
Professor Ram Sasisekharan of biological engineering will hold the five-year Underwood-Prescott Professorship of Toxicology, established in 1972 by a gift from the Underwood Company.
Associate Professor Jay Scheib of the music and theater arts section in the School of Humanities, Arts, and Social Sciences will hold the three-year Class of 1958 Career Development Professorship, established by the Class of 1958 in celebration of its 25th reunion.
Assistant Professor Gabriella Sciolla of physics will hold the Cecil and Ida Green Career Development Professorship for a three-year term. Cecil, a member of the Class of 1923, and Ida Green established the professorship.
Professor Gigliola Staffilani of mathematics is the next holder of the five-year Abby Rockefeller Mauze Professorship, established in 1963 by Laurence Rockefeller and the Rockefeller Brothers Fund.
Professor Donca Steriade of linguistics will hold the five-year Class of 1941 Professorship, established by the Class of 1941 in celebration of its 40th reunion.
Assistant Professor Collin Stultz of electrical engineering and computer science will hold the W.M. Keck Career Development Professorship for a three-year term. The Keck Foundation established the professorship.
Assistant Professor Katrin Wehrheim of mathematics will hold the Rockwell International Career Development Professorship for a three-year term. The Rockwell International Corporation Trust endowed the Rockwell Professorship in 1985.