
Monday, February 25, 2008

Google to store patients' health records in test of new service

The U.S. health-care system is the most costly in the world. Yet it's also remarkably antiquated. The medical records of as many as 90% of patients are hidden away in old-fashioned filing cabinets in doctors' offices. Prescriptions are scribbled on paper. Most Americans need to fill out separate medical histories for each specialist they visit.

"We are trained, like Pavlov's dogs, to repeat the same information 17 times," says Scott Wallace, chief executive officer of the National Alliance for Health Information Technology, a not-for-profit alliance of health care providers, information technology vendors, and health and technology associations. The result: mistakes, duplicated tests, botched diagnoses, and billions of dollars in unnecessary costs and lost productivity.

Many providers, including Kaiser Permanente and Cleveland Clinic, have invested millions of dollars in information technology systems and creating electronic medical records for patients. Here's the rub: Much of that information can't be shared from one doctor or hospital to the next. As a result, blood-test results in the database of an Arizona doctor, for instance, are of little use when the patient is visiting a doctor halfway across the country.

Linking systems "is the real challenge in this industry," says Dr. C. Martin Harris, chief information officer at the Cleveland Clinic.

Pilot Project Kicks Off

In an effort to meet that challenge, the Cleveland Clinic and Google on Feb. 21 announced a project to give patients and doctors better access to electronic medical records. "It is clear that one of the big needs is assembling health records from a variety of places and giving people control of those records," explains Marissa Mayer, vice-president for search products and user experience at Google. And while the Cleveland/Google project may revolutionize medical record-keeping and improve how hospitals and physicians provide care, it also raises concerns over patient privacy and the security of sensitive information.

Here's how the pilot project, officially begun on Feb. 18, works. The Cleveland Clinic already keeps electronic records for all its patients. The system has built-in smarts, so that it will alert doctors about possible drug interactions or when it's time for, say, the next mammogram. In addition, 120,000 patients have signed up for a service called eCleveland Clinic MyChart, which lets patients access their own information on a secure Web site and electronically renew prescriptions and make appointments.

The system has dramatically cut the number of routine calls to the doctor and boosted productivity, though it has yet to effectively deal with information from an outside physician, Harris says. Those records are typically still on paper, and have to be laboriously added to the Cleveland Clinic system. It is a big problem, especially for the clinic's many patients who spend winters in Florida or Arizona, where they see other doctors.

Adding Google's technology lets patients jump from their MyChart page to a Google account. Once on Google, they'll see the relevant health plans and doctors that also keep electronic medical records. That means the patient can choose to share information between, say, the Arizona doctor and the Cleveland Clinic.

The system is still fairly primitive compared with sophisticated electronic information-sharing systems such as an ATM network. The information being shared is limited to data on allergies, medications, and lab results. That's because this data is more easily put in a standard form that can be read by different computer systems.
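The point about standard form can be made concrete with a small, entirely hypothetical sketch: two providers export allergy and medication lists in different shapes, and a shared schema is what makes a merge possible at all. All field names and record layouts below are invented for illustration; they are not Google's, MyChart's, or any real exchange format.

```python
# Hypothetical sketch: why a narrow, standardized subset of a chart
# (allergies, medications) is easier to exchange than a free-form record.
# Every schema and field name here is illustrative only.

def normalize_cleveland(record):
    """Map one hypothetical provider's export to a shared schema."""
    return {
        "allergies": sorted(a.lower() for a in record["Allergies"]),
        "medications": sorted(m["name"].lower() for m in record["Meds"]),
    }

def normalize_arizona(record):
    """Map a second, differently shaped export to the same schema."""
    return {
        "allergies": sorted(record["allergy_list"].lower().split(";")),
        "medications": sorted(record["rx"]),
    }

def merge(*charts):
    """Union the normalized charts so either doctor sees both histories."""
    merged = {"allergies": set(), "medications": set()}
    for chart in charts:
        merged["allergies"].update(chart["allergies"])
        merged["medications"].update(chart["medications"])
    return {key: sorted(values) for key, values in merged.items()}

cleveland = normalize_cleveland({"Allergies": ["Penicillin"],
                                 "Meds": [{"name": "Lisinopril"}]})
arizona = normalize_arizona({"allergy_list": "penicillin;sulfa",
                             "rx": ["metformin"]})
combined = merge(cleveland, arizona)
```

Once both exports are in the shared shape, the merge is a one-line set union; without that agreed schema, each pair of systems would need custom translation, which is exactly the interoperability gap the article describes.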

Paring Health-Care Costs

For the health-care system as a whole, though, it's an important move forward.

"What Martin [Harris] has done is revolutionary," says Wallace of the National Alliance for Health Information Technology. "It may not be the perfect solution, but it is a better solution than we have now." Over time, more information can be added, and more patients and doctors will be able to access the records. And if the pilot program works, Google intends to roll out a comparable service for the general public.

One payoff: cutting health-care costs. "There's a real potential to affect the slope of the health-care cost curve," Harris says. "I believe this kind of exchange is the way we will get the total value out of an electronic medical record."

Projects like the one started by Cleveland and Google could also have big implications for business. Companies want employees to take greater charge of their health care. Experts say employees can do a better job of that by gaining control over, and access to, their records, and that they'll get a leg up, technology-wise, from the participation of such players as Google and Microsoft. "I think Google is spectacular on this," Wallace says. "Health care is a mainstream issue, and getting the purveyors of information involved in this is a brilliant step."

What's In It for Google?

How the e-health program plays out for Google is less clear. Mountain View (Calif.)-based Google is not the first high-tech giant to dip a toe into health care. Microsoft, for example, launched a health records and information service, HealthVault, in October 2007. The company has more than 100 partners, including the Mayo Clinic, a nonprofit medical practice and large online health-information network, and hopes to use its large health software business to help bring new players on board.

On Feb. 20, the company released source code to help outside organizations and developers integrate their information and build programs around the HealthVault platform. "We think that we are the best health search out there, and we think more and more we are going to convince people of that," says Sean Nolan, HealthVault's chief architect.

Being late to the game has hurt Google in the past. The company's finance site, launched in May 2006, has failed to gain much traction. It ranks 16th in the business information category of Hitwise, a company that measures Web traffic. Yahoo's much older finance site has remained No. 1 for much of the past three years. Similarly, Google's payment service, Google Checkout, launched in June 2006, has failed to grab market share from eBay's leading payments service, PayPal.

Thorny Privacy Issues

When it comes to online health information, the obvious prize is the estimated $500 million to $1 billion market for health-search advertising. Google won't admit to aiming for that market, though, and those familiar with the project suggest revenue could come from other sources. "They aren't wedded to advertising," Wallace says. "Their attitude is that this is such a nascent area, they can play around for a while and find a way to make huge amounts of money." It's not yet clear how that might happen. "The unanswered question is what is the business model that justifies the investment of these big players," says David Lansky, senior director of the health program at the John and Mary R. Markle Foundation, a nonprofit dedicated to improving information technology in health care.

One worry is that the companies might be tempted to sell personal information. While strict laws govern patient privacy at hospitals and health-care providers, "there is no federal regulation of what these middle-layer players can do with your data," Lansky explains. And while consumers might trust Google or Microsoft now, what might happen in years or decades? "This is deeply personal information that is being collected about you and your family," says Jeff Chester, executive director of the Center for Digital Democracy. "There is unease about marketers being able to access that vast range of information."


MIT neuroscientists see design flaws in computer vision tests

The human brain easily recognizes that these cars are all the same object, but the variations in the car's size, orientation and position are a challenge for computer-vision algorithms.

For years, scientists have been trying to teach computers how to see like humans, and recent research has seemed to show computers making progress in recognizing visual objects.

A new MIT study, however, cautions that this apparent success may be misleading because the tests being used are inadvertently stacked in favor of computers.

Computer vision is important for applications ranging from "intelligent" cars to visual prosthetics for the blind. Recent computational models show apparently impressive progress, boasting 60-percent success rates in classifying natural photographic image sets. These include the widely used Caltech101 database, intended to test computer vision algorithms against the variety of images seen in the real world.

However, James DiCarlo, a neuroscientist in the McGovern Institute for Brain Research at MIT, graduate student Nicolas Pinto and David Cox of the Rowland Institute at Harvard argue that these image sets have design flaws that enable computers to succeed where they would fail with more-authentically varied images. For example, photographers tend to center objects in a frame and to prefer certain views and contexts. The visual system, by contrast, encounters objects in a much broader range of conditions.

"The ease with which we recognize visual objects belies the computational difficulty of this feat," explains DiCarlo, senior author of the study in the Jan. 25 online edition of PLoS Computational Biology. "The core challenge is image variation. Any given object can cast innumerable images onto the retina depending on its position, distance, orientation, lighting and background."

The team exposed the flaws in current tests of computer object recognition by using a simple "toy" computer model inspired by the earliest steps in the brain's visual pathway. Artificial neurons with properties resembling those in the brain's primary visual cortex analyze each point in the image and capture low-level information about the position and orientation of line boundaries. The model lacks the more sophisticated analysis that happens in later stages of visual processing to extract information about higher-level features of the visual scene such as shapes, surfaces or spaces between objects.
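The kind of low-level measurement described above can be sketched with a toy example. This is illustrative pure Python, not the authors' actual model: two kinds of units that respond to local luminance steps, one per orientation, capturing the line-boundary information the paragraph describes and nothing more.

```python
# Toy "V1-like" stage (illustrative only, not the MIT model): sum the
# responses of units tuned to vertical and horizontal luminance edges.

def edge_energies(img):
    """Left-right differences signal vertical edges; top-bottom
    differences signal horizontal edges. img is a 2-D list of numbers."""
    vertical = horizontal = 0
    for r in range(len(img) - 1):
        for c in range(len(img[0]) - 1):
            vertical += abs(img[r][c] - img[r][c + 1])    # vertical-edge unit
            horizontal += abs(img[r][c] - img[r + 1][c])  # horizontal-edge unit
    return vertical, horizontal

def dominant_orientation(img):
    """Report which orientation drives the stronger total response."""
    vertical, horizontal = edge_energies(img)
    return "vertical" if vertical > horizontal else "horizontal"

stripes_v = [[0, 1, 0],
             [0, 1, 0],
             [0, 1, 0]]   # a vertical bar
stripes_h = [[0, 0, 0],
             [1, 1, 1],
             [0, 0, 0]]   # a horizontal bar
```

Note what such a stage cannot do: it reports where and how edges are oriented, but nothing about shapes, surfaces, or objects, which is why the researchers expected it to fail on object recognition.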

The researchers intended this model as a straw man, expecting it to fail as a way to establish a baseline. When they tested it on the Caltech101 images, however, the model did surprisingly well, with performance similar to or better than that of five state-of-the-art object-recognition systems.

How could that be? "We suspected that the supposedly natural images in current computer vision tests do not really engage the central problem of variability, and that our intuitions about what makes objects hard or easy to recognize are incorrect," Pinto explains.

To test this idea, the authors designed a more carefully controlled test. Using just two categories--planes and cars--they introduced variations in position, size and orientation that better reflect the range of variation in the real world.

"With only two types of objects to distinguish, this test should have been easier for the 'toy' computer model, but it proved harder," Cox says. The team's conclusion: "Our model did well on the Caltech101 image set not because it is a good model but because the 'natural' images fail to adequately capture real-world variability."
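A hypothetical miniature of this experiment (not the authors' benchmark or model) shows why controlled variation matters: a naive raw-pixel matcher that looks perfect when objects appear exactly where they did in training misclassifies the same object after a small shift. The two "categories" and all image sizes below are invented for illustration.

```python
# Illustrative sketch of the paper's point: raw-pixel matching succeeds
# on photographer-framed (centered) images and breaks under translation.

def make_image(size, obj, top, left):
    """Blank size x size image with a small 'object' pasted at (top, left)."""
    img = [[0] * size for _ in range(size)]
    for r, row in enumerate(obj):
        for c, val in enumerate(row):
            img[top + r][left + c] = val
    return img

def pixel_distance(a, b):
    """Sum of absolute pixel differences between two images."""
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

# Two toy "categories", each seen at exactly one position during training.
plane = [[1, 1, 1]]        # a horizontal streak
car = [[1], [1], [1]]      # a vertical streak
train = {"plane": make_image(5, plane, 2, 1),
         "car": make_image(5, car, 1, 2)}

def classify(img):
    """Nearest neighbor on raw pixels against the training images."""
    return min(train, key=lambda label: pixel_distance(img, train[label]))

centered_plane = make_image(5, plane, 2, 1)  # framed exactly as in training
shifted_plane = make_image(5, plane, 1, 1)   # same object, moved one row up
```

In this toy setup the matcher is flawless on training-framed images yet calls the shifted plane a "car", mirroring the finding that image sets without real positional variation can make a weak model look strong.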

As a result, the researchers argue for revamping the current standards and images used by the computer-vision community to compare models and measure progress. Before computers can approach the performance of the human brain, they say, scientists must better understand why the task of object recognition is so difficult and the brain's abilities are so impressive.

One approach is to build models that more closely reflect the brain's own solution to the object recognition problem, as has been done by Tomaso Poggio, a close colleague of DiCarlo's at the McGovern Institute.

Trip of a lifetime: MIT hosts next generation of science leaders

MIT Professor Eric Lander describes the future of biology in an inspirational talk to high school students as part of the annual meeting of the American Junior Academy of Science.

It's not every day that high school students get the chance to visit MIT research labs and see concepts that they've learned about in classes come to life.

But that's exactly what happened Thursday, Feb. 14, as high-schoolers from around the country descended on MIT as part of the annual meeting of the American Junior Academy of Science (AJAS).

The AJAS meeting was held in conjunction with the annual meeting of the American Association for the Advancement of Science in Boston. Most of the 120 high school students in attendance won their way to Boston through science fair projects, which they presented at a poster session on Friday, Feb. 15.

On Thursday, the students got a taste of life and research at MIT, including lab tours, an afternoon at the MIT Museum and a talk by MIT Biology Professor Eric Lander.

Lander, director of the Broad Institute, offered students a glimpse of cutting-edge research in the field of genomics--something they will not learn about in their biology classes, he said.

"Textbooks always tell you about what we know, but what's interesting is what we don't know," said Lander. "Textbooks don't like to write about what we don't know, because it's hard to test you on it."

Lander told the students that biology is in the midst of a revolution that will transform the field, much as the development of the periodic table of elements transformed the study of chemistry in the 1800s.

The sequencing of the human genome, completed in 2003, is just the first step of that revolution, Lander said. Ongoing projects to map human genetic variation and determine the function of all human genes will open even more doors.

In about 10 or 15 years, scientists will have unprecedented resources and knowledge at their fingertips to help them study how human diseases arise and how to fight them, Lander said.

"The high school students of 2025 are not going to be able to understand what it was like to study biology in the benighted 20th century," he said.

After Lander's talk, students flocked to the front of the Broad Auditorium to ask questions or have their photo taken with him.

"His talk was incredible," said Zach Silver, a student from Pine Crest High School in Ft. Lauderdale, Fla.

Students also had the chance to tour about 20 MIT research labs. One group visited the Department of Aeronautics and Astronautics, where they heard from graduate students and postdoctoral researchers working on a variety of projects.

Christy Edwards, a graduate student in aero-astro, demonstrated the microsatellites that MIT students have been developing for several years.

Three of the volleyball-sized satellites are now onboard the International Space Station, where scientists will fine-tune the satellites' performance before they are sent into space on their own. "It's like driver's ed for satellites," Edwards explained.

In the Man-Vehicle Lab, the students got a peek at the lightweight, skintight spacesuit that Professor Dava Newman and her students are designing for future excursions in space.

Sachein Sharma, a sophomore at the Texas Academy of Science, said he enjoyed the tour of MIT's aero-astro projects. "It's very interesting and exciting," he said. "MIT is a great place for that kind of thing."

Sharma won his way to the AJAS conference by designing a new type of blade for a wind turbine. He envisions that someday wind turbines could be used in space, possibly on Mars' surface.

His research advisor, Cathy Bambanek, a chemistry teacher at the Texas Academy of Science, said she was impressed that much of MIT's research seems to be student driven.

"It's amazing," she said. "It seems like the students have a lot of input as far as what kind of projects they would like to work on."

The day at MIT was hosted by biology instructor Mandana Sassanfar and sponsored by the School of Science, School of Engineering, School of Architecture and Planning, Department of Biology and MIT Museum.

Learning about brains from computers, and vice versa

For many years, Tomaso Poggio's lab at MIT ran two parallel lines of research. Some projects were aimed at understanding how the brain works, using complex computational models. Others were aimed at improving the abilities of computers to perform tasks that our brains do with ease, such as making sense of complex visual images.

But recently Poggio has found that the work has progressed so far, and the two tasks have begun to overlap to such a degree, that it's now time to combine the two lines of research.

He'll describe his lab's change in approach, and the research that led up to it, at the American Association for the Advancement of Science annual meeting in Boston on Saturday, Feb. 16. Poggio will also participate in a news briefing Friday, Feb. 15, at 3 p.m.

The turning point came last year, when Poggio and his team were working on a computer model designed to figure out how the brain processes certain kinds of visual information. As a test of the vision theory they were developing, they tried using the model vision system to actually interpret a series of photographs. Although the model had not been developed for that purpose--it was just supposed to be a theoretical analysis of how certain pathways in the brain work--it turned out to be as good as, or even better than, the best existing computer-vision systems, and as good as humans, at rapidly recognizing certain kinds of complex scenes.

"This is the first time a model has been able to reproduce human behavior on that kind of task," says Poggio, the Eugene McDermott Professor in MIT's Department of Brain and Cognitive Sciences and Computer Science and Artificial Intelligence Laboratory.

As a result, "My perspective changed in a dramatic way," Poggio says. "It meant that we may be closer to understanding how the visual cortex recognizes objects and scenes than I ever thought possible."

The experiments involved a task that is easy for people but very hard for computer vision systems: recognizing whether or not there were any animals present in photos that ranged from relatively simple close-ups to complex landscapes with a great variety of detail. It's a very complex task, since "animals" can include anything from snakes to butterflies to cattle, against a background that might include distracting trees or buildings. People were shown the scenes for just a fraction of a second, a task that engages a particular part of the human visual cortex, known as the ventral visual pathway, to recognize what is seen.

The visual cortex is a large part of the brain's processing system, and one of the most complex, so reaching an understanding of how it works could be a significant step toward understanding how the whole brain works--one of the greatest problems in science today.

"Computational models are beginning to provide powerful new insights into the key problem of how the brain works," says Poggio, who is also co-director of the Center for Biological and Computational Learning and an investigator at the McGovern Institute for Brain Research at MIT.

Although the model Poggio and his team developed produces surprisingly good results, "we do not quite understand why the model works as well as it does," he says. They are now working on developing a comprehensive theory of vision that can account for these and other recent results from the lab.

"Our visual abilities are computationally amazing, and we are still far from imitating them with computers," Poggio says. But the new work shows that it may be time for researchers in artificial intelligence to start paying close attention to the latest developments in neuroscience, he says.

MIT's crossword king girds for annual battle of wits

Math professor and crossword puzzle fiend Kiran Kedlaya works on Friday's New York Times puzzle. He will compete in the American Crossword Puzzle Tournament on Feb. 29.

A surprising number of crossword puzzle fans have backgrounds in math, computer science or some other technical field. That's certainly the case at MIT, where graduate students in the math department gather most weekday afternoons over tea to tackle The New York Times crossword puzzle.

Teamwork is encouraged, but one person usually stays on the sidelines: associate math professor Kiran Kedlaya, a champion crossword puzzle solver who could likely finish the puzzle by himself in less than 10 minutes.

"They won't let him join, because he's too good," says Michael Sipser, head of the math department.

Kedlaya, one of the top crossword solvers in the United States, is heading to Brooklyn, N.Y., this weekend for his 11th appearance in the American Crossword Puzzle Tournament.

Kedlaya studies number theory and algebraic geometry in his academic life; he enjoys crosswords because they let him combine his math skills with his interest in words and language.

"When I do crosswords, I'm using a part of my brain I don't get to use much in my job," says Kedlaya, who also composes puzzles for MIT's Mystery Hunt, held during IAP.

His best crossword tournament finish was in 2006, when he came in second place. He began doing crossword puzzles seriously in college, then started going to the tournament, made famous in the 2006 documentary "Wordplay," while a grad student at MIT.

Kedlaya finished fourth in 2005, the year the tournament was filmed for "Wordplay," and makes a couple of brief appearances in the movie.

To get ready for the tournament, Kedlaya does two or three crossword puzzles a day, mostly from The New York Times. Practice is critical to improve speed and perform well in the tournament, he says.

"Some people might be naturally good at crosswords, but you don't get to be this fast without training," says Kedlaya. "There's something similar to athletic training going on here. People are getting into shape, doing their daily puzzles, trying to get psyched up."

The New York Times crossword puzzles, which get more difficult as the week goes on, are the gold standard by which puzzle solvers judge themselves. For a really good solver, Monday's puzzle would take about three minutes, while Friday and Saturday puzzles would take seven to 10 minutes, says Kedlaya. Sunday's puzzle, which is larger, could take eight to 15 minutes.

Unlike Scrabble, another game popular with math-oriented people, crossword puzzles require knowledge of word meanings. Kedlaya suspects that is why they appeal to disparate groups of people like mathematicians and computer scientists, writers and editors, and musicians.

"There seems to be some conflation between math skills, music skills and language skills," he says.

The national tournament draws several hundred people and is the pre-eminent event for crossword puzzle solvers. Competitors solve seven puzzles during the first day of the event, with a 15-minute time limit for each one. The atmosphere can get pretty intense, Kedlaya says.

"It's like taking the SAT," he says. "Everyone is in there working on their paper. Nobody is talking."

The solvers with the top three scores (fastest times and fewest mistakes) compete in the final round, held on the second day. In that round, which Kedlaya reached in 2006, finalists solve the puzzles on large easels at the front of an auditorium, wearing headphones to block out the color commentary broadcast to spectators.

"It is pretty stressful," Kedlaya says. "You're standing in front of a room of more than 500 people, writing on an easel that is sturdy, but not completely sturdy. If you hit it too hard, it shakes."

Teaching in front of a large classroom is excellent practice for this, he says.

For the past three years, 23-year-old Tyler Hinman has won the tournament. However, Kedlaya says he's confident about his own chances going into the tournament this year. "There are a number of solvers that have a shot at the top place, and I'm in that group," he says.

MetaRAM Develops New Technology That Quadruples Memory Capacity

MetaRAM Develops New Technology That Quadruples Memory Capacity of Servers and Workstations; Reduces Price by Up to 90 Percent

MetaSDRAM™ for AMD and Intel®-Based Systems Now Available

MetaRAM, a fabless semiconductor company focused on improving memory performance, today announced the launch of DDR2 MetaSDRAM™, a new memory technology that significantly increases server and workstation performance while dramatically decreasing the cost of high-performance systems. Using MetaRAM's DDR2 MetaSDRAM, a quarter-terabyte, four-processor server with 16 cores starts at under $50,000*, up to a 90 percent reduction in system cost -- all without any system modifications. MetaSDRAM, designed for AMD Opteron™ and Intel® Xeon®-based systems, is currently available in R-DIMMs from Hynix Semiconductor, Inc. and SMART Modular Technologies. Servers and workstations from Appro, Colfax International, Rackable Systems and Verari Systems are expected in the first quarter of 2008.

"I've spent my career focused on building balanced computer systems and providing compatible and evolutionary innovations. With the emergence of multi-core and multi-threaded 64 bit CPUs, I realized that the memory system is once again the biggest bottleneck in systems and so set out to address this problem," said Fred Weber, CEO of MetaRAM. "MetaRAM's new MetaSDRAM does just that by bringing breakthrough main memory capacity to mainstream servers at unprecedented price points, without requiring any changes to existing CPUs, chipsets, motherboards, BIOS or software."

MetaSDRAM is a drop-in solution that closes the gap between processor computing power, which doubles every 18 months -- and DRAM capacity, which doubles only every 36 months. Until now, the industry addressed this gap by adding higher capacity, but not readily available, and exponentially more expensive DRAM to each dual in-line memory module (DIMM) on the motherboard.

The MetaSDRAM chipset, which sits between the memory controller and the DRAM, solves the memory capacity problem cost effectively by enabling up to four times more mainstream DRAMs to be integrated into existing DIMMs without the need for any hardware or software changes. The chipset makes multiple DRAMs look like a larger capacity DRAM to the memory controller. The result is "stealth" high-capacity memory that circumvents the normal limitations set by the memory controller. This new technology has accelerated memory technology development by 2-4 years.

MetaRAM Company Details

MetaRAM received its first round of funding in January 2006, demonstrated its first working samples in July 2007 and released its first chipset into production in November 2007. The company was co-founded by industry luminary and former AMD CTO Fred Weber and is funded by venture firms including Kleiner Perkins Caufield & Byers, Khosla Ventures, Storm Ventures and Intel Capital.

"Kleiner Perkins invested in MetaRAM because we believed in the founders and their technical vision. MetaRAM has assembled a first class team and executed flawlessly in bringing the DDR2 MetaSDRAM chipset to market in a short period of time. MetaRAM has the leadership, vision, and talent to challenge existing technological limitations and open new capabilities for computing," said Bill Joy, Partner of Kleiner Perkins Caufield and Byers, and a member of MetaRAM's board of directors.

"The rapid adoption of Quad-Core Intel® Xeon® processors and platform virtualization, combined with the growth of data intensive applications, is driving demand for increased server memory capacity," said Bryan Wolf, managing director, Enterprise Platforms, Intel Capital. "MetaRAM's technology presented an opportunity for Intel to participate as both an investor and a strategic technology collaborator to deliver a compatible solution that enhances system performance."

MetaSDRAM Technical Details

MetaSDRAM, underpinned by more than 50 pending patents, solves the memory capacity problem affordably by enabling multiple mainstream DRAMs to look like a larger capacity DRAM to the CPU. The MetaSDRAM chipset combines four separate 1Gb DDR2 SDRAMs into a single virtual 4Gb DDR2 SDRAM which acts exactly as a monolithic 4Gb DDR2 MetaSDRAM would.
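The chip-combining idea can be sketched in a few lines. This is a hedged illustration of the general address-decoding principle only, not MetaRAM's actual (unpublished) chipset design; the bit-granular addressing and the simple chip/offset split are assumptions made for clarity.

```python
# Illustrative sketch (not MetaRAM's real design): present four 1 Gb
# DRAMs to the memory controller as one virtual 4 Gb device by using
# the two extra address bits as an internal chip select.

GBIT = 2 ** 30  # capacity of one 1 Gb DRAM, in bits (toy granularity)

def route(virtual_addr):
    """Split a 4 Gb virtual address into (chip index, address within chip)."""
    assert 0 <= virtual_addr < 4 * GBIT
    chip = virtual_addr >> 30           # top two bits pick one of 4 DRAMs
    offset = virtual_addr & (GBIT - 1)  # low 30 bits address inside it
    return chip, offset

chips = [dict() for _ in range(4)]      # four sparse stand-in DRAMs

def write(virtual_addr, value):
    """The controller issues one address; the 'chipset' fans it out."""
    chip, offset = route(virtual_addr)
    chips[chip][offset] = value

def read(virtual_addr):
    chip, offset = route(virtual_addr)
    return chips[chip].get(offset, 0)

write(3 * GBIT + 7, 0xA5)  # lands in the fourth DRAM, offset 7
```

The controller never sees the fan-out: it addresses one large device, which is the "stealth" capacity described above.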

The DDR2 MetaSDRAM chipset is optimized for low power and high performance. MetaRAM's MetaSDRAM features include:

-- WakeOnUse™ power management, which improves the power efficiency of the DRAMs, enabling two to four times the memory to fit into a typical system's power delivery and cooling capabilities.
-- Dynamic command scheduler that ensures the MetaSDRAM is compatible with the JEDEC DDR2 protocol.
-- Low-latency circuit design and an innovative clocking scheme, which allow MetaSDRAM-based DIMMs to fit into existing memory controllers.
-- Unique split-bus stacked DRAM design that enables flexible access of the multiple DRAMs in a stack.

MetaSDRAM Chipset Availability

-- MetaSDRAM MR08G2 chipset enables 2-rank 8GB DIMMs and is capable of functioning at speeds up to 667MT/s. It consists of an AM150 Access Manager and 5 FC540 Flow Controllers working as a group. The chipset is currently in full production and is available at $200 each in 1,000-kit quantities.
-- MetaSDRAM MR16G2 chipset enables 2-rank 16GB DIMMs and is capable of functioning at speeds up to 667MT/s. It consists of two AM160 Access Managers and 9 FC540 Flow Controllers. The chipset is qualified for production and is priced at $450 each in 1,000-kit quantities.

Compatible Platforms

-- AMD: Platforms based on Dual-Core and Quad-Core AMD Opteron™
-- Intel: Platforms based on Dual-Core and Quad-Core Intel® Xeon® processors with the 5100 MCH

Module Availability

Modules are currently available from:

-- Hynix Semiconductor: engineering samples of the 8GB PC2-4200 R-DIMM Module (HYMP31GP72CUP4-C6).
-- SMART Modular Technologies (NASDAQ: SMOD): qualification samples of the 8GB PC2-4200 R-DIMM Module (SG5721G4MG8C66HM), $1,500 budgetary pricing.

Server and Workstation Availability

Servers and workstations are expected in Q1 from:

-- Appro: Appro XtremeServers and XtremeWorkstations.
-- Colfax International: Colfax CX1254-N2 and CX1460-N2 1U Rackmount Servers and the Colfax CX980 high-end workstation.
-- Rackable Systems
-- Verari Systems: the newly introduced BladeRack® 2 X-Series blade-based storage and server solutions, and Verari's high-end visualization workstations.

Target Markets

MetaRAM products are designed for high performance rack-mount servers and workstations that run compute-intensive applications such as CAD/EDA simulations, database transaction processing (OLTP), business intelligence, digital content creation, and virtualization. These and other heavy workload applications are the backbone of industries like aerospace, automotive, financial services, animation, oil and gas exploration, and semiconductor design and simulation.

MetaRAM is headquartered in San Jose, Calif. and employs 35 people.

About MetaRAM

MetaRAM is a fabless semiconductor company focused on improving memory performance. The company's first product -- MetaSDRAM™ -- enables four times the amount of standard memory to be placed into existing systems without any modifications. The company is privately held, venture funded by Kleiner Perkins Caufield and Byers, Khosla Ventures, Storm Ventures, and Intel Capital, and is headquartered in San Jose, California.

MetaSDRAM and WakeOnUse are trademarks of MetaRAM.

All other trademarks are the property of their respective owners.

*The Colfax CX1460-N2 is a 1U rackmount server with 256GB of DDR2 memory and four AMD Opteron 8000 series processors. Starting under $50,000.

New Server Chips Quadruple Memory Capacity
Startup Metaram has developed a technique to pack more RAM onto a memory module.
Startup company Metaram on Monday is expected to announce technology that overcomes traditional server memory limitations and allows users to quadruple memory without adding new hardware.

Targeted at servers, the MetaSDRAM chipset sits between the DRAM module and a memory controller, processing commands and manipulating the controller to allow the system to have up to four times more memory.

The capability of Metaram's chipset to read the additional memory means memory makers can pack more RAM on a memory module, overcoming limitations that typically throttle the amount of memory that can fit in servers.
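The arrangement described above can be sketched in a few lines. This is an illustrative model only, not MetaRAM's actual design: it shows the general idea of a translation layer that lets the controller address one large logical rank while the chipset fans requests out across four smaller physical ranks. The rank size and function names here are assumptions for the sketch.

```python
# Illustrative sketch only -- not MetaRAM's real implementation.
# The memory controller believes it is talking to one large rank;
# the in-between chipset routes each address to one of four
# physical DRAM ranks behind it.

RANK_BYTES = 2 * 2**30  # hypothetical 2GB per physical DRAM rank

def route(controller_addr):
    """Map an address in the 4x logical space to (physical_rank, offset)."""
    rank = controller_addr // RANK_BYTES
    offset = controller_addr % RANK_BYTES
    assert rank < 4, "address outside the 4-rank logical space"
    return rank, offset
```

In this toy model the controller's address space is four times the size of any single rank, which mirrors the article's claim that the chipset "manipulates the controller" into seeing more memory than a slot would normally hold.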

For example, an 8-socket x86 server is typically limited to 256GB of RAM, but MetaSDRAM chipsets quadruple that ceiling to 1TB.
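The quadrupling figure checks out arithmetically; a quick sanity check of the numbers cited above:

```python
# Sanity check of the capacity figures quoted in the article.
GB, TB = 2**30, 2**40

base = 256 * GB   # conventional RAM ceiling cited for an 8-socket x86 server
multiplier = 4    # MetaSDRAM presents four times the memory per module

assert base * multiplier == 1 * TB  # 256GB x 4 = 1TB
print(f"{base * multiplier // TB} TB")  # → 1 TB
```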

"That allows the system to overcome traditional limitations to read the additional RAM on a [memory module]," said Jeremy Werner, senior manager of marketing at Metaram.

The ability to plug four times the memory into a slot on a motherboard is very attractive and allows servers to perform better, said Nathan Brookwood, an analyst at Insight 64. "If you can put a terabyte of memory in a system, your entire Oracle database can sit in the memory. That's a rocket booster," Brookwood said.

It also results in cost savings, Brookwood said. Users can add four times the memory capacity without adding CPUs, he said.

Memory manufacturers can place the chipset on existing memory module designs, according to the company. Hynix and Smart Modular Technologies are supplying modules built with the technology, according to Metaram.

Metaram is shipping separate chips that can help double and quadruple the DRAM capacity of memory modules. The MetaSDRAM MR08G2 chip, which helps double the capacity of memory modules, is available to memory makers for US$200 in quantities of 1,000. Metaram did not share pricing information on the chipsets that quadruple memory. The chips are compatible with Advanced Micro Devices- and Intel-based x86 systems, Metaram said.

With the MetaSDRAM chips, Metaram has found a way to fit higher-capacity memory modules into existing infrastructure, one that users can adopt quickly, Brookwood said. This follows the rationale of Fred Weber, one of the founders of Metaram and former chief technology officer of Advanced Micro Devices.

"It reflects the same design philosophy as when AMD came up with its Opteron boxes," Brookwood said. "Intel said x86 couldn't do 64 bits, but Weber said the problem with Itanium was that it didn't fit into existing infrastructure." Weber and AMD figured out how to fit the 64-bit architecture into chips that worked with existing infrastructure, Brookwood said.

While Metaram's technology overcomes bottlenecks facing traditional system architecture, it could have its limits, analysts said.

"It is not a revolutionary product, but it is a novel way to handle additional memory," said Will Strauss, principal analyst at Forward Concepts. PCs and servers support only limited memory today, and the product will remain effective until new system designs raise those limits, he said.

Adobe AIR launches

Adobe Systems on Monday is set to finally release its Adobe Integrated Runtime (AIR) software, which is on the leading edge of a movement to make Web applications act more like traditional desktop applications.

At the company's Engage event in San Francisco on rich Internet application design, executives will announce the availability of AIR 1.0, a free download for Windows and Macintosh.

The wall between the web and your computer continues to crumble with today’s launch of Adobe AIR, a runtime environment that allows you to deploy Internet applications on the desktop. AIR has already been available in public testing mode for several months, but the official launch should lead to greater usage — and, if we’re lucky, a flood of innovative web/desktop hybrids.

AIR offers a “best of both worlds” approach, says Michele Turner, an Adobe vice president of product management and marketing. Web developers can use the technologies they’re used to, such as HTML and Ajax, and the applications can be built quickly and accessed remotely. But, like a desktop program, AIR apps can also read and write local files, as well as work with other applications on your computer.

AIR’s official launch puts it ahead of competitors JavaFX and Mozilla Prism, which are still in development or public testing. (Many think of Microsoft Silverlight as a competitor, but that’s a misconception, Turner says, because it’s a browser plug-in, not a desktop environment.)

Adobe is also releasing Adobe Flex 3, a tool for building Flash applications. Like AIR, Flex is already available in public testing mode. Components of the Flex software developer kit were already open source, but the release means the Flex SDK is now completely open.

A number of big financial players will use Adobe AIR to keep customers up to date about site news and account status, including eBay, Deutsche Bank and NASDAQ. Cable TV children's channel Nickelodeon has created a video jigsaw puzzle application, and The New York Times is using AIR to build the desktop component of ShifD, which will allow Times readers to move newspaper content back and forth between their computers and their mobile devices.

Start-ups are already making use of AIR too. For example, Unknown Vector used AIR to build its desktop video player (our coverage), and Acesis, a medical records software company that launched at last month's DEMO, based parts of its product on AIR (our coverage).

Adobe today announced the availability of Adobe Integrated Runtime (AIR), a cross-operating-system runtime for taking rich Internet applications (RIAs) to the desktop.

Adobe also released Flex 3, an open-source development tool set aimed at helping developers build RIAs.

AIR is a runtime environment for building RIAs in Adobe Flash, HTML and AJAX. The product includes the Safari WebKit browser engine, SQLite local database functionality, and APIs that support desktop features such as native drag and drop and network awareness.

Nasdaq Stock Market Inc. and the American Cancer Society are among several organizations running beta versions of AIR to bridge the gap between the Web and the desktop. Both said they turned to the technology because it doesn't require that developers learn new skills.

"AIR takes the capabilities of Flex and Flash and extends that to the desktop," said David Wadhwani, general manager and vice president of Adobe's platform business unit. "With the release of AIR, we've expanded our developer base to the millions of AJAX and HTML developers of the world."

Wadhwani added that FedEx Corp. has developed an AIR application to track packages in real time on the desktop, and Deutsche Bank AG is using AIR to provide alerts about financial transactions.

In addition, business intelligence software vendor Business Objects SA has been working with Adobe to develop reports on transactional data that run in AIR and can be e-mailed to multiple users who can then access live feeds from those reports to do an analysis, he said.

Adobe also released Flex Builder 3, its commercial Eclipse-based plug-in for developing RIAs. Flex Builder 3 integrates with Adobe's Creative Suite 3 set of tools to make it easier for designers and developers to work together, Adobe said. It will be available in two versions: The standard edition is $249, while the professional version costs $699.

Finally, Adobe also made available its BlazeDS open-source tool that promises to help developers boost the data transfer capabilities and performance of RIAs. BlazeDS is made up of components from Adobe's LiveCycle Data Services suite.
