
Thursday, October 30, 2008

Intel in China, ice-powered air conditioners

Even with turmoil in the financial markets, venture capital is still flowing to energy-tech ventures. Here are the latest such investments:

• Intel Capital has made its first clean-tech investment in China, the company said Tuesday.

The venture-capital arm of the chip giant put $20 million into Trony Solar Holdings, a Chinese solar thin-film cell developer. It also invested an undisclosed sum in NP Holdings, which makes large-scale energy storage systems for renewable energy and energy efficiency.
Intel Capital set up a $500 million fund for tech deals in China earlier this year, according to Reuters.
"We think innovation is the way to help companies out of this financial crisis," Cadol Cheung, head of Intel Capital in Asia Pacific told reporters Tuesday. "We have no plan of slowing down our investment pace."
• Ice Energy said Tuesday it has raised $33 million in a second round of funding. The round, led by Energy Capital Partners, also provides up to $150 million in project development financing.
Ice Energy makes rooftop air conditioners that use ice to help lower the cost of operating them.

During off-peak hours, such as the middle of the night, the machines freeze water. During the day, the ice cools the refrigerant to run the air conditioner, cutting down on the electricity it would otherwise need.
The ice storage can shift as much as 40 percent of a building's cooling demand to off-peak times, according to the company. For that reason, the company is marketing its products to utilities looking for ways to reduce peak demand and avoid building new power plants.
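
As a rough sketch of why that shift saves money, consider an air conditioner that would otherwise draw all of its power at daytime peak rates. The 40 percent figure is the company's; the load and the electricity rates below are illustrative assumptions, not Ice Energy's numbers:

```python
# Back-of-the-envelope sketch of peak shifting with assumed numbers:
# a 5 kW air conditioner running 8 daytime hours, with 40% of that
# demand moved to overnight ice-making. Rates are hypothetical.

PEAK_RATE = 0.18      # assumed $/kWh during daytime peak
OFF_PEAK_RATE = 0.07  # assumed $/kWh overnight

daily_kwh = 5.0 * 8            # 40 kWh of daytime cooling load
shifted = 0.40 * daily_kwh     # 16 kWh moved to off-peak ice-making

cost_without = daily_kwh * PEAK_RATE
cost_with = (daily_kwh - shifted) * PEAK_RATE + shifted * OFF_PEAK_RATE

print(f"Demand shifted off-peak: {shifted:.0f} kWh/day")
print(f"Daily cost without ice storage: ${cost_without:.2f}")
print(f"Daily cost with ice storage:    ${cost_with:.2f}")
```
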
• Blue Source said Monday that Goldman Sachs will take an equity stake in the company and finance carbon offset projects.
Blue Source identifies and runs projects that reduce greenhouse gases, such as methane capture at landfills, and carbon capture and storage at oil wells.
Goldman Sachs will market and trade the offsets from Blue Source projects in carbon emissions trading markets, according to the companies.
• General Electric said last week it is investing $30 million in lithium-ion battery maker A123 Systems, part of a planned $102 million series E round.
GE is now the largest investor in the company with a 9 percent stake after having put in $55 million. The two companies are working on various projects, including integrating A123 Systems' batteries in the Think all-electric town car and a hybrid bus platform.

Anti-Piracy Tool Draws Chinese Suit, Complaint Against Microsoft

Microsoft has filed 52 lawsuits against alleged software pirates.
The software giant, which has led an active campaign against counterfeit copies of its software over the years, announced Tuesday that it filed cases against resellers in countries that ranged from China to the Netherlands to the United Kingdom and United States.
Microsoft noted that in 15 of the 52 cases, the software involved could allegedly be traced to a massive commercial counterfeit syndicate that Chinese authorities and the FBI broke up this summer. Most of the alleged illicit sales were conducted through e-commerce sites.
Counterfeit copies of digital goods cost members of the worldwide software industry an estimated $40 billion annually, according to Microsoft. The tech titan also cited a study conducted by the Business Software Alliance and market researcher IDC that put the global PC software piracy rate at 35 percent last year.
Redmond also unveiled a "Microsoft Buying Guide" on eBay as a tool for educating consumers about counterfeit applications. In addition, it maintains an information site with tips on how to detect pirated software.
Through users' tips, Microsoft said, it also gleaned enough information to refer 22 criminal cases to various law enforcement agencies around the world.

A man from Beijing has sued Microsoft, while a Chinese lawyer has asked the government to fine the U.S. firm $1 billion for turning the desktop screens of computers running pirated software black.
A Beijing man surnamed Liu asked the Haidian District People's Court to compel Microsoft to remove the Windows Genuine Advantage (WGA) program from his PC, along with the permanent desktop warning "You may be a victim of counterfeit software."
"Microsoft has no right to judge whether the installed software is pirated or not. It has no right to penalize users by intruding on their computers," said Liu, according to the People's Daily Online.
Beijing lawyer Dong Zhengwei, 35, last week asked the State Administration for Industry and Commerce to fine Microsoft. He also asked the Ministry of Public Security to order Microsoft to stop what he described as hacking and infringement of privacy perpetrated by the company through its WGA.
Meanwhile, the China Computer Federation condemned Microsoft on Tuesday for what it called "unsolicited remote control of computers" since introducing WGA in China last week to combat the illegal copying of its software for sale to the public at cheaper prices.
"If a company believes others have infringed their intellectual property rights, it can collect evidence and take judicial measures to deal with the infringement according to Chinese law," the federation said in a statement.
The WGA does not stop computers running the Windows XP operating system from functioning. The warning label can be erased but reappears every hour.

The National Science Foundation (NSF) has made 20 new awards totaling $57.3 million

Projects will better define plant responses to changing environments and contribute to understanding of genetic processes in economically important plants.

The National Science Foundation (NSF) has made 20 new awards totaling $57.3 million during the 11th year of its Plant Genome Research Program (PGRP). These awards, which cover two to five years and range from $350,000 to $6.8 million, support research and tool development to further knowledge of genome structure and function. They will leverage sequence and functional genomics resources to increase understanding of gene function and interactions between genomes and the environment in economically important crop plants such as corn, soybean, wheat and rice.

"Plant biologists continue to make significant conceptual and theoretical advances in our understanding of basic biological processes using plants," said James Collins, NSF assistant director for biological sciences. "It is clear that 21st century biology has become increasingly quantitative and interdisciplinary. The latest projects funded through the PGRP reflect this shift and will integrate innovative, cutting-edge research with the training of the next generation of plant scientists at both research universities and small teaching colleges and universities."

The new awards, made to 45 institutions in 28 states, include international groups of scientists from Asia, Australia, Europe and South America. First-time recipients of PGRP awards include California State University-Long Beach, Case Western Reserve University, North Carolina Central University, University of Minnesota-Duluth, University of Southern California and Western Illinois University.

A wealth of genomics tools and sequence resources developed over the past 11 years of the PGRP continues to enable exciting new comparative approaches to uncover gene networks that regulate plant development and growth in changing environments.


Projects include:
Research led by the University of Southern California to study how Medicago truncatula, a small legume, and associated soil bacteria co-adapt to high salinity conditions. This project will be done in collaboration with scientists in Tunisia and France.


Research led by the University of Minnesota, Duluth to identify the molecular mechanisms of nectar synthesis and secretion in the Brassicaceae, an agriculturally important family of flowering plants.


An interdisciplinary effort led by Pennsylvania State University to define the regulation of maize shoot growth and development by the plant hormone auxin.


A multi-institutional effort led by the University of California, Davis to develop genomics resources that will support the physical mapping of wheat chromosomes; this project will complement ongoing national and international efforts to sequence the wheat genome.


Research led by the University of Georgia to generate populations of mutant plants that will advance our understanding of the functions of agronomically important genes in soybean.


This year's awards were selected from a pool of strong proposals, many of which leveraged data and other resources previously produced with PGRP funding. The quality of these proposals testifies to the PGRP's success in enabling innovative research.
The PGRP, which was established in 1998 as part of the coordinated National Plant Genome Initiative by the Interagency Working Group on Plant Genomes of the National Science and Technology Council, works to advance the understanding of the structure and function of genomes of plants of economic importance.

The National Science Foundation (NSF) is an independent federal agency that supports fundamental research and education across all fields of science and engineering, with an annual budget of $6.06 billion. NSF funds reach all 50 states through grants to over 1,900 universities and institutions. Each year, NSF receives about 45,000 competitive requests for funding, and makes over 11,500 new funding awards. NSF also awards over $400 million in professional and service contracts yearly.




Wednesday, October 29, 2008

Microsoft is dreaming in the clouds when it comes to its new OS


Microsoft and free aren't words that you expect together in a sentence. While the prolific operating system maker has been generous in offering discounted licenses to students and to developing nations, it has always made sure it got its fair slice.

Well, for a limited time, developers will get to use and test a unique new OS from Microsoft -- Windows Azure -- entirely for free. The new OS marks the release of Microsoft's long-awaited cloud computing operating system.

For those in the dark about cloud computing, you're not alone -- the concept is new and challenging even for developers. In basic principle, it is the offloading of tasks from workstations to cloud clusters -- high-powered groups of servers. This setup leverages modern high-speed internet connections to deliver data storage, application hosting and more.

Cloud computing is tremendously popular, as it is widely viewed as the future of web hosting. One key reason is that cloud computing allows applications to scale easily to match rising or falling demand, without changes to local hardware. To deliver increasingly rich applications over an internet interface, moving to a cloud computing architecture becomes increasingly necessary. Until now, however, cloud computing lacked a single iconic operating system specially designed for it.

That has all changed with the release of Microsoft's Azure. The new OS is a community preview, available free to any developer. This is a slight departure from the alpha/beta/RTM sequence typical of Microsoft's operating system releases, though the company has done community previews before.

"How long until the OS hits the market?" is one question many will ask. Microsoft's Chief Software Architect Ray Ozzie was on hand to answer questions about the new OS, and he fielded this one. He stated, "Well, when we finally determine that it achieves the objectives from a completeness perspective and a reliability perspective that our customers would expect of us, then we'll go commercial. And when it does, it will be profitable from birth because we're going to price it to be that way."

While Microsoft's OS is similar, according to Mr. Ozzie, to Amazon's EC2 web service in some respects, it is overall rather unique. Some users will be confused, he says, to restart their computers only to find their hard drives empty. Despite the .NET foundation, developers will have to adapt to the new storage system and the new error-handling system.

Mr. Ozzie says that Microsoft's growing interest in data centers and serving is key to the company's success. He says, "It's a business that we will be in probably as long as there will be a Microsoft. ... Cloud computing is ultimately going to be 'do you trust this provider to have more to lose than I have to lose as a company if they mess me up?' And Microsoft has both the capacity to invest and the willingness to be in that end of a business, and give that kind of a trust assurance to developers and enterprises."

While many outside the development community will meet the news of this new Microsoft OS with a bit of confusion, as it is not something they can easily experience, the bottom line is that this OS will help drive a new generation of feature-rich websites. And while cloud computing from an architecture standpoint might be perplexing to some, being able to use rich applications like word processing online, with free storage, is easy to understand and highly desirable.

As for Mr. Ozzie, he firmly believes the new OS represents the future of Windows, and is perhaps more critical than even Windows 7. He says that in 20 years, cloud computers will be household items and the once-foreign concept will have been embraced, much as the personal computer was two decades ago. Says Mr. Ozzie, "It's a new kind of computer that 20 years from now we'll wonder how we did without."


Windows Azure: The End Of Software?
Forget the marketing hype, Windows Azure isn't the latest Microsoft operating system. It's a business strategy. One that shows Redmond believes the days in which it can make fat profits from software alone are numbered.
Let's deal with the technology for a moment. Windows Azure is a hosted, runtime version of Windows Server. OK, that's out of the way. Now let's look at the business plan.
Under Windows Azure, the strategy, Microsoft is entering the hosted services market in a big way. Earlier this year, the company opened vast new data centers in Washington state and San Antonio, Texas. It also plans to open server farms in Chicago and Dublin in the near future. Now it's clear why.
Microsoft is betting that an increasing number of its customers will want their applications on tap -- from "the cloud" (i.e., the Internet) -- in the years ahead. And it's going to charge them subscription fees that cover hosting, maintenance, upgrades, and the software itself.
Microsoft's pitch to business: "Pay for the services you use and reduce capital costs associated with purchasing hardware and infrastructure." With Windows Azure, "You can scale at the click of a mouse to meet seasonal demands or spikes in traffic based on sales and promotions," the company assures.
Makes sense. In fact, it makes so much sense that IBM went down this road more than a decade ago.
IBM in the late '90s realized what Microsoft, as it watches Vista sales languish, is just now discovering. Software has become a commodity. There's so much free stuff out there -- from Linux to Gmail -- that making a buck off lines of code is getting harder every day. And each year, the free stuff gets better and better.
IBM's free Lotus Symphony office suite has seen more than 200,000 downloads to date. That's 200,000 users that don't need to buy Microsoft Office.
IBM correctly bet that the best way to make money from software was to sell hosting, installation, integration, and support services. Those are big, capital- and labor-intensive businesses where scale matters and barriers to entry are high. No pimply-faced geek living in his mother's basement in Latvia is going to open a 5-acre server farm and put you out of business.
Google, with its Google Apps offerings, has figured this out as well.
So the Windows Azure strategy is the right move -- but Microsoft is a tad late to the party. And the transition from packaged software seller to hosted services provider is not going to be an easy one. Microsoft still relies on old-school software sales for the bulk of its profits.
Still, does the company have any other choice?
Separately, Microsoft says its newest embedded operating system, code-named "Quebec," will be used to control embedded systems.
Additionally, it will feature Microsoft's BitLocker drive encryption and key management security technology, and, like Windows 7, 64-bit support.
"Windows Embedded 'Quebec' will provide OEMs with the ability to further differentiate their devices by taking rich user experiences to the next level," said Kevin Dallas, general manager of Microsoft's Windows Embedded Unit, in a statement Tuesday from the Embedded Systems conference in Boston.
Microsoft offered developers their first peek at Windows 7 this week.
The company said the next version of its franchise OS will feature improved compatibility and will be more user-friendly than Vista. Microsoft said it plans to release a trial, or beta, version of Windows 7 early next year.
The desktop version of Windows 7 will feature a new taskbar and a streamlined interface that will make users' most frequently used programs, such as a music player or a word processing app, easier to access, according to Microsoft. It will also include a new feature, Device Stage, that's designed to increase compatibility between the host computer and commonly used peripherals such as printers, phones, and digital cameras.
The company also said, "Windows 7 will offer more options than ever to customize and personalize Windows-based PCs with styles that match the user's personality," though it provided little detail.
Perhaps most significantly, Microsoft said applications that are compatible with Windows Vista will work with Windows 7 because the two operating systems share the same basic architecture. "Windows 7 extends developers' investments in Windows Vista," the company said in a statement.
Upon its debut in January of last year, Vista was roundly criticized for its lack of compatibility with applications built for the older Windows XP operating system. The problem was partly to blame for the fact that few businesses have upgraded from XP to Vista, even though Vista has now been on the market for almost two years.
"With our new approach to planning and development we now have a great foundation for our partners to start learning and innovating on this exciting new version of Windows," said Steven Sinofsky, senior VP for Microsoft's Windows Engineering Group. Microsoft has established
a Web site where developers can learn more about building applications for Windows 7.



Tuesday, October 28, 2008

Recent discoveries of water and Earth-like soil on Mars: Robotic Ants Building Homes on Mars

Recent discoveries of water and Earth-like soil on Mars have set imaginations running wild that human beings may one day colonise the Red Planet. However, the first inhabitants might not be human in form at all, but rather swarms of tiny robots.

“Small robots that are able to work together could explore the planet. We now know there is water and dust so all they would need is some sort of glue to start building structures, such as homes for human scientists,” says Marc Szymanski, a robotics researcher at the University of Karlsruhe in Germany.
Szymanski is part of a team of European researchers developing tiny autonomous robots that can co-operate to perform different tasks, much like termites, ants or bees forage collaboratively for food, build nests and work together for the greater good of the colony.
Working in the EU-funded I-SWARM project, the team created a 100-strong posse of centimetre-scale robots and made considerable progress toward building swarms of ant-sized micro-bots. Several of the researchers have since gone on to work on creating swarms of robots that are able to reconfigure themselves and assemble autonomously into larger robots in order to perform different tasks. Their work is being continued in the Symbrion and Replicator projects that are funded under the EU’s Seventh Framework Programme.
Planet exploration and colonisation are just some of a seemingly endless range of potential applications for robots that can work together, adjusting their duties depending on the obstacles they face, changes in their environment and the swarm’s needs.
“Robot swarms are particularly useful in situations where you need high redundancy. If one robot malfunctions or is damaged it does not cause the mission to fail because another robot simply steps in to fill its place,” Szymanski explains.
That is not only useful in space or in deep-water environments, but also while carrying out repairs inside machinery, cleaning up pollution or even carrying out tests and applying treatments inside the human body – just some of the potential applications envisioned for miniature robotics technology.
Creating collective perception
Putting swarming robots to use in a real-world environment is still, like the vision of colonising Mars, some way off. Nonetheless, the I-SWARM team did forge ahead in building robots that come close to resembling a programmable ant.
Just as ants may observe what other ants nearby are doing, follow a specific individual, or leave behind a chemical trail in order to transmit information to the colony, the I-SWARM team’s robots are able to communicate with each other and sense their environment. The result is a kind of collective perception.
The robots use infrared to communicate, with each signalling another close by until the entire swarm is informed. When one encounters an obstacle, for example, it would signal others to encircle it and help move it out of the way.
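
A minimal sketch of that relaying behavior is below. The positions, range constant and function names are invented for illustration and are not I-SWARM code; the point is only that local, short-range signalling is enough to inform the whole swarm:

```python
import math

# Sketch of swarm message relaying, loosely modeled on the I-SWARM
# description above. IR_RANGE and the robot layout are assumptions.

IR_RANGE = 5.0  # assumed infrared communication radius, arbitrary units

def neighbors(robots, idx):
    """Indices of robots within infrared range of robot `idx`."""
    x0, y0 = robots[idx]
    return [j for j, (x, y) in enumerate(robots)
            if j != idx and math.hypot(x - x0, y - y0) <= IR_RANGE]

def broadcast(robots, source):
    """Flood a message through the swarm: each informed robot relays
    to every neighbor in range until no new robot is reached."""
    informed = {source}
    frontier = [source]
    while frontier:
        nxt = []
        for i in frontier:
            for j in neighbors(robots, i):
                if j not in informed:
                    informed.add(j)
                    nxt.append(j)
        frontier = nxt
    return informed

# Ten robots in a rough line: a message from robot 0 reaches all of
# them by hopping, though no robot is in direct range of every other.
swarm = [(i * 4.0, 0.0) for i in range(10)]
print(sorted(broadcast(swarm, 0)))  # -> [0, 1, 2, ..., 9]
```
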
A group of robots that the project team called Jasmine, which are a little bigger than a two-euro coin, use wheels to move around, while the smallest I-SWARM robots, measuring just three millimetres in length, move by vibration. The I-SWARM robots draw power from a tiny solar cell, and the Jasmine machines have a battery.
“Power is a big issue. The more complex the task, the more energy is required. A robot that needs to lift something [uses] powerful motors and these need lots of energy,” Szymanski notes, pointing to one of several challenges the team have encountered.
Processing power is another issue. The project had to develop special algorithms to control the millimetre-scale robots, taking into account the limited capabilities of the tiny machine’s onboard processor: just eight kilobytes of program memory and two kilobytes of RAM, around a million times less than most PCs.
Tests proved that the diminutive robots were able to interact, though the project partners were unable to meet their goal of producing a thousand of them in what would have constituted the largest swarm of the smallest autonomous robots ever created anywhere in the world.
Nonetheless, Szymanski is confident that the team is close to being able to mass produce the tiny robots, which can be made much like computer chips out of flexible printed circuit boards and then folded into shape.
“They’re kind of like miniature origami,” he says.
Simple mass production would ensure that the robots are relatively cheap to manufacture. Researchers would therefore not have to worry if one gets lost in the Martian soil.

Sunday, October 26, 2008

Atom's Nucleus Is Capable Of Storing Information

The National Science Foundation announced a major breakthrough in quantum computing after scientists were able to store information inside the nucleus of an atom.
Scientists from Princeton University, Oxford University and the Department of Energy used both the electron and nucleus of a phosphorous atom inside a silicon crystal. The atomic particles acted as tiny quantum magnets, which stored quantum information.
Held in the electron, the information survives for only about one tenth of a second; transferred to the nucleus, it remained accessible for almost two seconds.
The experiment, dubbed the "ultimate miniaturization of computer memory," is a key step in the development of quantum computers.
The New York Times reported that under the theory of quantum mechanics, atoms and other objects can exist in multiple states, capable of being in two places at once. In quantum computing, each individual piece of information could thus have more than one value simultaneously.
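
For readers who want the textbook version of that idea, a single quantum bit is conventionally written as a weighted combination of its two basis states. This is standard notation, not drawn from the Times piece:

```latex
% Standard qubit notation (illustrative, not from the article):
% the stored state holds both 0 and 1, weighted by amplitudes a and b.
\[
  \lvert \psi \rangle = a\,\lvert 0 \rangle + b\,\lvert 1 \rangle,
  \qquad \lvert a \rvert^{2} + \lvert b \rvert^{2} = 1
\]
% A measurement returns 0 with probability |a|^2 and 1 with probability
% |b|^2, and n such qubits span 2^n amplitudes simultaneously.
```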

Friday, October 24, 2008

A Soyuz space capsule carrying two Russian crew members and an American space tourist has returned to Earth.


Space tourist back on terra firma
Space tourist Richard Garriott and cosmonauts Oleg Kononenko and Sergei Volkov this morning returned safely to Earth from the International Space Station, touching down in their Soyuz capsule in Kazakhstan at 10:37pm CDT (03:37 GMT).

Garriott stumped up a cool $17m to spend ten days in space, following in the footsteps of his dad Owen Garriott, who in 1973 passed 60 days aboard Skylab. Kononenko and Volkov ended their Expedition 17 jaunt to the ISS having enjoyed 199 days in orbit and 197 days on the space outpost.

The ISS is now in the hands of Expedition 18 crew Michael Fincke, Greg Chamitoff and Yury Lonchakov. Their main focus is to "prepare the station to house six crew members on long-duration missions beginning in spring 2009".

They'll be supported in their task by space shuttle Endeavour's mission STS-126, due to blast off on 12 November bearing equipment and supplies in the Multi-purpose Logistics Module Leonardo. The delivery includes "additional crew quarters, additional exercise equipment, equipment for the regenerative life support system and spare hardware".

Endeavour was yesterday trundled to Launch Pad 39A at the Kennedy Space Center in anticipation of the launch. It was relocated from Launch Pad 39B "so workers there could continue modifications to the launch complex for the Ares I-X test flight in 2009".

Sunday, October 19, 2008

IBEX is ready


"NASA has designed a mission to map the boundary of the solar system. The mission is called IBEX (Interstellar Boundary Explorer) and it is ready to launch. The data collected by IBEX will allow scientists to understand the interaction between our Sun and the galaxy for the first time. Understanding this interaction will help us protect future astronauts from the danger of galactic cosmic rays."
The IBEX Launch Blog will go active "about 2 hours before launch scheduled for 1:48 p.m. EDT," and the Southwest Research Institute will be running webcasts of the event. The IBEX fact sheet provides more details about the mission (PDF). IBEX will reach space via a Pegasus rocket launched from an L-1011 "Stargazer" carrier plane. You can see the launch countdown schedule at NASA's site.

http://www.ibex.swri.edu/

IBEX October 2008
From Dave McComas, IBEX Principal Investigator

We have made huge progress over the past month and are now only two and a half weeks away from launch on October 19th! The picture shows our IBEX spacecraft mounted on the front of the Pegasus rocket, just as we are enclosing it within the fairing (the aerodynamic front of the rocket that opens up and falls away once we are above the atmosphere). The flight out to Hawaii and then on to Kwajalein is scheduled to start on October 10th, so we are finally almost there.

Another interesting thing that happened this month was that I got interviewed on NPR's "Science Friday." I was talking about the solar wind, which has been diminishing over the past decade and a half, down to the lowest levels ever measured (since the beginning of the space age). Anyway, as part of that interview I also got to talk about how the solar wind interacts with the galaxy and about how IBEX will image that interaction. The folks at NPR also posted a new video that describes the IBEX mission and science for us. You can listen to the Science Friday segment or view our new Mission Video at http://www.sciencefriday.com/program/archives/200809266.
Since June 2005 I have been introducing a new team member each month (you can see them all in our archive), so this month, for launch, I thought we might tell you a little more about me. I'll leave my story to a professional writer as usual, but wanted to share with you my favorite picture of myself. That's me in fourth grade - you'll see below that that was a really big year for me.
Dave McComas
By Michelle Nichols, Adler Planetarium Educator

The launch of IBEX is just around the corner! As such, it seems fitting that the October 2008 Monthly Highlight interviewee be the leader of the project, the person who has seen it from its earliest concepts to the present day: Dave McComas, the Principal Investigator for the Interstellar Boundary Explorer mission.
Growing up in Milwaukee, Wisconsin, with a father who was a lawyer and two of three siblings who also eventually chose the law as a profession, it would have seemed natural for Dave to become a lawyer as well, but that wasn't "in the stars" for him. Dave has always been interested in science and thought of himself as an inventor, constantly building and experimenting, taking things apart, and putting them back together in new ways when he was young. "I found inventing to be really interesting when I was young because it allowed me to figure out how things worked and then use the parts in some new and unintended ways. As I grew older I zeroed in on physics as the key to unlocking the secrets of the Universe." Dave's parents encouraged him to be inquisitive, to be intellectual, and to learn about all sorts of things.
It was not an easy road for Dave to take while in school. He has dyslexia, a learning disability that manifests itself as difficulties with written words and letters. Because of this, Dave did not substantially learn to read until a fourth grade teacher worked hard with him to develop strategies to compensate for his reading problems. Thanks to this teacher, he did learn to read, and he has become an adequate reader (although still an inadequate speller) who has worked hard throughout his life to deal with his weakness in reading.
In high school, he decided that he wanted to study physics. While Dave was in college at the Massachusetts Institute of Technology, he worked at the MIT Center for Space Research as an undergraduate researcher in the laboratory, obtaining hands-on knowledge by helping graduate students and professors build instruments for a sounding rocket. He felt that this experience was an important step in his development as a scientist and is also something that he recommends to those who might want to be scientists. He says, "Get in the lab and do something. Get your hands dirty!"
After earning his bachelor's degree, Dave began working at the Los Alamos National Laboratory in the Space Group, allowing him to further his interest in space research. While there, he had the opportunity to work in what he says he can only describe as an "apprenticeship" with one of the great space experimentalists - Dr. Sam Bame. Dave was allowed to have his desk moved into Sam's lab, where he viewed and participated in all elements of the work with Sam designing and building space hardware. After two and a half years in the lab, Dave went to graduate school and earned his Ph.D. at the University of California at Los Angeles, where he studied space physics.

Saturday, October 18, 2008

Google has solved a problem that affected the layout and functionality of the "Start" pages

Prolonged Gmail outage stressing Google Apps administrators
Google Fixes Problem With Apps Start Page
Google has solved a problem that affected the layout and functionality of the "Start" pages of its Apps hosted collaboration and communications suite.
Although the bug had the potential to affect many customers, it manifested itself only in instances when Apps administrators had customized their organizations' Start page, said Rishi Chandra, Google Apps product manager.
The problem apparently arose Thursday afternoon U.S. Eastern Time and was finally solved at around noon on Friday.
Apps administrators who reported problems in the official Apps discussion forum described what they perceived as an erratic Start page update designed to make it look and act more like iGoogle, the company's personalized home page service for consumers.
However, Chandra said that wasn't the case, although he understands why the administrators would interpret the incident that way, since the iGoogle logo replaced company logos on affected pages. The problem was caused by a system bug that altered Start page layouts, broke some links and interfered with some "gadget" applications, like the one for Gmail, he said.
With a permanent fix now in place, all affected Start pages should have reverted to their normal layout and operation without any loss of data or functionality, Chandra said. Google had prematurely declared the problem solved at around 8 p.m. on Thursday, but problem reports kept flowing in.
Google Apps is a hosted collaboration and communication suite aimed at workplace use, and its Start pages are designed as a portal-like main point of entry for end-users to their applications, such as Calendar and Gmail. Apps' Standard and Education versions are free, while the more sophisticated Premier edition costs US$50 per user per year.
The problem was disruptive at New Hope Fellowship in Springdale, Arkansas, which uses the Apps Education edition. The church's Start page was hit intermittently by the bug between Thursday at around 2 p.m. and noon Friday.
"Our users were trained to access their mail through the Start page. Once that didn't work, they could not access e-mail, which is critical to our work. We had to send paper memos around on how to access the mail without going through the Start page. Very frustrating," said Josh Jenkins, New Hope Fellowship's media director and Apps administrator, in an e-mail interview.
This wasn't the only problem New Hope Fellowship's 40 Google Apps users encountered this week. They also lost access to their e-mail due to an unrelated and prolonged Gmail outage that hit some Apps customers this week.
"Google must improve communication with business customers if they wish to be competitive in the corporate IT space. The 2-sentence 'we're working on it' blurbs posted in the [online discussion] groups are an unacceptable way to treat business clients," Jenkins said.
Susan Novotny, Apps Standard edition administrator at a national nonprofit with 30 users in Ontario, Canada, said via e-mail that the occasional bugs that hit Google Apps "do shake my confidence a little."
"I guess I expect a spectacularly wealthy company to be as reliable as the average e-mail provider," she added. "But they're providing tools no other provider can."
Nelson & Co. Engineering in Birmingham, Alabama, also experienced the Start page bug, but it wasn't too disruptive for its four Apps Premier users, said Apps administrator Ryan Nelson in an e-mail interview. The company feels that, despite the hiccups, Apps provides it with a great value at $50 per user per year.
"As a Premier user I would think that these issues would not happen. In the long run, Google Apps has been the best technology move we've ever made. Little issues crop up a couple times a year for less than 24 hours: not ideal, but better than anything else we've ever used," Nelson said.
Others were more frazzled, like an Apps administrator identified as Jay in the official discussion forum, who wrote Friday morning: "I now have over 1,200 users that have no idea how to get into their e-mail. The phones are ringing off the hook. What is going on with customer service these days. This really stinks."
The problem wasn't related to a major iGoogle upgrade the company rolled out on Thursday, Chandra said.
The unrelated Gmail problem this week kept users from accessing their e-mail in some cases for more than 24 hours between Wednesday and Thursday. Google declared that problem solved late on Thursday.
During Google's third-quarter earnings conference call on Thursday, cofounder and Technology President Sergey Brin said that there are now more than 1 million businesses using Google Apps.
Google Apps is one of the best-known examples of a new wave of Web-hosted communication and collaboration suites that are emerging as options to Microsoft's Office and Outlook/Exchange suite.
Apps is hosted by Google in its data centers and accessed by end-users via a Web browser. The appeal of Web-hosted software like Apps is that it doesn't have to be installed by customers on their own hardware, reducing maintenance costs and complexity. Apps and others like it are also designed from the ground up for workgroup collaboration.
However, when something breaks on the vendors' data centers, IT administrators have little or no control over how or when to remedy the problem, and are left to appease their angry end-users as best they can.
In August, Gmail had three significant outages that affected not only individual consumers of the free Webmail service but also paying Apps Premier customers. As a result, Google decided to extend a credit to all Apps Premier customers and said it would do better at notifying users of problems.


Apple's new MacBooks are finally here, and the upgrades they feature are more than modest. The new Apple laptops sport slimmer designs, brighter and more power-efficient LED-backlit screens, new graphics systems, buttonless trackpads, and more. The updates have led some people to wonder whether now is the time to switch from a PC to a Mac.
But as cool as the updates are, Apple has not achieved MacBook perfection. Here's a look at what Apple got right and what I would have liked to see.
Construction Done Right
Apple stressed sturdier construction as a reason for developing its "unibody" manufacturing technique, and sure enough, the new MacBooks are as sturdy as they come. Compare the new design with the previous one, which had an annoying tendency to crack. The two different styles of MacBook are like night and day; the old MacBook seems creaky and cheap in comparison.
Simple, Quick Access
Apple has done a good job making the MacBook's components easy to access. Just pop off the battery cover to reach the hard drive. Remove a few screws, and boom, there are the memory, motherboard, and optical drive. Do-it-yourself upgraders will be happy with a new MacBook, although we would like to see Apple reduce the number of screws guarding the way to the innards.


What Was Apple Thinking?
Now that Apple has removed FireWire from the MacBook (a decision that upset some users), the only peripheral connectors on the new models are two USB ports. If you own a printer, iPod, digital camera, external hard drive, or perhaps an external keyboard or mouse, you might run out of ports in a hurry unless you buy a USB hub. An additional built-in USB port or two would have been a welcome addition.


Annoying Smudges
It's a little thing, but Apple includes a soft cloth for wiping fingerprints and smudges off the MacBook. You may want to keep it in your laptop bag, because you're going to need it. The glass display is a fingerprint magnet, and the aluminum shell readily shows fingerprints as well.


Good-Enough Portability
At a mere 4.5 pounds, the MacBook is light enough and compact enough to toss into your bag and take with you wherever you go. And unlike its superslim MacBook Air sibling, the new MacBook includes an optical drive and a more powerful processor. Given the choice, I think I'd take the MacBook over the Air any day, even if it means another pound and a half of weight.


A Bright Idea
Love the gloss or hate it, the MacBook's screen is flat-out beautiful. The screen itself is bright and evenly lit, thanks to its LED backlighting. I found that the glossiness is less of an issue than some folks might think.


DisplayPort Dilemma
Apple's switch from DVI to DisplayPort--and a mini version, at that--means that users will have to attach a dongle to connect their MacBook to most current displays. Although the adapter situation isn't new to the MacBook (the older models include a mini DVI port), current Mac laptop owners will have to buy yet another adapter. I do wonder if Apple's adoption of DisplayPort will help the standard catch on with other manufacturers, though.


Tricky Trackpad
While a buttonless trackpad is a novel idea and it functions reasonably well, it doesn't work as smoothly as it could. The lower portion of the trackpad is nice and clicky when pressed, but the farther up the trackpad you go, the harder it is to click. If you use the MacBook's trackpad just as you would use any other--clicking the button at the bottom of the trackpad with your thumb--you probably won't notice much difference. But if you press the trackpad with your pointing finger, you'll run into areas where it's impossible to push.
My initial reaction to the new MacBooks? Apple put together one solid machine. In a world of $800 laptops, the aluminum-clad MacBook may seem a little expensive, but it's a winner. Stay tuned for our full review and lab testing of the MacBook and MacBook Pro.


An unknown number of Gmail and Google Apps users were unable to access their accounts for more than 24 hours earlier this week. According to some, the outage was negatively impacting their business.
Starting at some point Wednesday afternoon, users noticed that they were unable to log into their Google accounts. Rather than accepting their sign-in credentials, users were met with a "502 error" message from Google. The company acknowledged that there was a problem and said that it would issue a fix by late Thursday evening.
That did little to quell perturbed customers. One forum member said that the e-mail outage had been rough on his company and it was impacting business.
According to MacWorld, one customer wrote on the Apps Forum, "Support keeps telling me it is affecting a small number of users. This is not a temporary problem if it lasts this long. It is frustrating to not be able to expedite these issues. I have to speak with the boss again and he's po'd. This is considered a mission-critical issue here. We may have to make other arrangements. Apparently Google mail is not very reliable. I think I would have pushed for something else before we switched if I had known the level of unreliability."
I have to disagree with the writer's sentiment about Gmail not being reliable. In the years that I have been using Gmail, I've been locked out of it for perhaps 6 hours at most. Maybe I'm lucky and my account happens to be stored on a set of servers that don't have any problems.
In the decade that I relied on Microsoft Exchange to deliver my corporate e-mail, I experienced what probably amounts to months of outages. I distinctly remember not being able to get into my e-mail for one full week once due to problems with an Exchange e-mail server. In my experience, Gmail has been far more reliable.

That didn't stop a Microsoft spokesperson from reaching out to me to make sure I was aware of the current Google Apps problems. The spokesperson said to me in an e-mail, "The Gmail outage was reported (and buried) on a discussion board yesterday and a solution is expected

Friday, October 17, 2008

Internet searches leave the brain more stimulated

Comparing book reading with Internet searching, researchers have found that searching the Internet may stimulate and help improve brain function more than reading a book.

Researchers at the University of California Los Angeles have found that searching the web triggers centers in the brain that control decision-making and complex reasoning in middle-aged and older adults. But while web browsing may spark more neurons than mere book-reading, the study says, this only occurs in those with previous Internet experience.

(Left) This is your brain on books. (Right) This is your brain on Internet.
Image courtesy UCLA.

The study, which the researchers claim is the first of its kind, is currently in press at the American Journal of Geriatric Psychiatry and will appear in an upcoming issue.

As the brain ages, it's prone to atrophy and reduced cell activity. Doing a bit of "hard thinking" is thought to help keep the mind in shape. Brain-improving mental exercises typically take the form of doing crossword puzzles, learning to play the piano, or taking up a new hobby where new skills are required. Being a deft hand at searching the Internet also appears to put the mind through its paces.

The UCLA team worked with 24 "neurologically normal" volunteers between the ages of 55 and 76. Half had experience using the web, the other half did not.

Participants performed Internet searches and book-reading tasks while a functional magnetic resonance imaging (fMRI) machine scanned their noggins.

All showed significant brain activity while reading, using regions of the brain that control language, reading, memory, and visual abilities.

The effect of Internet searches on the brain was not so universal, however. Each participant showed about the same brain activity they demonstrated while reading books, but the web-savvy also registered activity in the frontal, temporal, and cingulate areas of the brain, which control decision-making and complex reasoning.

"Our most striking finding was that Internet searches appear to engage a greater extent of neural circuitry that is not activated during reading – but only those with prior Internet experience," said principal investigator Gary Small, a professor at the Semel Institute for Neuroscience and Human Behavior at UCLA.

The difference between Internet veterans and technopoops was more than two-fold. A voxel is the smallest unit detectable by an fMRI machine (like a 3-D pixel, typically representing a cube of brain tissue about 2-4 millimeters on each side). The scientists found web-savvy participants sparked 21,782 voxels, compared to 8,646 voxels in those with less Internet experience.

Small suggests the inexperienced may register less brain activity because they haven't yet figured out the right strategies required to perform a successful search. In other words, their Google-fu is weak.

"With more time on the Internet, they may demonstrate the same brain activation patterns as the more experienced group," Small said.

Thursday, October 16, 2008

Google released a developer-oriented update to its Chrome Web browser


Google released a developer-oriented update to its Chrome Web browser on Wednesday that fixes some crashes and video playback issues.
Chrome is still in beta testing, and for those who have an even higher tolerance for rough-around-the-edges software, Google also offers developer versions. Chrome 0.3.154.3 is the latter; see our earlier post on how to subscribe to the Chrome Dev channel.
"Release 154.0 (the most recent publicly released Chrome developer build) had a few browser crashes, including a crash on startup on tablet PCs running Windows Vista. We fixed the new crashes, and 154.3 should be much more stable," Mark Larson, Google Chrome program manager, said in a mailing list posting Wednesday evening.
The browser wars are back in force, albeit in a more standards-compliant and collegial way, and a major thrust of the resurgent competition is higher performance for faster, more sophisticated Web applications. The first beta version of Firefox 3.1, released Tuesday, brings significant improvements to JavaScript, the programming language that underlies many such applications. Microsoft is on the verge of releasing Internet Explorer 8 (though it still hasn't convinced innumerable people to upgrade even to the current version 7), and the Webkit project that forms the foundation of Apple's Safari browser is being fitted with a new JavaScript engine called Squirrelfish Extreme.
Other fixes addressed problems with plug-ins, such as a bug that could hang video playback after a second and a plug-in priority issue that caused the browser to become unresponsive. Chrome can use the Mozilla Firefox versions of plug-ins such as Adobe Systems' Flash.
In the security department, Chrome requires more manual intervention before users can save executable files with .exe, .bat, and .dll extensions.
Chrome is open-source software, and Google credited two outside programmers for their contributions.

FileMaker Bento 2 offers a spreadsheet feel, more app integration
Less than a year after the beta of its first personal database for Mac, Apple's FileMaker has released Bento 2, an edition that adds features in two main areas: more integration with outside applications, and the addition of sophisticated spreadsheet-like functionality.
As previously reported in BetaNews, Bento is geared to helping consumers and business people manage and organize information that runs the gamut from contacts and calendars to projects and events, all without any database programming.

Web users who are expecting a major shift in philosophy in the first round of Firefox betas may want to wait for the developers to have their say. For now, there are a few helpful features, but one really useful one remains on the way.
Typically the purpose of a public beta is to enable general folks to comment about new features. But in the case of Mozilla, which seeks contributions from the general public on ideas of how features should look, the first public beta of Firefox 3.1 is actually being presented as an empty vessel for ideas to be fleshed out by its users.
For that reason, one of the new version's more prominent features is actually absent from Beta 1: the private browsing window. Yes, it will be in the final version; but Mozilla is continuing its active solicitation for comments and ideas about how to go about presenting the feature, now that Microsoft Internet Explorer 8 Beta 2 and the Google Chrome beta both already implement it in their own way.
Private browsing sets up a way for users to temporarily visit Web sites without their history and temporary files being recorded. Ostensibly, manufacturers have said, this enables dutiful husbands to buy gifts for their wives without leaving tracks for them to uncover before they arrive; but perhaps in want of greater placement in Google News, many have taken to calling the feature "porn mode."
So the question on Mozilla's mind is, should Firefox 3.1 be explicit -- to borrow a phrase -- in how it tells the user his browsing window is a private one (Chrome, for instance, puts a little cloaked spy icon in the upper left corner)? Or if it's a private browsing mode, perhaps that fact should be kept private as well.
"The notification should be subtle," reads a suggestion made yesterday by Mozilla contributor Michael Ventnor. "A big change like the location bar color would fail the over-the-shoulder test (although based on common uses of Private Browsing, the signal that they're in PB mode is the least of their worries if someone is looking over their shoulder, I suppose). We must still meet the goal of a non-distracting chrome [front end] and one that is simple to implement and fast to render."
Ventnor goes on to suggest changing the hue of the throbber icon in the browser's upper right corner.
While discussion on that topic continues, one new feature that does appear to work for the first public beta of v3.1 is the tab preview. Up to now, the Firefox user could flip back and forth through open tabs in a window using the Ctrl+Tab keystroke (Ctrl+Shift+Tab to go backwards). But with the tabs being easily accessible via mouse pointer, I've never seen a lot of folks use this key sequence, and I haven't been one to use it myself.

Wednesday, October 15, 2008

Wi-Fi driving down the cost of mobile calls


The cost of talking on the go is coming down, thanks to an increasing number of options for using Internet calling services on cellphones as an alternative to traditional cellular service plans.

Nokia is one of the biggest makers of cellphones that include chips for using Wi-Fi, the short-range wireless technology. Some high-profile devices are equipped with the technology, including Apple's iPhone and some BlackBerry models from Research In Motion. The soon-to-be-released G1 Google phone from HTC and T-Mobile also sports a Wi-Fi chip.

For Mark Laris, a Dallas-based nuclear engineer who travels the world running his consulting business, the technology saves him thousands of dollars a year on international phone bills.

Wi-Fi chips and Voice over Internet Protocol, or VoIP, let him do most of his business and personal calls over cut-rate phone services that work over the Web. His only cellphone bill is a 1,400-minute-per-month family plan from AT&T that he shares with a business partner.

"I always make VoIP calls," he said, adding that the call quality was as good as with a traditional mobile phone service.
He has access to the VoIP services by using a Nokia phone that has a Wi-Fi chip similar to the ones that allow laptops to connect to the Web in smaller venues like coffee shops.

The new phones are capable of operating exclusively with Wi-Fi - they do not need to use a cellphone network at all - and when the user is not in a Wi-Fi "hot spot," calls are routed to the Wi-Fi carrier's voice mail service.

Still, mobile VoIP is a fledgling field.

In the United States, T-Mobile sells Wi-Fi phones and Internet calling plans that cost $10 a month, on top of regular fees. It is the only U.S. carrier with such a package. The market is also filled with small, privately held companies hoping to make a name for themselves. They include DeFi Mobile, Fring, Gizmo5, Sipgate and Truphone.

One advantage that these new companies have in competing with established VoIP services like Skype and Vonage is that old-style Internet calling required users to be sitting in front of a computer or hooked up to a laptop to make calls. Mobile phones with Wi-Fi chips free them from their PCs.

Ivan Domaniewicz, a commercial airline pilot with homes in Miami and Barcelona, recently switched to DeFi Mobile from Skype. His $40-a-month DeFi plan gives him unlimited Internet calls, voice mail and phone numbers in Argentina and Spain that are automatically transferred to his Nokia phone.

"It's really helped me keep in touch with my family and friends in Argentina and Spain," said Domaniewicz, who shuttles between the United States, Japan, Europe and the South Pacific.

"What's nice is that I don't have to take my computer out and start Skype-ing to talk to them. I just turn on my phone," he said.

Jeb Brilliant, an event planner from Long Beach, California, reduced his monthly AT&T plan to 700 minutes from a more expensive unlimited access plan after he became comfortable using mobile VoIP.

He uses Truphone, which charges 6 cents a minute to call landlines in most countries and 30 cents a minute to call mobile numbers. It also sells bundles of minutes at a discount to its à la carte rates.
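
To put those per-minute rates in perspective, here is a quick back-of-the-envelope comparison. The Truphone rates come from the article; the cellular roaming rate is a purely hypothetical figure chosen for illustration:

```python
# Rough cost of 100 minutes of international calls. Truphone rates are
# from the article; the $1.29/min roaming rate is an assumed example.

minutes = 100
truphone_landline = 0.06 * minutes   # 6 cents/min  -> $6.00
truphone_mobile = 0.30 * minutes     # 30 cents/min -> $30.00
assumed_roaming = 1.29 * minutes     # hypothetical roaming -> $129.00

print(f"Truphone to landlines:    ${truphone_landline:.2f}")
print(f"Truphone to mobiles:      ${truphone_mobile:.2f}")
print(f"Assumed cellular roaming: ${assumed_roaming:.2f}")
```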

Brilliant has tried other mobile VoIP services and said that the technology could sometimes prove more reliable than cellphone service. When a family friend recently went into labor, he found himself making phone calls via a Wi-Fi network at the hospital.

"You can get it in places where there is no cellphone reception," he said.

Viva la WiFi calls revolution!

Now that WiFi enabled phones are starting to reach the market, it can easily be foreseen that WiFi calls will become a reality for mainstream users in the next 3-30 years. Why this wide range? Well, 3 years if it depended purely on technology advancement, 30 years if it's up to the operators...

Yes, though they try to be cool about it, operators are very much afraid of WiFi calls, and it's very easy to understand why: when you have a WiFi enabled phone, whenever you are near a WiFi hotspot you have internet access, which means you can technically make voice calls over the internet the same way you make them with Skype on your PC.

If your phone is Windows Mobile based, you can actually run Skype Mobile, simple as that. If it's not, third parties such as Fring and Truphone offer their own solutions. Without getting into details about the solutions and their differences, they enable VoIP calls over WiFi and also over 3G. (BTW - VoIP over 3G is cost effective only if you have a flat or cheap data plan, if you're not roaming, and of course provided that your operator doesn't block it...) In addition, phones are starting to come with built-in VoIP software.
Now, WiFi is not everywhere, and it is not always free, but even at the airport the outrageous $10/hour rate can suddenly make sense as an alternative to mobile calls while roaming.

Over the past several years, operators have fought to give their subscribers a walled-garden Internet instead of an open environment, in order to route all mobile content through their own channels and take their usual 50% cut. This has gone on despite legislation and despite protests from strong content providers such as Google.

How much of their revenue do operators actually make from data services? Between 7% and 20% (including SMS, browsing and mobile content). Voice accounts for more than 80%, and for some operators 90%, of revenue. Think how far they would be willing to go to protect that. And this is exactly why Vodafone and Orange asked Nokia to disable VoIP on the N95 in such a way that it is not even possible to use Truphone; you can see Truphone's video demo comparing an unlocked phone with a locked one. (BTW - I perceive the N95 as a breakthrough, since most other WiFi-enabled phones are far from mainstream, whether the Blackberry-like E61 or the bulky Windows smartphones.)

Another example: the all-powerful iPhone was released with no support for WiFi calls. Apple did a lot to equip this gadget with all the software needed to deliver the full phone/media/Internet experience, so why not include the one killer app that could have exploited two of the iPhone's strengths?

But unlike the battle over mobile content, which is still being waged (with the operators having the upper hand), this is one battle they can't win, and in fact it will also cost them the battle over mobile content. The reason is simple: until now, the operators were our ISPs as well. Every data packet came through their gateways, so they could always block whatever they wanted: by IP address, by file type (ringtones, games), by protocol (SIP), etc.
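
To make that gateway argument concrete, here is a toy sketch of blocking by protocol and by file type. Port 5060 is the conventional SIP signaling port; everything else (the function name, the blocked extensions) is a hypothetical illustration, not any operator's actual configuration:

    # Toy model of operator-gateway filtering, as described above.
    BLOCKED_PORTS = {5060}                 # block VoIP by protocol (SIP)
    BLOCKED_EXTENSIONS = (".sis", ".jar")  # block content by file type

    def gateway_allows(dst_port, url_path=""):
        """Return True if the operator's gateway would pass this traffic."""
        if dst_port in BLOCKED_PORTS:
            return False
        if url_path.endswith(BLOCKED_EXTENSIONS):
            return False
        return True

    print(gateway_allows(5060))             # False: SIP signaling blocked
    print(gateway_allows(80, "/game.jar"))  # False: mobile content blocked
    print(gateway_allows(443))              # True: ordinary web traffic

On a WiFi connection there is simply no operator gateway in the path to run such checks.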

WiFi phones provide the bypass everyone has been waiting for: you can access the Internet directly whenever you are near a WiFi hotspot, and the operator can't do anything about it. What it can do is sell its subscribers blocked phones, but soon everyone will understand that it's better to buy unlocked phones from retail stores.

Knowing that, some operators are embracing the "if you can't beat them, join them" approach. One of those is T-Mobile, which is even promoting a WiFi phone now, though it still capitalizes on this "generous" offer by taking $10/month from subscribers. Another operator that promises to open up is Hutchison/3.

In any case, operators are going to have to be creative and innovative to turn this situation from a potential disaster into a stage in their evolution. I am sure that in this case openness will be rewarded with customer loyalty, while tricks like locking phones, which is in any case a very temporary "solution", can only antagonize customers.

We should also remember that while WiFi is spreading to many places, even to the extent of city-wide hotspots, it is still far from the worldwide coverage that operator networks provide (for now...). That is another reason for operators to act wisely and not block WiFi, so their users stay loyal and use their network when out of WiFi range.

P.S. - For more on the N95 blocking you can read this. Also, as a side note, even the PSP will support VoIP soon.








Netbooks: Hot Sellers in the Lead


Technology is now a part of our lives, and we want to use every technology within our reach.
Ever heard of a "netbook"? They're small, light machines that sell for as little as $250 to just north of $600 - and they're selling like hot cakes.

Today, if you look at the 15 best-selling laptops at Amazon, 13 are netbooks. You might think of these computers as a cross between a Blackberry and a full-blown laptop. Netbooks are great for e-mail, Web surfing and accessing Web-based applications, but they're not what you want to work on all day creating presentations or editing a stack of digital photos.

Taiwan-based computer maker Asus launched the netbook revolution about 18 months ago with the Eee PC. With a handful of the plucky little machines in its lineup, Asus projects it will sell 5 million netbooks in 2008.
With those kinds of numbers, the competition is paying attention. Everyone from HP (HPQ, Fortune 500) to Dell (DELL, Fortune 500), plus all the Asian-based manufacturers, now offers a version of the netbook. This year Intel (INTC, Fortune 500) launched its Atom chip, a small, low-power, low-price processor tailored for these kinds of machines. Clearly the netbook is here to stay.

The reasons for the popularity of the Eee PC and its brethren are its low price and dead-simple use. When the Eee PC was conceived, making the machine uncomplicated (the three "e's" in Eee PC stand for easy to work, easy to learn, easy to play) was the key driver in its design.

"Our target market was kids and moms," says Asus North America President Jackie Hsu. "People who didn't want or need a full-blown laptop." These are also people, as it turns out, who are increasingly doing computing tasks using Web-based services (the Cloud in today's parlance) like online games, Facebook or Google Calendar that don't require hefty computing on a local machine.

As is usually the case with new technology, the early-adopter crowd was the first to snatch up Eee PCs when they launched in the United States about a year ago. Part of that was due to its Linux operating system (there are Windows versions now), but as with all technology, it was its sheer newness that got people excited. But that market quickly moves on to the next new thing. What has happened since is exactly what Asus had hoped: Kids, moms and people needing a second laptop are buying the machines. At a recent layover in Newark airport, I spied a handful of soccer moms and their kids taking advantage of free WiFi and all tapping away on Eee PCs.

The next step for Asus is to solidify the Eee PC's place in this non-techie marketplace. To that end, Eee PCs are selling in Toys-R-Us, Target (TGT, Fortune 500) stores, and holiday catalogues from Saks, all retail channels that you don't associate with computers. "The whole point," says Asus' Hsu, "is to help them expand their brand to their customers. With the Eee PC they can do that."


Eee PC™ to Feature 3.75G for Internet Access Anywhere

Coupled with All-day Battery Life, 3.75G Capability Puts Eee PC’s™ Status as the Ultimate Travel Companion Beyond Question

Taipei, Taiwan, September 24, 2008 – ASUS today announced that it will be adding 3.75G connectivity* to its hugely-popular series of Eee PC™ netbooks, enabling convenient and high-speed access to the Internet anytime, anywhere. The inclusion of 3.75G is a perfect addition to the Eee PC’s™ existing set of travel-friendly features such as its high portability, shockproof data storage and all-day battery life—strengthening its reputation as the ultimate solution for computing on the go.

With 3.75G, the Eee PC™ will be able to deliver on its promise of borderless one-day computing better than ever before. No longer bound to Internet hotspots, 3.75G-equipped Eee PC™ users will be able to enjoy low latency mobile broadband Internet access at high downlink and uplink speeds of up to 7.2 Mbps and 2 Mbps** respectively, regardless of where they are—ensuring a seamless connected experience on the go. The Eee PC’s™ 7.5-hour battery life*** provides more than ample power to keep it up and running during extended outdoor excursions.
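
For a rough sense of what those speeds mean (a sketch assuming ideal throughput with no protocol overhead, and an illustrative file size):

    # Back-of-envelope transfer times at the quoted HSUPA rates.
    downlink_mbps, uplink_mbps = 7.2, 2.0
    file_megabits = 10 * 8                # a 10-megabyte file
    print(file_megabits / downlink_mbps)  # ~11.1 s to download
    print(file_megabits / uplink_mbps)    # 40.0 s to upload

Real-world figures will be lower, as the footnote on operator-dependent speeds suggests.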

Frequent travelers will particularly welcome the timely addition of 3.75G support, which comes as service providers around the globe are ramping up their adoption of 3.75G High-Speed Uplink Packet Access (HSUPA). This means that they will be assured of a reliable, high-speed mode of Internet access in many destinations around the world.

3.75G will make its first appearance in Eee PC™ 901 netbooks released to market from October 2008 onward.

Specifications

Model: Eee PC™ 901 with 3.75G
Operating System: Genuine Windows® XP Home
Display: 8.9"
CPU & Chipset: Intel Atom
Wireless Data Network: WLAN 802.11b/g/n****; Bluetooth: Yes****
Memory: 1 GB (DDR2)
Storage: 16 GB Solid State Drive (SSD), plus 20 GB free online Eee Storage
Memory Card Reader: MMC/SD (SDHC)
Camera: 1.3 Megapixel
Audio: HD Audio, stereo speakers, digital array mic
Battery: 6 cells, 7.5 hrs***
Dimensions: 225 mm (W) x 175.5 mm (D) x 22.7-39 mm (H)
Weight: 1.1 kg
Casing Colors: Black or White

* 3.75G support is operator dependent.
** Actual speeds are operator dependent.
*** Actual battery life is subject to usage, configuration, as well as model.
**** The inclusion of 802.11n and Bluetooth capabilities is operator dependent.

Hydrogen-Powered Vehicles Step Forward with New Material


Driving your car on hydrogen is no longer just a dream; research is moving forward and bringing the possibility closer.
Researchers in Greece report design of a new material that almost meets the U.S. Department of Energy (DOE) 2010 goals for hydrogen storage and could help eliminate a key roadblock to practical hydrogen-powered vehicles.

Their study on a way of safely storing hydrogen, an explosive gas, is scheduled for the Oct. 8 issue of ACS' Nano Letters, a monthly journal.

Georgios K. Dimitrakakis, Emmanuel Tylianakis, and George E. Froudakis note that researchers long have sought ways of using carbon nanotubes (CNTs) to store hydrogen in fuel cell vehicles. CNTs are minute cylinders of carbon about 50,000 times thinner than the width of a human hair. Scientists hope to use CNTs as miniature storage tanks for hydrogen in the coming generation of fuel cell vehicles.

In the new study, the researchers used computer modeling to design a unique hydrogen-storage structure consisting of parallel graphene sheets — layers of carbon just one atom thick — stabilized by vertical columns of CNTs. They also added lithium ions to the material's design to enhance its storage capacity.

The scientists' calculations showed that their so-called "pillared graphene" could theoretically store up to 41 grams of hydrogen per liter, almost matching the DOE's target (45 grams of hydrogen per liter) for transportation applications.
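
A quick check of how close that comes, using only the figures quoted above:

    # Fraction of the DOE 2010 volumetric target reached by the design.
    capacity_g_per_l = 41.0
    doe_target_g_per_l = 45.0
    print(f"{capacity_g_per_l / doe_target_g_per_l:.0%}")  # 91%

So the proposed material reaches roughly 91 percent of the DOE target, at least on paper.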

"Experimentalists are challenged to fabricate this material and validate its storage capacity," the researchers note

Silicon Nanotubes For Hydrogen Storage In Fuel Cell Vehicles
After powering the micro-electronics revolution, silicon could carve out an important new role in speeding the debut of ultra-clean fuel cell vehicles powered by hydrogen, researchers in China suggest. Their calculations show for the first time that silicon nanotubes can store hydrogen more efficiently than their carbon nanotube counterparts.

Dapeng Cao and colleagues note that researchers have focused on the potential use of carbon nanotubes for storing hydrogen in fuel cell vehicles for years. Despite nanotubes' great promise, they have been unable to meet the hydrogen storage goals proposed by the U.S. Department of Energy for hydrogen fuel cell vehicles. A more efficient material for hydrogen storage is needed, scientists say.

In the study, Cao's group used powerful molecular modeling tools to compare the hydrogen storage capacities of newly developed silicon nanotubes to carbon nanotubes. They found that, in theory, silicon nanotubes can absorb hydrogen molecules more efficiently than carbon nanotubes under normal fuel cell operating conditions. The calculations pave the way for tests to determine whether silicon nanotubes can meet government standards for hydrogen storage, the scientists note.

The article "Silicon Nanotube as a Promising Candidate for Hydrogen Storage: From the First Principle Calculations to Grand Canonical Monte Carlo Simulations" will appear in the April 24 issue of ACS' Journal of Physical Chemistry C.

Tuesday, October 14, 2008

Significant Progress in Creating 3D Stacked Integrated Chips

Test chip taped out for assessing design rules and models for 3D-SIC technology.

IMEC, Europe's leading independent nanoelectronics research institute, has announced that it has made significant progress with its 3D-SIC (3D stacked IC) technology. IMEC recently demonstrated the first functional 3D integrated circuits obtained by die-to-die stacking using 5µm Cu through-silicon vias (TSV).

It will now further develop 3D SIC chips on 200mm and 300mm wafers, integrating test circuits from partners participating in its 3D integration research program.
IMEC reported a first-time demonstration of 3D integrated circuits obtained by die-to-die stacking and using 5µm Cu through-silicon vias (TSV). The dies were realized on 200mm wafers in IMEC’s reference 0.13µm CMOS process with an added Cu-TSVs process. For stacking, the top die was thinned down to 25µm and bonded to the landing die by Cu-Cu thermocompression. IMEC is upscaling the process for die-to-wafer bonding and is on track for migrating the process to its 300mm platform.
To evaluate the impact of the 3D SIC flow on the characteristics of the stacked layers, both the top and landing wafers contained CMOS circuits. Extensive tests confirmed that the performance of the circuits does not degrade with the addition of Cu TSVs and stacking. And to test the integrity and performance of the 3D stack, ring oscillators with varying configurations were made, distributed over the two chip layers and connected with the Cu TSVs. Tested after the TSV and stacking process, these circuits demonstrated the chips' excellent integrity.
“With these tests, we have demonstrated that our technology allows designing and fabricating fully functional 3D SIC chips. We are now ready to accept reference test circuits from our industry partners,” commented Eric Beyne, IMEC Scientific Director for 3D Technologies. “This will enable the industry to gain early insight and experience with 3D SIC design, using their own designs.”

Space tourist docks with ISS: Russian spacecraft docks with orbital station


U.S. space tourist Richard Garriott, a crew member of the 18th mission to the International Space Station (ISS), gestures prior to the launch of the Soyuz-FG rocket at the Russian-leased Baikonur Cosmodrome, Kazakhstan, Sunday, Oct. 12, 2008. Garriott, a computer game designer, reached space Sunday aboard a Russian rocket, fulfilling a long-deferred childhood dream as his astronaut father watched with pride. (AP Photo/Dmitry Lovetsky)

A Russian Soyuz craft carrying an American computer game designer and two crewmates docked with the international space station Tuesday.

The TMA-13 capsule automatically latched onto the station a few minutes ahead of schedule, two days after blasting off from the Baikonur cosmodrome in Kazakhstan.

Aboard the capsule was space traveler Richard Garriott, who paid a reported US$30 million for a 10-day stay on the station.

Garriott's father, Owen, applauded as he watched the docking from Russian Mission Control outside Moscow.

"I'm pleased everything is going smoothly. It's looking great and they are starting off on a fascinating new adventure," he told The Associated Press.

"There was not a lot of nervousness today or during the launch. We were confident it would go well," he said.

Also aboard were U.S. astronaut Michael Fincke and Russian cosmonaut Yuri Lonchakov. They will replace the station's current crew and spend several months in orbit.

The trio will enter the station when the hatches are opened in several hours.

Garriott will return to Earth on Oct. 23 with Russian cosmonaut Sergei Volkov, who has been at the station since April.

Volkov was the first man to follow his father, a decorated cosmonaut from the Soviet era, into space.


Computer games designer Garriott, 47, paid Space Adventures around $30m for the privilege of joining American Mike Fincke and Russian Yuri Lonchakov on Expedition 18 to the outpost. His father, Owen Garriott, spent 60 days aboard Skylab back in 1973, and he'll apparently spend some of his stay snapping the Earth's surface to see how it's changed since dad's time in orbit.

Garriott will return to Earth in a Soyuz TMA-12 on 23 October with Expedition 17 crew members Commander Sergei Volkov and Flight Engineer Oleg Kononenko, who've been aloft since 8 April. The third ISS crew member to welcome Expedition 18 is Gregory E. Chamitoff, who arrived aboard Discovery's STS-124 mission which launched on 31 May.

Fincke and Lonchakov are both ISS vets, with the former on his second gig, and the latter on his third tour. They'll be on board for six months, during which the crew will prep the station's life-support equipment for a permanent complement of six crew members from next year.

Chamitoff will be relieved in November by astronaut Sandra H. Magnus, scheduled to fly to the station on Endeavour's STS-126. The mission is due to deliver extra equipment for the ISS's crew expansion.

After that, NASA has eight further shuttle flights to the ISS on its launch roster, before the fleet's final 2010 retirement. In 2009, Discovery (STS-119, delivering final solar arrays to the ISS) is slated to lift off on 12 February. Endeavour will on 15 May carry the final components of Japan's Kibo lab (Exposed Facility and Experiment Logistics Module Exposed Section) on mission STS-127, while Atlantis (STS-128) is slated to launch on 30 July bearing science and storage racks for the station.

Discovery will be back in the air on 15 October, when its mission STS-129 will "focus on staging spare components outside the station". The 2009 schedule wraps on 10 December with Endeavour on STS-130 whisking spacewards the "Cupola" - a "robotic control station with six windows around its sides and another in the center that provides a 360-degree view around the station".

The final three ISS jaunts are scheduled for 11 February 2010 (Atlantis, STS-131), 8 April (Discovery, STS-132) and 31 May (Endeavour, STS-133). They will deliver to the ISS a Multi-Purpose Logistics Module, maintenance and assembly hardware, and "critical spare components", respectively.

The other planned shuttle launch is that of Atlantis' STS-125 mission to service Hubble, now knocked back to 2009 and "under review". ®

Lucky 7 of Windows OS: "Windows 7"

Microsoft's Windows is the most widely used operating system among computer users worldwide, and by the company's own count this is the seventh version to be presented to the world market.

Microsoft sticks with 'Windows 7' for next OS
Redmond says the code name 'just makes sense' because it's the seventh Windows, but others differ on that version number.




Microsoft Corp. announced yesterday that the code name for its next operating system, Windows 7, will be the product's official name.
Mike Nash, vice president of Windows product management, said the company was sticking with the label for simplicity's sake. "Simply put, this is the seventh release of Windows, so therefore 'Windows 7' just makes sense," Nash wrote in Microsoft's Vista blog on Monday.
After noting that Microsoft has at times stuck a date on the OS -- Windows 2000 was the last -- Nash said that didn't make sense this time. "We do not ship new versions of Windows every year," Nash said. "Likewise, coming up with an all-new 'aspirational' name [like Windows XP] does not do justice to what we are trying to achieve, which is to stay firmly rooted in our aspirations for Windows Vista, while evolving and refining the substantial investments in platform technology in Vista into the next generation of Windows."
Some Windows watchers, however, questioned Nash's claim that Windows 7 would be the seventh iteration of the OS. The AeroXperience blog counted seven releases as of Windows Vista, and eight if the consumer-oriented Windows Millennium was included. The count works out, the blog argued, only if kernel revisions are tallied rather than releases, XP is left out, and the Windows kernel is incremented to 7.0 for Windows 7.
According to the Windows timeline on Wikipedia, XP's kernel is tagged as 5.1, and Vista's as 6.0.
Microsoft's own version of its client operating system timeline ends with Windows XP, but assumes nine editions as of Vista: Windows 3.0, NT, Windows 95, NT Workstation, Windows 98, Millennium, Windows 2000, XP and Vista. By that timeline, Microsoft doesn't regard Windows 1.0, which it released in 1985, or Windows 2.0, launched in 1987, as "true" Windows.
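
One way to see how the competing counts diverge is to tally named releases against major kernel versions. The sketch below is illustrative, not an official timeline; the kernel numbers are the commonly cited ones (XP as 5.1 and Vista as 6.0, as noted above):

    # Toy tally of the two counting schemes discussed above.
    releases = [
        ("Windows 1.0", 1.0), ("Windows 2.0", 2.0), ("Windows 3.0", 3.0),
        ("Windows 95", 4.0), ("Windows 98", 4.1), ("Windows Me", 4.9),
        ("Windows 2000", 5.0), ("Windows XP", 5.1), ("Windows Vista", 6.0),
    ]
    print(len(releases))                       # 9 named releases through Vista
    print(len({int(v) for _, v in releases}))  # 6 major versions so far

Counting releases gives nine through Vista; counting major kernel versions gives six, which would make the next major version the seventh.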
More than two weeks ago, Microsoft had said it would issue an alpha version of Windows 7 to attendees of its Professional Developers Conference (PDC) and Windows Hardware Engineering Conference (WinHEC), which open Oct. 27 and Nov. 5, respectively. Today, Nash called that preview a "pre-beta developer-only release."
It's unusual for Microsoft to use an operating system code name as the official product moniker, and Nash acknowledged that fact. "I am pretty sure that this is a first for Windows," he said.
Operating system code names at Microsoft have ranged from "Chicago," the under-development name for what became Windows 95, and "Memphis" (Windows 98), to "Whistler" (Windows XP) and "Longhorn" (Windows Vista).
Microsoft has not pinned a ship date to Windows 7, but it has said it was shooting for three years after the release of Vista, which would mean it would be released late in 2009 or early in 2010.
Perhaps not coincidentally, Windows blogger Ed Bott wondered just last week whether Microsoft would keep the "7" tag for its next OS. Nearly half his readers who responded to an online poll gave the nod to "None of the above," but 20% voted for Windows 2010, 14% for Windows 2009 and 7% for Windows Vista R2.
Windows 7 received 15% of the votes in the poll.

Monday, October 13, 2008

Human Growth Hormone

If you want to know the secret of youth, you first have to know about human growth hormone.
Growth hormone (GH) is a peptide hormone that stimulates growth and cell reproduction in humans and other animals. It is a 191-amino-acid, single-chain polypeptide hormone that is synthesized, stored, and secreted by the somatotroph cells within the lateral wings of the anterior pituitary gland. Somatotrophin refers to the growth hormone produced natively in animals, while somatropin refers to growth hormone produced by recombinant DNA technology,[1] abbreviated "rhGH" in humans.
HGH is an abbreviation for Human Growth Hormone that is produced by the pituitary gland in the brain. This hormone stimulates growth and cell production in human beings.
The main property of HGH is to increase height, and the other important benefits are that it increases muscle mass, helps in calcium retention in our body, helps in keeping bones healthy, reduces fat in the body, helps in controlling sugar and insulin levels, helps with immunity and several more important functions that keep us healthy when we are young.
Now do you understand why we start having problems as we age?
The secretion of this hormone starts to decline by the time we are in our 30s, and as we grow older, the pituitary gland produces less and less of this most important hormone.
The level of secretion is highest in our early childhood. It peaks during puberty when there is a growth spurt.
The levels of secretion continue to decline throughout our adult life. This decrease in the levels of HGH is what may cause us to look old, have problems like diabetes, depression, loss of energy, loss of muscle mass and every other problem associated with aging.
I know what you're thinking: that increasing the HGH levels in the body will make you look young and feel healthy again.
In theory, HGH is the miracle that brings back your youth. Is it the fountain of youth that we humans have been waiting for, for thousands of years?
There are several options available to introduce HGH into your body: HGH injections, oral sprays, and natural herbal HGH releasers, all of which claim to increase the HGH levels in your body.

Friday, October 10, 2008

Nobel in Chemistry awarded for discovery and development of green fluorescent protein (GFP)

Roger Y. Tsien of UC San Diego, Martin Chalfie of Columbia University and researcher Osamu Shimomura developed a fluorescent protein from jellyfish that allows researchers to trace cell molecules.

A UC San Diego pharmacologist and two other U.S.-based scientists won the 2008 Nobel Prize in Chemistry on Wednesday for their development of a green fluorescent protein from jellyfish that has provided researchers their first new window into the workings of the cell since the development of the microscope.

Roger Y. Tsien, 56, of UC San Diego; Martin Chalfie, 61, of Columbia University; and Osamu Shimomura, 80, a Japanese-born researcher who works at the Marine Biological Laboratory in Woods Hole, Mass., will share the $1.4-million prize for developing the protein that the Nobel committee called "a guiding star for biochemists, biologists, medical scientists and other researchers."

Nobel 2008




The Path from Jellyfish to Medical Advances

The discovery and development of green fluorescent protein (GFP) recognized by this year’s Nobel Prize in chemistry exemplifies the interactions between different fields of science and different sources of funding in bringing about research advances of great importance. GFP, which glows green in response to blue light, is part of the fabric of modern cell biology. Linking the gene encoding GFP to essentially any gene of interest makes the target visible within cells and tissues.

GFP was first purified from jellyfish by biochemist Osamu Shimomura in 1962 in work supported by the National Science Foundation (NSF). The gene encoding GFP was subsequently isolated in mid-1985 by biochemist Douglas Prasher with support from the American Cancer Society. He shared this gene with both Martin Chalfie and Roger Tsien upon request. Martin Chalfie, a neurobiologist supported by the National Institutes of Health (NIH), was interested in mapping cells in the nervous system of the model organism C. elegans and realized that GFP could be a powerful tool. He discovered that when the GFP gene was expressed in a variety of organisms, including bacteria and C. elegans, functional GFP was produced without the need for any additional components present in jellyfish. This property is key to the broad applicability of GFP in cell biology.

Roger Tsien, a chemist, was also interested in fluorescent probes for cell biological studies. Having previously been supported by NIH and NSF, Tsien conducted his initial studies of GFP with support from the Howard Hughes Medical Institute (HHMI) and NSF. He further characterized the basis for the green fluorescence and applied protein-engineering methods to produce a vast collection of variant fluorescent proteins with different colors and other properties that have greatly expanded the power of these proteins for detailed cell biological and other studies. His laboratory continues to develop this technology to the present day, and thousands of laboratories around the world now rely on GFP and its cousins as essential tools for their research.

Future disease treatments under development would not have been possible without this wonderful gift from the sea and the range of scientists from different fields who uncovered it and converted it into a tool nearly as fundamental for modern research as the microscope.




List of recent Nobel Prize in chemistry winners

Recent winners of the Nobel Prize in chemistry, and their research, according to the Nobel Foundation:

___

_ 2008: Osamu Shimomura, Japan, and Martin Chalfie and Roger Y. Tsien, United States, for the discovery and development of the green fluorescent protein, GFP.

_ 2007: Gerhard Ertl, Germany, for studies of chemical processes on solid surfaces, research that has advanced the understanding of why the ozone layer is thinning, how fuel cells work and even why iron rusts.

_ 2006: Roger D. Kornberg, United States, for work on how information stored within a gene is copied and transferred to the parts of cells that produce proteins.

_ 2005: Yves Chauvin, France, and Robert H. Grubbs and Richard R. Schrock, United States, for their work and exploration of metathesis.

_ 2004: Aaron Ciechanover and Avram Hershko, Israel, and Irwin Rose, United States, for their work on how cells break down unwanted proteins.

_ 2003: Peter Agre and Roderick MacKinnon, United States, for their research on how key materials enter or leave cells in the body and their discoveries concerning tiny pores called "channels" on the surface of cells.

_ 2002: John B. Fenn, United States, Koichi Tanaka, Japan, and Kurt Wuethrich, Switzerland, for developing methods used in identifying and analyzing large biological molecules.

_ 2001: William S. Knowles and K. Barry Sharpless, United States, and Ryoji Noyori, Japan, for showing how to better control chemical reactions, paving the way for drugs to treat heart ailments and Parkinson's disease.

_ 2000: Alan J. Heeger and Alan G. MacDiarmid, United States, and Hideki Shirakawa, Japan, for the discovery that plastic conducts electricity and for the development of conductive polymers.

_ 1999: Ahmed H. Zewail, United States, for pioneering the investigation of fundamental chemical reactions, using ultra-short laser flashes, on the time scale on which the reactions actually occur.

_ 1998: Walter Kohn, United States, for the development of density-functional theory in the 1960s that simplifies the mathematical description of the bonding between atoms that make up molecules, and John Pople, Britain, for developing computer techniques to test the chemical structure and details of matter.
