Sunday, February 3, 2008

MIT and Colombian company form logistics center

New program will join MIT's global network of supply chain centers.
MIT's Center for Transportation and Logistics (MIT-CTL) and LOGyCA, a Colombia-based logistics company, have signed a $19 million agreement creating the Center for Latin-American Logistics Innovation (CLLI), which is intended to become the leading research and education center for supply chain and logistics in Latin America.

CLLI will join MIT-CTL and the Zaragoza Logistics Center in Spain as the third member of MIT's growing international network of centers dedicated to supply chain education and research that now spans the United States, Europe and Latin America.

CLLI will help Latin American businesses and individuals compete in local, regional and global markets by delivering leading-edge research, technology and educational programs in logistics, transportation and supply chain management. The Center will also become a major force in academia within Latin America and across the globe.

LOGyCA, which boasts the most robust supply chain technology infrastructure in the region, will house the CLLI in its Bogota, Colombia, headquarters. CLLI researchers and students will have access to the infrastructure and knowledge base that helped Colombia establish the largest collaborative technology platform in Latin America.

CLLI will also connect with its counterparts in the United States through MIT-CTL in Cambridge, MA, and in Europe through the Zaragoza Logistics Center (ZLC) in Spain. The partnership between MIT-CTL and ZLC, launched in 2003, has created a highly regarded educational program and continues to play a key role in the economic growth of the Aragon region and the success of the PLAZA Logistics Park in Zaragoza.

MIT Professor of Engineering Systems and Director of MIT-CTL Yossi Sheffi said that launching the CLLI extends the reach of both MIT-CTL and the ZLC, and enhances their ability to meet the ever-growing demand for truly global supply chain education and research programs.

"Globalization continues to bring new opportunities for growth - and immense challenges. To stay on the cutting edge and help companies keep pace with these changes, we are expanding our unique network of learning centers where faculty, students, researchers and companies across continents collaborate on supply chain and logistics projects that have global impact," said Sheffi, who is also Director of the MIT Engineering Systems Division.

Rafael Florez, Director of LOGyCA, said the creation of CLLI is an excellent opportunity to strengthen the development of Colombian and Latin American logistics, a well-known strategic component of competitiveness.

"By joining the MIT-CTL network, CLLI will actively participate in the development of global educational and research programs. It will also give CLLI the opportunity to develop solutions that reflect the unique logistics and supply chain challenges in our economies. Latin American business leaders will have access to world-class academic programs that will contribute to improving value chains through the continent. This has always been LOGyCA's mission: leadership in innovation for value networks," said Florez.

The partnership between MIT-CTL and LOGyCA is based on a 10-year agreement, which officially begins on March 1, 2008. The $19-million deal includes a $4 million gift from LOGyCA to the MIT Center for Transportation & Logistics.

River plants may play major role in health of ocean coastal waters

Aquatic plants in rivers and streams may play a major role in the health of large areas of ocean coastal waters, according to recent research from MIT's Department of Civil and Environmental Engineering.

This work, which appeared in the Dec. 25 issue of the Journal of Fluid Mechanics (JFM), describes the physics of water flow around aquatic plants and demonstrates the importance of basic research to environmental engineering. This new understanding can be used to guide restoration work in rivers, wetlands and coastal zones by helping ecologists determine the vegetation patch length and planting density necessary to damp storm surge, lower nutrient levels, or promote sediment accumulation and make the new patch stable against erosion.

Professor Heidi Nepf, a MacVicar Faculty Fellow, was principal investigator on the research. Brian White, a former graduate student at MIT who is now an assistant professor at the University of North Carolina, was co-author with Nepf of the JFM paper. Marco Ghisalberti, a postdoctoral associate at the University of Western Australia, worked with Nepf on some aspects of this research when he was an MIT graduate student. This work was supported by grants from the National Science Foundation.

Traditionally, people have removed vegetation growing along rivers to speed the passage of waters and prevent flooding, but that practice has changed in recent years. Ecologists now advocate replanting, because vegetation provides important habitat. In addition, aquatic plants and the microbial populations they support remove excess nutrients from the water. The removal of too many plants contributes to nutrient overload in rivers, which can subsequently lead to coastal dead zones--oxygen-deprived areas of coastal water where nothing can survive. One well-documented dead zone in the Gulf of Mexico, fed by nutrient pollution from the Mississippi River, grows to be as large as the state of New Jersey every summer.

Nepf's work, which describes how water flows into and through a plant canopy and how long it remains within the canopy, can be used to find the right balance between canopy and flow in a river.

Vegetation generates resistance to flow, so the velocity within a canopy is much less than the velocity above it. This spatial gradient of velocity, or shear, produces a coherent swirl of water motion, called a vortex. Using scaled physical models, Nepf and Ghisalberti described the dynamic nature of these vortices and developed predictive models for canopy flushing that fit available field observations. The team showed that vortices control the flushing of canopies by controlling the exchange of fluid between the canopy and overflowing water. Similar vortices also form at the edge of a vegetated channel, setting the exchange between the channel and the vegetation.

The structure and density of the canopy control the extent to which flow is reduced in the canopy and also the water-renewal time, which ranges from minutes to hours for typical submerged canopies. These timescales are comparable to those measured in much-studied underground hyporheic zones, suggesting that channel vegetation could play a role similar to these zones in nutrient retention. In dense canopies, the larger vortices cannot penetrate the full canopy height. Water renewal in the lower canopy is controlled by much smaller turbulence generated by individual stems and branches.

"We now understand more precisely how water moves through and around aquatic canopies, and know that the vortices control the water renewal and momentum exchange," said Nepf. "Knowing the time scale over which water is renewed in a bed, and knowing the degree to which currents are reduced within the beds helps researchers determine how the size and shape of a canopy will impact stream restoration."

High-level panel gives advice to MIT Energy Initiative

The MIT Energy Initiative received critical input, advice and insights in the first meeting of its External Advisory Board. Meeting in mid-January, the board, chaired by former U.S. Secretary of State George Shultz, was "very supportive of what we're trying to do," said MITEI Director Ernest Moniz.

At its inaugural meeting, the 22-member board "emphasized the importance of an international focus," said MITEI Deputy Director Robert Armstrong, and they "encouraged us to form more international linkups to advance our program." In addition, the board also reinforced "the importance of continuing to develop our communications and outreach into the public discourse about energy issues," Armstrong said. In terms of specific areas of research, the board agreed that a critical area that deserves increased attention is improvements in efficiency, especially in the design of buildings.

MIT President Susan Hockfield established the board to review MITEI's approach to global energy solutions and its current portfolio of activities, and to provide input on policy trends, needs, gaps and opportunities in energy, business, technology and the environment.

The board encompasses diverse backgrounds in energy supply, industry, academia, environmental groups and government, including former MIT professor and Nobel laureate Mario Molina; best-selling author Daniel Yergin of Cambridge Energy Research Associates; former Senator Sam Nunn, CEO of the Nuclear Threat Initiative; Tony Hayward, CEO of BP; and Frances Beinecke, President of the Natural Resources Defense Council.

Others on the board are Stephen Bechtel of SD Bechtel Jr. Foundation; Denis Bovin GM '69, vice chairman of Bear Stearns and Co.; Susan Cischke of Ford Motor Company; Rafael del Pino of Grupo Ferrovial SA; Arthur Goldstein of Ionics; Baba Kalyani ME '72, chair of Bharat Forge; Anne Lauvergeon of Areva; Lawrence Linden ME '76 of Linden Trust for Conservation; Leonardo Maugeri, senior vice president of Eni S.p.A., which recently announced a $50-million grant to the Energy Initiative; Internet pioneer Robert Metcalfe EE '68 of Polaris Venture Partners; Robert Millard of Lehman Brothers; John Reed, retired chair of Citigroup; Kenan Sahin of TIAX LLC; Philip R. Sharp of Resources for the Future; Institute Professor Emeritus Robert Solow; and James Wolfensohn, former head of the World Bank.

"It's an extraordinarily experienced and knowledgeable group," Moniz said. "Their discussions and suggestions were very stimulating. We've got quite a lot to digest and prioritize, and to benefit from their guidance as to how the work we do can best be leveraged to influence public policy."


CarTel
CarTel is a distributed, mobile sensor network and telematics system. Applications built on top of this system can collect, process, deliver, analyze, and visualize data from sensors located on mobile units such as automobiles. A small embedded computer on the car interfaces with a variety of sensors in the car, processes the collected data, and delivers it to an Internet server. Applications running on the server analyze this data and provide interesting features for users.

We have used CarTel in several applications, including:

Commute and traffic portal: A Web site that shows all the trips made by a driver and makes personalized route recommendations based on the driver's own history as well as the aggregate driving history of other drivers who travel common paths. The portal provides interesting ways to visualize one's trips, and tools to analyze this data.
Note: This Boston Globe article on CarTel (which summarizes some of the applications of our project very well) suggests that “more often than not” it is better to hit the freeway than the byways in the Boston area. That happened to be true for a particular user's commute during his typical travel times. In reality, our data shows that things are much more complicated and depend on several factors, including which roads are being considered, the time of day, the day of week, weather conditions, and so on. If, upon reading the article, you decide to hit the freeway even against your better judgement, your commute may in fact take longer than you wanted it to — your mileage may, quite literally, vary!

MyRoute: CarTel's rich data set obtained from GPS-equipped cars enables us to model delays observed on various road segments as statistical distributions. These delay distributions are used by algorithms that compute optimal routes at different times of day between two points. “Optimal” does not always mean “shortest” or “shortest time”; it might also mean “most likely to get me to place X by 9am”. MyRoute is a CarTel-affiliated project done jointly with our CSAIL colleagues, Sejoon Lim, Daniela Rus, and David Gifford.
P2 (Pothole Patrol), a road surface monitoring system.
Fleet testbed, a 27-car CarTel deployment in the cars of a local limo company (PlanetTran). This testbed also serves as the “vehicle” for much of our research, allowing us to deploy software and applications on a running system (in addition, some users' cars also have CarTel nodes).
Wi-Fi Monitoring, mapping the proliferation of 802.11 access points in the Boston metro area.
On-board automotive diagnostics & notification: Use the OBD-II interface to monitor and report internal performance characteristics such as emissions, gas mileage, RPM, etc. These reports can be combined with historical data, thus highlighting long-term changes in a car's behavior. Such a system would be able to provide the driver with early trouble notification. In the future, with enough participation, this system could compare and “mine” data across multiple cars of similar make and age to identify anomalies in particular cars (e.g., “all the other cars show a value between X and 1.3X for this sensor, but your car shows a value of X/2, which is suspicious”).
Cars as mules. We have developed CafNet (“carry and forward network”) protocols that will allow cars to serve as data mules, delivering data between nodes that are otherwise not connected to one another. For example, these protocols could be used to deliver data from sensor networks deployed in the field to Internet servers without requiring anything other than short-range radio connectivity on the sensors (or at the sensor gateway node).
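The cross-fleet anomaly check described in the diagnostics item above can be sketched in a few lines. This is an illustrative toy, not CarTel's actual code; the function name and tolerance parameter are invented for the example.

```python
# Sketch of the cross-fleet comparison described above: flag a car whose
# sensor reading falls outside the band spanned by similar cars.
# Names and the tolerance value are illustrative assumptions.

def flag_anomaly(fleet_readings, my_reading, tolerance=0.3):
    """Return True if my_reading lies outside the fleet's band.

    fleet_readings: sensor values from other cars of similar make/age.
    tolerance: fractional slack added around the fleet's min/max band.
    """
    lo, hi = min(fleet_readings), max(fleet_readings)
    band = hi - lo
    lower = lo - tolerance * band
    upper = hi + tolerance * band
    return not (lower <= my_reading <= upper)

# Example matching the text: the fleet reports values between X and 1.3X,
# and a car reporting X/2 is flagged as suspicious.
fleet = [100, 110, 125, 130]     # roughly X .. 1.3X
print(flag_anomaly(fleet, 50))   # X/2 -> True (suspicious)
print(flag_anomaly(fleet, 115))  # within the fleet band -> False
```

A production version would use robust statistics (e.g., median and interquartile range) rather than a raw min/max band, which a single faulty car could distort.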
Unlike traditional automotive telematics systems that rely on cellular or satellite connectivity, CarTel uses wireless networks opportunistically. It uses a combination of WiFi, Bluetooth, and cellular connectivity, using whatever mode is available and working well at any time, but shields applications from the underlying details. Applications running on the mobile nodes and the server use a simple API to communicate with each other. CarTel's communication protocols handle the variable and intermittent network connectivity. To simplify application development, CarTel uses two well-known programming abstractions—pipes and databases—adapting them to the intermittently connected, mobile environment.
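The opportunistic mode-switching described above can be sketched as a small selector that hides the link choice behind a single send call. The interface names, preference order, and buffering policy here are assumptions for illustration, not CarTel's real API.

```python
# Minimal sketch of opportunistic link selection: use whichever network
# mode is currently usable, preferring the cheapest/fastest, and buffer
# data when nothing is available. Names are hypothetical.

PREFERENCE = ["wifi", "bluetooth", "cellular"]  # assumed preference order

def pick_link(link_status):
    """link_status maps mode -> True if usable right now."""
    for mode in PREFERENCE:
        if link_status.get(mode):
            return mode
    return None  # no connectivity: caller must buffer

def send(payload, link_status, buffer):
    """Send over the best available link, or queue for later delivery."""
    mode = pick_link(link_status)
    if mode is None:
        buffer.append(payload)   # hold until a link appears
        return "buffered"
    return f"sent via {mode}"

buf = []
print(send(b"gps-fix", {"wifi": False, "cellular": True}, buf))  # sent via cellular
print(send(b"gps-fix", {}, buf))                                 # buffered
```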

MIT researchers fight gridlock with Linux

At the Massachusetts Institute of Technology (MIT), researchers are testing a Linux-based automotive telematics system intended to reduce traffic congestion. CarTel is a distributed, GPS-enabled mobile sensor network that uses WiFi "opportunistically" to exploit brief windows of coverage to update a central traffic analysis program.

The CarTel system is used to test a growing number of automotive-related research projects at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). In some ways, the system is similar to the recently announced Dash Express navigation system, in that it uses embedded Linux, GPS, and WiFi, and links up to a central route analysis system. Unlike Dash, however, CarTel currently lacks a display or real-time navigation and mapping features. Dash, meanwhile, offers additional location-based services and incorporates a cellular modem (850MHz triband GSM) to provide a continuous connection, switching to WiFi when available.
Some experiments using the CarTel design have used a cellular modem, says Associate Professor Sam Madden in an interview. Yet, the main goal is to find out how much can be achieved with an intermittent communications system. Cellular service would add significantly to the operating cost of the system.

CarTel leaders Sam Madden (left) and Hari Balakrishnan (right)

Madden, who leads the project along with Professor Hari Balakrishnan, says CarTel's goal is to establish a flexible, affordable platform for diverse automotive-related research projects. These include testing continuous-stream querying, exploring issues of mobile, intermittent WiFi, and researching fleet management applications, driver safety technology, and identifying pothole location for maintenance planning.

The main focus, however, is to develop sophisticated route-selection algorithms to address the problem of congested traffic. "Everyone agrees that traffic is a pain," says Madden, "so avoiding it is good. Hopefully this will have some direct benefit to users." Improving routing could have a significant impact on commute time, says Madden, which in turn would reduce fuel consumption and pollution.

As a test deployment, the CarTel team installed Linux-based computers and sensor kits in 30 limousines from a Cambridge, Mass.-based livery service called PlanetTran. For the last year, the hybrid-based limos have been driving around collecting data about Boston-area traffic conditions, as well as information about potholes, WiFi availability, and the health of the cars themselves.

CarTel's OBD-II interface

The CarTel computer is equipped with an OBD-II (on-board diagnostics II) interface to the car's sensor network. This lets it record speed, mileage, emissions, RPMs, and other information. In addition, the car computer hooks up to additional sensors, depending on the experiment, such as cameras or three-way accelerometers to measure and map potholes. Other possibilities include noise and pollution detectors.
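The readings mentioned above arrive from the OBD-II port as raw bytes that must be decoded using the standard SAE J1979 formulas. A small decoder sketch for two common PIDs follows; the serial-adapter I/O layer is omitted.

```python
# Decoders for two standard OBD-II (SAE J1979) mode-01 PIDs.
# A and B are the first and second data bytes of the response.

def decode_rpm(a, b):
    """PID 0x0C: engine RPM = ((A * 256) + B) / 4."""
    return ((a * 256) + b) / 4

def decode_speed(a):
    """PID 0x0D: vehicle speed in km/h = A (single byte)."""
    return a

print(decode_rpm(0x1A, 0xF0))  # ((26 * 256) + 240) / 4 = 1724.0 rpm
print(decode_speed(0x4B))      # 75 km/h
```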

Inside CarTel

The current hardware platform for CarTel is a box about 7 x 5 x 1 inches equipped with a Soekris net4801 single-board computer (SBC) running Linux 2.6. The computer has a "586-class" CPU running at 266MHz, with 128MB of RAM, and 1GB (or more) of flash. The CarTel box is equipped with two serial ports for collecting sensor data, plus a USB 1.1 port, and a WiFi card plugged into a miniPCI slot. A commodity GPS unit is connected via one USB port. A Bluetooth dongle connects to the other USB port, allowing connections from a mobile phone. For telematics data, the system uses an OBD-to-serial adapter.

CarTel's Soekris net4801 SBC

The MIT team has developed some innovative technology to get the most out of fleeting, off-and-on encounters with WiFi access points. First, it has developed a WiFi communications protocol called EasyWiFi that is optimized for brief encounters. EasyWiFi takes only about a hundred milliseconds to establish a mobile wireless connection, says Madden, instead of a more typical five to ten seconds. The modifications are only on the client side, so the protocols can work with any WiFi connection.

Other communication protocols handle variable and intermittent connectivity. A "dpipe" (delay-tolerant pipe) transport programming abstraction developed by postdoc Jakob Eriksson allows producer and consumer nodes to reliably transport data across an intermittently connected network. The client-server dpipe connection is optimized for start-and-stop delivery and for situations when the IP address of an end-point changes frequently.

The dpipe uses a modified delivery mechanism that can maintain its status even when the session is dropped, says Madden. "When connectivity is available again, it pops the data out the WiFi connection to the server, where it stores it in memory," he adds.
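The dpipe behavior described above can be modeled as a pipe whose queued items survive disconnections and drain in order when the link returns. This is a toy sketch; the class and method names are hypothetical, and the real dpipe also persists data to flash and handles changing endpoint addresses.

```python
# Toy model of a delay-tolerant pipe: writes queue up while the link is
# down and are delivered in order once connectivity returns.

from collections import deque

class DelayTolerantPipe:
    def __init__(self):
        self.pending = deque()   # survives disconnections
        self.connected = False

    def write(self, item):
        """Queue an item and deliver whatever the link allows."""
        self.pending.append(item)
        return self.flush()

    def flush(self):
        """Deliver everything queued if the link is currently up."""
        delivered = []
        while self.connected and self.pending:
            delivered.append(self.pending.popleft())
        return delivered

pipe = DelayTolerantPipe()
pipe.write("fix-1")       # link down: nothing delivered yet
pipe.write("fix-2")
pipe.connected = True     # a brief WiFi window opens
print(pipe.flush())       # ['fix-1', 'fix-2'], in order
```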

A longer term store-and-forward technology, called CafNet, (carry and forward network) employs protocols that enable cars to serve as data mules, delivering data between nodes that are not connected via telecommunications. This technology could be used, for example, in military, mining, agricultural or scientific applications that are spread out over vast distances, delivering data from sensor networks deployed in the field to Internet servers. Each sensor would require only short-range WiFi connectivity, thereby lowering costs and easing maintenance.
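The carry-and-forward pattern behind CafNet reduces to a simple exchange: a car picks up data when it passes a field sensor, carries it, and hands it off when it later meets an Internet gateway. The sketch below is illustrative only; the names are invented, and the real protocols must also handle duplicates, acknowledgments, and storage limits.

```python
# Toy model of a CafNet-style data mule: short-range pickup from sensors,
# delivery on a later encounter with a connected gateway.

class Mule:
    def __init__(self):
        self.cargo = []

    def meet_sensor(self, sensor_data):
        """Pick up readings over short-range radio while driving past."""
        self.cargo.extend(sensor_data)

    def meet_gateway(self, uplink):
        """Hand everything carried to an Internet-connected gateway."""
        uplink.extend(self.cargo)
        self.cargo.clear()

car = Mule()
car.meet_sensor(["soil-17", "soil-18"])  # drive past field sensors
server = []
car.meet_gateway(server)                 # later, reach connectivity
print(server)                            # ['soil-17', 'soil-18']
print(car.cargo)                         # [] -- cargo handed off
```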

Real-time queries and what-if route algorithms

Distributed sensor data is processed by a portal application that is built around a Linux- and SQL-based stream-processing query application called ICEDB. This delay-tolerant, continuous query processor enables applications running on the portal to issue queries via an API, or transfer a sequence of data between nodes using dpipe. Queries can specify what sensor data is needed and at what rate, and how the data should be sub-sampled, filtered, summarized, and prioritized.

Applications can also query the portal's relational database for analysis. These "snapshot" queries work from available data and don't need to wait synchronously for complete results. All this underlying complexity is shielded from the developer, says Madden, so it appears like a standard SQL relational database.
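The text above says ICEDB queries can specify subsampling and prioritization of sensor data. A minimal sketch of what such a query spec might do to a stream before transmission follows; the function signature and field names are assumptions, not ICEDB's actual syntax.

```python
# Sketch of ICEDB-style stream preparation: subsample a reading stream,
# then order it so high-priority data goes out first when a connectivity
# window may be too short to deliver everything. Names are illustrative.

def prepare_for_upload(readings, subsample_every=1, priority_key=None):
    """Subsample a stream, then sort by descending priority."""
    sampled = readings[::subsample_every]
    if priority_key is not None:
        sampled = sorted(sampled, key=priority_key, reverse=True)
    return sampled

# e.g. keep every 2nd reading, fastest segments first
readings = [{"speed": 12}, {"speed": 55}, {"speed": 3}, {"speed": 40}]
print(prepare_for_upload(readings, 2, lambda r: r["speed"]))
# [{'speed': 12}, {'speed': 3}]
```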

The ICEDB database was modified from a similar "TinyDB" database that Madden developed years ago for the TinyOS operating system. TinyDB was used to query continuous-feed data from sensor networks that used sensor motes from Crossbow Technologies.

"We made a conscious decision to move to Linux because TinyOS was not as easy to work with," says Madden. "With Linux, there are also a huge number of people developing device drivers, and our graduate students already know how to develop with it."

The CarTel portal provides a geo-spatial data visualization system based on Google Maps that stores and marks sensor data using GPS coordinates. The portal organizes data as "traces," linear sets of sensor readings collected during a drive. Users can query the system using a graphical query interface to select traces, and then visualize the traces combined with various summaries, statistics, and maps. The portal allows queries both on the driver's own history, as well as the aggregate driving history of other drivers.

Users could include fleet directors, city planning officials, or the drivers themselves, logging in either from a WiFi-enabled laptop in the car or later at home. For example, drivers can see congested areas of their commutes marked in red, orange, and yellow segments, with greens indicating average speeds, and the rare blues indicating clear sailing.
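The color coding described above amounts to bucketing each segment's measured speed. A sketch of that mapping follows; the speed thresholds are invented for illustration and are not the portal's actual values.

```python
# Map a segment's measured speed to the portal's congestion colors.
# Thresholds (mph) are illustrative assumptions.

def segment_color(speed_mph):
    if speed_mph < 10:
        return "red"      # heavy congestion
    if speed_mph < 20:
        return "orange"
    if speed_mph < 30:
        return "yellow"
    if speed_mph < 50:
        return "green"    # average speeds
    return "blue"         # rare clear sailing

print([segment_color(s) for s in (5, 15, 25, 40, 60)])
# ['red', 'orange', 'yellow', 'green', 'blue']
```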

The current focus of the project is in developing algorithms that run on top of the portal application to help drivers plot the best route at a given time. For example, the team's MyRoute project includes applications that model delays observed on road segments as statistical distributions. Various algorithms then use these to compute optimal routes for different times of the day.

"Instead of asking the shortest time or shortest distance from point A to point B, you ask what route should be taken, say, for the highest probability of getting to the airport by a certain time depending on the time selected," says Madden.
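The "highest probability of arriving by a certain time" idea can be sketched by modeling each segment's delay as a distribution and estimating, by Monte Carlo sampling, the chance a route finishes within the budget. The Gaussian segment model and the numbers below are assumptions for illustration; MyRoute's actual distributions come from measured CarTel data.

```python
# Monte Carlo estimate of on-time arrival probability for a route whose
# per-segment delays are modeled as (truncated) Gaussians. Illustrative
# model only; real delay distributions are empirical.

import random

def on_time_probability(segments, budget_min, trials=20000, seed=1):
    """segments: list of (mean_delay, stddev) per road segment, in minutes."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        total = sum(max(0.0, rng.gauss(mu, sd)) for mu, sd in segments)
        if total <= budget_min:
            hits += 1
    return hits / trials

highway = [(8, 4), (12, 6)]    # fast on average, high variance
backroad = [(11, 1), (13, 1)]  # slower on average, very predictable
# With a tight 26-minute budget, the predictable route can win even
# though its mean travel time is longer.
print(on_time_probability(highway, 26))
print(on_time_probability(backroad, 26))
```

This captures the point made in the CarTel overview: which route is "best" depends on variance as well as mean, which is why the freeway-vs-byways answer varies by driver, time of day, and deadline.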

Meanwhile, additional research is underway using CarTel to improve fleet management and fine-tune street repair schedules based on identifying the deepest potholes that cars are hitting most frequently. Other research will investigate using telematics data to warn drivers of potential safety hazards or maintenance needs.

With the Soekris net4801 expected to reach end-of-life later this year, the CarTel group is planning to move to another Linux-based design, hoping to reduce the current over-$400 price tag for the complete system. The plan is to expand the network to 1,000 vehicles, which will help improve the granularity of collected data for better route planning. Other potential lines of inquiry include peer-to-peer WiFi communications between cars to sense upcoming traffic congestion or to assist in onboard safety systems.

Last May, a U.S. government- and industry-led coalition called the Vehicle Infrastructure Integration Consortium (VII-C) was established, aiming to equip every car and roadside in America over the next decade with wirelessly connected Linux-based computers. The goal is to lower driver death rates, reduce traffic jams, and media-enable cars before 2017. The VII-C is funded by the U.S. Department of Transportation (DOT), and includes seven vehicle manufacturers already involved in the U.S. DOT's Intelligent Vehicle Initiative.

More Macs, More Different

In the future there will be more Macs, more mobile devices, and more open source software.
At least that's how analysts with Gartner see it. The IT consulting and research firm on Thursday published 10 predictions for events and developments that will affect IT and businesses in the years ahead.

Gartner predicts that by 2011, Apple will have doubled its computer market share in the United States and Western Europe. It attributes Apple's rise both to the company's success and the failures of its rivals.

"Apple is challenging its competitors with software integration that provides ease of use and flexibility; continuous and more frequent innovation in hardware and software; and an ecosystem that focuses on interoperability across multiple devices (such as iPod and iMac cross-selling)," according to Gartner.

By 2012, Gartner foresees mobile workers abandoning notebooks, despite their slowly diminishing size, for smaller, more portable mobile devices. It describes these devices as "new classes of Internet-centric pocketable devices at the sub-$400 level." (Another word for this might be "iPhone.")

The year 2012 will also mark a time when 80% of all commercial software will include open source elements. Companies that fail to embrace open source software will be at a significant cost disadvantage, Gartner predicts.

Simultaneously, a third of business software spending will have moved from buying product licenses to service subscriptions. "The SaaS model of deployment and distribution of software services will enjoy steady growth in mainstream use during the next five years," according to Gartner.

By 2011, Gartner expects early technology adopters to buy at least 40% of their IT infrastructure as a service rather than as a capital expenditure. As if to confirm this trend, it was recently reported that the bandwidth utilized by Amazon Web Services, the company's pay-by-the-drink IT infrastructure, exceeded the bandwidth utilized by all of Amazon's global Web sites combined.

Only a year from now, Gartner believes that environmental criteria will be among the top six requirements for IT-related goods. And by 2010, the firm expects that three-quarters of organizations will consider full life-cycle energy and carbon dioxide footprint in making PC buying decisions. By 2011, it anticipates that companies will have to demonstrate their environmental credentials to maintain preferred supplier status.

IT groups will become more user-driven, Gartner projects, with more than half of all IT buying decisions being made at the behest of end users by 2010. "The rise of the Internet and the ubiquity of the browser interface have made computing approachable and individuals are now making decisions about technology for personal and business use," according to Gartner.

Finally, by 2011, Gartner expects the number of 3-D printers to increase 100-fold from their 2006 levels. With 3-D printers falling below $10,000, the firm expects consumers and businesses to warm to the idea of "printing" 3-D models.
More Reports:
Research firm sees explosion in Mac popularity, hints at iPod Touch potential
The Mac’s resurgence will continue over the next several years, according to Gartner research, resulting in the platform’s doubling its market share in the U.S. and Western Europe by 2011.

That prognostication led off Gartner’s list of 10 key predictions for the tech industry the firm released yesterday. While crediting Apple’s business model for much of the Mac’s success, Gartner notes that “failures” of Apple’s competitors also are playing a part. Here’s what the company had to say:

“By 2011, Apple will double its U.S. and Western Europe unit market share in computers. Apple's gains in computer market share reflect as much on the failures of the rest of the industry as on Apple's success. Apple is challenging its competitors with software integration that provides ease of use and flexibility; continuous and more frequent innovation in hardware and software; and an ecosystem that focuses on interoperability across multiple devices (such as iPod and iMac cross-selling).”

Gartner has figured out what veteran Mac users have known for years. The Mac’s ease of use holds great appeal to people who just want a computer that does what it’s supposed to do. That demographic includes most home users, which is where the Mac has made most of its gains.

Gartner doesn’t name names after citing the “failures of the rest of the industry,” but it’s obvious the primary guilty party is Microsoft. Dashing expectations after years of anticipation, the latest version of Windows, Vista, has met resistance in its first year of release. Many PC vendors, including Dell, had to start offering XP as an option on new PCs.

But the Gartner statement hints at more than the PC industry’s missteps; if you read between the lines, it’s also an indictment of the PC business model. That would be the chaotic system in which one company makes the operating system (Microsoft) while many others build the hardware (Dell, HP, Lenovo, Acer, Toshiba, Sony). And that doesn’t include all the third-party peripherals that may or may not work with your Windows PC.

Apple has long endured criticism for its vertical integration philosophy. “If only Apple would open up its hardware and software, it could compete and gain market share,” critics would opine.

Of course, Apple did allow clone making briefly in the mid-1990s, but instead of growing the Mac’s market share the clone-makers gobbled up Apple’s best customers. Steve Jobs killed that bad idea shortly after his return in 1997.

Now Gartner is lauding Apple’s “software integration” and its “ecosystem that focuses on interoperability across multiple devices.” Apple’s ability to integrate its hardware, software and peripherals is what makes it all work so well. Somebody finally gets it.

One other prediction on Gartner’s list also bodes exceptionally well for Apple. By 2012, the firm predicts half of traveling workers “will leave their notebooks home in favour of other devices.”

Gartner explains that workers will tire of notebooks’ size and weight: “Vendors are developing solutions to address these concerns: new classes of Internet-centric pocketable devices at the sub-$400 level; and server and Web-based applications that can be accessed from anywhere.”

Again, Gartner does not mention any names. Can you think of any devices that fit the criteria? Name begins with an “i”? Yep, the iPhone fits the bill perfectly, right down to its $399 price tag. But the $299 iPod Touch, particularly with its newfound software capabilities (mail, maps, weather, notes and stocks), may fit the bill even better.

Apple knows it and is already poised to exploit it. CFO Peter Oppenheimer referred to the iPod Touch as “an entirely new type of iPod” in his comments during Apple’s earnings conference call last week. “This new iPod has the potential to grow the iPod from being just a music and video player into being the first mainstream WiFi mobile platform running all kinds of mobile applications.”

While other companies surely will be competing in this space, Apple is the best equipped to make it work as people will expect it to work; its expertise in hardware-software integration will prove an immense advantage here. In a few years, the iPod Touch, along with the iPhone, could completely dominate this sector.

The MacBook Air

Apple shipped a few MacBook Air units to its retail stores Friday, leaving the scant supply to mainly serve as in-store demo units. While our SSD model remains on order pending shipment, we managed to snag one of the few available HDD-based units from one of the company's San Francisco outlets and have set to work on an in-depth review of its ins and outs.

Everyone Can Critique

"In many ways, the work of a critic is easy. We risk very little, yet enjoy a position of over those who offer up their work to ourselves and our judgement. We thrive on negative criticism, which is fun to write and to read. But the bitter truth we critics must face is that in the grand scheme of things, the average piece of junk is probably more meaningful than our criticism designating it so. But there are times when a critic actually risks something, and that is in the discovery and defense of the new. The world is often unkind to new talent, new creations. The new needs friends."

Those words of Anton Ego, voiced by Peter O'Toole in Pixar's Ratatouille, well describe the task in reviewing the new MacBook Air. Perhaps it's no coincidence that Steve Jobs had a hand in producing both the movie and the new laptop. While the heavy lifting was done by writers at Pixar and engineers at Apple, both push their audience and the industry to think differently.

Just prior to unveiling the MacBook Air onstage at Macworld Expo, Jobs coyly highlighted Ratatouille as one of his favorite movies. That's because the rat in Ratatouille secretly was Jobs: the unlikely source of something new, delighting patrons and investors alike despite his real identity as an opinionated, reality-distorting visionary. He talks about his business in terms of art, design, and craftsmanship that challenges the market's status quo, rather than playing another straight-laced bean counter promising to deliver more of the same old thing wrapped up in vapor and grandiose buzzwords.

As Ego's monologue noted, it would be easy and fun to negatively critique the new MacBook Air only for missing features of the MacBook Pro; however, the new model isn't just a rebadged Pro with a smaller 13.3" display, some shaved-off specifications, and a $200 price cut. It's something entirely new, and it asks to be evaluated as such. How well does it achieve what it sets out to deliver? You can help us determine that by sending in your suggestions for critically putting the new Air through its paces.

Avoiding the Plague of Featuritus Vulgaris

My dad taught me that when going out to buy something, I should first make a list of features I actually want, and then only buy those features. That way, I wouldn't be suckered into paying extra for impractical options that sound valuable in the marketing pitch but aren't anything I'd really ever use or need. My overall success in buying things has seemed directly proportional to how well I heeded his advice.

For most people buying a laptop, the essential feature list involves snappy performance, great mobility, and a thoughtful, practical design. However, when you look at the laptops currently on sale, many are layered with other rather impractical features that suddenly sound indispensable once you're aware they exist. A recent must-have addition on many new PC laptops is the fingerprint reader used to log in, along with an assortment of external buttons and switches to manually turn off wireless or to launch applications. These might be used on rare occasions, but their primary function seems to be to add clutter.

Look past the junk features, and many laptops miss the true target. Many lack enough RAM to be useful, supply limited video output features, go without Bluetooth in their base models, and otherwise skimp where they should shine. The MacBook Air is so completely stripped of junk featuritus that many critics are worried it won't appeal to mainstream users, just as they feared the iPod wouldn't appeal to broad audiences because it couldn't play FM radio, or as they feared the iPhone wouldn't be popular because it didn't have a physical slide-out keyboard.

Instead, all three Apple products aimed at delivering a strong and appealing design to the point where their specification numbers faded into the background. Apple markets its products, not as a list of GHz and GB numbers, but the same way top automakers do: as well-built and attractively crafted machines that have enough under the hood so that they just work. Most car buyers are swayed by sexy designs they find appealing or the utilitarian practicality they need rather than the foot-pounds of torque the vehicle's engine provides at a given RPM range.

The Air Feature Gap

One thing that pundits of all stripes have been conspicuously silent about when talking about the MacBook Air is its backlit keyboard, which makes it far more appealing to use in dimly lit conditions such as inside an airplane fuselage. It's almost as if Apple made a list of practical features that consumers actually use, and based its engineering decisions upon that, rather than simply assembling the specifications everyone else was shipping in a similar form factor.

As a mobile laptop user, I've always bought an extra battery for every PowerBook and MacBook Pro I've owned. I even had the weighty option to use two batteries at once in my late-90s PowerBook G3, which could swap out its optical drive for a second battery. The thing is, I've almost never actually used any of those spares. For example, I carried around an extra battery at Macworld, but never had the opportunity or need to swap it despite the scarcity of power outlets on the show floor. Despite that, I probably wouldn't buy a new MacBook Pro that didn't have a replaceable battery.

However, the Air's light weight and slim profile offer significantly more portability than any other laptop Apple has ever offered, a factor that balances out its inability to trade out its battery pack for a spare. Given the new restrictions upon carrying extra batteries on airplanes, Apple's sealed battery decision makes sense for a light mobile laptop. That's not to say the Air is for everyone: some users will find the MacBook Pro a better fit, particularly if they have needs that require long-term, battery-only operation.

The iPod and iPhone were reviled for having a sealed battery, but neither really presents a serious problem for users. The competitive market solved any fears that millions of iPods would be thrown away prematurely because of the relatively high cost of Apple's replacement service. It's easy to find do-it-yourself, higher capacity iPod battery kits for less than $10. The iPod and iPhone also have external battery accessories that allow them to use rechargeable AA or Lithium Polymer battery packs for extended use. If the Air presents any battery issues for users, expect third parties to provide solutions.

Similarly, I've also always paid a premium to have a SuperDrive in every laptop I've bought, despite the fact that in retrospect, I've actually burned fewer than a dozen DVDs in as many years of being primarily a mobile user. The simple fact is that many of the features we think we desperately need in a laptop are not the same as what we'll actually use. That's also a key reason why Apple's product launches generate a hailstorm of pundit angst prior to selling well in the market.

And all those missing ports? Recall that the original iBook, launched in 1999, also lacked FireWire and audio input, had only a single USB port, and offered no video output. It was also priced within $200 of the MacBook Air, yet sold well to the consumer market despite its flamboyant use of color and its risk-taking "toilet seat" design. The iBook SE that shipped the following year cost the same price as today's MacBook Air and still didn't offer VGA output or more than one USB port.

When critics say the MacBook Air won't sell in volume, it can only be because they haven't witnessed the fawning interest users display upon seeing it in person. In order to maximize that interest, Apple designed a new retail store window display that puts the MacBook Air on a revolving platform in front of a matte background of clouds and behind a gradient plexiglass sheet with the caption "thinovation."

Apple installed the new displays in place of the former giant iPhone display. The company appears to be unfazed by reports suggesting that it will have a hard time selling enough iPhones this year, and seems confident in its ability to bring ultra-mobile laptops into the mainstream in 2008 just as it successfully pushed its high-end, sophisticated smartphone into the mass market last year.
What's on Your Feature List?

As we take the new MacBook Air to task and put it through its paces, we'll look at how well it does in the core requirements of performance, mobility, and design. We'll also evaluate how well its new software features, described in MacBook Air spawns new software solutions for missing hardware, serve as alternatives to built in optical and FireWire hardware.

What would you like to know about the Air? Post your comments or questions and we'll kick its tires for you that much harder in our upcoming in-depth review of the MacBook Air.
The Good: As long as you are not hobbled by its limitations, the Air is a joy to use. I find that I type more comfortably and accurately on the rather flat keyboard of the Air than I do on the more sculptured keyboard of the MacBook Pro. The screen is gorgeous, with the LED backlighting providing very even illumination from corner to corner.

The ambient light sensing, which I once thought of as a sort of MacBook gimmick, is extremely effective at keeping the screen at a comfortable level of brightness under just about any lighting condition except direct outdoor sunshine. Automatic dimming under low-light conditions is not only easier on your eyes, but conserves battery life.

The thin body and low hinge mounting of the Air keeps the vertical dimension, critical for airplane use, low. I was even able to use it on my tray table in the cramped confines of a Bombardier CRJ200 regional jet.

Although the processor runs at a relatively slow 1.6 GHz, it is a full-power Intel Core 2 Duo and has all the speed you’ll need in a notebook of this class. The 1.8-in. hard drive is also significantly slower than the 2.5-in. drives used on most notebooks, but again, you’re not going to be using the Air for video editing or software development.

The Bad: The more I used the Air, the more I regretted Apple’s decision not to offer a built-in wireless broadband option. This is a go-anywhere machine, and it wants connect-anywhere wireless. Wi-Fi is great when it’s available, but it wasn’t, for example, in my room at the Marriott Desert Springs Resort in Palm Desert, Calif., during the DEMO 08 conference. I used the wired Ethernet with my backup ThinkPad X61.

Apple didn’t make it easy to devise a wireless alternative. The Air lacks the MacBook Pro’s Express Card slot, so that option is out. AT&T, Sprint, and Verizon all offer USB modems for their wireless data services, but I have yet to find one that fits in the Air’s very cramped USB port without use of an extension cord, which makes for a very clumsy arrangement, especially when you want to connect on the fly. If there were one thing I could change on the Air, this would be it. As it stands, the lack of a wireless broadband option is probably a dealbreaker for me.

In and of itself, the lack of a built-in CD/DVD drive isn’t much of a drawback. You can always use an external drive. I left the Apple external SuperDrive (which, incidentally, works only with the Air because of its high power draw) on my desk, but used the ThinkPad’s clunkier read-only drive for movies on the plane without trouble. This lack will become even less objectionable once Apple fleshes out the catalog of iTunes movie rentals—there’s not much there yet.

Apple’s Remote Disc software solution, which lets the Air share a drive on another Mac or PC, only sort of works. It can’t be used for DVD video or audio CD music, nor can you boot from a Remote Disc volume, which rules out using it to install an operating system under Boot Camp, Parallels, or VMware Fusion. That pretty much limits it to loading software, and even there a wired Ethernet connection would be useful for speed. I loaded Microsoft Office 2008 using Remote Disc and it was painfully slow over Wi-Fi.

For now, at least, I didn’t find the lack of a replaceable battery to be a big problem, since the four hours or so of running time I got was adequate. The questions will come once that battery gets some miles on it and its ability to hold a charge begins its inevitable decline. Heavy users will likely end up spending $135 for a battery replacement after a year or so.

Bidding hits $18.55 billion in U.S. wireless auction

24hoursnews -
A sudden spurt of bidding has turned what was shaping up as a bust into the most robust auction of wireless spectrum the federal government has ever conducted, salvaging hopes for a new generation of mobile networks that are open to any device and any service.

The weeklong auction cleared a crucial stage on Jan. 31 as the combined bids for the five swaths of spectrum being sold reached $15.64 billion, which is 14% more than the record $13.7 billion raised in 29 days of bidding during the Federal Communications Commission's last auction a year and a half ago. Most of the blocks, each of them broken into multiple biddable chunks, have now drawn the minimum bids set by the FCC, including one of the two blocks being sold with unprecedented requirements that their eventual owners open their networks to devices and services from other companies.

C Block's High Bidder Unknown
The bigger of the two "open access" swaths, the C block, finally drew a $4.71 billion bid—a 2.3% premium over the FCC's $4.6 billion minimum price—after attracting little interest for a week. Blair Levin, a former FCC official and an industry analyst at Stifel Nicolaus (SF), said his analysis of the bid patterns indicates that "the bidding for the C block is likely over." For now, there appear to be no takers for the D block, the other open-access piece of spectrum that was earmarked to serve public safety agencies. Analysts believe the D block will have to be auctioned again in the next few months with many of these stringent requirements removed.

While the identity of the high bidder for the C block remains a mystery under the FCC's rules, it now appears certain the nation will see its very first open-access wireless network within a few years. Until now, the FCC has always allowed licensees to exercise full control over which phones and applications could be used on their networks. Indeed, three of the five blocks being auctioned this time around will allow the owners traditional discretion over their networks.

Levin says the bidding patterns suggest that the C block's current high bidder is Verizon Wireless, which is owned jointly by Verizon Communications (VZ) and Vodafone Group (VOD). But it's also possible the bid came from Internet search giant Google (GOOG), which pledged to bid at least $4.6 billion for the block after leading the charge that persuaded the FCC to place open-access requirements on the spectrum. The names of the winners will remain under wraps until the auction ends, likely sooner than expected, in a few weeks.

The actual degree of the new network's openness will depend in large part on the identity of the C block winner. "Open access means different things to different people," says Harold Furchtgott-Roth, a former FCC commissioner and founder of consultancy Furchtgott-Roth Economic Enterprises. "I think the more likely scenario here [if Verizon is the winner] is open hardware, not open software." Google, on the other hand, might be inclined to remove restrictions on which applications and services consumers could access on their mobile gadgets. But in either scenario, the winner will be testing an unproven business model on a very expensive investment in wireless spectrum.

A Win-Win for Equipment Makers
Google, a newcomer to the industry, would face an especially costly and complex endeavor with no existing network and wireless operations on which to build. The company would have to spend at least $2 billion just to line up sites for its wireless transmitters, figures Bastian Schoell, director of wireless carrier business development for Nortel Networks (NT) in North America. Then it would need to deploy billing and back-office systems from scratch.

The good news for telecom equipment makers is that even existing wireless carriers typically need to spend as much to expand their network operations as they've spent on new spectrum. With analysts now projecting this auction to raise $18 billion, companies such as Nortel, Alcatel-Lucent (ALU), Nokia Siemens Networks, and Ericsson (ERIC) are expecting a major windfall of orders for equipment, software, and services. And thanks to the FCC's stringent deadlines for launching services on auctioned spectrum, these orders might start trickling in soon. The winner of the C block, for instance, has to build a network covering 40% of the population within four years. "We could see deployments at the beginning to mid-2009," says Schoell.

Some of the current winners may yet withdraw their bids. Still, chances are the next few weeks of the auction will only see bidders driving prices higher. After all, says Schoell, "We see it as prime real estate for deploying innovative technologies."
More News: Bidding reached $18.55 billion on Friday in the U.S. Federal Communications Commission's record-setting auction of government-owned wireless airwaves, but there were no new offers for two large, closely watched slices of spectrum.

The total bidding, which covers five separate blocks of spectrum in the auction, was up from $15.64 billion on Thursday.

There were no new bids on a major slice of the airwaves, known as the "C" block, which under FCC rules will have to be made accessible to any device or software application. A bid of $4.71 billion, made on Thursday morning, remained the top offer.

Nor were there any new bids Friday on a nationwide piece of the spectrum, known as the "D" block, which must be shared with public safety agencies under auction rules set by the agency. A bid of $472 million from last week still stood.

The lone $472 million bid for the D block spectrum, which came in the first round of the auction a week ago, is far below the $1.3 billion minimum price set by the FCC. If bidding fails to reach the minimum, the FCC will have to decide whether to re-auction the D block or possibly modify the network-sharing requirement.

The open-access condition on the C-block spectrum is important because U.S. wireless carriers have traditionally restricted the models of cell phones that can be used on their networks and limited the software that can be downloaded onto them, such as ring tones, music or Web browser software.

But AT&T and Verizon began moving away from that restrictive stance in recent months.

The FCC is keeping bidders' identities secret until the entire auction ends. But analysts say Verizon Wireless and Internet search leader Google Inc (GOOG.O) are the most likely bidders for the C block.
The 700-megahertz signals are valuable because they can go long distances and penetrate thick walls. The airwaves are being returned by television broadcasters as they move to digital from analog signals in early 2009.

In addition to the C and D blocks, the other spectrum includes more local chunks set aside in blocks designated "A" and "B". The final, "E" block, is considered less useful because it is limited to one-way data transmission.

The electronic auction will end when no more bids are submitted.

Third undersea cable cut in Mideast: problems might take two weeks to fix.

An undersea cable carrying Internet traffic was cut off the Persian Gulf emirate of Dubai, officials said Friday, the third loss of a line carrying Internet and telephone traffic in three days.

A third undersea cable has been cut after breaks near Egypt earlier this week disrupted Web access in parts of the Middle East and Asia, Indian-owned cable network operator FLAG Telecom said on Friday.

FLAG, a wholly-owned subsidiary of India's number two mobile operator Reliance Communications (RLCM.BO), said in a statement on its Web site that its FALCON cable had been reported cut at 0559 GMT, 56 km (35 miles) from Dubai, on a segment between the United Arab Emirates and Oman.

Ships have been dispatched to repair two undersea cables damaged on Wednesday off Egypt.

FLAG Telecom, which owns one of the cables, said repairs were expected to be completed by February 12. France Telecom, part owner of the other cable, said it was uncertain when repairs would be completed.

Stephan Beckert, an analyst with TeleGeography, a research company that consults on global Internet issues, said the cables off Egypt were likely damaged by ships' anchors.

The loss of the two Mediterranean cables -- FLAG Telecom's FLAG Europe-Asia cable and SeaMeWe-4, a cable owned by a consortium of more than a dozen telecommunications companies -- has snarled Internet and phone traffic from Egypt to India.

Officials said Friday it was unclear what caused the damage to FLAG's FALCON cable about 50 kilometers off Dubai. A repair ship was en route, FLAG said.

Eric Schoonover, a senior analyst with TeleGeography, said the FALCON cable is designed on a "ring system," taking it on a circuit around the Persian Gulf and enabling traffic to be more easily routed around damage.
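For readers curious why a ring design helps, the idea is that traffic between any two points on the ring can travel in either direction, so a single break still leaves one intact arc. Here is a minimal sketch of that failover logic; the node names are hypothetical and do not reflect FALCON's actual landing points:

```python
# Minimal illustration of why a ring network survives a single cable cut:
# between any two nodes there are two directions of travel around the ring,
# so cutting one segment still leaves the opposite arc intact.
ring = ["Dubai", "Oman", "Qatar", "Bahrain", "Kuwait", "Iraq"]  # hypothetical

def arcs(src, dst):
    """Return the two candidate paths: clockwise and counterclockwise."""
    i, j = ring.index(src), ring.index(dst)
    n = len(ring)
    cw = [ring[(i + k) % n] for k in range((j - i) % n + 1)]
    ccw = [ring[(i - k) % n] for k in range((i - j) % n + 1)]
    return cw, ccw

def route(src, dst, cut):
    """Pick whichever arc avoids the cut segment (an unordered node pair)."""
    for path in arcs(src, dst):
        segments = {frozenset(seg) for seg in zip(path, path[1:])}
        if frozenset(cut) not in segments:
            return path
    return None  # two or more cuts can isolate part of the ring

# A single cut between Dubai and Oman: traffic simply goes the long way around.
print(route("Dubai", "Qatar", cut=("Dubai", "Oman")))
```

A point-to-point cable, by contrast, has only one path, which is why the two Mediterranean breaks were so much more disruptive.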

Schoonover said the two cables damaged Wednesday collectively account for as much as three-quarters of the international communications between Europe and the Middle East, so their loss had a much bigger effect.

Without the use of the FLAG Europe-Asia cable and SeaMeWe-4, some carriers were forced to reroute their European traffic around the globe, which could cause delays, Beckert said.

Other carriers could use SeaMeWe-3, an older cable that remained the only direct connection from Europe to the Middle East and Asia. Because this cable is older, it has a smaller capacity than the two damaged cables, Beckert said.

Still, Beckert stressed that although the problem created a "big pain" for many carriers, it did not compare to the several months of disruption in East Asia in 2006, after an earthquake damaged seven undersea cables near Taiwan.

TeleGeography Research Director Alan Mauldin said new cables planned to link Europe with Egypt should provide enough backup to prevent most similar problems in the future.

Schoonover said a similar Internet problem could not happen in the United States.

"We have all the content here," he said. "It's not going to be felt other than we won't get the BBC."

TeleGeography officials also said most traffic between the U.S., Canada and Mexico is carried over land, and there is a plentiful supply of undersea cables carrying traffic under the Atlantic and Pacific oceans.

Meanwhile, Internet service was slow Friday in Dubai and Egypt, where online service was intermittent, but there was less demand because many businesses in those countries aren't open on Fridays.

Service providers in Egypt said they hoped to have improved capacity by Sunday.

Web surfers in India were experiencing a marked improvement in service, though graphic- or video-heavy sites were still taking longer to load.

Most of the major Internet service providers in India, like Reliance and VSNL, were starting to use backup lines Friday, allowing service to slowly come back, said Rajesh Chharia, president of the Internet Services Providers Association of India.

The Indian ISPs were still alerting customers to slowdowns over the next few days, with service quality reduced by 50 percent to 60 percent, he said.

The Internet slowdowns had no effect on trading at the country's two main stock exchanges, the BSE and the NSE, because they aren't dependent on the downed cables, Chharia said.

Individual Web users were still feeling the effects.

Madhu Vohra, who lives in the city of Noida on the outskirts of Delhi, said she uses Internet phone service Skype to call her son in the United States, but she hasn't been able to reach him since the slowdown.

"We keep trying for a long time and the message comes up, 'This page can't display,' so finally we just turn the computer off and give up," Vohra said.

Internet cafes typically full of teenaged gamers are nearly empty with speeds still frustratingly slow.
