
Sunday, January 31, 2010

AT&T's network for the iPad and iPhone is beefing up

The recent flurry of rumors predicting the end of AT&T's arrangement as exclusive provider of the Apple iPhone were apparently false. In fact, not only did Apple not announce the demise of exclusivity, it doubled down on its exclusive arrangement with AT&T by revealing AT&T as the sole provider of 3G wireless access for the upcoming iPad as well.
The renewed commitment comes despite numerous complaints of poor or slow data bandwidth from business users and consumers alike, particularly in metropolitan regions like New York and San Francisco, where iPhone use is exceptionally heavy. Judging by the coverage maps in Verizon's ads, if you venture outside of those urban areas you might be lucky to find a 3G connection at all.

I had predicted that the addition of the iPad, a device even more dedicated to data consumption than its iPhone cousin, could be the straw that breaks the proverbial camel's back, bringing AT&T's network to its knees. However, AT&T addressed many of those concerns on its quarterly earnings call on Thursday.

It was revealed on the call that AT&T has twice as many smartphone users as its nearest competitor, and that AT&T has experienced a 5,000 percent spike in broadband data consumption since introducing the iPhone. The explosive growth in data demand was both unprecedented and unexpected, and could explain some of the challenges the AT&T network has faced.

In 2009, AT&T added 1,900 new cell sites, expanded 3G coverage to over 360 markets--reaching an estimated 75 percent of the population--and added 850 MHz 3G service, improving the range and strength of the 3G signal. It also enabled HSPA 7.2 throughout the network, increasing 3G download speeds.

Speaking to analysts on the earnings call, John Stankey, president and CEO of AT&T Operations, said "We're very pleased to say that one of the 7.2-enabled devices that will have connectivity on our network is Apple's new iPad, which was unveiled yesterday."

Stankey added, "We're really excited about the device, and we worked closely with Apple in planning for its connectivity on our network. AT&T is a natural fit for the iPad, given the combination of the ever-improving speed of our 3G network and our robust Wi-Fi capabilities. We have a thorough technical understanding, with a good read on the iPad's usage requirements and characteristics, and all that is included in our network plans for 2010 in the plans I'm sharing with you this morning."

AT&T has aggressive plans for 2010 as well, including investing over $2 billion to expand and improve the broadband data network. It plans to deploy fiber-optic backhaul, which will increase 3G data speeds even further, and to focus on boosting data capacity in troubled areas like New York and San Francisco.

Overall, AT&T customers should be reassured that AT&T is not deaf to their complaints, and that it is taking aggressive strides to improve the speed, availability, and stability of its 3G network.

As it relates to the iPad, though, I found AT&T CFO Rick Lindner's statement to be telling. "We believe, though, the device, based on where we believe it will be used--in homes, in offices, coffee shops, bookstores, airports, so on and so forth--will be used a substantial amount of time in a Wi-Fi environment. And so we'll just--we'll have to monitor this usage as the device gets out there. And if it's substantially different, we'll adapt to it. But right now, I think the economics will be very positive because it will be a very low-cost device for us--no cost, really, in terms of acquisition."

Translated, Lindner is saying that, although Apple will charge $130 extra for a 3G-capable iPad, and AT&T will happily take your $30 a month for unlimited 3G broadband access, AT&T is betting that iPad users will rely primarily on Wi-Fi, so the $30 a month will be nearly pure profit, with little impact on its 3G bandwidth.

That reinforces my belief that there is little point in paying extra for the 3G iPad. Either Apple will eliminate 3G from the lineup and stick with Wi-Fi, or it will eventually phase out the Wi-Fi-only version and offer the Wi-Fi-plus-3G iPad at the lower price the Wi-Fi models are being introduced at. Even if that happens, though, I see no reason to pay $30 a month for 3G connectivity when free Wi-Fi is fairly ubiquitous.


Wednesday, January 20, 2010

Picture-driven computing (MIT)

Until the 1980s, using a computer program meant memorizing a lot of commands and typing them in a line at a time, only to get lines of text back. The graphical user interface, or GUI, changed that. By representing programs, program functions, and data as two-dimensional images — like icons, buttons and windows — the GUI made intuitive and spatial what had been memory-intensive and laborious.

But while the GUI made things easier for computer users, it didn’t make them any easier for computer programmers. Underlying GUI components is a lot of computer code, and usually, building or customizing a program, or getting different programs to work together, still means manipulating that code. Researchers in MIT’s Computer Science and Artificial Intelligence Lab hope to change that, with a system that allows people to write programs using screen shots of GUIs. Ultimately, the system could allow casual computer users to create their own programs without having to master a programming language.

The system, designed by associate professor Rob Miller, grad student Tsung-Hsiang Chang, and the University of Maryland’s Tom Yeh, is called Sikuli, which means “God’s eye” in the language of Mexico’s Huichol Indians. In a paper that won the best-student-paper award at the Association for Computing Machinery’s User Interface Software and Technology conference last year, the researchers showed how Sikuli could aid in the construction of “scripts,” short programs that combine or extend the functionality of other programs. Using the system requires some familiarity with the common scripting language Python, but it requires no knowledge of the code underlying the programs whose functionality is being combined or extended.
When the programmer wants to invoke the functionality of one of those programs, she simply draws a box around the associated GUI, clicks the mouse to capture a screen shot, and inserts the screen shot directly into a line of Python code.

Suppose, for instance, that a Python programmer wants to write a script that automatically sends a message to her cell phone when the bus she takes to work rounds a particular corner. If the transportation authority maintains a web site that depicts the bus’s progress as a moving pin on a Google map, the programmer can specify that the message should be sent when the pin enters a particular map region. Instead of using arcane terminology to describe the pin, or specifying the geographical coordinates of the map region’s boundaries, the programmer can simply plug screen shots into the script: when this (the pin) gets here (the corner), send me a text.

“When I saw that, I thought, ‘Oh my God, you can do that?’” says Allen Cypher, a researcher at IBM’s Almaden Research Center who specializes in human-computer interactions. “I certainly never thought that you could do anything like that. Not only do they do it; they do it well. It’s already practical. I want to use it right away to do things I couldn’t do before.”

In the same paper, the researchers also presented a Sikuli application aimed at a broader audience. A computer user hoping to learn how to use an obscure feature of a computer program could use a screen shot of a GUI — say, the button that depicts a lasso in Adobe Photoshop — to search for related content on the web. In an experiment that allowed people to use the system over the web, the researchers found that the visual approach cut in half the time it took for users to find useful content.

In the same way that a programmer using Sikuli doesn’t need to know anything about the code underlying a GUI, Sikuli doesn’t know anything about it, either. Instead, it uses computer vision algorithms to analyze what’s happening on-screen.
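Sikuli's core trick — locating a GUI element purely by its pixels — can be illustrated with naive template matching over a pixel grid. This is a minimal plain-Python sketch, not Sikuli's actual implementation (which uses far more robust computer-vision techniques that tolerate rendering differences); the toy "screen" and "button" data are invented for illustration:

```python
def find_template(screen, template):
    """Return (row, col) of the top-left corner where `template`
    exactly matches a region of `screen`, or None if it is absent.
    Both arguments are 2D lists of pixel values (small ints here)."""
    th, tw = len(template), len(template[0])
    sh, sw = len(screen), len(screen[0])
    # Slide the template over every position where it fully fits.
    for r in range(sh - th + 1):
        for c in range(sw - tw + 1):
            if all(screen[r + i][c + j] == template[i][j]
                   for i in range(th) for j in range(tw)):
                return (r, c)
    return None

# A toy 5x5 "screenshot" and a 2x2 "button" pattern to locate.
screen = [
    [0, 0, 0, 0, 0],
    [0, 1, 2, 0, 0],
    [0, 3, 4, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
]
button = [[1, 2],
          [3, 4]]

print(find_template(screen, button))  # (1, 1)
```

Once the element's position is known, a tool like Sikuli can synthesize a click at those coordinates — which is why it needs no access to the program's underlying code or file formats.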
“It’s a software agent that looks at the screen the way humans do,” Miller says. That means that without any additional modification, Sikuli can work with any program that has a graphical interface. It doesn’t have to translate between different file formats or computer languages because, like a human, it’s just looking at pixels on the screen.

In a new paper to be presented this spring at CHI, the premier conference on human-computer interactions, the researchers describe a new application of Sikuli, aimed at programmers working on large software development projects. On such projects, new code accumulates every day, and any line of it could cause a previously developed GUI to function improperly. Ideally, after a day’s work, testers would run through the entire application, clicking virtual buttons and making sure that the right windows or icons still pop up. Since that would be prohibitively time-consuming, however, broken GUIs may not be detected until the application has begun the long and costly process of quality assurance testing.

The new Sikuli application, however, lets programmers create scripts that automatically test an application’s GUI components. Visually specifying both the GUI and the window it’s supposed to pull up makes writing the scripts much easier; and once written, they can be run every night without further modification.

But the new application has an added feature that’s particularly heartening to non-programmers. Like its predecessors, it allows users to write their scripts — in this case, GUI tests — in Python. But of course, writing scripts in Python still requires some knowledge of Python — at the very least, an understanding of how to use commands like “dragDrop” or “assertNotExist,” which describe how the GUI components should be handled.

The new application gives programmers the alternative of simply recording the series of keystrokes and mouse clicks that define the test procedure.
For instance, instead of typing a line of code that includes the command “dragDrop,” the programmer can simply record the act of dragging a file. The system automatically generates the corresponding Python code, which will include a cropped screen shot of the sample file; but if she chooses, the programmer can reuse the code while plugging in screen shots of other GUIs.

And that points toward a future version of Sikuli that would require knowledge neither of the code underlying particular applications nor of a scripting language like Python, giving ordinary computer users the ability to intuitively create programs that mediate between other applications.
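The record-and-generate step can be sketched as a small translator from recorded events to script lines. This is a hypothetical illustration, not Sikuli's actual recorder: only the generated command names "click" and "dragDrop" come from the article, and the event format is invented.

```python
def generate_script(events):
    """Turn a recorded event log into Sikuli-style Python commands.
    Each event references the cropped screen shot captured with it."""
    lines = []
    for ev in events:
        if ev["type"] == "click":
            lines.append(f'click("{ev["screenshot"]}")')
        elif ev["type"] == "drag":
            lines.append(f'dragDrop("{ev["source"]}", "{ev["target"]}")')
    return "\n".join(lines)

# A hypothetical recording: click a file icon, then drag it to the trash.
recorded = [
    {"type": "click", "screenshot": "file_icon.png"},
    {"type": "drag", "source": "file_icon.png", "target": "trash.png"},
]

print(generate_script(recorded))
# click("file_icon.png")
# dragDrop("file_icon.png", "trash.png")
```

Because the generated lines are ordinary script code, a programmer can edit them afterward — for example, swapping in screen shots of other GUIs, as the article describes.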
