Halo: Reach, the latest installment in Microsoft's wildly popular video game franchise, is proving to be the fastest-moving title in the series yet, Microsoft said on Wednesday.
In the first 24 hours that Halo: Reach has been available, it has pulled in more than $200 million in sales in the US and Europe, making it the fastest-selling game Microsoft has ever released.
"Every major installment has grown in scope and popularity, firmly cementing the 'Halo' franchise as one of the most popular entertainment properties in the world over the past decade," Microsoft Game Studios' corporate vice president Phil Spencer said in a statement Wednesday evening.
Indeed, the popularity of the franchise appears to be enjoying unabated growth, and each new release has broken the records set by its predecessor, at least according to Microsoft's tallies. Halo 3, for example, earned $170 million in its first day of availability when it launched in 2007, breaking the $125 million record set by Halo 2 in 2004.
By the end of Halo 3's first week of availability, it had earned $300 million, and Halo: Reach looks to be well on track to break that record as well.
Tony Bartel, president of video game retailer GameStop, predicted that Halo: Reach will be the "biggest title in the series as well as one of the biggest titles in 2010." GameStop held midnight launch events in 4,000 of its stores on Tuesday, and eyewitness accounts tell of lines more than one hundred customers long waiting to get the game at stores all over the U.S.
GameStop is hoping to stretch out that first day excitement a bit longer by hosting a post-launch Halo: Reach tournament called "Melee by the Bay." The tournament has participating GameStop locations hosting one-on-one tournaments beginning on September 24, which are then followed by a national four-on-four online tournament with a $15,000 grand prize.
Saturday, April 30, 2011
Friday, April 29, 2011
Microsoft shuts down malware-friendly Autorun
Microsoft has finally disabled the feature in older Windows versions that helped spread worms like Conficker
Microsoft has, at long last, put the brakes on the notoriously exploitable Autorun feature found in older versions of Windows. Arguably synonymous with "autoinfect," the Autorun feature is directly responsible for helping propagate worms by giving bad guys a way to easily spread malware via USB devices.
Autorun works by automatically executing code embedded in autorun.inf files on USB devices and other portable media. The change, pushed out Tuesday via Windows Update among an array of security patches, disables Autorun. Disabling the feature previously required manually tweaking the registry or applying a roundabout fix.
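For readers curious about that manual workaround, the sketch below shows the registry tweak in Python. It is illustrative only: it relies on the widely documented NoDriveTypeAutoRun policy value (0xFF disables Autorun for every drive type) and must be run from an elevated prompt on Windows.

```python
# Minimal sketch of the pre-patch manual fix: write the NoDriveTypeAutoRun
# policy value so Windows stops honoring autorun.inf on every drive type.
# Requires administrator rights; winreg is a Windows-only standard-library module.
import winreg

EXPLORER_POLICIES = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer"
ALL_DRIVE_TYPES = 0xFF  # bitmask covering all drive types

def disable_autorun():
    # Create (or open) the Explorer policies key and set the DWORD value.
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, EXPLORER_POLICIES,
                            0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "NoDriveTypeAutoRun", 0,
                          winreg.REG_DWORD, ALL_DRIVE_TYPES)

if __name__ == "__main__":
    disable_autorun()
    print("Autorun disabled for all drive types; log off or restart to apply.")
```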
The update affects Windows Server 2008 and pre-Windows 7 versions of the desktop OS. Windows 7 comes with Autorun pre-disabled.
Importantly, the change does not affect the behavior of AutoPlay, which automatically executes code on CDs and DVDs. Microsoft offers guidance on its website on how to disable that feature.
Thursday, April 28, 2011
Microsoft previews Internet Explorer 10
The next-generation browser has a heavy emphasis on HTML5 and offers CSS3 capabilities and accelerated graphics
Preaching the mantra of HTML5, Microsoft began offering on Tuesday a preview of its planned Internet Explorer 10 browser, which emphasizes the critical Web specification and its visual effects.
The browser offers CSS3 capabilities and accelerated graphics. "We're hard at work on IE10 on some forward-looking things," said Steven Sinofsky, president of the Windows and Windows Live division at Microsoft, during a presentation Wednesday at the Mix11 conference in Las Vegas. Company officials demonstrated the IE10 platform preview, featuring HTML5 video, CSS3 gradients, and 3D transforms. The preview, which was shown running on a machine with an ARM processor, also boasted faster SVG (Scalable Vector Graphics), CSS3 Flexible Box Layout capabilities, and ECMAScript 5 Strict language improvements.
Microsoft is about three weeks into the development of IE10. The preview is available at the IE Test Drive site, said Dean Hachamovitch, Microsoft corporate vice president of Internet Explorer. He also stressed Microsoft's adherence to "native HTML5," supported in IE9, which was released four weeks ago. "You and your site can take advantage of that today and deliver significantly better browser experiences." Native HTML, Hachamovitch said, means "that you really use the language to take advantage of the underlying OS" and leverage hardware acceleration.
Updates to the IE10 platform preview are planned for every 8 to 12 weeks. No specific time was offered for a general release of the browser. Hachamovitch acknowledged that browser upgrades at user sites can be a slow process. He cited an example of a hospital nuclear imaging system he was aware of that still used IE6: "Sometimes, the old versions just take a while to go away."
Despite Microsoft's emphasis on HTML5, seen as a rival to the company's proprietary Silverlight rich Internet application plug-in, a beta of Silverlight 5 is also due to be released at the conference. But in touting HTML5, Hachamovitch stressed that it offers capabilities that previously required a plug-in. "Native HTML5 support within Windows in IE9 makes a huge difference in what these sites can do."
Microsoft is acknowledging that HTML5 "is the language for developing front ends on the Web," said analyst Al Hilwa, of IDC. "The position on Silverlight is no different than that articulated earlier. It seems to me that Silverlight will remain native-type technology as an extension of .Net into lighter-weight devices. Silverlight will likely be heavily used in Windows 8 tablets, but we will not know for sure until September."
Wednesday, April 27, 2011
REVIEW: Microsoft SharePoint 2010 Beta Brings Already Solid Server into Modern Day
Office SharePoint is one of Microsoft's biggest success stories in the corporate world. SharePoint 2007 is still a solid performer for a variety of tasks, but it has been showing its age. eWEEK Labs' tests of the SharePoint 2010 beta show that Microsoft has done a good job of bringing the server squarely in step with the times, providing business-oriented social networking features and a new interface as well as beefed-up capabilities for the kinds of tasks for which businesses have been counting on SharePoint.
Ask businesspeople what the best and most useful product made by Microsoft is, and you may be surprised to hear many skip past the more obvious choices—such as Windows and Office—and go right to SharePoint.
Introduced as a modest set of online extensions for a variety of online and collaborative tasks, SharePoint is arguably the most successful Microsoft product of the last 10 years, especially in the corporate world. In many ways, SharePoint has become the core on which Microsoft has based most of its online enterprise solutions.
Need a corporate portal? SharePoint. Want a collaboration system? SharePoint. A document management system? Web publishing system? For those and many other tasks, companies have made use of the SharePoint platform.
All of this isn't exactly what Microsoft had in mind for SharePoint—users have continually pushed the platform past its original design goals and have used it for tasks such as enterprise content management and records management.
However, while the current version, SharePoint Server 2007, is an excellent product (and the winner of an eWEEK Labs Analyst's Choice award), it is definitely showing its age. To put it into perspective, when Microsoft was developing SharePoint 2007 in 2006, Twitter was just starting to leave its prototype stage and Facebook was just opening up to non-college students.
I recently tested the beta of the newest SharePoint server, which is due in the first half of 2010. I found that it has definitely caught up with the times, including capabilities such as Twitter-style microblogging and social networking. However, in my tests of the SharePoint 2010 beta, I also saw a much improved interface that takes advantage of rich Web technologies (and that also works well on non-Internet Explorer browsers), and I saw many new enterprise features that take into account the advanced applications for which businesses have been using SharePoint.
10 Strategies Microsoft Should Follow in 2010
5. Create a solid marketing campaign
Apple's "I'm a Mac, I"m a PC" ads have proven extremely successful. Over the past few years, Microsoft has tried to match their success with marketing campaigns of its own. Unfortunately, they never worked out. Microsoft needs to spend time in the new year developing marketing campaigns that appeal to consumers, shed its products in a good light, and make them understand why they want to buy Windows or use Bing. It's not easy, for sure, but the software giant needs to do its best.
6. Stay true to the enterprise
As Google and Apple attempt to steal operating-system market share away from Microsoft, it's in the enterprise where the software giant can solidify its power. In the software space, the big money is made in the business world. Google can't break into that space. Apple has had very little success. Microsoft rules the enterprise. In 2010, it needs to maintain that rule. It can't simply switch gears to appeal to consumers because the competition has. By controlling the enterprise, Microsoft can keep its stranglehold on the market, no matter the competition's offerings.
7. Get rid of Starter edition
Microsoft made a mistake offering Windows 7 Starter edition to netbook users in 2009. Many of those consumers were upset to see that they couldn't get the same experience on a netbook that they might otherwise enjoy on a standard notebook or desktop. In 2010, Microsoft needs to optimize Windows 7 to work with the netbook, so all versions of the software have the new features users want.
8. Don't forget Web advertising
For too long, Microsoft's Web-advertising efforts have been poor. When compared to Google's advertising platform, Microsoft's service falls short in almost every area. Microsoft needs to drastically improve its Web-advertising platform in 2010 if it wants to be successful on the Internet. Advertising is the way Microsoft will pay for many of its services going forward. Without providing a good alternative to Google's advertising services, it won't have much of a chance.
9. Get to work on security
Microsoft has done a better job of confronting the many security issues that face its operating system, but it has much more work to do in 2010. This year, the operating system faced zero-day vulnerabilities and far too many unpatched items that could have wreaked havoc on the user's computer. A better security initiative (and more services like Security Essentials) will increase Microsoft's stock in the security community. It has an opportunity to secure its operating system even more effectively in 2010. It can't miss that opportunity.
10. Don't obsess over Apple
Microsoft has a tendency to obsess over its competitors. It had an unhealthy obsession with Apple and Google in 2009. The Google obsession is understandable (after all, that company could cripple Microsoft), but the software giant's focus on Apple is a bit much. There's no debating that Apple can have a direct impact on Microsoft's bottom line. At the same time, Apple's OS market share remains small. And although the iPhone is beating Windows Mobile badly, Microsoft can still fall back on the enterprise. Apple is a large, powerful company, but it's not nearly as big a threat to Microsoft as some want to believe.
Apple's "I'm a Mac, I"m a PC" ads have proven extremely successful. Over the past few years, Microsoft has tried to match their success with marketing campaigns of its own. Unfortunately, they never worked out. Microsoft needs to spend time in the new year developing marketing campaigns that appeal to consumers, shed its products in a good light, and make them understand why they want to buy Windows or use Bing. It's not easy, for sure, but the software giant needs to do its best.
6. Stay true to the enterprise
As Google and Apple attempt to steal operating-system market share away from Microsoft, it's in the enterprise where the software giant can solidify its power. In the software space, the big money is made in the business world. Google can't break into that space. Apple has had very little success. Microsoft rules the enterprise. In 2010, it needs to maintain that rule. It can't simply switch gears to appeal to consumers because the competition has. By controlling the enterprise, Microsoft can keep its stranglehold on the market, no matter the competition's offerings.
7. Get rid of Starter edition
Microsoft made a mistake offering Windows 7 Starter edition to netbook users in 2009. Many of those consumers were upset to see that they couldn't get the same experience on a netbook that they might otherwise enjoy on a standard notebook or desktop. In 2010, Microsoft needs to optimize Windows 7 to work with the netbook, so all versions of the software have the new features users want.
8. Don't forget Web advertising
For too long, Microsoft's Web-advertising efforts have been poor. When compared to Google's advertising platform, Microsoft's service falls short in almost every area. Microsoft needs to drastically improve its Web-advertising platform in 2010 if it wants to be successful on the Internet. Advertising is the way Microsoft will pay for many of its services going forward. Without providing a good alternative to Google's advertising services, it won't have much of a chance.
9. Get to work on security
Microsoft has done a better job of confronting the many security issues that face its operating system, but it has much more work to do in 2010. This year, the operating system faced zero-day vulnerabilities and far too many unpatched items that could have wreaked havoc on the user's computer. A better security initiative (and more services like Security Essentials) will increase Microsoft's stock in the security community. It has an opportunity to secure its operating system even more effectively in 2010. It can't miss that opportunity.
10. Don't obsess over Apple
Microsoft has a tendency to obsess over its competitors. It had an unhealthy obsession over Apple and Google in 2009. The Google obsession is understandable (after all, that company could cripple Microsoft), but the software giant's focus on Apple is a bit much. There's no debating that Apple can have a direct impact on Microsoft's bottom line. At the same time, its OS market share is small, at best. And although the iPhone is beating Windows Mobile badly, Microsoft can still fall back on the enterprise. Apple is a large, powerful company, but it's not nearly as big of a threat to Microsoft as some want to believe.
Bing Rewards: Haven’t We Played This Game Before, Microsoft?
Microsoft, we thought you had learned your lesson from the failure of Bing Cashback. It looks like we were wrong.
Earlier today, Microsoft launched Bing Rewards, a new program that lets users earn credits for performing actions like searching on Bing, making Bing their homepage and testing out new features. The more users perform these actions, the more credits they earn.
Of course, there’s a catch — you have to download the “Bing Bar” (it’s a toolbar for Internet Explorer) onto your Windows machine and sign up with a Windows Live ID. We hope you’re running Boot Camp, Mac owners.
Overall, Bing Rewards is exactly like any loyalty rewards program you’ve used via your credit card or at your favorite store. Buy more stuff and complete certain tasks, and you get some minuscule reward. The program is clearly the successor to Bing Cashback, the now-defunct rewards program that gave you money for buying products through the Bing search engine. Cashback’s termination was announced in June, and it officially closed on July 30.
We were hoping that Cashback would be the end of Microsoft trying to (directly) buy users, but it looks like that was hoping for too much. While the program seems like a decent enough concept, we just don’t think people treat search like they do their credit cards. Are thousands or millions of people really going to switch from Google and install a god-awful toolbar just so they can get a Zune?
Microsoft, you’re wasting time, energy and resources on this rewards program. Awesome new features are going to help you win the search war, not Bing points and gift cards.
Monday, April 25, 2011
Big Apple, Big Google, Big Brother
In some ways, all the uproar about Apple saving location data on its iOS device users is old news. Guess what? Big Brother, or Big Google, also collects geo-location information from its mobile, Android-powered devices. It’s like anything else in computing: geo-location can provide great services and resources, but it can also be abused.
Take, for example, a woman who was recently robbed in Texas. Using her stolen iPhone, police officers were able to quickly find not only her stolen phone, but her wedding ring as well. Yea!
On the other hand, say another woman is in an abusive relationship and goes to a friend’s house or to a “safe-house” shelter. Her husband tracks her down using her smartphone and literally drags her back “home.”
That last case isn’t fiction. My friend Angela, a Certified Information Privacy Professional (CIPP) tells me, “I teach tech-safety courses for domestic-violence survivors. This scenario has a probability of 1. In the two years I’ve been teaching, we’ve had multiple instances of abusers using hidden GPS-Bluetooth phone combinations to track vehicles, which sort of totally sucks when the vehicle is now parked at a ‘secret’ women’s shelter.”
“Worse, the use of phone ‘family’ plans and fancy smartphones are among the most difficult issues we face in the teaching process,” Angela said. “Most of the women we see are in desperate financial straits; often there’s no money for any sort of mobile plan (and we’ll leave aside the whole getting-an-account-set-up-under-those-circumstances thing), let alone for a decent phone. Realistically, they know they have to dump the gadget and the plan and so forth, but practically? With so much else happening? Argh.”
How about wanting the local cops to know where you’ve been for the last two weeks? Police already have the technology to grab GPS location data from smartphones including latitude, longitude, altitude and time data. They don’t need sophisticated forensics equipment. In Michigan, cops can do it in a roadside traffic stop in a few minutes.
The cops or the jealous ex don’t even need to get their hands on your smartphone or tablet. Both Apple and Google regularly pull down your location data. Apple, it seems, does it twice a day, while Google updates your location several times an hour.
Why do they need continual access to this information? Beats me. Advertising is what comes first to mind, but do they really need to know where I am around the clock to make sure I get local ads? It strikes me as overkill.
And here’s the part that really worries me. What stops someone from snatching that location data out of the air over the Wi-Fi or 3G/4G network? Do we want a government, say Syria, using this information to track down protesters seen at a recent demonstration? Might Syria’s dictatorship be doing just that with its recent pinpoint kidnapping of activists?
I know there are people who don’t consider it a big deal that Big Companies potentially know their every move. I do. There’s a huge difference between information that you opt to give a company when you buy their product or click on a Web ad, and information that flows to them whenever your device is turned on.
Sure, you can opt out by refusing to grant any geo-location app permission to run, but that’s not a viable answer. That’s throwing out the baby with the bathwater.
The real answer, the better answer, is for Apple and Google to keep only a brief log of where you’ve been, and to stop transmitting this data to the home office. The applications don’t need this comprehensive information; the companies don’t need it, even if they want it; and the potential harm that can result from using the information far outweighs the benefits. Do the right thing, Apple and Google: Get out of the Big Brother business.
Sunday, April 24, 2011
iPad Has Already Overtaken Linux in Browser Usage
Just a little more than a year since its launch, the iPad is already accounting for more web page views than the longstanding open source operating system Linux.
According to data from StatCounter Global Stats, iOS accounted for 1.17% of U.S. April browser visits to the more than 3 million websites that use the company’s free web analytics service. Meanwhile, Linux accounted for only 0.71%. iOS for iPad has also crept past Linux in several other countries.
StatCounter spokesperson Ronnie Simpson said that the company separates its “desktop operating systems” category from its “mobile operating systems” category depending on whether a device that uses it fits in a pocket. Before the iPad, iOS didn’t appear on the desktop operating system graph at all. The visits currently represented in the iOS category only represent iPad use, not iPod or iPhone use. It looks like iPad traffic passed Linux traffic in the US sometime in December.
Performance monitoring company Pingdom, which first noted the stats in a blog post, pointed out that comparing iOS for iPad with desktop browsers is a stretch, and that tablet operating systems will likely constitute their own category in the near future.
Classifications aside, the quick adoption of the iPad’s browser is stunning considering that Linux enthusiasts recently celebrated the operating system’s twentieth anniversary.
Saturday, April 23, 2011
How to See the Secret Tracking Data in Your iPhone
Coverage of the iPhone tracking "feature" has ranged from concern to outrage. "I don't know about you, but the fact that this feature exists on an iPhone is a deal-killer," wrote PCMag Columnist John Dvorak, shortly after news broke. PCMag Executive Editor Dan Costa drew a softer line, writing, "Apple may not be actively tracking you, but it did turn your phone into a tracking device without telling you."
As frustrating as it is to learn that your iPhone has been spying on you, collecting an unencrypted treasure trove of your travels, the truth is we knew this was happening. Last June we reported that Apple updated its privacy policy, stating that it could, "collect, use, and share precise location data, including real-time geographic location of your Apple computer or device." How precise that location data is remains in question. What is clear, however, is that the update arrived alongside the release of iOS 4—the OS affected by the tracking feature—and identified the four devices (iPhone 3G, iPhone 3GS, iPhone 4, and iPad with 3G) affected by the tracking feature.
I'm not about to give Apple a pass on disclosure or execution. Who combs through an Apple privacy statement when the latest iOS software awaits? And, to "collect" and "share" user data is one thing; to retain it in an unprotected file is quite another.
However, I think it's important that, with a few days' hindsight, we move beyond the bombast, pin down the facts, and see what's actually there. To do this, I've taken a close look at what's at risk and, in empirical spirit, borrowed fellow PCMag software analyst Jeff Wilson's iPhone 3GS to see what I could learn of the man and his travels using Pete Warden's iPhoneTracker app.
UPDATE: While I tested the tracking feature using the OS X-based iPhoneTracker, Windows users can access their data using iPhoneTrackerWin.
What and Who Is At Risk?
First, the bad news: if you're running iOS 4, your location-based data—latitude and longitude coordinates, coupled with timestamps—is stored on your phone in a file called "consolidated.db;" that file is automatically transferred to any machines with which you sync (and back up), and it's probably flowing back to Apple in some form or another. The worse news: if you haven't encrypted your backups, that data is unprotected.
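If you want to peek at that file yourself, the sketch below shows one way to read it with Python's sqlite3 module. It assumes the commonly reported layout of consolidated.db (a SQLite database with a CellLocation table holding Timestamp, Latitude, and Longitude columns, with timestamps counted in seconds from January 1, 2001); the file path is a placeholder for a copy pulled from your own unencrypted backup.

```python
# Sketch: list the most recent location fixes recorded in a local copy of
# consolidated.db, assuming the commonly reported CellLocation table layout.
import sqlite3
from datetime import datetime, timedelta

APPLE_EPOCH = datetime(2001, 1, 1)  # timestamps are seconds since this date
DB_PATH = "consolidated.db"         # placeholder path to your own copy of the file

def dump_recent_fixes(db_path=DB_PATH, limit=10):
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT Timestamp, Latitude, Longitude FROM CellLocation "
            "ORDER BY Timestamp DESC LIMIT ?", (limit,))
        for ts, lat, lon in rows:
            when = APPLE_EPOCH + timedelta(seconds=ts)
            print(f"{when:%Y-%m-%d %H:%M}  lat={lat:.4f}  lon={lon:.4f}")
    finally:
        conn.close()

if __name__ == "__main__":
    dump_recent_fixes()
```

This is roughly the kind of query a viewer app such as iPhoneTracker runs before plotting the points on a map.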
Now, for the not-so-bad news. There's no confirmation that that data is leaving your custody and no evidence that Apple's harvesting it towards nefarious ends. More likely, it's being used for two things: Apple's reportedly tapping location information to build a database, which may actually be for your own good; and other apps, such as Maps, require geo-locational data to play. To halt both in their tracks, you can disable Location Services.
Furthermore, the data is far from "precise." In fact, Apple's data collection is both inconsistent and imprecise. Rather than using GPS, location information logged in consolidated.db is determined by triangulation via cell-phone towers, a notoriously loose method. Update times run the gamut, left to the whims of cell-phone towers and phone activity. Finally, the location data available on your phone is limited by several variables:
* it dates back only to the release of iOS 4, less than a year ago;
* it only covers 3G-equipped iPads and the iPhone 3G, 3GS, and 4;
* while the data is timestamped to the second on your iPhone, Pete Warden's iPhoneTracker application only lets you browse a single week of activity at a time.
The final point is important: let me show you why.
Friday, April 22, 2011
BlackBerry PlayBook Makes Business Operations Fun
It goes without saying that running a business demands serious mental effort. If you are the kind of entrepreneur who is worn out by demanding business operations, a high-end multimedia tablet can lighten the load.
The UK electronics market is full of such devices. All the leading handset makers, including Samsung, LG, Apple, BlackBerry, and Motorola, offer well-specified tablets, and each is carefully built, so there is little reason to doubt the efficiency or performance of any of them.
If you are looking for the best bet in a capable gadget, the BlackBerry PlayBook is a strong choice. BlackBerry's engineers have worked hard to make it capable of handling vital and demanding business tasks quickly, easily, and comfortably.
Its performance rests on high-end multimedia hardware: a capacitive multi-touch touchscreen, GPS, an orientation sensor (accelerometer), a 6-axis motion sensor (gyroscope), a digital compass (magnetometer), dual HD cameras capable of 1080p video capture (3 MP front-facing, 5 MP rear-facing), and more.
The full list of its multimedia features is too long to cover in one write-up, but it carries every capability currently in fashion; you will struggle to find a feature on the market that it lacks. The best part of the PlayBook is its wide, high-resolution touchscreen, which not only makes tougher tasks easier but is also a pleasure to use.
If you are taken with this gadget and planning to own one, you need not spend a fortune. All the leading UK network operators, including Vodafone, Virgin, Orange, O2, Three, and T-Mobile, offer BlackBerry PlayBook deals at competitive prices, and you can buy from whichever of them you prefer.
Network operators offer all BlackBerry devices under attractive mobile phone deals, such as BlackBerry Torch 9800 deals. All of BlackBerry's devices are readily available at affordable prices, so there is no need to trek from store to store; for added convenience, you can also buy through online portals.
Wednesday, April 20, 2011
Article About Batch Coding Machine
A batch coder is a highly useful machine that serves many applications, printing mandatory variable information such as Best Before, Batch No., Mfg. Date, Exp. Date, and M.R.P. Incl. of All Taxes, or any other details, simultaneously in one stroke. This machine is also known as a batch coding machine, coding machine, packaging machine, semi-automatic batch coding machine, automatic batch coding machine, manual batch coding machine, or contact coding system.
Types of machine:
Hand-operated or manual batch printing machine (MBCM)
Handy Marker
Semi-automatic batch printing machine
- Motorised model (SAMM)
- Table-top model (SATM)
Automatic machine
- For labels (AML)
- For cartons (AMC)
- Two-in-one model (AMT)
Pneumatic contact coder for FFS/pouch packing machines (PCC)
Rotary contact coding machine for continuous FFS/pouch packing machines (RCC)
Hand-operated or manual batch printing machine (MBCM)
In addition to feeding and discharging the labels/cartons/bags by hand, the stroke of the machine is also operated by hand.
Handy Marker
A handy coder that is operated manually and is used to mark/code corrugated cartons, plywood, wooden crates, paper bags, cement and fertilizer bags, leather, cloth, and other surfaces. It is available in various sizes and can be customized to specification.
Motorised model (SAMM)
It is electrically operated, and a foot switch is provided to turn the machine on or off, so both of the operator's hands remain free for feeding and discharging the labels/cartons/bags.
Table-top model (SATM)
This EMC contact coder can be used to print on bags, pouches, cartons, bottles, jars, and any other even surface. It can also be installed on conveyors and FFS machines for online coding.
Automatic Machine
A slanted magazine automatically feeds labels to a polished stainless-steel feed wheel. Each label is picked up by rubber and passed to a timing chain, which carries it under the printing head.
The printing head has a type block in which individual types can be composed, or rubber stereos can be fixed to a half-cylinder block. No make-ready or special skill is involved, as the rubber backup cylinder evens out the impression.
Pneumatic contact coder for FFS/pouch packing machines (PCC) and rotary contact coding machine for continuous FFS/pouch packing machines (RCC)
In these systems the unit is installed directly on the packing machine, which is more convenient in some industrial applications.
Industrial applications
Contact coding machines are widely used across industries for different purposes. A few of them are listed below:
Pharmaceutical
Cosmetics
FMCG
Chemicals
Stationery
Ceramics & Sanitary ware
Processed Food
Watch & Electronic
Printing & Publishing units
Agriculture Products
Oil Industries
Spices Manufacturers
Mineral Water Industries
Food Industries
Tea Packers
Features
Lower initial and operating costs
Provide good quality precision codes
Quick drying inks suitable for virtually all surfaces
The EM series coders are suitable for semi-automatic coding
Monday, April 18, 2011
InfoWorld preview: Office 365 beta III
Working with the Office 365 beta
Before you jump into the beta, you need to decide if you're going to try testing Office 365 as a Small Business or as an Enterprise. The primary difference between the two is in your level of familiarity with the server apps. If you've never dabbled with Exchange, SharePoint, or Lync, choose the Small Business option. If the server stuff's old-hat and you're mostly wondering how (and how much) you'll move from your own servers to Microsoft's, go with the Enterprise beta.
Setting up the beta is not difficult, although the sequence is a bit confusing. Here are the steps you should follow for the Small Business beta.
Microsoft sends you a message saying you've been accepted into the beta. You click on the link to go to the sign-up site and fill out a form. That form allows you to set up a new domain name you can use during the beta; for this review I chose AskWoody.onmicrosoft.com. Enter a few more details and a password, and the sign-up site whirs for a bit, churns out an email message headed to your email inbox with a Microsoft Online Services user ID and temporary password, and puts you on a page that looks very much like the standard Office 365 portal page.
If you already have Office 2010 installed, the initial sign-up will drop you onto a page similar to this one, which steps you through the beginning Admin activities.
Save yourself some time and bring up the Quick Start Guide, linked in Step 1 under "Start here" in the screen shown above.
In the Quick Start Guide, you find a link to go to the Office 365 sign-in page. When you receive the email with your new Microsoft Online Services user ID and password, go to the sign-in page and enter them. After a forced change of the password, you see a Downloads page.
Before you go off into the Admin activities, get your downloads all set up.
Sunday, April 17, 2011
Cisco opens green datacenter to support internal operations
If you are going to be pushing your datacenter vision out to corporate America and expect to have any credibility, it is important that you be running your own business on the infrastructure you are selling. With the opening of its new Allen, TX datacenter, Cisco is doing just that, rolling out a new green datacenter that operates on the full portfolio of Cisco datacenter hardware and software.
From 100 kW of solar cells generating power on the roof (for use by the offices, not the datacenter hardware) to plans to use ambient fresh air to reduce cooling costs, Cisco has attempted to touch all the bases of the current green datacenter model. With an eye towards practicality, though, the expected PUE of the new facility is a comparatively modest 1.35.
This isn’t a bad number, but with every new facility in the datacenter business trying to post PUE ratings as close to 1.0 as possible, it is nice to see a realistic target from a major vendor. For aspects of the calculation such as the use of outside air, Cisco can only factor in average local temperatures, though it expects to be able to use outside-air cooling at least 65% of the time. If Cisco is able to achieve a better PUE after running the facility for an extended period, I’m certain it will make sure the media and its customers are aware of the improvement over the projected rating.
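For readers unfamiliar with the metric, PUE (power usage effectiveness) is simply total facility power divided by the power delivered to IT equipment, so 1.0 is the theoretical ideal. The sketch below illustrates the arithmetic with made-up numbers, not Cisco's actual figures.

```python
# Power Usage Effectiveness: total facility power / IT equipment power.
# The sample load figures below are purely illustrative.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

if __name__ == "__main__":
    it_load_kw = 1000.0   # hypothetical IT load
    overhead_kw = 350.0   # cooling, power distribution, lighting, and so on
    rating = pue(it_load_kw + overhead_kw, it_load_kw)
    print(f"PUE = {rating:.2f}")  # prints PUE = 1.35, i.e. 0.35 kW of overhead per kW of IT load
```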
The datacenter is also one half of what Cisco describes as a metro virtual datacenter. It is paired with a datacenter in Richardson, TX to deliver IT cloud services that span the two facilities and offer the advantages of redundancy for increased uptime and disaster-recovery planning.
I’m sure that Cisco will be making the most of the facility as a showcase for their Unified Computing infrastructure model, which will allow them to give potential customers a more one to one pitch when comparing their converged computing alternative to those offered by other vendors, or, more specifically, HP.
Saturday, April 16, 2011
Oracle to Make OpenOffice.org Community-Based
When a group of developers broke off from Oracle last year to establish the Document Foundation and create the new LibreOffice open-source office suite, it was unclear what would become of the well-known OpenOffice.org project they left behind. Today, Oracle has announced that it will no longer offer a commercial version of the OpenOffice.org software, and that it plans to move the suite to a purely community-based open source project.
In a statement, Oracle Chief Corporate Architect Edward Screven said, "Given the breadth of interest in free personal productivity applications and the rapid evolution of personal computing technologies, we believe the OpenOffice.org project would be best managed by an organization focused on serving that broad constituency on a non-commercial basis. We intend to begin working immediately with community members to further the continued success of Open Office. Oracle will continue to strongly support the adoption of open standards-based document formats, such as the Open Document Format (ODF)."
"Oracle has a long history of investing in the development and support of open source products," Screven continued. "We will continue to make large investments in open source technologies that are strategic to our customers including Linux and MySQL. Oracle is focused on Linux and MySQL because both of these products have won broad based adoption among commercial and government customers."
Oracle provided no additional information about OpenOffice.org, or its own proprietary Web-based office suite, Oracle Cloud Office, which also supports ODF.
The LibreOffice developers released their first stable version of that software earlier this year.
Friday, April 15, 2011
Man Claiming Facebook Ownership Unveils Alleged Zuckerberg Emails
A man claiming 84 percent ownership of Facebook has filed an amended complaint that contains what he says are emails from Facebook chief Mark Zuckerberg, admitting to the ownership deal.
The emails actually read like a deleted scene from "The Social Network." According to the messages presented by Paul Ceglia, Zuckerberg successfully secured funds from Ceglia for what was then known as "The Facebook," but then argued that Ceglia should have a lower ownership stake in the company because Zuckerberg had done all the work. Ceglia agreed, but Zuckerberg then told him that the site was not really going anywhere and offered to refund Ceglia's money and just call the whole thing off. Meanwhile, Facebook had become wildly popular at Harvard and Zuckerberg had secured venture capital funding for the project, something Ceglia said Zuckerberg never disclosed.
Not surprisingly, the emails between the two men devolved into Ceglia threatening to call Zuckerberg's parents and Zuckerberg insisting that he should be paid even more for his efforts.
Facebook lawyer Orin Snyder, meanwhile, said "this is a fraudulent lawsuit brought by a convicted felon, and we look forward to defending it in court. From the outset, we've said that this scam artist's claims are ridiculous and this newest complaint is no better."
Who is Paul Ceglia? In 2003, he hired Zuckerberg to do coding work for a company called StreetFax. Ceglia said Zuckerberg then persuaded Ceglia to invest in Facebook—$1,000 for a 50 percent stake in the company, plus an extra 1 percent stake for every day Facebook was not online past January 1, 2004.
Ceglia's filing includes a July 2003 email from Zuckerberg in which Zuckerberg asks Ceglia for permission to use StreetFax source code for Facebook's search engine. A followup email also proposed charging alumni $29.95 per month to use the site. Ceglia responded that it would probably be hard to get people to sign up and suggested they "make it free until it was popular and then start charging." In the meantime, Ceglia suggested setting up a licensing agreement with Harvard for school items like sweatshirts and mugs.
Ceglia handed over an additional $1,000 in November 2003 and days later Zuckerberg sent him an email labeled "urgent" that discussed the need to move on "The Facebook" immediately.
"I have recently met with a couple of upperclassmen here at Harvard that are planning to launch a site very similar to ours. If we don't make a move soon, I think we will lose the advantage we would have if we release before them," Zuckerberg wrote. "I've stalled them for the time being and with a break if you could send another $1000 for the facebook (sic) project it would allow me to pay my roommate or Jeff to help integrate the search code and get the site live before them."
Those upperclassmen are no doubt the Winklevoss twins, who secured a $65 million settlement from Facebook in 2008 after they claimed that they were the true brains behind Facebook. Just this week, a judge shut down an appeal to overturn that settlement.
Ceglia agreed, but by the New Year, the site was still offline. Zuckerberg again requested more money, but then argued that their deal for a 1 percent stake for every day past January 1 was unfair, and requested a written waiver exempting Zuckerberg from the contract.
Wednesday, April 13, 2011
Windows 7 SP1 Leaked [Download]
Windows 7 is a sure hit for Microsoft; within months of its release, the operating system managed to capture 10% of the OS market share. And today, the first Service Pack has leaked to torrent sites.
Windows 7 SP1 brings two big features - Dynamic Memory support and a more robust Remote Desktop client that uses RemoteFX - along with a large number of bug fixes.
It’s been leaked and is freely available to anyone willing to download and install the update from a torrent site.
Though the update is legit, it's hard to say what else is included in it.
However, what’s good is that this build is very recent, with a compile date of March 27th. The full build string is as follows: 6.1.7601.16537.amd64fre.win7.100327-0053
After the update, the main build number has been incremented by one, so you now have Build 7601. (This is exactly like Vista SP1)
The system was stable for the period I tested it after the update; however, you can try it at your own risk.
Monday, April 11, 2011
WP7 Updates: Multitasking, IE9, Cross-platform Gaming
Windows Phone 7 will get its much-needed software updates, which will bring the basic features to WP7 and bring it in line with the current-generation smartphone OSes: iOS and Android.
XBLA (Xbox Live Arcade) cross-platform gaming is also coming, which means Xbox games are making their way to WP7 devices.
We had compiled a Windows Phone 7 software update roadmap for 2011; here are a few of the latest updates:
March 2011:
* Performance improvements
* Cut and paste
* Support for CDMA radios on the platform
Mid-Late 2011:
* Full version of Internet Explorer 9 with hardware acceleration on the phone (no Flash)
* A "wave of multitasking applications" - multitasking, though it could be limited to some apps
* Better social integration, with Twitter in the People Hub, plus Office document cloud support
* Cross-platform gaming
Sunday, April 10, 2011
Trends In Software Testing
As the complexity of software applications increases, testing becomes more crucial. And in the process, more time consuming. Here is a list of emerging testing practices.
Software is everywhere today and is becoming increasingly mission critical, whether in satellites and planes, or e-commerce websites. Software complexity is also on the rise - thanks to distributed, multi-tier applications targeting multiple devices (mobile, thin/thick clients, clouds, etc). Added to that are development methodologies like extreme programming and agile development. No wonder software testing professionals are finding it hard to keep up with the change.
As a result, many projects fail while the rest are completed significantly late, and provide only a subset of the originally planned functionality. Poorly tested software and buggy code cost corporations billions of dollars annually, and most defects are found by end users in production environments.
Given the magnitude of the problem, software-testing professionals are finding innovative means of keeping up - both in terms of tools and methodologies. This article covers some of the recent trends in software testing - and why they're making the headlines.
Test driven development (TDD)
TDD is a software development technique that ensures your source code is thoroughly unit-tested as compared to traditional testing methodologies, where unit testing is recommended but not enforced. It combines test-first development (where you write a test before you write just enough code to fulfil that test), and refactoring (where, if the existing design isn't the best possible to enable you to implement a particular functionality, you improve it to enable the new feature).
TDD is not a new technique - but it is suddenly centre stage, thanks to the continued popularity of software development methodologies such as agile development and extreme programming.
Optimisations to TDD include the use of tools (such as Pex for Visual Studio - http://research.microsoft.com/en-us/projects/pex/) to improve code coverage by creating parameterised unit tests that look for boundary conditions, exceptions, and assertion failures.
TDD is gaining popularity as it allows for incremental software development - where bugs are detected and fixed as soon as the code is written, rather than at the end of an iteration or a milestone.
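To make the test-first rhythm concrete, here is a minimal sketch in Python; the slugify function and its tests are invented for illustration and are not drawn from any particular project. The tests are written (and fail) first, and only then is just enough code added to make them pass:

import unittest

# Step 1: write the tests first. At this point slugify() does not exist yet, so running them fails.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(slugify("  Trends in Testing "), "trends-in-testing")

# Step 2: write just enough code to make the tests pass.
def slugify(text):
    return "-".join(text.strip().lower().split())

# Step 3: refactor as needed, re-running the tests after every change.
if __name__ == "__main__":
    unittest.main()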
For more details on TDD, use the following links:
http://en.wikipedia.org/wiki/Test-driven_development
http://www.agiledata.org/essays/tdd.html
Virtualisation testing
Testing is becoming increasingly complex - the test environment set-up, getting people access to the environment, and loading it with the right bits from development all take up about 30-50 per cent of the total testing time in a typical organisation. What is worse is that when testers find bugs, it is hard to re-create the same environment for developers to investigate and fix them. Test organisations are increasingly gravitating towards virtualisation technologies to cut down test set-up times significantly. These technologies help organisations to:
* accelerate set-up/tear-down and restoration of complex virtual environments to a clean state, improving machine utilisation
* eliminate 'no repro' bugs by allowing developers to recreate complex environments easily
* improve quality by automating virtual machine provisioning, build deployment, and build verification testing in an integrated manner (details later)
As an offshoot, virtualisation ensures that test labs reduce their energy footprint, resulting in a positive environmental impact, as well as significant savings.
Some of the companies that have virtual test lab management solutions are VMware, VMLogix, and Surgient. Microsoft has recently announced a Lab Management (http://channel9.msdn.com/posts/VisualStudio/Lab-Management-coming-to-Visual-Studio-Team-System-2010/) product as part of its Visual Studio Team System 2010 release. Lab Management supports multiple environment management, snapshots to easily restore to a previous state, virtual network isolation to allow multiple test environments to run concurrently, and a workflow to allow developers to have easy access to environments to reproduce and fix defects.
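Conceptually, the snapshot-and-restore loop these lab management products automate looks something like the toy Python sketch below; the LabEnvironment class is a stand-in invented for illustration, not VMware's, VMLogix's, Surgient's or Microsoft's actual API:

# Toy model of a virtual test lab: capture a clean baseline once, then restore it
# before every test run instead of rebuilding the environment from scratch.
class LabEnvironment:
    def __init__(self, name):
        self.name = name
        self.state = "clean"
        self.snapshots = {}

    def snapshot(self, label):
        self.snapshots[label] = self.state

    def restore(self, label):
        self.state = self.snapshots[label]

    def deploy(self, build):
        self.state = "running " + build

env = LabEnvironment("web-tier")
env.snapshot("clean")                       # known-good baseline, captured once
for build in ["build-101", "build-102"]:
    env.restore("clean")                    # every run starts from the same clean state
    env.deploy(build)
    print(env.name + ": testing " + build)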
Theresa Lanowitz, founder of Voke, a firm involved with analysis of trends in the IT world, expects virtualisation to become 'the defining technology of the 21st century', with organisations of every size set to benefit from virtualisation as a part of their core infrastructure.
Continuous integration
CI is a trend that is rapidly being adopted in testing, where the team members integrate their work with the rest of the development team on a frequent basis by committing all changes to a central versioning system. Beyond maintaining a common code repository, other characteristics of a CI environment include build automation, auto-deployment of the build into a production-like environment, and ensuring a self-test mechanism such that at the very least, a minimal set of tests are run to confirm that the code behaves as expected.
Leveraging virtualised test environments, tools such as Microsoft's Visual Studio Team System (VSTS) can create sophisticated CI workflows. As soon as code is checked in, a build workflow kicks in that compiles the code, deploys it onto a virtualised test environment, triggers a set of unit and functional tests in that environment, and reports on the results.
VSTS takes the build workflow one step further, and performs the build before the check-in is finalised, allowing the check-in to be aborted if it would cause a break, or if it fails the tests. And given historical code coverage data from test runs, the tool can identify which one of the several thousand test cases needs to be run when a new build comes out - significantly reducing the build validation time.
One obvious benefit of continuous integration is transparency. Failed builds and tests are found quickly rather than having to wait for the next build. The developer who checked in the offending code is probably still nearby and can quickly fix or roll back the change.
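Stripped of any vendor tooling, the 'self-test mechanism' behind a gated check-in boils down to something like the small Python driver below; the build and test commands, and the src and tests directory names, are illustrative assumptions rather than any product's real configuration:

import subprocess
import sys

# A deliberately simplified gated check-in: run the build step, then the fast test
# suite, and reject the change if either step fails.
STEPS = [
    ["python", "-m", "compileall", "src"],                      # "build": everything must at least compile
    ["python", "-m", "unittest", "discover", "-s", "tests"],    # minimal self-test suite
]

def gate_checkin():
    for cmd in STEPS:
        if subprocess.run(cmd).returncode != 0:
            print("Check-in rejected: '" + " ".join(cmd) + "' failed.")
            return False
    print("Check-in accepted: build and tests passed.")
    return True

if __name__ == "__main__":
    sys.exit(0 if gate_checkin() else 1)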
For a complete set of tools that help enable CI, see http://en.wikipedia.org/wiki/Continuous_Integration.
Crowd testing
Crowd testing is a new and emerging trend in which, rather than relying on a dedicated team of testers (in-house or outsourced), companies rely on virtual test teams (created on demand) to get complete test coverage and reduce the time to market for their applications.
The company defines its test requirements in terms of scenarios, environments, and the type of testing (functional, performance, etc). A crowd test vendor (such as uTest - www.utest.com) identifies a pool of testers that meet the requirements, creates a project, and assigns work. Testers check the application, report bugs, and communicate with the company via an online portal. Crowd testing vendors also provide other tools, such as powerful reporting engines and test optimisation utilities. Some of the crowd testing vendors are domain specific - such as Mob4hire (www.mob4hire.com), which focuses on mobile application testing. There, testers bid on various projects specific to their handsets, developers choose the testers they require and deploy test plans for the mobile application they are developing, and on completion of the test the mobile tester gets paid for the work.
One obvious advantage is in terms of reducing the test cycle time. But crowd testing is being used in various other scenarios as well - for example, to do usability studies on new user interfaces. The cost savings can be substantial.
Tools driven developer testing
Traditionally, developer testing was primarily limited to unit testing and some code coverage metrics. However, as organisations realised that the cost of defects found during development was far lower than that of defects found in test or production, they have begun to invest in tooling to enable developers to find bugs early on.
IDE-integrated tools have made the self-testing practice acceptable to developers, and the unit-testing and coverage analysis process automated for them. These tools also make it easy to analyse performance and compare it with a baseline by extending the unit test infrastructure.
Development teams are also expected to perform a level of security testing (threat modelling, buffer overflows, SQL injection, etc). For teams developing in native languages such as C/C++, developers are also required to use run-time analysis tools to check for memory leaks, memory corruption and thread deadlocks. Developers are also using static analysis tools to find accessibility, localisation and globalisation issues - and in some cases more sophisticated errors related to memory management and performance simulation - by using data flow analysis and other techniques.
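As a concrete example of what that security testing looks for, the snippet below contrasts a query assembled by string concatenation (open to SQL injection) with a parameterised query; Python's built-in sqlite3 module is used purely for illustration:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # Flagged in review: user input is concatenated into the SQL text, so a value
    # like "x' OR '1'='1" changes the meaning of the query.
    return conn.execute("SELECT role FROM users WHERE name = '" + name + "'").fetchall()

def find_user_safe(name):
    # Preferred: the value is bound as a parameter and never interpreted as SQL.
    return conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_unsafe("x' OR '1'='1"))  # returns every row - the injection worked
print(find_user_safe("x' OR '1'='1"))    # returns nothing - treated as a literal name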
As a result of using these innovative methods, testers can now spend a lot more of their time on integration testing, stress, platform coverage, and end-to-end scenario testing. This will help them detect higher-level defects that would have otherwise trickled down to production.
Saturday, April 9, 2011
Windows 8 SmartScreen file checker - Smart feature or more 'dumb dialog box' security?
If early leaked screenshots are to be believed (and the feature survives without being canned for some reason or another), Microsoft is set to bake into Windows 8 a file verification tool based on the SmartScreen Filter currently employed in Internet Explorer and Windows Live Messenger 2011. Is this a smart move, or yet another of Microsoft's attempts at protecting the end user by throwing dialog boxes at them?
Now, as a rule I'm pretty pro anything that makes the end user safer, but in this case I'm just not sure. Here's why. It's pretty clear that Microsoft knows that it cannot bake a fully-functional antivirus program into Windows without attracting the evil gaze of regulatory bodies all around the world. So instead of fighting that fight (ultimately I have a hard time seeing governments ruling against something that will make everyone safer ...), Microsoft is turning to a growing number of diverse tools and features to protect Windows users. The catch is that such a feature only works for as long as the checkbox stays checked, and unless it offers tools to customize the experience, people will switch it off (didn't Microsoft learn anything from UAC prompts?).
But there’s another, more important, reason why I don’t like the SmartScreen idea, and Long Zheng himself points it out:
Although it’s been proven highly effective to prevent socially engineered malware, it’s also subject to false positives which frustrates developers to “clear their name”.
False positives are a huge pain in the rear, but we live with them and accept them (and some people are badly caught out by them). A traditional antivirus tool, though, tells you it's detected malware, tells you what it is (or what it thinks it is), and offers corrective action. In other words, it gives you an informed choice. SmartScreen, certainly in its current incarnation, tells you that it thinks something is unsafe and gives you nothing more to go on. And even the self-proclaimed 99% block rate still leaves a lot of latitude for false positives and for letting bad stuff through the net ...
… which leads me to the next problem …
Are users meant to trust SmartScreen to protect them 99% of the time, or a separate antivirus tool that has a higher success rate and is more transparent about its findings?
I’m not saying that SmartScreen built into Windows is a bad idea, but after experiencing it in both IE and Live Messenger 2011, it’s also hard to say that it’s a good idea. If forced to describe the technology, ‘annoying’ is probably the word I would choose.
What do you think? OK for the masses or another UAC?
Friday, April 8, 2011
IT's About Securing The Information DNA, And More!
The conference will provide opportunities for industry leaders, corporate decision makers, academics and government officials to exchange ideas on technology trends and best practices.
Securitybyte and OWASP India, organisations committed to raising InfoSec awareness in the industry, are hosting an information security event called Securitybyte & OWASP AppSec Asia 2009 at Hotel Crowne Plaza, Gurgaon, Delhi NCR from 17 November to 20 November 2009.
The highlight of the event is India's first information security focussed India Technology Leadership Summit 2009 with panel discussion on Security concerns for off-shoring between industry leaders representing outsourcers, service providers and regulators. The panel is being moderated by cyber security expert Howard Schmidt.
This year's conference will draw attendance from information security professionals from all over the world. There are 18 international speakers coming in from the USA, New Zealand, Sweden, Germany, the UK, Canada, Thailand and Taiwan to talk on subjects like "The international state of cyber security: Risk reduction in a high threat world" by Howard Schmidt and "Critical infrastructure security: Danger without borders" by John Bumgarner, to name a few.
The conference has three main tracks focussed on security professionals, developers and leaders in the security space. Speakers like Kevvie Fowler will address the security professionals to talk about techniques used to bypass forensics in databases. Additionally, speakers like Jason Lam will reveal how their SANS Dshield Webhoneypot Project is coming along. Microsoft Security Response Center will reveal how things work under the cover in their team.
People attending the event will have the opportunity to partake in three different types of war games. These scenario-based games not only include attacking Web applications and networks, but also show how real world cyber attacks take place.
This event also marks the entry of international information security training leaders SANS and ISC2, who are conducting two days of hands-on training with their instructors from the USA. The four-day event will also host many advanced training sessions, such as advanced forensics techniques, building advanced network security tools, advanced Web hacking, and in-depth assessment techniques.
Tuesday, April 5, 2011
Why Google's tighter control over Android is a good thing
Limiting availability of Android 3.0 code and apparent tightening of Android smartphone standards means that Google finally gets it about the platform
Last week, Google said it would not release the source for its Android 3.0 "Honeycomb" tablet OS to developers and would limit the OS to select hardware makers, at least initially. Now there are rumors, reported by Bloomberg Businessweek, that Google is requiring Android device makers to get UI changes approved by Google.
As my colleague Savio Rodrigues has written, limiting the Honeycomb code is not going to hurt the Android market. I believe reining in the custom UIs imposed on Android is a good thing. Let's be honest: They exist only so companies like Motorola, HTC, and Samsung can pretend to have any technology involvement in the Android products they sell and claim they have some differentiating feature that should make customers want their model of an Android smartphone versus the umpteenth otherwise-identical Android smartphones out there.
[ Compare mobile devices using your own criteria with InfoWorld's smartphone calculator and tablet calculator. | Keep up on key mobile developments and insights via Twitter and with the Mobile Edge blog and Mobilize newsletter. ]
The reality of Android is that it is the new Windows: an operating system used by multiple hardware vendors to create essentially identical products, save for the company name printed on it. That of course is what the device makers fear -- both those like Acer that already live in the race-to-the-bottom PC market and those like Motorola and HTC that don't want to.
But these cosmetic UI differences cause confusion among users, sending the message that Android is a collection of devices, not a platform like Apple's iOS. As Android's image becomes fragmented, so does the excitement that powers adoption. Anyone who's followed the cell phone industry has seen how that plays out: There are 1 billion Java-based cell phones out there, but no one knows it, and no one cares, as each works so differently that the Java underpinnings offer no value to anyone but Oracle, which licenses the technology.
Google initially seemed to want to play the same game as Oracle (and before it Sun), providing an under-the-hood platform for manufacturers to use as they saw fit. But a couple curious things happened:
* Vendors such as Best Buy started selling the Android brand, to help create a sense of a unified alternative to BlackBerry and iOS, as well as to help prevent customers from feeling overwhelmed by all the "different" phones available. Too much choice confuses people, and salespeople know that.
* Several mobile device makers shipped terrible tablets based on the Android 2.2 smartphone OS -- despite Google's warnings not to -- because they were impatient with Google's slow progress in releasing Honeycomb. These tablets, such as the Galaxy Tab, were terrible products and clear hack jobs that only demonstrated the iPad's superiority. I believe they also finally got the kids at Google to understand that most device makers have no respect for the Android OS and will create the same banal products for it as they do for Windows. The kids at Google have a mission, and enabling white-box smartphones isn't it.
Monday, April 4, 2011
This is your power plant on Windows
If you're wondering where the next big disaster will come from, consider the news about SCADA (supervisory control and data acquisition), the industrial systems used to monitor and control a raft of functions at power plants, refineries, water systems, and manufacturing plants. Doesn't ring a bell? Here's a tip: Siemens's Windows-based Simatic WinCC SCADA systems were the suspected target of the Stuxnet worm that devastated Iran's nuclear program by altering the spin rate of its uranium centrifuges.
A CERT advisory on April 1 for a different Siemens SCADA product called out vulnerabilities allowing an intruder to perform DoS attacks, directory traversal, and arbitrary code execution. Additionally, an Ecava SCADA product was cited in a March 23 advisory warning of an unauthenticated SQL vulnerability that could allow data leakage, data manipulation, and remote code execution. Siemens and Ecava both issued patches.
Siemens and Ecava aren't alone. The previous Monday Italian researcher Luigi Auriemma published details of 34 vulnerabilities in four SCADA products, complete with exploit code; Auriemma had no previous experience with SCADA systems but was able to discover vulnerabilities within hours simply by downloading free trial versions. The day before Auriemma's announcement, researcher Ruben Santamarta revealed vulnerabilities and source code for Advantech products that could be used to attack a power grid. Santamarta felt forced to publish the source code after the vendor denied there was a problem.
A week prior, GLEG, a Russia-based security firm, announced it was releasing its Agora SCADA+ pack with 11 zero-day SCADA system vulnerabilities in an effort to "collect all publicly available SCADA vulnerabilities in one exploit pack." Shortly after the tool was released, the company website suffered a sustained DoS attack.
Though Stuxnet was propagated through removable media, the fact is that many of today's SCADA systems not only run on Windows, but often sit on networks with paths to the Internet that can be discovered and breached by a clever hacker. Many are not routinely patched, because it's difficult to test patches to ensure they won't disrupt the systems they're meant to manage.
More worrying, poor security practices are not unusual at critical infrastructure facilities. Witness the case of a Southern California water system, highlighted in a recent Los Angeles Times article, that hired current eEye Digital Security CTO and well-known hacker Marc Maiffret to test its network for vulnerabilities. Within one day, Maiffret managed to take over systems that added chemical treatments to drinking water, with the potential of rendering the water undrinkable for thousands of local residents. He discovered that employees were logging into the network from their unsecured home computers, opening the system up to outside attack.
Sunday, April 3, 2011
Amity Innovation Incubator
The vision to convert job-seekers into job-generators through science and technology (S&T) interventions led Amity Innovation Incubator to set up a state-of-the-art facility in Noida, UP. We take a look at what makes this incubator a promising destination for those with innovative business ideas, looking to set up their ventures in an incubated environment.
"Countries don't create economies. It is entrepreneurs and companies that create and revitalise economies. The role of the governments should be to create a nourishing environment for entrepreneurs and companies to flourish." These words of John Naisbitt - an American author and public speaker - aptly describe not only the potent role that a healthy entrepreneurial eco-system plays in building nations, but also reflect on what fosters entrepreneurship in an economy.
Naisbitt would have approved of the Amity Innovation Incubator (AII), a registered 'not for profit' society situated in Noida, with a mission to create and foster the entrepreneurial spirit. The team at AII wishes to promote technology-based start-ups so as to maximise its impact on the economic development of the country.
AII (www.amity.edu/aii) was established in December 2007 by the Ritnand Balved Education Foundation, the umbrella organisation of the Amity institutions, with support from the National Science and Technology Entrepreneurship Development Board (NSTEDB), Department of Science & Technology, government of India, and it currently houses twelve start-up ventures on its premises.
It's all there
AII offers a range of incubation services, such as business planning, company formation, legal and IPR (intellectual property rights) assistance, managerial support, technology support, affordable state-of-the-art infrastructure, venture capital funding, networking, collaborations and alliances, mentors, board members and advisors, training and team development. The aim is to provide a one-stop platform, equipped with all the essential services, to new businesses, enabling them to enter the market with maturity and confidence.
AII is supported by an advisory body comprising industrialists, venture capitalists, technical specialists and managers, who help entrepreneurs realise their dreams by providing them with a range of infrastructural, business advisory, mentoring and financial services. Each company at the incubator is assisted by relevant mentors from AII's large pool of mentor capital.
Saturday, April 2, 2011
Google +1 Is Not a Social Network
Social networking services and features like Google +1 can be lonely places when they first launch. Google's latest experiment presents easy-to-use features that let you tell the world you like something, but so few people are actually using the service that it's a bit quiet, dull and not all that fulfilling. Google has many millions of users around the world, so +1 won't remain this way for long, but the more time I spend with it, the more I'm convinced that no matter how vibrant it becomes, Google +1 is not, at least by itself, a social network.
Google tried and, I think, largely failed to build a social network on its platform of services. Buzz still exists, but the reception hasn't been warm. Buzz is a social network inside an e-mail service, and it leverages Gmail's contact list. In fact, that leverage was a bit too strong, and on the day Google announced +1, it was also apologizing for overreaching with the original Buzz. Google will, to a certain extent, be paying for that mistake with scheduled privacy audits for the next 20 years.
Buzz has conversation around shared content. I know this because I'm a Buzz member and every once in a while I see a little conversation about something I Tweeted (I connected Buzz and my Twitter feed) pop up in my Gmail inbox. I've never been a fan of the Buzz interface. If I want conversation around content and ideas, I'll stick with the cleaner Twitter and better organized Facebook. In fact, I think Facebook does the best job of driving shared conversation. +1 (or is the vernacular "Plus One"?) is not about conversation. It's about finding things that those in your circle of contacts (and possibly people outside that same circle) deem worthy of this little "+1" tag (or button). That button, by the way, appears to replace Google's personalized Stars, which is odd because I think those helped define what might show up on top of frequent searches and had nothing to do with result curation. Of course, those Stars would only appear on search results, while these new +1 tags will ultimately show up in a variety of places: in search results, on pages (sites, stories) and even in ads. At least that's what Google's promising.
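If you run a site and are curious how that button actually lands on a page, here's a minimal sketch in TypeScript, assuming Google's published plusone.js loader and its gapi.plusone.render() call; the element ID and URL below are placeholders, and the exact parameters are worth checking against Google's own documentation rather than taking this as the official snippet.

// A minimal embed sketch (not Google's official snippet verbatim): load
// plusone.js, then render a +1 button inside a placeholder element.
declare const gapi: {
  plusone: { render: (container: string | HTMLElement, params?: object) => void };
};

function addPlusOneButton(containerId: string, pageUrl: string): void {
  const script = document.createElement("script");
  script.src = "https://apis.google.com/js/plusone.js"; // Google's button loader
  script.async = true;
  script.onload = () => {
    // Render a standard-size button that records +1's against the given URL.
    gapi.plusone.render(containerId, { size: "standard", href: pageUrl });
  };
  document.head.appendChild(script);
}

// Usage: put <div id="plusone-slot"></div> in the page markup, then call:
addPlusOneButton("plusone-slot", "http://example.com/some-article");

The takeaway is that the button is a per-URL widget: a click records your +1 against that URL under your Google Profile, which is what later surfaces next to the link in your contacts' search results.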
Google +1 is what I like to call passive curation. Passive because there isn't a global aggregation of curated +1 content for everyone to see. Yes, you could view all the collected +1's of a particular user on their Profile +1 tab, but that page has to be shared before you can see it. For most people, I suspect their experience with +1 will be fairly random. They'll see it here and there based on who they're connected to, but they may also see it on sites that have a lot of +1's. It seems, though I am not certain, that Google may show these +1 buttons on results and content that get enough +1 clicks. For example, CNN.com might get a lot of +1's, so perhaps a search result pointing to the site would carry the tag for everybody, regardless of whether anyone they know has actually +1'ed it. By the way, if Google really wants people to adopt this, they need to come up with something better than "I Plus One'd It." Facebook's "Like" is so much more obvious.
So Google +1 is not about conversation or, really, interaction with other people. It's about finding good stuff. By itself, it's not a social network and barely a social tool. Google Profiles, what's behind +1, is what makes it truly interesting. You see, you can't +1 anything if you don't have a Google Profile. Millions of people do have these profiles, which are a little like Facebook profiles in that you have your name, employment info, education, relationships, whether you're looking for a new relationship, etc. You can choose to hide or show your profile in search. Profiles have tabs: one is "About", another is "Buzz", the third is PicasaWeb (if you store and share photos on Picasa's Web-based service) and the newest is the experimental "+1's." The problem is, for all that Google Profiles have, they're really very little like Facebook. There's no sense of community or real social interaction. People can e-mail you through your profile (if you allow it), but the mail is not on display there. Conversations in Buzz are hidden under the Buzz tab. There's nothing about what others in your network (those who are connected to you through your contacts in Gmail) are doing online. Google Profiles aren't really ready to be a social platform.
+1 does resemble Facebook's Likes in that they're a way of telling other people that you like the page, content or ad, but they don't aggregate in a way you can perceive, and your +1's are not actively shared with others in your network. If, on the other hand, a contact searches for something you've searched for before, they should see your +1 tag. What this little bug will mean to people in search results, though, is questionable. For me, it won't mean much. There's no way someone else can know if a particular page is the right result for my particular query. Remember, Web sites, pages and ads come up in a variety of search results based on dozens of different embedded keywords and SEO (search engine optimization) measures. In other words, some +1'ed results will be more relevant to your query than others.
There is nothing wrong with this small and relatively cautious step by Google. +1 is just another piece of a much bigger social and content-curation puzzle. However, to truly compete with Facebook, Google will have to transform Google Profile pages into a destination that brings together all its tools: mail, photos, video, messaging, search results, and sharing, in a cohesive page where people want to spend their time.
Friday, April 1, 2011
How to Buy the Best Tablet
When the first Apple iPad and the Fusion Garage JooJoo were released within days of each other in early 2010, the world got its first real taste of tablets—and, some might say, an excellent summation of the breadth of quality future tablets would offer. At the high end, the iPad, and now the iPad 2, is the benchmark tablet to beat, with top-notch, seamless design paired with a robust app store. The now-discontinued JooJoo was a clunker—it lacked internal storage, often crashed, and basically didn't have any apps, only some basic tools. In between these bookends lies the rest of the tablet field, with early Android tablets (anything running a version lower than Android 3.0) ranking closer to the JooJoo end of the spectrum and newer Android tablets like the Motorola Xoom and upcoming second-generation Samsung Galaxy Tabs taking aim at the iPad. At first glance, the upcoming RIM BlackBerry PlayBook also looks to be quite the competitor, with its own operating system and the ability to run some Android apps. So which of the plethora of deceptively similar-looking tablets is worth your sizable investment? Let's look at the key factors you need to consider:
First Off: Do You Even Need a Tablet?
Simply put, tablets aren't really filling any true need right now—they are neither replacements for full-fledged computers nor smartphones. A tablet is a touch-screen media device that is actually most similar to a very advanced portable media player—or an MP3 player with a much larger screen. Yes, many of them have mobile service features, but currently none of them make phone calls via a traditional mobile provider. And while you can get work done on a tablet, you won't get a desktop-grade operating system, like you'll find on a PC. Tablets are basically lightweight versions of laptops in every sense—they weigh less, and they're lighter on features. The advantage they offer over laptops is an easy way to check e-mail, browse the Web, consume media, and play games—just like a smartphone. But with a tablet you get a much bigger screen with more real estate. The bottom line is, you probably don't need one, but if you want a tablet, read on.
Operating System
First, just like with a computer, you must choose your allegiance. Apple's iOS is the mobile platform used by the iPad, as well as the iPhone and iPod touch. By now, you're probably familiar with iOS even if you don't own an iPhone, seeing as the device is as ubiquitous in public as it is in television and movies. On the iPad and the iPad 2, iOS works very similarly to the way it does on the iPhone, with certain tweaks made here and there to take advantage of the tablet's larger 9.7-inch screen. The built-in iPod app on the iPad, for instance, has an extra side menu for additional navigation options that wouldn't fit on a 3.5-inch screen. Generally speaking, the great strength of Apple's iOS is twofold: it's incredibly easy to use, and the wide selection of iPad apps—more than 65,000 tablet-specific titles at the time of this writing—download easily and quickly and work uniformly well with very few exceptions.
Google's mobile OS, Android, is a different story. There are several iterations of Android, but only one—Android 3.0, a.k.a. Honeycomb—is designed specifically for tablets. Right now, only one tablet offers Honeycomb—the Motorola Xoom—and that makes it the iPad's most viable contender, for now. It is a showcase for Android 3.0, which features an improved, more visual multitasking bar than iOS, as well as superior e-mail notifications. Unfortunately, these two particular strengths, though legitimate, are not strong enough to topple Apple's iOS when you look at the bigger picture. The home-screen for Honeycomb, for example, can get easily cluttered because there are so many different ways to organize, rather than just putting things in tidy folders as you can with iOS. The one you choose will largely depend on your personal preference, so if you can try before you buy, you should.
Apps
Android lacks a strong selection of apps. Even with the newly announced Amazon App Store, the number of Honeycomb tablet-friendly apps that work well is very low. We could linger on this section, but the bottom line is simple: if you want lots of apps for your tablet, right now, nothing out there beats the iPad. Apple's App Store is well-curated and offers deep selection—no competitor can come close to claiming this right now, partially because apps made for Android tablets have to work across multiple screen sizes, while iPad apps are designed specifically for one device. It sounds simple, but the variation in size (and manufacturers) complicates things greatly. It remains to be seen what kind of options will exist for the BlackBerry PlayBook. Eventually, one hopes, the other app stores will catch up to Apple, but if a wide range of compelling apps is your main priority, Apple is currently your best bet.
Design and Size
This consideration is a bit obvious, but size—both screen real estate and storage capacity—is important to consider. First things first: when you hear the term "10-inch tablet," this typically refers to the size of the screen, measured diagonally, not the size of the tablet itself. Apple continues to offer the iPad in one size only (9.7-inch screen). The Xoom comes in one screen size too (10.1 inches), but Samsung just announced new Galaxy Tab models in multiple sizes (8.9 and 10.1 inches) in addition to the current 7-inch Tab, and the trend for other companies seems to be: the more sizes, the better. In other words, you have plenty of options, but the higher-quality tablets thus far have veered towards the larger end of the scale since they offer a better finger-centric, touch-screen experience. The weight of a tablet is one definite advantage it has over a laptop—but let's be clear, at around 1.5 pounds (in the case of the iPad 2) they're not as light as, say, your cell phone. After you hold one on the subway for ten minutes, your hand will get tired. Setting it flat in your lap, rather than propped up on a stand, is also a little awkward.
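To make the diagonal measurement concrete, here's a quick back-of-the-envelope calculation, assuming the commonly quoted aspect ratios (4:3 for the iPad 2, 16:10 for the Xoom); the numbers are approximate.

// Convert a diagonal measurement plus an aspect ratio into width, height and area.
function screenDimensions(diagonalIn: number, ratioW: number, ratioH: number) {
  const scale = diagonalIn / Math.hypot(ratioW, ratioH);
  return {
    widthIn: ratioW * scale,
    heightIn: ratioH * scale,
    areaSqIn: ratioW * scale * ratioH * scale,
  };
}

console.log(screenDimensions(9.7, 4, 3));    // iPad 2: ~7.8" x 5.8", ~45 sq in
console.log(screenDimensions(10.1, 16, 10)); // Xoom:   ~8.6" x 5.4", ~46 sq in
// Despite the different diagonals, the two screens end up with nearly the
// same surface area; the Xoom's is simply wider and shorter.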
As for storage, the more the better—those apps, when combined with a typical music, video, and photo library, can take up a lot of space. Right now storage tops out at 64GB of flash-based memory, with many of the quality tablets we've seen available in 16, 32, and 64GB varieties. Larger capacity models can get as expensive as full-featured laptops, especially when you factor in cellular service plans.
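As a rough illustration of how fast that space goes, here's a small back-of-the-envelope sketch; the per-item sizes are assumptions about a "typical" library, not measured figures.

// Estimate how much storage a typical library would consume (all sizes assumed).
const libraryMB = {
  apps:   60 * 30,     // 60 apps at ~30 MB each
  music:  2000 * 5,    // 2,000 songs at ~5 MB each
  photos: 3000 * 2.5,  // 3,000 photos at ~2.5 MB each
  movies: 10 * 1500,   // 10 standard-definition movies at ~1.5 GB each
};

const totalGB = Object.values(libraryMB).reduce((sum, mb) => sum + mb, 0) / 1024;
console.log(`~${totalGB.toFixed(1)} GB`); // roughly 33.5 GB with these assumptions
// That already overflows a 16GB model and crowds a 32GB one, before the
// operating system and app data take their share.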
Wi-Fi-Only vs. Cellular Models
Most tablets come in a Wi-Fi-only model or with the option to pay by the month for 3G (or eventually, 4G) always-on cellular service from a provider like AT&T, Sprint, T-Mobile, or Verizon Wireless. If you want to use your tablet to get online anywhere, you should opt for a model with a cell radio. Of course, this adds to the device's price, and then you need to pay for cellular service. Generally, though, you can purchase data on a month-to-month basis, without signing a contract, and charges typically don't exceed $20 monthly, as long as you stay within data-usage limits.
Another way to get your tablet online: Use your 3G or 4G phone as a Wi-Fi hotspot for your tablet—this won't work with every phone/tablet combo, so you should check with the carriers before you buy in.
Cameras & Video Chatting
With the release of the iPad 2, Apple caught up rather quickly to its tablet competition and added front- and rear-facing cameras for stills and video. The Xoom has a higher-quality rear-facing camera than the iPad's lackluster offering, but the bottom line is: the cameras on all of these tablets are currently more toy than tool. None of them is a legitimate replacement for even a point-and-shoot camera.
The inclusion of front-facing cameras means tablets now offer video chat, but not all video chat apps are created equal. Google Talk for Honeycomb, which comes preloaded on the Xoom, is a top-notch app: it's simple to use, and it operates via Google accounts, so you can chat with anyone who has one. However, not all Android tablets are created equal—be wary of any tablets that lack access to the Android Market, like the Dell Streak 7, for instance. Despite its cameras and video chat capabilities, the Streak 7 utilizes inferior apps for chatting and cannot access the Market to download Google Talk. Apple's FaceTime works similarly well, but is limited to certain Apple products, making it far less versatile than Google Talk.
Price
Like with most gadgetry, you get what you pay for, and tablets are no exception. If you spend anything less than $500-$600 (which seems to be the magic entry range for Wi-Fi-only models like the iPad 2 and the Motorola Xoom), don't say we didn't warn you. The CherryPal Cherry Pad is a fine example of what $188 will get you in the tablet world—not a lot, including a low-quality screen and a serious lack of features. As for 3G (and 4G) enabled tablets, the pricing varies widely depending on manufacturer, capacity, and plan, but expect to pay at least about $20 per month on top of a higher up-front price—the lowest iPad 2 3G price is $629 for 16GB, for instance.
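One way to weigh the two flavors is total cost over a typical ownership period. The sketch below uses the $629 16GB 3G iPad 2 quoted above, assumes $499 for the equivalent Wi-Fi-only model, and assumes you stay on the cheapest roughly $20-per-month data plan every single month.

// Rough two-year cost comparison: Wi-Fi-only vs. 3G tablet.
const wifiOnlyPrice = 499; // 16GB Wi-Fi-only iPad 2 (assumed)
const cellularPrice = 629; // 16GB 3G iPad 2, as quoted above
const monthlyData   = 20;  // cheapest plan, assuming you never exceed the data cap
const months        = 24;

console.log(`Wi-Fi only over 2 years: $${wifiOnlyPrice}`);
console.log(`3G model over 2 years:   $${cellularPrice + monthlyData * months}`);
// $499 vs. $1,109: the service fees quickly dwarf the hardware premium.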
Finally, before you buy, if you can, head to your local electronics store to get hands-on time with some different tablets, so you can see which feels and works the best for you. And for the latest lab-tested tablet reviews, hit our Tablet Product Guide.