Tuesday, August 5, 2008

Ars Technica Guide to Virtualization: Part I

By Jon Stokes

From buzz to reality

A free PDF of this guide is available to non-subscribers via the Enterprise IT Guide, presented by Ars Technica and Intel. Check it out for this and other free whitepapers.

In 2003, Intel announced that it was working on a technology called "Vanderpool" that was aimed at providing hardware-level support for something called "virtualization." With that announcement, the decades-old concept of virtualization had officially arrived on the technology press radar. In spite of its long history in computing, however, as a new buzzword, "virtualization" at first smelled ominously similar to terms like "trusted computing" and "convergence." In other words, many folks had a vague notion of what virtualization was, and from what they could tell it sounded like a decent enough idea, but you got the impression that nobody outside of a few vendors and CIO types was really too excited.

Fast-forward to 2008, and virtualization has gone from a solution in search of a problem, to an explosive market with an array of real implementations on offer, to a word that's often mentioned in the same sentence with terms like "shakeout" and "consolidation." But whatever the state of "virtualization" as a buzzword, virtualization as a technology is definitely here to stay.

Virtualization implementations are so widespread that some are even popular in the consumer market, and some (the really popular ones) even involve gaming. Anyone who uses an emulator like MAME uses virtualization, as does anyone who uses either the Xbox 360 or the PlayStation 3. From the server closet to the living room, virtualization is subtly, but radically, changing the relationship between software applications and hardware.

In the present article I'll take a close look at virtualization—what it is, what it does, and how it does what it does.

Abstraction, and the big shifts in computing

Most of the biggest tectonic shifts in computing have been fundamentally about remixing the relationship between hardware and software by inserting a new abstraction layer in between programmers and the processor. The first of these shifts was the instruction set architecture (ISA) revolution, which was kicked off by IBM's invention of the microcode engine. By putting a stable interface—the programming model and the instruction set—in between the programmer and the hardware, IBM and its imitators were able to cut down on software development costs by letting programmers reuse binary code from previous generations of a product, an idea that was novel at the time.

Another major shift in computing came with the introduction of reduced instruction set computing (RISC), which put compilers and high-level languages in between programmers and the ISA, leading to better performance.

Virtualization is the latest in this progression of moving software further away from hardware, and this time, the benefits have less to do with reducing development costs and increasing raw performance than they do with reducing infrastructure costs by allowing software to take better advantage of existing hardware.

Right now, there are two different technologies being pushed by vendors under the name of "virtualization": OS virtualization, and application virtualization. This article will cover only OS virtualization, but application virtualization is definitely important and deserves its own article.

The hardware/software stack

Figure 1 below shows a typical hardware/software stack. In a typical stack, the operating system runs directly on top of the hardware, while application software runs on top of the operating system. The operating system, then, is accustomed to having exclusive, privileged control of the underlying hardware, hardware that it exposes selectively to applications. To use client/server terminology, the operating system is a server that provides its client applications with access to a multitude of hardware and software services, while hiding from those clients the complexity of the underlying hardware/software stack.

Figure 1: Hardware/OS stack

Because of its special, intermediary position in the hardware/software stack, two of the operating system's most important jobs are isolating the various running applications from one another so that they don't overwrite each other's data, and arbitrating among the applications for the use of shared resources (memory, storage, networking, etc.). In order to carry out these isolation and arbitration duties, the OS must have free and uninterrupted rein to manage every corner of the machine as it sees fit... or, rather, it must think that it has such exclusive latitude. There are a number of situations (described below) where it's helpful to limit the OS's access to the underlying hardware, and that's where virtualization comes in.

Virtualization basics

The basic idea behind virtualization is to slip a relatively thin layer of software, called a virtual machine monitor (VMM), directly underneath the OS, and then to let this new software layer run multiple copies of the OS, or multiple different OSes, or both. There are two main ways to accomplish this: 1) by running a VMM on top of a host OS and letting it host multiple virtual machines, or 2) by wedging the VMM between the hardware and the guest OSes, in which case the VMM is called a hypervisor. Let's look at the hypervisor-based method first.

The hypervisor

In a virtualized system like the one shown in Figure 2, each operating system that runs on top of the hypervisor is typically called a guest operating system. These guest operating systems don't "know" that they're running on top of another software layer. Each one believes that it has the kind of exclusive and privileged access to the hardware that it needs in order to carry out its isolation and arbitration duties. Much of the challenge of virtualization on an x86 platform lies in maintaining this illusion of supreme privilege for each guest OS. The x86 ISA is particularly uncooperative in this regard, which is why Intel's virtualization technology (VT-x, formerly known as Vanderpool) is so important. But more on VT-x later.
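To make that illusion of privilege concrete, here is a toy sketch of trap-and-emulate, the classic technique hypervisors use (this is my own illustrative Python with invented names; a real VMM operates on machine instructions, not objects). When a guest executes a privileged instruction, the hardware traps into the hypervisor, which emulates the instruction's effect against that guest's virtual CPU state rather than the real machine:

```python
class VirtualCPU:
    """Per-guest processor state that the hypervisor keeps in software."""
    def __init__(self):
        self.interrupts_enabled = True

class Hypervisor:
    def __init__(self):
        self.vcpus = {}

    def add_guest(self, name):
        self.vcpus[name] = VirtualCPU()

    def on_trap(self, guest, instruction):
        """Emulate a privileged instruction against the guest's virtual CPU.

        The guest believes it just manipulated real hardware; in fact only
        its own virtual state changed, so other guests are unaffected.
        """
        vcpu = self.vcpus[guest]
        if instruction == "cli":       # guest asks to disable interrupts
            vcpu.interrupts_enabled = False
        elif instruction == "sti":     # guest asks to enable interrupts
            vcpu.interrupts_enabled = True
        else:
            raise NotImplementedError(instruction)
        return vcpu

hv = Hypervisor()
hv.add_guest("guest-linux")
hv.add_guest("guest-windows")
hv.on_trap("guest-linux", "cli")
# Only guest-linux's virtual state changed:
print(hv.vcpus["guest-linux"].interrupts_enabled)    # False
print(hv.vcpus["guest-windows"].interrupts_enabled)  # True
```

Hardware support like VT-x smooths out exactly this kind of round trip, delivering such traps to the hypervisor cleanly where the classic x86 ISA could not always do so.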

Figure 2: Hardware/software stack with virtualization

In order to create the illusion that each OS has exclusive access to the hardware, the hypervisor (also called the virtual machine monitor, or VMM) presents to each guest OS a software-created image or simulation of an idealized computer: processor, peripherals, the works. These software-created images are called virtual machines (VMs), and the VM is what the OS runs on top of and interacts with.

In the end, the virtualized software stack is arranged as follows: at the lowest level, the hypervisor runs multiple VMs; each VM hosts an OS; and each OS runs multiple applications. The hypervisor swaps virtual machines on and off of the actual system hardware, in a very coarse-grained form of time sharing.
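That coarse-grained time sharing can be sketched in a few lines (again an illustrative Python toy, not how any real hypervisor is written): the hypervisor keeps a run queue of whole virtual machines and rotates each one onto the hardware for a slice at a time, much as an OS rotates processes.

```python
from collections import deque

def run_round_robin(vms, slices):
    """Give each VM one time slice in turn; return the resulting schedule."""
    queue = deque(vms)
    schedule = []
    for _ in range(slices):
        vm = queue.popleft()     # take the next VM off the run queue
        schedule.append(vm)      # "run" it on the hardware for one slice
        queue.append(vm)         # put it at the back of the queue
    return schedule

print(run_round_robin(["win-vm", "linux-vm", "bsd-vm"], 5))
# ['win-vm', 'linux-vm', 'bsd-vm', 'win-vm', 'linux-vm']
```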

I'll go into much more technical detail on exactly how the hypervisor does its thing in a bit, but now that we've got the basics out of the way let's move the discussion back out to the practical level for a moment.

The host/guest model

Another, very popular method for implementing virtualization is to run virtual machines as part of a user-level process on a regular OS. This model is depicted in Figure 3, where an application like VMware runs on top of a host OS, just like any other user-level app, but contains a VMM that hosts one or more virtual machines. Each of these VMs, in turn, hosts a guest operating system.

Figure 3: Virtualization using a host and guest OS.

As you might imagine, this virtualization method is typically slower than the hypervisor-based approach, since there's much more software sitting between the guest OS and the actual hardware. But virtualization packages that are based on this approach are relatively painless to deploy, since you can install them and run them like any other application, without requiring a reboot.

Why virtualization?

Virtualization is finding a growing number of uses, in both the enterprise and the home. Here are a few places where you'll see virtualization at work.

Server consolidation

A common enterprise use of virtualization is server consolidation: replacing multiple real but underutilized machines with virtual machines running on a single system. Taking those underutilized servers offline and consolidating them onto one machine saves on space, power, cooling, and maintenance costs.

Live migration for load balancing and fault tolerance

Load balancing and fault tolerance are closely related enterprise uses of virtualization. Both of these uses involve a technique called live migration, in which an entire virtual machine that's running an OS and application stack is seamlessly moved from one physical server to another, all without any apparent interruption in the OS/application stack's execution. So a server farm can load-balance by moving a VM from an over-utilized system to an under-utilized system; and if the hardware in a particular server starts to fail, then that server's VMs can be live migrated to other servers on the network and the original server shut down for maintenance, all without a service interruption.

Performance isolation and security

Sometimes, multi-user OSes don't do a good enough job of isolating users from one another; this is especially true when a user or program is a resource hog or is actively hostile, as is the case with an intruder or a virus. By implementing a more robust and coarse-grained form of hardware sharing that swaps entire OS/application stacks on and off the hardware, a VMM can more effectively isolate users and applications from one another for both performance and security reasons.

Note that security is more than an enterprise use of virtualization. Both the Xbox 360 and the PlayStation 3 use virtual machines to limit the kinds of software that can be run on the console hardware and to control users' access to protected content.

Software development and legacy system support

For individual users, virtualization provides a number of work- and entertainment-related benefits. On the work side, software developers make extensive use of virtualization to write and debug programs. A program with a bug that crashes an entire OS can be a huge pain to debug if you have to reboot every time you run it; with virtualization, you can do your test runs in a virtual machine and just reboot the VM whenever it goes down.

Developers also use virtualization to write programs for one OS or ISA on another. So a Windows user who wants to write software for Linux using Windows-based development tools can easily do test runs by running Linux in a VM on the Windows machine.

A popular entertainment use for virtualization is the emulation of obsolete hardware, especially older game consoles. Users of popular game system emulators like MAME can enjoy games written for hardware that's no longer in production.


Could 64-bit Windows finally be taking off?

Posted by Ina Fried

If you build it, it appears they will come, eventually.

Such is the case with 64-bit computing. Advanced Micro Devices launched 64-bit chips for the desktop back in 2003, hoping the fact that it was there and didn't cost extra would convince consumers.

"Our industry, right now, is hungry for another round of innovation," AMD chief Hector Ruiz told the crowd at the San Francisco launch in September 2003. Not that hungry, apparently.

Of course, the hardware wasn't much use without a 64-bit operating system. After several fits and starts, Microsoft finally released a 64-bit version of Windows XP in the fall of 2005.

"64-bit versions of Windows will begin to find their way into high-end gaming notebooks, which increasingly are being used as high-end notebook workstations as opposed to strictly gaming systems."
--Richard Shim, analyst, IDC

Still, several factors have held up adoption of 64-bit computing, long after the operating system was available. First of all, there wasn't a lot of need for it. The primary advantage of 64-bit computing is the ability to use more than 4GB of RAM, and until very recently most PC buyers had little need for that much memory. Also, to connect to a computer running 64-bit Windows, printers, scanners, and other peripherals need to have a special 64-bit driver.
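The 4GB ceiling falls straight out of pointer width, as a couple of lines of arithmetic show (real 64-bit chips implement fewer physical address bits than 64, but the limit is still far beyond 4GB):

```python
# A 32-bit pointer can name 2**32 distinct byte addresses; a 64-bit
# pointer raises that ceiling astronomically.
GIB = 2**30                # bytes in one gibibyte

print(2**32 // GIB)        # 4  -> the familiar 4GB limit
print(2**64 // GIB)        # 17179869184 GiB, i.e. 16 exbibytes
```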

But it appears the benefits are starting to outweigh the drawbacks.

In a blog post this week, Microsoft's Chris Flores noted that 20 percent of new Windows Vista PCs in the U.S. that connected to Windows Update in June were running a 64-bit version of the OS, compared with 3 percent of new computers in March.

"Put more simply, usage of 64-bit Windows Vista is growing much more rapidly than 32-bit," he said. "Based on current trends, this growth will accelerate as the retail channel shifts to supplying a rapidly increasing assortment of 64-bit desktops and laptops."

The trend is also evident in the kinds of systems being sold at retailers. In Office Depot's circular this Sunday, most of the desktops and half of the dozen notebook models advertised had the 64-bit version of Windows pre-installed.

The mix was similar in Circuit City's advertisement, with nearly all of the desktops and many of the notebooks running 64-bit Windows.

Gateway, for example, is shifting to an entirely 64-bit Windows lineup on its desktops, starting with the back-to-school shopping season.

It's a dramatic shift even from last quarter, in which only about 5 percent of its total desktop and notebook models had a 64-bit OS installed. For the third quarter, 95 percent of desktop models and 30 percent of notebook systems will have a 64-bit OS.

Among the factors leading to the shift is the fact that 64-bit machines, unlike their 32-bit brethren, can directly address more than 4GB of memory. Also, more 64-bit software is finally coming to market, as evidenced by last week's release of a 64-bit optimized version of Adobe Lightroom.

IDC analyst Richard Shim said he expects even more computers will start shipping preloaded with 64-bit Windows toward the end of this year. "64-bit versions of Windows will begin to find their way into high-end gaming notebooks, which increasingly are being used as high-end notebook workstations as opposed to strictly gaming systems," he said.


5 Linux Commercials I Like

Since this is a Linux advocacy blog, it’s only normal for me to share with you guys my favorite list of Linux commercials. I love these because of the deep messages and meanings behind each of them:

1-Float Like A Butterfly

Muhammad Ali and Linux have many things in common: his quick fists and swift feet are akin to Linux's flexibility and speed. Ali is the greatest boxer in history, and to me, Linux is the greatest OS! Ali famously claimed to be the greatest, and I'd make the same claim for Linux. Watch the great legend promote openness:

2-Novell Knows Spoofing

I know, I know, we all grew bored of those "I am a Mac, I am a PC" commercials, but you gotta admit they are quite clever. Here are two from Novell:

3-Then Linux Wins

A strong message, and true in every sense.

4-IBM: The New Ally

5-The Boy Has Been Adopted


Don’t be a Victim of DNS Security Holes

Written by Gary

The internet has been ablaze with news about the Kaminsky DNS vulnerability over the last week or so, especially in light of some vendors’ taking their sweet time with supplying a fix.

Behind all the security technobabble, what this means for you is that if your ISP hasn't applied the appropriate fixes to the DNS servers they set for you when you go online, then should you type a familiar web address into the address bar of your browser, you might very well end up on a spoof site that looks exactly like the real thing, but which collects your username and password before forwarding your connection to the real site. That's a serious problem in anyone's book!

You can check whether the servers you're calling have been fixed by clicking the Check My DNS button on Dan Kaminsky's site. If they come up short, you really should switch to an alternative DNS service. In many respects, a free provider that specializes in DNS is also more likely to keep you safe from future security problems than your ISP, which has plenty of other things to maintain in addition to your DNS servers.

OpenDNS provides just such a service at no cost, and even though my ISP passes the Kaminsky test, I've already switched my whole network over to the OpenDNS servers by following these straightforward instructions, which boil down to changing all /etc/resolv.conf nameserver lines to:
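For reference, the two resolver addresses OpenDNS publishes (correct as of this writing) are the ones those instructions have you enter:

```
nameserver 208.67.222.222
nameserver 208.67.220.220
```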


And then flushing any cached addresses on all computers you use for browsing. On Ubuntu, type the following into a terminal:

   sudo /etc/init.d/networking restart

And the equivalent for Mac OS X:

   sudo lookupd -flushcache

And Windows Vista:

   ipconfig /flushdns


My favorite useful Compiz features

Users of Compiz, a window manager that provides pretty visual effects, know that a lot of those effects are just for fun. Things like drawing fire on the screen or folding up windows like a paper airplane to close them look cool but have little real value. I think a lot of those features (plugins) were written more to show off what Compiz can do than to provide useful functionality. I don't doubt that lots of users are still using them though. Linux users cherish the ability to customize settings to the nth degree.

Personally, I am most concerned with the Compiz plugins that add functionality to my desktop. There are plenty of those too. I'm going to outline some of my favorite and most useful ones. First, though, I want to point out that if you have Compiz installed, you will want to have the CompizConfig Settings Manager (ccsm) installed too. You can add it from Add/Remove Applications. Also, when I refer to the Super key, it is most likely the Windows key or Apple key on your keyboard. And now, on to the list:

Scale Effect (Shift+Alt+Up)
The scale effect is like the OS X "All Windows" Exposé feature that is invoked with F9. It shrinks all the windows down to fit on your desktop so you can see a thumbnail of everything running to find the window you want. This feature is most useful when you have lots of windows open. The more windows you have open, the smaller each thumbnail gets. It also puts the application icon down in the corner for you to help with identification of applications. You can use your mouse to select the window you want or while still holding down Shift+Alt you can use the arrow keys to move to the window you want.
Ring Switcher (Super+Tab)
The ring switcher is another feature for switching between windows. With this plugin all your windows are shrunk and rotated as if on a rod. The windows farther away are smaller and the window you are switching to is front and center. The window title is also displayed. Although not as useful as the scale effect for selecting a window, it is another good way to scroll through all your open windows and switch applications. Maybe you like the way this one looks better too. It is more like the traditional Alt+Tab but allows you to see all of the windows available at once.

Enhanced Zoom Desktop (Super+Mouse Scroll Up/Down)
Zoom can be a really handy feature. If you run your system at a really high resolution, sometimes you need to be able to take a closer look at something. I've found this feature very useful when watching videos that I can't resize or when using a CRT that just isn't very sharp. It also provides a universal way to zoom so instead of having to know how to zoom in different applications, you can always use this.

Expo (Super+E)
Expo is a feature that makes switching between workspaces (a feature Windows is sorely lacking) a lot easier. It will spread out all your workspaces in a row (with some nice reflection) to allow you to see what is running on all of them at once and then switch to the one you need. As I've used Linux more, I've come to rely on multiple workspaces. I usually have one just for my IM client, one for my personal web browsing, one for work web browsing, one for my media player, one for document editing, etc. With Expo, seeing what is where is a lot easier and getting there is faster.

Shift Switcher (Shift+Super+E)
The shift switcher is another of the features for switching between running applications. It works like Cover Flow in iTunes. Because you only see three windows at a time, I don't use it as much as the scale effect or the ring switcher, but it's still useful when you have fewer windows open at once.

Window Previews
I first saw a feature like this on Windows Vista. Maybe someone else thought it up first, but who cares as long as I can use it. I think this feature has great potential, but it also has a HUGE problem as it currently works. If you want to see a thumbnail now, the window has to be visible already. If the window is minimized, it will not draw the thumbnail. I can understand the technical limitations that lead to this, but this feature is most useful precisely when the window is minimized. To see these, all you have to do is mouse over the application on the taskbar.

One last honorable mention that I really love is the Viewport Switcher which allows you to use your mouse scroll wheel to switch workspaces when the pointer is over the background. I could not really get a screen shot to show that.

Also keep in mind that you can customize most of these settings for days on end to get these features to work just the way you want them to. Just install the CompizConfig Settings Manager (ccsm). Some of the features I mentioned are not enabled by default either (on Ubuntu 8.04 at least), so don't expect them all to work until you enable them.

One last thing, if you haven't seen Compiz in action, just look on YouTube. There are tons of screencasts showing these features and the crazy awesome ones too.


Cablevision wins on appeal: remote DVR lawful after all

By John Timmer

Does it matter where a DVR's hard drive lives? Hardware from outfits such as TiVo records shows onto a local disk, but the cable provider Cablevision decided to dispense with dedicated hardware and a local drive, and instead it rolled out a service where users could record shows through their existing cable box; those recordings stayed on a remote server in the central office for storage and playback. Content providers sued, alleging copyright violations, and they won a landmark injunction that blocked deployment of the system. But Cablevision appealed, and has now won a sweeping victory that may clear the way for the company to deploy its remote DVR service after all.

The initial case was filed by film studios and TV channels, and it alleged that the Cablevision service infringes their copyrighted works in three ways:

  1. The process of recording creates a temporary buffer that contains pieces of every copyrighted work that Cablevision broadcasts
  2. The individuals that subscribe to the service store copies of copyrighted material on servers controlled by Cablevision
  3. The process of streaming a work from this storage to a home constitutes an unauthorized public performance

The claims resulted in a summary judgment and an injunction that prevented Cablevision from deploying the system.

A 1.2s buffer: not infringement

It's hard to imagine a more sweeping reversal than the one in a decision (PDF) handed down today by a three-judge panel of the Second Circuit Court of Appeals. The summary judgments on all counts are reversed, leaving Cablevision the victor, and the injunction against deployment of its remote DVR service has been lifted.

The new ruling includes an extensive examination of the technical details of the DVR system, and those details were used to throw out the claims regarding the buffering system. Data in the two buffers at issue typically constitutes 0.1 and 1.2 seconds-worth of content, and the initial decision ruled (apparently correctly) that this was an "embodiment" of the copyrighted work. The relevant statute, however, also specifies that this copy has to be embodied "for a period of more than transitory duration."

On this count, the buffers failed to infringe, leading to a reversal of the ruling.

Who owns the copies?

There's little doubt that the copies that end up in the users' storage space are copyrighted material, but the question here revolves around who "owns" that copy. The court notes that the hardware is provided by Cablevision but used by others to make the copies, and it says that "mere ownership" of the hardware does not establish liability.

Because those copies are made at the direction of the users, and have to be arranged in advance of Cablevision's broadcasts, the court held that these copies were essentially controlled by the user. "We are not inclined to say that Cablevision, rather than the user, 'does' the copying produced by the RS-DVR system," the court decided.

The decision suggests that the remote DVR system might constitute contributory infringement, since it is designed to specifically produce copies of copyrighted works. Unfortunately for the content owners, all of their allegations focused on direct infringement.

Public performance

The final point at issue was whether playing the stored file constituted an unauthorized public performance of it. The Appeals Court focused on the transmit clause of the Copyright Act, writing, "Although the transmit clause is not a model of clarity, we believe that when Congress speaks of transmitting a performance to the public, it refers to the performance created by the act of transmission."

Since that transmission is destined for the viewer who recorded it in the first place, it doesn't run afoul of the rules governing public performances.

The ruling appears to sweep away any barriers to Cablevision (or anyone else) deploying a remote DVR. Any further legal action appears destined to focus on the question of whether this sort of service is infringement-enabling, but the issues there are likely to be murky enough that wholesale injunctions against deployment won't be forthcoming.

In general, the service appears to be very consumer-friendly. Cablevision plans to charge less than the going rate for DVR box rentals, and its centralized processing and storage are likely to be more resource- and energy-efficient than distributing thousands of set-top boxes. The centralized nature of the system should also make capacity expansions and software updates easier. With all these positives, and no practical differences between the functionality of remote and local DVR services, it would be unfortunate if legal technicalities stifled the potential for this technology.


There’s a method behind StumbleUpon’s madness

StumbleUpon is all about site discovery. I used to click on the “Stumble!” button and figure it would return some random site based on the categories I said I was interested in. But then I noticed that the more I used it, the better the sites being sent my way. That's because it's not actually random; sites are served up based on a series of processes that go on within the StumbleUpon Recommendation Engine.

I had the chance to meet up with co-founder and chief architect Garrett Camp at the StumbleUpon offices last week. He walked me through (in layman's terms) what actually goes on in the backend when you click the Stumble button.

As you can see in the chart below, there are three key parts to the Recommendation Engine: pages from the topics you marked as interesting, socially endorsed pages, and peer-endorsed pages. Socially endorsed pages are the ones liked by users you have befriended on the site, while peer-endorsed pages come from users who have voting habits (giving a site the thumbs up or thumbs down) similar to yours.

These three factors are why it’s important to not only choose categories you like, but to choose friends with similar interests and to only vote up sites you really enjoy in order to get the best experience out of StumbleUpon.
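As a rough illustration of how those three signals might combine, here is a toy scoring function (my own Python, with made-up weights and names; StumbleUpon's actual engine is not public):

```python
def score_page(page_topics, page_endorsers, user):
    """Score a page for a user from the three signals described above."""
    topic  = len(page_topics & user["interests"])    # topic overlap
    social = len(page_endorsers & user["friends"])   # friend thumbs-up
    peer   = len(page_endorsers & user["peers"])     # similar-voter thumbs-up
    return 3 * topic + 2 * social + 1 * peer         # arbitrary weights

user = {
    "interests": {"linux", "photography"},
    "friends": {"alice"},
    "peers": {"bob", "carol"},
}

# A page tagged "linux", endorsed by alice (a friend) and bob (a peer):
print(score_page({"linux"}, {"alice", "bob"}, user))  # 3 + 2 + 1 = 6
```

In this toy, befriending users with similar tastes and voting only for pages you genuinely like both raise the scores of pages you'd actually want, which mirrors the advice above.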

When a site is first stumbled, it is put through both the Classification Engine and the Clustering Engine, as shown above. The Classification Engine filters the page by topic and tags. Sometimes a user does this work, but sometimes a page is submitted without any of this information, so the engine has to determine where to put the content. This is a big job when over 30,000 pages are being submitted each day, as is the case at StumbleUpon.

The Clustering Engine sorts out the votes a site is getting so it can determine which sites are the quality ones that should be served. Again, this sounds simple enough until you realize that StumbleUpon has 5.6 million users. This engine is a key cog in what serves up over 10 million stumbles that take place every day.

Like any good social algorithm maker, Camp wouldn't divulge all the little details of what goes into the promotion of sites, but he did say that things such as comments on stories and so-called “quick stumbles” (when a user hits the stumble button again quickly after landing on a page, without voting on it; they dub this a “soft not for me,” or down-vote) are taken into account as well.

This all makes for a system of “quality plus relevance,” as Camp put it.

I was interested to know how this method compared to Digg's recently launched Recommendation Engine. Camp said he hadn't looked too closely at it yet, but that it seemed to employ many of the same ideas, minus much of the content analysis.

As with any of these recommendation engines, the more data you have, the better it’ll perform. Since it was bought by eBay in May of last year, StumbleUpon has more than doubled its user base, but the company knows that this growth can only last so long given its major restriction: Right now, the vast majority of people who use StumbleUpon use it through a browser plug-in. This limits the service to either Firefox or Internet Explorer users (who happen to have this plug-in). That is why the team is pushing hard to perfect a web-only version of the site.

Creating a way to use the service no matter what browser you are on or what plug-ins you have installed could take the service to an even bigger level in terms of usage, Camp acknowledged.

You can see an example of how the web version of StumbleUpon looks here. (And above.)

Alongside potentially adding millions more users with the web-based version, StumbleUpon is finally gearing up to raise its friend limit. Previously you could only have 200 friends on the service; that will soon be increased to 1,000, Camp told me.

StumbleUpon was launched out of Canada in 2002 and didn't move to the San Francisco Bay Area until 2006. It took a $1.5 million angel round of funding in 2006 before its purchase by eBay for $75 million.


Circuit City, Mad Magazine and Streisand

Trust me, I'll connect the dots.

Someone at Circuit City sees a too-true-to-be-really-funny Mad Magazine parody of their beleaguered company and dashes off an e-mail demanding that all copies of the periodical be purged from the chain's shelves.

(Major update, 2 p.m.: Circuit City apologizes, vows to put magazines back on shelves and teach employees to lighten up. Full response below.)

I already know that you're thinking two things:

"Mad Magazine still publishes?"

"And they sell it at Circuit City?"

The unequivocal answer to the first question is yes. I hesitate on the second only because comments on this item from The Consumerist raise an eyebrow or two on that point and my e-mail to Circuit City public relations has yet to yield a reply. However, the Circuit City e-mail cited by The Consumerist does have the ring of authenticity (absent any confirmation). It reads:

"Immediately remove all issues and copies of Mad Magazine from your sales floor. Destroy all copies and throw them away. They are not inventoried, and your store will not incur shrink. Thank you for your immediate attention to this."

I'm trying to contact the purported author of the e-mail. Primary among my questions is this: "Do you regret sending that e-mail in light of the fact that publicity about the request to destroy the magazines will now dwarf any damage that might have been generated by the -- what? six or seven copies -- sold at Circuit City?"

Should I get a response, you'll be the first to know.

In the meantime, we can't forget about Barbara. Did you know there's a name for this phenomenon -- increasingly common -- of seeing the effort to suppress some bit of embarrassing or proprietary news backfire on the suppressor? It's called The Streisand Effect, according to Wikipedia and a few thousand online references. (New one on me.)

The term Streisand effect originally referred to a 2003 incident in which Barbra Streisand sued photographer Kenneth Adelman for $50 million in an attempt to have the aerial photo of her house removed from the publicly available collection of 12,000 California coastline photographs, citing privacy concerns. Adelman claims he was photographing beachfront property to document coastal erosion as part of the California Coastal Records Project. Paul Rogers of the San Jose Mercury News later noted that the picture of Streisand's house was popular on the Internet.

The most famous example from the world of technology involved digital rights management code, HD DVD disks, and, thanks primarily to Digg, just about everybody on the Internet.

Finally, is there a name for people who make fun of other people for not knowing an Internet meme like The Streisand Effect? I ask because I anticipate being ridiculed for my admission and I'd like to be prepared with a snappy comeback since simply deleting the insults would only produce ... well, you know.

Major update, 2 p.m.: Here's the e-mail I just received from Circuit City spokesman Jim Babb:

I became aware of this "situation" only this morning, and I have sent a note today to the Editors of Mad Magazine.

Speaking as "an embarrassed corporate PR Guy," I apologized for the fact that some overly sensitive souls at our corporate headquarters ordered the removal of the August issue of MAD Magazine from our stores. Please keep in mind that only 40 of our 700 stores sell magazines at all.

The parody of our newspaper ad in the August MAD was very clever. Most of us at Circuit City share a rich sense of humor and irony ... but there are occasional temporary lapses.

We apologize for the knee-jerk reaction, and have issued a retraction order; the affected stores are being directed to put the magazines back on sale.

As a gesture of our apology and deep respect for the folks at MAD Magazine, we are creating a cross-departmental task force to study the importance of humor in the corporate workplace and expect the resulting Powerpoint presentation to top out at least 300 pages, chock full of charts, graphs and company action plans.

In addition I have offered to send the Mad Magazine Editor a $20 Circuit City Gift Card, toward the purchase of a Nintendo Wii ... if he can find one!

Jim Babb

Corporate Communications

Circuit City Stores, Inc.

Richmond, VA

That's about the best you can expect, as damage control goes.

Welcome regulars and passersby. Here are a few more recent Buzzblog items. And, if you'd like to receive Buzzblog via e-mail newsletter, here's where to sign up.

I apologize for that Verizon/pit-bull post.

Doing the Laptop Drive of Shame.

Bank of America to support Firefox, finally.

What does Cisco have against Quebec? (Answer: silly contest rules.)

nails another nitwit ... this time with Monty Python's Holy Grail.

"I have a lost laptop horror story for you."

The REAL sticking point between Microsoft and Yahoo!

This Year's 25 Geekiest 25th Anniversaries.

Top 10 Buzzblog posts for '07: Verizon's there, of course, along with Gates, Wikipedia and the guy who lost a girlfriend to Blackberry's blackout.

Original here

Report: Facebook letting employees unload stock options?

Posted by Caroline McCarthy

If you see an increase in the number of 20-somethings driving nice cars around Palo Alto any time soon, maybe this is why: VentureBeat reported Monday that Facebook is ready to let current employees unload a fifth of their stock options, at the company's internal valuation of $4 billion. It's slated to start this fall. For early employees of the company, which was founded in a Harvard dorm room, this could mean some legit cash.

Facebook's valuation was reported at $15 billion when Microsoft took a $240 million stake last year, but the company has backtracked on that number, referring to it as a "business deal" rather than a formal paper valuation. Microsoft's stake was considered to be in "preferred stock," whereas the $4 billion valuation refers to common stock.

The company's actual valuation came under scrutiny in the last throes of the ConnectU vs. Facebook trial, in which plaintiff ConnectU's founders cried foul that their erstwhile rival hadn't disclosed its true worth during the legal process.

If VentureBeat's report is true, this could be a sign that another way for Facebook employees to cash in their stock--an initial public offering or a sale to a big tech or media company--isn't on the immediate horizon. It also might raise a few eyebrows: for a young corporation still abiding by a mantra of "growth over profits," employees selling stock could seem a little bit presumptuous.

Facebook representatives were not immediately available for comment.

Original here

Anonymous relaunches fight against Scientology

By John Leyden

The Anonymous group is calling for disillusioned former members to return to the fold ahead of a new phase in its battle against the Church of Scientology.

A video posted on YouTube on Friday said it was time for the original founders of the protest movement to return in time for a "shift to more subtle and shocking tactics".

Anonymous has staged demonstrations against Scientology around the world over the last six months as part of Project Chanology. Members of the loosely-knit group have demonstrated outside Scientology offices worldwide, often wearing V for Vendetta-style Guy Fawkes masks and other disguises to safeguard their identities.

Other tactics have also included running denial of service attacks on Church of Scientology websites or making prank calls to Scientology offices.

The loose-knit group is protesting against the church's alleged financial exploitation of members as well as its hostility towards critics. The protests sprang up in response to attempts by the Church of Scientology to force websites to pull a video clip of Tom Cruise expounding some of the more colourful tenets of his beliefs. Anger at this move expressed on the 4chan discussion board and elsewhere provided the initial impetus behind Anonymous but has led to problems, according to the video.

"Those who answered the call outnumber the number from the motherland... This has resulted in Project Chanology becoming polluted by people who are judgemental of our ideology and serve the interests of special interest groups. The time for making allies at the expanse of our ideals is over," the video states.

The time for the movement's original creators to "return and reclaim" it has come, the video states. This will mark a "shift to more subtle and shocking tactics" as part of phase three of the operation. What these tactics are remains to be seen but the video warns the leaders of Scientology that their "card is marked" and promises new (unspecified) initiatives, starting on 8 August.

A second video, also posted over the weekend, and titled Anonymous and our Allies, said that newer members of the group are still welcome.

"In any conflict of this magnitude every part plays a role. All Anonymous are welcome regardless of their origins, tactics, beliefs or reasons for their fight."

The message says those who prefer to limit their protests to candlelit vigils (and support for people leaving Scientology) should continue their actions while not interfering with those who wish to up the ante.

"If activism of a darker nature leaves a bitter taste in your mouth do not participate in those actions," the second video states.

Original here

Air Force cracks software, carpet bombs DMCA

By John Timmer

Last week, a US Court of Appeals upheld a ruling on software piracy. The organization doing the piracy, however, happened to be a branch of the US government, and the decision highlights the significant limits to the application of copyright law to the government charged with enforcing it. Most significantly, perhaps, the court found that because the DMCA is written in a way that targets individual infringers, the government cannot be liable for claims made under the statute.

The backstory on the case involved, Blueport v. United States, borders on the absurd. It started when Sergeant Mark Davenport went to work in the group within the US Air Force that ran its manpower database. Finding the existing system inefficient, Davenport requested training in computer programming so that he could improve it; the request was denied. Showing the sort of personal initiative that only gets people into trouble, Davenport then taught himself the needed skills and went to work redesigning the system.

Although Davenport did his development on a personal system at home, he began to bring beta versions of his code in for testing, and eventually started distributing his improved system within his unit, giving the software a timed expiration. A demonstration to higher-ups led to a recommendation for his immediate promotion, but that was followed by demands that the code for his software be turned over to the USAF.

Davenport responded by selling his code to Blueport, which attempted to negotiate a license with the Air Force, which responded by hiring a company to hack the compiled version by deleting the code that enforced the expiration date. Blueport then sued, citing copyright law and the DMCA.

DMCA: We'll enforce it, but won't abide by it

The Court of Federal Claims that first heard the case threw it out, and the new Appellate ruling upholds that decision. The reasoning behind the decisions focuses on the US government's sovereign immunity, which the court describes thusly: "The United States, as [a] sovereign, 'is immune from suit save as it consents to be sued . . . and the terms of its consent to be sued in any court define that court’s jurisdiction to entertain the suit.'"

In the case of copyright law, the US has given up much of its immunity, but the government retains a few noteworthy exceptions. The one most relevant to this case says that when a government employee is in a position to induce the use of the copyrighted material, "[the provision] does not provide a Government employee a right of action 'where he was in a position to order, influence, or induce use of the copyrighted work by the Government.'" Given that Davenport used his position as part of the relevant Air Force office to get his peers to use his software, the case fails this test.

But the court also addressed the DMCA claims made by Blueport, and its decision here is quite striking. "The DMCA itself contains no express waiver of sovereign immunity," the judge wrote, "Indeed, the substantive prohibitions of the DMCA refer to individual persons, not the Government." Thus, because sovereign immunity is not explicitly eliminated, and the phrasing of the statute does not mention organizations, the DMCA cannot be applied to the US government, even in cases where the more general immunity to copyright claims does not apply.

It appears that Congress took a "do as we say, not as we need to do" approach to strengthening digital copyrights.

A sad footnote to this story is that we became aware of it through the blog of copyright lawyer William Patry, only to see Patry shut down the blog late last week. Patry says that a major factor in his decision was frustration with the current state of copyright law and with the aggressive stupidity that he felt typified a number of responses to his musings on the law.

But Patry also cites the inability of many to separate his personal thoughts on copyright from those he voices through his duties as Google's Senior Copyright Counsel. Given that Google (and many other companies) offer many significant announcements through their blogs, and Patry is notable in part due to his employer, this sort of confusion seems inevitable; still, it's unfortunate that it has brought a (temporary?) end to such a learned and public voice on copyright issues.

Original here

Opinion: Can Google be bested? Not anytime soon

By Don Reisinger

Google may be the de facto leader in search today, but will its lead last forever? With services like Mahalo and Cuil gaining attention and Microsoft willing to pour continued billions into its quest for online dominance, Google's rivals are legion, and they're hungry, but that doesn't mean the Big G needs to elevate its corporate blood pressure; Google's dominance is assured far into the future.

According to comScore's latest figures, Google commanded 61.5 percent of the US search market, while Yahoo owned 20.9 percent and Microsoft trailed with 9.2 percent. AOL and the remaining players follow far behind the big three. And where are the hot startups? Smaller search engines like Mahalo, Powerset, and Quintura didn't even make the list.

Making room

A search engine can be an extremely lucrative endeavor when it's popular. But with Google, Yahoo, and Microsoft commanding more than 90 percent of the market, is it even possible anymore for a small company to be anything more than the nichest of niche players?

The answer is "no" and the reason is simple: if a search service is good enough to make significant headway, deep-pocketed Google or Microsoft will acquire it before it even has a chance to hit the mainstream. Case in point: Microsoft acquired Powerset just a few months ago to bolster its search business as it tries to live up to Ballmer's lofty goals for the future of Live Search.

"I would say 'Hey, you know, you're just three years old,'" Ballmer said in response to a question asking him if Live Search needs to get its act together quickly. "'And we've got you in there playing basketball with the 12-year-olds, and you know what? You're growing up quick, you're getting better every day, and you've got all the potential in the world. If it takes you until you're six, seven, eight, nine, ten—but you're gonna dunk, and you're gonna dunk every other guy some day.'"

In its quest to dunk the ball in Google's face, Microsoft has poured money into search for years. In addition to acquiring Powerset, the company has repeatedly made improvements to its search function that included a complete overhaul and a quadrupled index size last year, and a more interactive homepage earlier this week.

With Powerset picked up, let's consider other hot startups. Mahalo has enjoyed some success of late, but human-powered search has significant limitations and, as the company points out, it needs to support the 25,000 most popular search terms to become a success. Unfortunately, Mahalo's search function doesn't really allow for tweaking search queries, which puts it at a significant disadvantage when people are looking for specific topics on a given subject. Of course, Mahalo would argue that it's trying to make search results easier and not force users to tweak, but as queries and users become more sophisticated, that's increasingly difficult.

Cuil launched this week to much fanfare due to its claim to be the "biggest search engine on the Web" with an index of 120 billion pages (three times more than Google and ten times more than Microsoft). But after using it for a while, Ars quickly found that it failed to deliver the best and most relevant search results and it seemed to work well only with generic terms.

So if the three most prominent small search engines—Powerset, Mahalo, and Cuil—were either acquired or simply don't have what it takes to supplant Google, can any search service truly compete, or is all that glorious ad revenue destined to wind up finding vegan cuisine, private party planes, and lava lamps at the Googleplex?

Searching for the "Google Killer"

Some believe that since "Google it" has become such a ubiquitous phrase, name recognition alone makes it impossible to kill Google. Mark Cuban contends that the only way to kill Google is to force it to deliver bad results by having major websites "recuse themselves from Google search results."

But what popular website wants to cut off incoming links from Google? Google is synonymous with search in consumers' minds, and pulling your site from Google's index is the one thing most companies are desperate to avoid.

Yahoo and Microsoft search may be destinations, but they simply don't command mainstream attention the way Google does. Sure, Yahoo's search results may be more relevant in some cases and Microsoft's service may be sleeker, but the vast majority of people use Google because its overall usability and relevance make it the best solution.

Smaller companies like Cuil and Mahalo might be trying something new, but in the search field, it comes down to whether or not they can provide an experience that easily eclipses Google's (emphasis on "easy"). Do Mahalo's generic search results make it better than Google's? Does Cuil's huge index even matter if it doesn't provide you with the best search results on any number of generic or sophisticated queries?

Status quo

No, chances are that for the foreseeable future, the search industry will stay much the same: three major companies will vie for control, but one will dominate the others like an NBA player at a YMCA pick-up game.

With Microsoft's attempts at making Live Search just one key component of a broader online strategy that revolves around Live convergence, Microsoft hopes to attract more users to its search engine through its other products (in a reverse of Google's model).

And although it may be trudging through tumultuous times now, Yahoo is still the second most popular search engine in the US. With its BOSS platform, Yahoo is looking to offer small search engines Yahoo search results, much like Clusty uses Google's, and this may produce some interesting innovation.

But these moves alone make it abundantly clear that both Microsoft and Yahoo are trying to capitalize on areas that don't matter nearly as much as they think. Convergence is fine if you can get people to use your products. Allowing other companies to use Yahoo search results seems like an obvious move, considering Google has been doing that for quite some time.

What about the search engine itself? Instead of making it more interactive, Microsoft should focus on making it more relevant. And instead of maintaining the status quo, Yahoo should consider taking a cue from Google: make the search page all about search, remove the eyesores, and keep it simple.

Google didn't become the world's most popular search engine by presenting distraction and complexity; it won by being relevant, clean, and easy. And with huge coffers of cash to acquire or invest in small companies that could make a big impact, Google is in a prime position to hold the top spot for years with nary a worry about competitors.

If Microsoft and Yahoo should learn anything from Google's initial success, it's that getting "rid" of users quickly is more important than keeping them searching. Sure, it sounds counter-intuitive, but it worked for Google.

Original here

Intel Plans Chip to Boost Computer Performance

Washington Post Staff Writer

The computer revolution has been powered by chips that operated at ever-higher frequencies. From the Ataris and Commodores of the late 1970s, which ran at roughly one megahertz, to today's devices that run at nearly four gigahertz, one year's star at the electronics store was typically outdone by higher speeds in the next year's model.

But that era of progress may be drawing to a close.

Today, Intel, the world's largest chipmaker, is revealing details about a new chip that seeks to improve performance not by boosting frequency, but by putting more processors or "cores" onto a single chip.

While so-called dual-core and quad-core chips have become commonplace in recent years, Intel expects to place more than 10 "cores" or processors on its new project, code-named Larrabee.

Engineers said the "multi-core" or "many-core" approach is necessary to keep up with the demands of an increasingly digital world: by 2015, running chips at faster and faster frequencies would have yielded laptop or desktop computers that create as much heat as a nuclear reactor.


"There is a fundamental physics issue we can no longer get around," said Anwar Ghuloum, a principal engineer managing an Intel group that addresses the software challenges posed by such chips. "If we kept going as we had been, the heat density on a chip would have equaled the surface of the sun."

Intel is releasing features of the Larrabee project today, in advance of an industry conference scheduled for next week in Los Angeles.

Though first intended for graphics-intensive applications, such as games and other visually demanding programs, the multi-core approach could become the model for common desktop computers, company officials said. Engineers expect the chip to be capable of processing about a trillion instructions a second.

The first product based on Larrabee is expected in 2009 or 2010, and Intel officials anticipate that not long after 2010, there will be laptops running on chips with more than 10 cores.

The drawback of the new approach is that it requires an equally dramatic shift in the software industry. Some experts, such as Microsoft founder Bill Gates, initially expressed reservations because of the disruptive nature of the transition.

To take advantage of a chip with many processors, software has to be broken into chunks of instructions that can run in parallel on multiple processors. So, a computer program that now consists of one set of sequential instructions would have to be parceled into two, four or more than 10 sets of instructions, depending on the number of cores, that can be run in parallel.
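As a rough sketch of that decomposition (the function names here are ours, not Intel's), the snippet below parcels one sequential job into chunks that independent workers process concurrently and then combines the partial results. A thread pool stands in for the cores; true multi-core speedup for CPU-bound Python code would use separate processes, but the structure of the split is the same:

```python
from concurrent.futures import ThreadPoolExecutor

def sum_of_squares(chunk):
    # The same instructions, run independently on one slice of the data.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Parcel the single sequential instruction stream into `workers` chunks...
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # ...run the chunks concurrently...
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(sum_of_squares, chunks)
    # ...and combine the partial results.
    return sum(partials)

print(parallel_sum_of_squares(list(range(10))))  # 285, same as the sequential answer
```

The hard part the article alludes to is that most existing programs were not written with independent chunks like these in mind, so the split has to be designed in, not bolted on.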

Once chips with 10 cores reach consumer desktops, however, the entire corpus of the world's software may have to be rewritten to take advantage of the extra power.

To meet the challenge, new programming languages are being created and technology leaders are encouraging computer science departments at universities to bulk up in courses in parallel processing.

An array of technical possibilities -- in language interpretation, robotics and visual recognition -- depend upon increased processing power.

Some game firms, such as Crytek and Valve, have hailed the advances. But multi-core chips present massive and expensive difficulties.

Executives at Microsoft initially balked at the idea when they met with Intel several times about four years ago.

At the first of those meetings, Pat Gelsinger, a senior vice president at Intel, described why the company intended to start developing multi-core and then many-core chips. Gelsinger had been warning the industry of the imminent change for years.

Though Microsoft had been researching the multi-core area since 2001, company officials had hoped to delay the transition.

"It was like, 'thanks very much for your input, Pat. Now, it's wrong, go fix it,'" Gelsinger recalled of the response from Gates and other Microsoft engineers.

Gates and Microsoft were "testing Intel's real sense of needing to make this architectural shift," Microsoft said in a statement. The statement added: "In 2004 it became clear this shift would begin in earnest by the end of the decade."

Original here

AMD Fusion details leaked: 40/32 nm, dual-core CPU, RV800 graphics

By Theo Valich

Taipei (Taiwan) – AMD pushed Fusion as one of the main reasons to justify its acquisition of ATI. Since then, AMD’s finances have changed colors and are now deep in the red, the top management has changed, and Fusion still isn’t anything AMD wants to discuss in detail. But there are always “industry sources” and these sources have told us that Fusion is likely to be introduced as a half-node chip.

It appears that AMD’s engineers in Dresden, Markham and Sunnyvale have been making lots of trips to the little island of Formosa lately - the home of contract manufacturer TSMC, which will be producing Fusion CPUs. Our sources indicated that both companies are quite busy laying out the production scenarios for AMD’s first CPU+GPU chip.

The first Fusion processor is code-named Shrike, which will, if our sources are right, consist of a dual-core Phenom CPU and an ATI RV800 GPU core. This news is actually a big surprise, as Shrike was originally rumored to debut as a combination of a dual-core Kuma CPU and an RV710-based graphics unit. A few more quarters of development time gave AMD room to continue working on a low-end RV800-based core to be integrated with Fusion. RV800 chips will be DirectX 10.1 compliant and are expected to deliver a bit more than just a 55 nm to 40 nm die shrink.

While Shrike will debut as a 40 nm chip, the processor is scheduled to transition to 32 nm at the beginning of 2010 - not much later than Intel will introduce 32 nm - and serve as a stop-gap before the next-gen CPU core, code-named "Bulldozer," arrives. The Bulldozer-based chip, code-named “Falcon”, will debut with TSMC's 32 nm SOI process, instead of the originally planned 45 nm.

As Fusion is shaping up right now, we should expect the chip to become the first half-node CPU (between 45 nm and 32 nm) in a very long time.

Original here

Report: iPhone Nano to ring in the holidays

Posted by Steven Musil

You can expect an iPhone Nano to be on the shelves in time for the holiday shopping season, according to a report on the U.K.'s Daily Mail Web site Sunday.

The report, which cited "an industry source," said the product would be launched in the U.K. by mobile phone operator O2 for the pay-as-you-go market, but offered no clue when or if it would be launched in the United States.

The report seems to indicate the iPhone Nano would be a dumbed-down version of the current iPhone 3G.

"The iPhone 3G has been the fastest-selling phone ever in the U.K., but it is too expensive to be a realistic proposition in the pay-as-you-go market," the source told the newspaper. "However, a cut down version, with the candy bar shape of iPod Nano music players, would be a huge hit as a Christmas gift."

The newspaper suggests that the new iPhone Nano could have a touch wheel interface on one side and a screen on the other, meaning that calls would be dialed from behind and lack full Internet browsing functionality.

If this all sounds a bit familiar, it's because this rumor was floating around last year. Considering the wild success of the iPhone and Apple's plans for a family of iPhones, a move like this certainly makes sense. Whether Apple is ready to do it soon seems to be a bigger question.

One of the more recent rumors has the iPod Nano getting a slimmed-down makeover. iLounge reports that Apple plans to bring back the thinner iPod Nano design of years past but in a taller package that's a nod to the screen size of today's "fat" iPod Nano.

Apple has held a September iPod event the last several years, and we're pretty sure they'll have another one on tap this year, with a revamped iPod Touch likely to accompany a new iPod Nano. In support of that suspicion, AppleInsider is reporting that resellers have been told to expect shortages of iPods and Macbooks in the coming weeks.

Original here

MobileMe gets new leadership, Jobs admits Apple made a big mistake

by Ryan Block

Not that anyone could really dance around the facts of the matter at this point, but in an email to Apple employees sent today, apparently Steve said, "It was a mistake to launch MobileMe at the same time as iPhone 3G, iPhone 2.0 software and the App Store. We all had more than enough to do, and MobileMe could have been delayed without consequence." Apple exec Eddy Cue appears to be taking the much-maligned service under his wing (as well as the App Store, adding to his original gig as VP of iTunes), hopefully making good on the other bit in El Jobso's email where he resets Apple's call to action on .Mac's replacement: "The MobileMe launch clearly demonstrates that we have more to learn about Internet services. And learn we will. The vision of MobileMe is both exciting and ambitious, and we will press on to make it a service we are all proud of by the end of this year." We'll see about that!

Update: You can check out the actual email here, if you're looking to see how Jobs uses em-dashes as bullets.

Original here

Steve Jobs: MobileMe "not up to Apple's standards"

By Jacqui Cheng

In an internal e-mail sent to Apple employees this evening, Steve Jobs admitted that MobileMe was launched too early and "not up to Apple's standards." The e-mail, seen by Ars Technica, acknowledges MobileMe's flaws and what could have been done to better handle the launch. In addition to needing more time and testing, Jobs believes that Apple should have rolled MobileMe's services out slowly instead of launching it "as a monolithic service." For example, over-the-air iPhone syncing could have gone up initially, then web apps one by one (Mail, Calendar, etc.).

Jobs goes on. "It was a mistake to launch MobileMe at the same time as iPhone 3G, iPhone 2.0 software and the App Store," he says. "We all had more than enough to do, and MobileMe could have been delayed without consequence." We agree with that one.

Apple is learning a lot of lessons from its numerous MobileMe foibles, it seems, and has even reorganized the MobileMe team. For one, the entire group will now report to Eddy Cue (you may remember Cue's name showing up in numerous iTunes-related press releases). Cue will now lead all Internet-related services at Apple—including iTunes, the App Store, and now MobileMe—and will report directly to Steve Jobs.

"The MobileMe launch clearly demonstrates that we have more to learn about Internet services," Jobs says. "And learn we will. The vision of MobileMe is both exciting and ambitious, and we will press on to make it a service we are all proud of by the end of this year."

Original here

iPhone 2.0.1 Update Now Available (Also Available For iPod Touch)

A reader just tipped us off to the iPhone 2.0.1 update being out RIGHT NOW. Just fire up your iTunes and click the old update button and you'll be able to grab it. We're updating now and will let you know what's different. Right now all we see is "Bug Fixes" listed under the changelog, but there's a security update info link in the update screen as well, so it might be that. [Thanks tipster!]

Update: It's an E. Honda-like 249MB, so this will take a few minutes to download.

Update 2: iPod Touch users can also update.

Update 3: The update didn't wipe out our media (pics, vids, tunes) on the iPhone 3G. Awesome.

Update 4: Is it me, or does flipping pages on the home screen seem faster and smoother?

Update 5: Marcelo says iTunes sync and backup is faster. Anyone else agree?

Update 6: Confirmed that it doesn't work with Pwnage tool just yet.

Original here

App Store bringing in strong revenue for some iPhone devs

By Justin Berka

Almost as soon as the App Store launched, 10 million applications were downloaded, but it was unclear how the overall download numbers would break down into sales numbers for individual applications. Fortunately, Apple recently added daily and weekly sales data to iTunes Connect, giving developers an idea of how well their apps are doing. One developer, Eliza Block, was quite surprised by the sales numbers that popped up, and was kind enough to share her revenue numbers with 9to5Mac.

Block is the author of the popular crossword puzzle application 2 Across, which sells for $5.99 on the App Store. Based on her iTunes Connect screenshots, Block has been earning almost $2,000 per day on the application, quite a bit more than she was expecting to make. With the sales volume that 2 Across has been seeing, even a $0.99 application would be bringing in around $300 per day.

Adding to Block's data, the folks behind Tap Tap Tap released their App Store sales numbers this morning as well. Between the company's two apps, Where To? for $2.99 and Tipulator for $0.99, 3,546 copies were sold (the majority being copies of Where To?), for a total of $9,896. After Apple took its 30 percent cut, Tap Tap Tap made out with $6,927 over a seven-day period. Tap Tap Tap's John Casasanta (a name you may recognize from MacHeist) goes on to discuss what has triggered some spikes and the overall marketing strategy in the remainder of the blog post.
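For readers keeping score, the 70/30 split reported above checks out; here's a quick sketch (the function name is ours, the figures are from the article):

```python
def developer_take(gross_dollars, store_cut=0.30):
    """Developer's share of App Store revenue after the store's percentage cut."""
    return round(gross_dollars * (1 - store_cut))

# Tap Tap Tap's week: $9,896 gross with 30 percent going to Apple
print(developer_take(9896))  # 6927, matching the $6,927 figure reported above
```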

Overall, these results are pretty encouraging and these examples should be promising news for iPhone and iPod touch developers. The fact that the App Store hasn't been open for very long could mean that sales numbers will go down over time, although after seeing these revenue numbers, devs may be more than happy to take a chance, release an application, and hope for some nice additional income.

Original here