Thursday, July 31, 2008

Top 10 Command Line Tools

When you need something done quickly, efficiently, and without any software overhead, the command line is where it's at. It was the first way humans told computers what to do, but as graphics became increasingly important, the command line, or terminal, became an insiders' secret weapon. But with the right commands and a little bit of know-how, anyone can get things done from a text-only interface. Let's take a look at 10 commands and tricks that make the terminal more accessible, and more powerful, on any system. Photo by blakepost.


Note: Mac OS X and Linux users have robust command line interfaces baked right into their systems. To get to them, head to Applications->Utilities->Terminal in Finder. It varies in Linux, depending on your distro and interface, but a "terminal" can usually be found in an "Accessories" or "Utilities" menu panel. Windows users are best served by installing and configuring Cygwin, a Unix emulator, which we've detailed in a three part series.


10. Customize your prompt

If you're going to spend any time at the terminal, or want to start doing so, it should be a welcoming place. To go beyond green or white on black, check out this Ask Lifehacker response, in which Gina runs through a few simple ways to change the colors, and the greeting message, on your prompt for Windows, Mac, or Linux systems.


9. Force an action with sudo !! ("bang bang")

You already know that prefixing a command with sudo makes your system execute it with superuser privileges. But when you forget to sudo, the !! or "bang bang" comes to the rescue. When you've perfectly crafted a long command that does exactly what you need, hit Enter, and d'oh—you don't have sufficient access privileges—you can sudo !! to repeat the last command with superuser privileges. It's the ultimate nerd triumph: "Oh, you didn't like that command? Well, then sudo !!"
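Something like this, assuming bash (where !! expands to your previous command) on a system where installing software needs root:

apt-get install htop    # denied: a regular user can't install packages
sudo !!                 # the shell expands this to "sudo apt-get install htop" and runs it with root privileges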


8. Create whole directory trees with mkdir

When it comes to organizing music, pictures, documents, or other media, nested folders become a necessary annoyance—as in right-clicking, choosing "New Folder" and then naming and clicking through each of "The Beatles->White Album->Disc 1." It's far easier from the terminal, as the Codejacked blog points out:
mkdir -p "The Beatles/White Album/Disc 1"
The -p flag tells mkdir to create the whole chain of parent folders in one go, and the quotes keep the spaces from splitting the path apart (you can escape each space with a \ instead, but you get the idea). If you're a Vista user who's just not down with Cygwin, you can still pull this off with the md tool in command line.
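If you want to get fancier, bash brace expansion lets you stamp out more than one of these folders at a time (a hedged sketch, assuming bash):

mkdir -p "The Beatles/White Album/Disc "{1,2}    # creates Disc 1 and Disc 2 in one shot
ls -R "The Beatles"                              # quick check that every level was created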


7. Filter huge lists with grep

Some terminal commands spit back a bit too much information, and that's where grep comes in. Need to manually kill a faltering Thunderbird? Punch in ps aux | grep bird, and you'll get back the specific process number to kill. Need to know which files don't have your company name in them? grep -L DataCorp *.doc lists them (-L prints only the names of files with no matching lines). Programmer Eric Wendelin explains grep more in-depth.
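A rough sketch of the kill-a-runaway-Thunderbird routine (4242 is a stand-in for whatever process ID your system actually reports):

ps aux | grep bird    # the second column of the matching line is Thunderbird's process ID (PID)
kill 4242             # substitute the PID you saw above; escalate to kill -9 only if the polite version fails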


6. RTFM with man (and more)

Let's say a program, or web site, has just asked you to run a command to unlock or enable something, but you'd like to know just a little more before jumping in. Add man before the command (as in man ssh) and you'll get manual-style pages detailing how to use the command. Bit too much material to process? Try whatis for a brief description, --help after the command for basic usage, or any of these other command-line learning tools.
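For instance, three levels of detail on the same theme (man and whatis ship with both OS X and Linux; --help is a convention most GNU tools follow):

man ssh        # the full manual page
whatis ssh     # a one-line summary
grep --help    # a quick usage rundown straight from the tool itself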


5. Manage processes with top

Most systems have a tool to view "tasks" or "running programs," but they usually hide the true guts of what your system's doing from you. The Hackszine blog points out that Mac and Linux users can harness the power of the built-in top command to track and kill runaway processes making your system unstable. There's also ps -aux for a single-screen, non-updating look at what's bugging your computer.
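Something like this, as a rough sketch:

top                          # live, self-updating view of what's running; press q when you're done
ps aux | sort -nk 3 | tail   # a one-shot snapshot, sorted so the biggest CPU hogs land at the bottom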

4. Master wget for powerful file-grabbing

The wget command has been around since before there was all that much stuff to actually yank from the net, but this extensible, multi-purpose tool has lots of great uses these days. You can mirror entire web sites locally, resume huge downloads on the flakiest of connections, download the same file every hour to keep tabs on a project, and do much, much more with wget. It's one of those elegantly simple tools that's only as powerful as your creativity.
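A couple of hedged examples (the URL is just a stand-in, not a real download):

wget -c http://example.com/big-download.iso           # -c picks a half-finished download back up where it left off
wget --mirror --convert-links http://example.com/     # pull down a local, browsable copy of an entire site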


3. Get way beyond system searching with find

Once again, programmer Eric Wendelin offers real-world examples of how powerful a command line tool like find can be in, well, finding files and directories that match the smallest criteria you can imagine. Want a list of every HTML file that references the hexadecimal color #FF0000 (red)? find can totally do that for you. As Wendelin points out, find, by itself, is about as convenient and powerful as a total-system searcher like Google Desktop or Quicksilver, but piped into and out of other tools like grep, it's a powerhouse. For a more pared-down look at some of find's powers, check out this tutorial at Debian/Ubuntu Tips & Tricks.
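The red-HTML example from above, as one possible incantation (plus a bonus one-liner; the paths are just examples):

find . -name '*.html' -exec grep -l '#FF0000' {} +    # every HTML file under the current folder that mentions the red hex code
find ~/Music -name '*.mp3' -mtime -7                  # every MP3 added or changed in the last week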


2. Set up powerful backups with rsync

You can spend a lot of money and time hunting down a perfect backup app that works with all your systems just the way you want. Or you can spend a few minutes learning the basics of rsync, the flexible, powerful command that makes one folder (on your system) look like another (where you back up). To put it simply, rsync is a cross-platform, completely free Time Machine, if you use it right. Luckily, Gina's already shown us how to do that.
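A minimal sketch of the one-folder-mirrors-another idea (the paths are examples, and the trailing slashes matter to rsync):

rsync -av --delete ~/Documents/ /media/backup/Documents/               # make the backup an exact mirror of the source
rsync -av --delete -e ssh ~/Documents/ user@server:backups/Documents/  # the same thing, pushed to another machine over ssh

Keep in mind that --delete prunes from the backup anything that's gone from the source, so double-check which side is which before you run it.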


1. See your most-used commands with history, make aliases for them

Once you're comfortable with the terminal and getting good use from it, you might notice some of the more useful commands require an astute memory and typo-free typing—unless you make them shorter and easier. Start off by copying and pasting this command (on one line):
history|awk '{print $2}'|awk 'BEGIN {FS="|"} {print $1}'|sort|uniq -c|sort -r
It will return a ranked list of your most commonly-entered commands using your command history—and you can start creating aliases to shorten them and make them easy to remember. Or you could search through your recently-used commands with as-you-type results for quick-fire repeats.
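Say the list shows you type git status a dozen times a day; an alias is a one-liner (a sketch assuming bash, with the command and alias name picked arbitrarily):

echo "alias gs='git status'" >> ~/.bashrc    # from the next terminal on, gs does the work of two words
# and Ctrl-R in bash gives you that as-you-type search through your recent commands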


While these 10 commands are generic and applicable on all systems with a Unix-like terminal, Mac OS X offers a few Mac-specific tools. Here are useful command line tricks for Mac users.


We love to have some CLI fun around here, and we know our savvier readers have tons of cool terminal hacks and tricks that are new to us. So, please—share the knowledge and spread the wealth in the comments.

Original here

Face-Swapping Tech Keeps Your Privacy Online By Making You Look Horrifying

People don't like showing up in Google Street View. Nobody wants their face to show up on Google Maps when they were just minding their own business buying home pregnancy tests, hemorrhoid cream and slim-fit condoms. Well, this new "Face Swapper" software found on Boing Boing automatically switches out features on people's faces with features from photos in its database, creating horrifying cross-gender hybrids.

Face swapping software finds faces in a photograph and swaps the features in the target face from a library of faces. This can be used to "de-identify" faces that appear in public, such as the faces of people caught by the cameras of Google Street View. So instead of simply blurring the face, the software can substitute random features taken from say Flickr's pool of faces. A mouth here, an eye there.

Interesting. Who knows if Google will ever implement anything like this, but if they do, Street Maps will make every city look like it's populated by girls with gross facial hair and unsettling boy-women. [Kevin Kelly via Boing Boing]

Original here

Next Debian's 'Lenny' frozen

Linux distros get ready

By Phil Manchester

The next version of Debian has come a step closer to completion with the freezing of the current testing distribution, codenamed Lenny. This will form the basis of Debian 5.0, expected in September.

The freeze means that package developers who have not uploaded software for inclusion in the Debian 5.0 release have effectively missed the boat. It also means that their packages will almost certainly be omitted from the next versions of popular Linux distros such as Ubuntu, Xandros and Linspire that are based on Debian.

Debian developers have their work cut out over the next few weeks if they are to maintain Debian's reputation for top-quality releases. There are 363 bugs currently outstanding in the many pieces of software that make up Debian.

The job is to fix these bugs during the current test phase for the release to be considered "stable" in September. The current stable release - codenamed Etch (4.0) - will then become the "oldstable" version. The latest, so-called "unstable" version, is always codenamed Sid and includes experimental code that may or may not figure in future releases.

The prime development goals for Lenny aim to bring Debian up to date with advances in hardware architectures and software. These include support for IPv6, the latest version of internet protocols, support for large file systems (LFS) and version four of NFS. Support for future GNU Compiler Collection (GCC) releases and Python 2.5 are also on the list.

The release will also tidy up Debian's build functions to improve installation. It will be tested for so-called "double compilation" support to ensure build consistency, and obsolete tools such as "debmake" - along with any packages that require them - are being removed.

New or updated support for I18N description standards and full support for UTF-8 will also be included.

Original here

Get to know Ubuntu's Logical Volume Manager

By Benjamin Mako Hill, Corey Burger, Jonathan Jesse, and Jono Bacon

Hard drives are slow and fail often, and though abolished for working memory ages ago, fixed-size partitions are still the predominant mode of storage space allocation. As if worrying about speed and data loss weren't enough, you also have to worry about whether your partition size calculations were just right when you were installing a server or whether you'll wind up in the unenviable position of having a partition run out of space, even though another partition is maybe mostly unused. And if you might have to move a partition across physical volume boundaries on a running system, well, woe is you.

This article is excerpted from the newly published book The Official Ubuntu Book, Third Edition, published by Prentice Hall Professional, June 2008. Copyright 2008 Canonical, Ltd.

RAID helps to some degree. It'll do wonders for your worries about performance and fault tolerance, but it operates at too low a level to help with the partition size or fluidity concerns. What we'd really want is a way to push the partition concept up one level of abstraction, so it doesn't operate directly on the underlying physical media. Then we could have partitions that are trivially resizable or that can span multiple drives, we could easily take some space from one partition and tack it on another, and we could juggle partitions around on physical drives on a live server. Sounds cool, right?

Very cool, and very doable via logical volume management (LVM), a system that shifts the fundamental unit of storage from physical drives to virtual or logical ones. LVM has traditionally been a feature of expensive, enterprise Unix operating systems or was available for purchase from third-party vendors. Through the magic of free software, a guy by the name of Heinz Mauelshagen wrote an implementation of a logical volume manager for Linux in 1998, which we'll refer to as LVM. LVM has undergone tremendous improvements since then and is widely used in production today, and just as you expect, the Ubuntu installer makes it easy for you to configure it on your server during installation.

LVM theory and jargon

Wrapping your head around LVM is a bit more difficult than with RAID because LVM rethinks the whole way of dealing with storage, which expectedly introduces a bit of jargon that you need to learn. Under LVM, physical volumes, or PVs, are seen just as providers of disk space without any inherent organization (such as partitions mapping to a mount point in the OS). We group PVs into volume groups, or VGs, which are virtual storage pools that look like good old cookie-cutter hard drives. We carve those up into logical volumes, or LVs, that act like the normal partitions we're used to dealing with. We create filesystems on these LVs and mount them into our directory tree. And behind the scenes, LVM splits up physical volumes into small slabs of bytes (4MB by default), each of which is called a physical extent, or a PE.

You take a physical hard drive and set up one or more partitions on it that will be used for LVM. These partitions are now physical volumes (PVs), which are split into physical extents (PEs) and then grouped in volume groups (VGs), on top of which you finally create logical volumes (LVs). It's the LVs, these virtual partitions, and not the ones on the physical hard drive, that carry a filesystem and are mapped and mounted into the OS. If you're confused about what possible benefit we get from adding all this complexity only to wind up with the same fixed-size partitions in the end, hang in there. It'll make sense in a second.
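For readers who like to see the moving parts, here is a rough command-line sketch of that same layering done by hand with the standard lvm2 tools (device names, sizes and the "data" volume group name are just examples; the Ubuntu installer does the equivalent of this for you):

pvcreate /dev/sdb1 /dev/sdc1          # turn two LVM-type partitions into physical volumes (PVs)
vgcreate data /dev/sdb1 /dev/sdc1     # pool their extents into a volume group (VG) named "data"
lvcreate -L 25G -n var data           # carve a 25GB logical volume (LV) called "var" out of that pool
mkfs.ext3 /dev/data/var               # from here on it behaves like any ordinary partition
mount /dev/data/var /mnt              # ...mounted wherever it belongs in your tree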

The reason LVM splits physical volumes into small, equally sized physical extents is that the definition of a volume group (the space that'll be carved into logical volumes) then becomes "a collection of physical extents" rather than "a physical area on a physical drive," as with old-school partitions. Notice that "a collection of extents" says nothing about where the extents are coming from and certainly doesn't impose a fixed limit on the size of a volume group. We can take PEs from a bunch of different drives and toss them into one volume group, which addresses our desire to abstract partitions away from physical drives. We can take a VG and make it bigger simply by adding a few extents to it, maybe by taking them from another VG, or maybe by tossing in a new physical volume and using extents from there. And we can take a VG and move it to different physical storage simply by telling it to relocate to a different collection of extents. Best of all, we can do all this on the fly, without any server downtime.

Setting up LVM

Surprisingly enough, setting up LVM during installation is no harder than setting up RAID. Create partitions on each physical drive you want to use for LVM just as you did with RAID, but tell the installer to use them as physical space for LVM. Note that in this context, PVs are not actual physical hard drives; they are the partitions you're creating.

You don't have to devote your entire drive to partitions for LVM. If you like, you're free to create actual filesystem-containing partitions alongside the storage partitions used for LVM, but make sure you're satisfied with your partitioning choice before you proceed. Once you enter the LVM configurator in the installer, the partition layout on all drives that contain LVM partitions will be frozen.

Consider a server with four drives, which are 10GB, 20GB, 80GB, and 120GB in size. Say we want to create an LVM partition, or PV, using all available space on each drive, and then combine the first two PVs into a 30GB volume group and the latter two into a 200GB one. Each VG will act as a large virtual hard drive on top of which we can create logical volumes just as we would normal partitions.

As with RAID, arrowing over to the name of each drive and pressing Enter will let us erase the partition table. Then pressing Enter on the FREE SPACE entry lets us create a physical volume -- a partition that we set to be used as a physical space for LVM. Once all three LVM partitions are in place, we select Configure the Logical Volume Manager on the partitioning menu.

After a warning about the partition layout, we get to a rather spartan LVM dialog that lets us modify VGs and LVs. According to our plan, we choose the former option and create the two VGs we want, choosing the appropriate PVs. We then select Modify Logical Volumes and create the LVs corresponding to the normal partitions we want to put on the system -- say, one for each of /, /var, /home, and /tmp.

You can already see some of the partition fluidity that LVM brings you. If you decide you want a 25GB logical volume for /var, you can carve it out of the first VG you created, and /var will magically span the two smaller hard drives. If you later decide you've given /var too much space, you can shrink the filesystem and then simply move over some of the storage space from the first VG to the second. The possibilities are endless.
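Once the system is running, that kind of juggling is only a couple of commands (a hedged sketch that assumes an ext3 filesystem and the example volume names above; growing works on a mounted filesystem, while shrinking requires unmounting first):

lvextend -L +5G /dev/data/var    # hand the logical volume another 5GB of free extents from its volume group
resize2fs /dev/data/var          # then grow the ext3 filesystem into the new space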

Remember, however, that LVM doesn't provide redundancy. The point of LVM is storage fluidity, not fault tolerance. In our example, the logical volume containing the /var filesystem is sitting on a volume group that spans two hard drives. This means that either drive failing will corrupt the entire filesystem, and LVM intentionally doesn't contain functionality to prevent this problem.

When you need fault tolerance, build your volume groups from physical volumes that are sitting on RAID. In our example, we could have made a partition spanning the entire size of the 10GB hard drive and allocated it to physical space for a RAID volume. Then, we could have made two 10GB partitions on the 20GB hard drive and made the first one also a physical space for RAID. Entering the RAID configurator, we would create a RAID 1 array from the 10GB RAID partitions on both drives, but instead of placing a regular filesystem on the RAID array as before, we'd actually designate the RAID array to be used as a physical space for LVM. When we get to LVM configuration, the RAID array would show up as any other physical volume, but we'd know that the physical volume is redundant. If a physical drive fails beneath it, LVM won't ever know, and no data loss will occur. Of course, standard RAID array caveats apply, so if enough drives fail and shut down the array, LVM will still come down kicking and screaming.
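Post-install, that RAID-under-LVM stacking looks roughly like this with the usual tools (a sketch only; the device names are examples and mdadm is assumed to be installed):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # mirror the two 10GB partitions
pvcreate /dev/md0                                                        # hand the finished mirror to LVM as a physical volume
vgextend data /dev/md0                                                   # and add its extents to an existing volume group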

If you've set up RAID and LVM arrays during installation, you'll want to learn how to manage the arrays after the server is installed. We recommend the respective how-to documents from The Linux Documentation Project at http://www.tldp.org/HOWTO/Software-RAID-HOWTO.html and http://www.tldp.org/HOWTO/LVM-HOWTO. The how-tos sometimes get technical, but most of the details should sound familiar if you've understood the introduction to the subject matter here.

Original here

10-band Equalizer


Adjusting the equalizer can help make low-quality speakers (like laptop speakers) sound much better.


Internet Radio

Banshee 1.2 has a dedicated Internet Radio station library, where you can add your favorite internet radio stations, play them, and even organize them into playlists. Get your news fix, or listen to streaming music all day, all quickly accessible from within Banshee.


Music Recommendations

The much-loved recommendations pane has returned in Banshee 1.2. View recommended artists, albums, and tracks by the currently playing artist.


DAAP (iTunes Music Sharing) Client

Browse, search, play, and import from others' music shares.

Playlist Importing (.pls, .m3u)

Import your carefully created .m3u and .pls playlists.

Multi-Artist (Compilation) Album Support

Banshee 1.2 has great support for albums by multiple artists. Your compilation albums, like soundtracks, will be sorted and grouped as you'd expect.

Manual Reordering of Playlists and Play Queue

Change the order of tracks in your playlists and the play queue by dragging and dropping them. Your order persists across Banshee restarts.

Amarok Migration

If you have ratings, play counts, and podcast subscriptions in Amarok, but want to try Banshee, now it's pain free. Banshee can import your Amarok library including your ratings and play counts, and even migrate your podcast subscriptions over.

Original here

What Do Small Open Source Projects Do With Money? Not Much.

By Scott Gilbertson

What would your favorite small open source project do with a sudden influx of money? Imagine you donated $5000 to a project, where would the money go? Less scrupulous developers might spend the money on Mountain Dew and Twinkies, but more likely the money would just sit, doing nothing. Why? Because it takes time to spend money, and in the open source world time is in short supply.

In fact, it isn't easy for small open source projects to spend money responsibly. If it were, we wouldn't have huge organizations like the Apache Foundation or the Django Foundation, which, among other things, are charged with distributing money where it's needed.

But smaller projects are often caught off guard by money. The scenario posited above really happened. Developer Jeff Atwood donated some of the ad revenue from his blog to an open source project he liked. Atwood didn't attach any strings; if the devs wanted to blow it on cocaine and whiskey, they could. But the money is still sitting unused.

The lead developer responded to Atwood’s follow up e-mail saying:

The grant money is still untouched. It’s not easy to use it. Website hosting fees are fully covered by ads and donations, and there are no other direct expenses to cover. I thought it would be cool to launch a small contest with prizes for the best plugins and/or themes, but that is not easy because of some laws we have here in Italy that render the handling of a contest quite complex.

What would you suggest?

Atwood posted the question and kicked off a flurry of comments with suggestions ranging from hiring technical writers to improve documentation, to hiring graphic designers to work on the UI, to using the money to fly the developer to a conference of some kind.

Whatever ends up happening to the money, it raises an interesting point: smaller open source software projects are largely built on donated time, not donated money. As Jon Galloway tells Atwood:

Open source teams, and culture, have been developed such that they’re almost money-agnostic. Open source projects run on time, not money. So, the way to convert that currency is through bounties and funded internships. Unfortunately, setting those up takes time, and since that’s the element that’s in short supply, we’re back to square one.

Of course, the other side of the story is that money can buy time. Indeed, the vast majority of open source development is funded by corporations who have the resources to pay full-time employees to work on improving projects like the Linux kernel. But smaller projects, like the one in this case, don't have many interested corporations and get by on time and skill donated by individuals.

Short of starting a foundation to handle these sorts of tasks (which itself requires more money than most small projects have), what would you like your favorite open source project to do with a sudden windfall?

Original here

8 Best E-mail Clients for Linux

Managing e-mail is made easy with the use of an e-mail client, also known as an e-mail reader. Some e-mail clients can also function as feed readers and can support plug-ins and themes.

When it comes to picking the right e-mail client, Linux users have tons of choices. I have here a list of 8 of the best free and open source e-mail clients that are available for Linux.


Mozilla Thunderbird

Mozilla Thunderbird is my favorite e-mail and news client. I love it for its speed and simplicity, and for its all-important features, like:

* Message management - Thunderbird can manage multiple e-mail, newsgroup and RSS accounts and supports multiple identities within accounts.
* Junk filtering - Thunderbird incorporates a Bayesian spam filter, a whitelist based on the included address book, and can also understand classifications by server-based filters such as SpamAssassin.
* Standards support - Thunderbird supports POP and IMAP. It also supports LDAP address completion. The built-in RSS/Atom reader can also be used as a simple news aggregator.
* Security - Thunderbird provides enterprise and government-grade security features such as SSL/TLS connections to IMAP and SMTP servers.
* Extensions
* Themes


Evolution
Evolution combines e-mail, calendar, address book, and task list management functions. It has been an official part of GNOME and its development is sponsored primarily by Novell.

Its user interface and functionality are similar to Microsoft Outlook. It has some distinguishing features: iCalendar support, full-text indexing of all incoming mail, powerful email filters writable in Scheme, and a "Search Folders" feature (i.e., saved searches that look like normal mail folders).

Evolution can be connected to a Microsoft Exchange Server using its web interface and an Evolution add-on formerly called Ximian Connector. Using gnome-pilot, it may be synchronized with Palm Pilot devices, and OpenSync enables it to be synchronized with mobile phones and other PDAs.


KMail
KMail is the e-mail client of the KDE desktop environment. It supports folders, filtering, viewing HTML mail, and international character sets. It can handle IMAP, dIMAP, POP3, and local mailboxes for incoming mail. It can send mail via SMTP or sendmail. KMail allows manual filtering of spam directly on the mail server, a very interesting feature for dial-up users. E-mails that exceed a certain threshold size (the default is 50 KB, but it may be set to any value) are not automatically copied to the local computer. With its "get, decide later, delete" options, KMail lists them but does not download the whole message, which allows the deletion of spam and over-sized messages without wasting time.


Mutt
Mutt is a text-based e-mail client for Unix-like systems. It was originally written by Michael Elkins in 1995 and released under the GNU General Public License.

Mutt is a pure Mail User Agent (MUA) and cannot send e-mail in isolation. To do this, it needs to communicate with a Mail Transfer Agent (MTA) using, for example, the common Unix sendmail interface, though SMTP support has been added more recently. It also relies on external tools for composing and filtering messages. In the latest Mutt versions you can also set the smtp_url config variable to send your mail directly from Mutt.

The mutt slogan is "All mail clients suck. This one just sucks less". The authors of mutt claim that while all e-mail clients are flawed, mutt has fewer flaws than any of the competition.


Alpine
Alpine, the replacement for Pine, is a fast, easy-to-use email client based on the Pine Message System. Alpine boasts that it is suitable for both inexperienced email users and the most demanding of power users. Alpine is developed at the University of Washington, as was Pine before it. Alpine can be learned by exploration and the use of context-sensitive help. The user interface is highly customizable.


Balsa
Balsa is a lightweight e-mail client for GNOME. It has a graphical front end, supports incoming and outgoing MIME attachments, and directly supports the POP3 and IMAP protocols. It has a spell checker and direct support for PGP and GPG for encryption. It has some basic filtering capabilities, and natively supports several e-mail storage formats. It also has some internationalization support, including Japanese fonts.

Balsa builds on top of these other open source packages: GNOME, libtool, libESMTP, aspell, and gmime. It also can optionally use libgtkhtml for HTML rendering, libkrb5 for GSS, and openldap for LDAP functionality. It can optionally be configured to use gpg-error and gpgme libraries.


Claws Mail
Claws Mail (formerly known as Sylpheed-Claws) is a GTK+-based e-mail client and news client for Linux. It started in April 2001 as the development version of Sylpheed, where new features could be tested and debugged, but evolved enough to become a completely separate program. It forked from Sylpheed in August 2005.

Claws Mail provides the following features:

* Search and filtering
* Security (GPG, SSL, anti-phishing)
* Import/export from standard formats
* External editor
* Templates
* Foldable quotes
* Per-folder preferences
* Face, X-Face support
* Customisable toolbars
* Themes support
* Plugins


Gnus
Gnus is a message reader running under GNU Emacs and XEmacs. It supports reading and composing both news and e-mail.

Some Gnus features:

* simple or advanced mail splitting (automatic sorting of incoming mail to user-defined groups)
* incoming mail can be set to expire instead of just plain deletion
* custom posting styles (e.g., From address, .signature, etc.) for each group
* virtual groups (e.g., a directory on the computer can be read as a group)
* an advanced message scoring system
* user-defined hooks for almost any method (in emacs lisp)
* many of the parameters (e.g., expiration, posting style) can be specified individually for all of the groups

Original here

Band Leaks Track to BitTorrent, Blames Pirates

Written by Ernesto

When we reported about the leak of a BuckCherry track last week, and specifically the band’s response to it, we hinted that this could be a covert form of self-promotion. Indeed, after a few days of research we found out that the track wasn’t leaked by pirates, but by Josh Klemme, the manager of the band.

When BuckCherry found out that their latest single had leaked on BitTorrent, they didn't try to cover this up, or take the file down. No, instead, they issued a press release, where they stated: "Honestly, we hate it when this s*** happens, because we want our FANS to have any new songs first."

This is strange to say the least. Not only because their label, Atlantic Records, is known to release (and spam) tracks for free on BitTorrent sites, but also because the press release was more about promoting the band than the actual leak. Without any hard evidence, we suggested that this leak may have been set up to get some free promotion and publicity, which BuckCherry seems to need.

Out of curiosity, we decided to follow this up, to see if this was indeed the case. With some help from a user in the community, we tracked down some of the initial seeders of the torrent. A BitTorrent site insider was kind enough to help us out, because BitTorrent is not supposed to be "abused" like this, and confirmed that the IP of one of the early seeders did indeed belong to the person who uploaded the torrent file.

It turns out that the uploader, a New York resident, had only uploaded one torrent, the BuckCherry track. When we entered the IP-address into the Wiki-scanner, we found out that the person in question had edited the BuckCherry Wikipedia entry, and added the name of the band manager to another page.

This confirmed our suspicions, but it was not quite enough, since it could be an overly obsessed fan (if they have fans). So, we decided to send the band manager, Josh Klemme - who happens to live in New York - an email to ask for his opinion on our findings. Klemme replied to our email within a few hours, and surprisingly enough his IP-address was the same as the uploader's.

Epic fail….

Unfortunately Klemme only replied once, and ignored all further requests to comment on this issue. However, the press release, sent out by Atlantic Records and BuckCherry, seems to be a promotional stunt. It could be that the manager acted on his own, and that the band and the record label were not in on this, but that's less plausible.

Klemme has been caught with his pants down, and he will probably think twice before he tries to pull off a stunt like this again. A song doesn’t leak by itself and pirates don’t have some sort of superhuman ability to get their hands on pre-release material. No, most leaked movies, TV-shows and albums come from the inside so blaming pirates is useless.

Of course, it’s great that BuckCherry can get some free promotion for the band using BitTorrent, and we encourage everyone to promote their band or movie via this great system too. But wouldn’t it be more constructive if bands embraced the technology and admitted it, instead of playing the injured party and giving the protocol a bad image, just to boost their own? There’s a great opportunity here, don’t waste it.

Original here

Comcast Illegally Interfered With Web File-Sharing Traffic, FCC Says

Washington Post Staff Writer




Industry insiders and FCC members say Chairman Kevin J. Martin isn't expected to fine Comcast, but the ruling isn't official yet. (Katherine Frey - The Washington Post)

A majority of the Federal Communications Commission has concluded that cable operator Comcast unlawfully disrupted the transfer of certain digital video files, affirming the government's right to regulate how Internet companies manage Web traffic.

Three commissioners on the five-member FCC have signed off on an order that finds Comcast violated federal rules by purposely slowing the transmission of video files shared among users of the application BitTorrent.

Comcast has said it delayed the files to assure that enough bandwidth remained available for other users on its network. But the company did not disclose its practices until public interest groups and the video-sharing site complained to the FCC, alleging that the company had set itself up to be a secret gatekeeper of content, picking and choosing which applications to favor.

Comcast continued to defend its practices, even as it and other carriers have begun exploring alternatives for discouraging heavy-bandwidth users.

"Our network management practices were reasonable, wholly consistent with industry practices and . . . we did not block access to Web sites or online applications, including peer-to-peer services," said Sena Fitzmaurice, a spokeswoman for Comcast.

As of Friday, Republican FCC Chairman Kevin J. Martin and Democrats Michael J. Copps and Jonathan S. Adelstein had affirmed the complaint. Republican Robert M. McDowell is preparing to vote against the complaint and Republican Deborah Taylor Tate has not indicated how she will rule. The full board is scheduled to formally vote on the matter Friday.

Details of the order have not been announced, though Martin is not expected to fine Comcast, according to industry insiders and members of the FCC who spoke on the condition of anonymity because the ruling still is pending.

The ruling could set a precedent, analysts said, in that it would send a message to other carriers that they must fully disclose how they manage the flow of traffic over their networks and not single out any specific applications for more scrutiny.

"This is a slap on the wrist for Comcast, but it will be a cutting off of the hand for the next provider who violates rules," said Roger Entner, a senior vice president with IAG Research.

In the months leading to the agency's ruling, some cable and telecommunications carriers have moved away from attempting to interrupt specific applications in favor of adopting new pricing and usage models that would make it more expensive to send and receive large batches of data.

Time Warner, for instance, is testing a metering pricing system in Beaumont, Tex., that charges users by the amount of bandwidth they consume. Comcast is testing its technique of temporarily slowing traffic for the heaviest users, regardless of the applications a customer uses, in four cities.

Lariat, a small provider of wireless broadband service in Laramie, Wyo., blocks any use of direct file sharing -- called peer-to-peer -- because such traffic overwhelms the network, the company said.

"If we didn't do this, we'd go out of business," said Brett Glass, Lariat's owner.

AT&T doesn't meter Internet service, but like many carriers has indicated that it will evaluate usage-based policies given the rapid growth of data use on their networks. The telecom giant has predicted total bandwidth use on its network will increase by four times over the next three years.

Some public advocacy groups, while not dismissing the practice of metering, say companies should invest more of their billions of dollars in annual revenue on increasing bandwidth capacity.

"I don't quite see [metering] as an outrage, and in fact is probably the fairest system going -- though of course the psychology of knowing that you're paying for bandwidth may change behavior," said Tim Wu, a law professor at Columbia University and chairman of the board of public advocacy group Free Press.

Original here

Google testing “AdSense for Games” in bid to shake up in-game advertising

Dean Takahashi

Google is the sleeping giant when it comes to advertising in video games. While the company dominates search advertising, it has yet to make a big splash in video games. That could change soon, as the company has been quietly testing its “AdSense for Games” product for months.

Sources close to the matter said that the company has developed an in-game advertising technology that allows it to insert video ads into games. In demos of the technology, a game character can introduce a video ad, saying something like, “And now, a word from our sponsor,” before showing a short video at the end of a sequence in a game. Since testing has been going on for some time, Google could launch the technology fairly quickly, if it so chooses.

But it’s not clear why Google hasn’t already launched its in-game advertising business, given that the seeds of AdSense for Games were planted in early 2007. Google did not respond to a request for comment this morning. I’ll update if that changes.

“I don’t know what’s taking them so long,” said one source close to the matter. “They could move into this market very quickly, given what they have shown off.”

If the company enters the market, it should stir up the competition the way it has in other ad markets. Companies such as Double Fusion, IGA Worldwide, Microsoft’s Massive, MochiMedia and NeoEdge Networks have been carving out niches with in-game or wrap-around ads for some time.

All of the companies know the potential of the market. Advertisers are turning to in-game ads because it’s one of the only ways to reach young male gamers who have stopped watching TV. The Yankee Group predicts the market will be worth $971.3 million by 2011. Google’s top executives know that search advertising may not last forever, and in-game advertising could become a compelling technology over time as both games and in-game ad technology become more and more engaging. Google would cover its bases by making a small side bet on in-game ads.

Google’s technology can be applied to console games, disk-based PC games, web-based PC games and cell phone games. But those who are kicking the tires on the technology (outside the company) have not seen all of those platforms in action.

One of its options is to keep testing its technology while it waits for the market to get bigger. The company drew attention to its game-ad intentions when it bought Adscape for $23 million in February 2007. Bernie Stolar, the former head of both Sega of America and Sony Computer Entertainment America, was Adscape's chairman. Working for Google, he gave a speech just about a year ago describing "AdSense for Games" at the 2007 Casual Connect conference in Seattle. In the talk, Stolar said Google had no plans to make games or otherwise enter the game portal business; Google just wanted to do ads.

A flurry of stories appeared in November last year saying that Google was launching its beta test with Bunchball. That involved only pre-roll advertising, not ads delivered by in-game characters. The Bunchball Facebook games rolled out with the Google ads, but not much else happened. That false start is a reason why some of the partners are wondering whether Google is really going to go forward or not.

In buying Adscape, Google was reacting to Microsoft’s own move into in-game advertising. In May 2006, Microsoft bought Massive, the pioneer of in-game ad networks that was founded in 2004. Since the acquisition, the market gathered steam. Alison Lange Engel, global marketing director for Massive, said that the company now has more than 200 advertisers in its network. Those companies can insert either fixed or live ads into games. The live ads are more suitable for short-term campaigns because the companies can change the ads on the fly, using Internet connections to pipe new content into video game consoles. More than 70 games now use Massive’s in-game ads.

The battle lines have been drawn. Yahoo, which draws 18 million gamers a month to its Yahoo Games portal in the U.S., recently signed up NeoEdge and Double Fusion as its in-game ad partners. Electronic Arts has a variety of partners. And Sony has signed up Double Fusion and IGA Worldwide. Sony is thought to be a prime potential customer since it is launching its Home virtual world for gamers in the fall on its PlayStation Network for the PlayStation 3. Among the console makers, only Nintendo has been quiet when it comes to in-game ads. At this rate, there may not be much left for Google. It better not wait too long.

The insider buzz is growing about Google’s plans, particularly since its big sales force could generate a lot of interest in the ad platform. A bunch of Google representatives attended the 2008 Casual Connect show in Seattle last week, but they didn’t answer questions about when Google would jump into the in-game ad market.

Google spilled part of its intentions by announcing its virtual world — or more appropriately virtual room. The company launched Lively by Google earlier this month. Lively by Google would be a natural vehicle for Google’s AdSense for Games product, which could insert ads into the rooms of users. In fact, others expect it to be a proving ground.

Original here

Google Moves to Reinvent Transportation

Katie Fehrenbacher


On a sunny afternoon back in June of 2007, members of the media, academia and the tech industry gathered to watch Google co-founders Larry Page and Sergey Brin drive a white Prius around the parking lot of the search giant’s Mountain View, Calif., headquarters.

It wasn’t just a slow news day — the Prius had been converted into a plug-in vehicle, and Page and Brin had gotten behind the wheel in order to announce the company’s RechargeIT initiative, which included, among other things, $10 million to back plug-in vehicle technology.

It’s been a year since that awkward scene, and the motivation behind Google’s foray into transportation has only recently started to become clear. Google just named the first two recipients of funds from its plug-in vehicle program: lithium-ion battery maker ActaCell and electric vehicle maker Aptera Motors.

While Google commonly makes small investments in web and mobile startups and has started backing renewable energy companies as well, this was the first time it has funded companies focused on electric vehicles. With the move, Google has gone from advocating plug-in vehicle technology to investing in it, much the way a venture capitalist would.

The investments themselves shed some light onto the value that Google sees in electric vehicles. As a massive power user, the company has pledged hundreds of millions of dollars toward helping remake the energy industry, investing in solar and wind technology and in making its data centers more energy efficient. Since plug-in vehicles can help utilities stabilize energy delivery, lithium-ion battery technology like ActaCell’s and plug-in vehicles like Aptera’s Typ-1 are essentially an extension of Google’s energy investments as they could provide important energy storage capability to the power grid.

Google is also betting that the future of transportation will be networked and controlled via software, just like our laptops and gadgets. And not just connected via the Internet, but through the network of the power grid, too. Rolf Schreiber, an engineer with RechargeIT, says that beyond these initial investments, Google is also looking to back companies that build software that can control the rate at which plug-in vehicles charge.

And much the way Google has built a business of providing information via the web, the company could add its broadband expertise to the future of connected transportation. Schreiber, for example, recently completed a test of Google’s own, in-house plug-in vehicles using wireless communications and GPS to determine that the cars are getting more than 93 miles per gallon.

Let’s not kid ourselves: Google’s investment in transportation so far is paltry compared to what it’s spending on other industries. But while we’re not predicting that Google will make a G-car any time soon, its efforts to push plug-in vehicles as a way to build out a smarter power grid, and to bring some of the intelligence of information technology to transportation, will be worth watching.

Original here

Lawyer Exposes RIAA’s Legal Bullying

Written by Ben Jones

For many people, justice is something that is bought and sold in the US, especially where filesharing is concerned. Few lawyers are willing to represent defendants, and fewer still understand the technologies involved in these cases. Ray Beckerman is one of the few who seem to, and he now has an article in the current edition of The Judges Journal about the RIAA lawsuits.

Beckerman’s article, entitled “Large Recording Companies vs. The Defenseless” (pdf) seeks to explain the processes of the RIAA in simpler terms, and makes suggestions for those working in courts to ensure that justice is always kept in mind.

Repeatedly hammered home throughout is that the RIAA has very little by way of a case. Starting with the weakness of the 'expert witness' evidence, Beckerman notes that the findings of the three people at MediaSentry, which form the basis of all the lawsuits, don't meet basic standards. He further discusses the repeated rulings that lawsuits shouldn't be joined together as 'Doe 1 - whatever' (the first of them almost 4 years ago). The highly questionable tactic of filing a Doe suit, using it to get information, and then filing a named suit is also mentioned.

Suggestions put forward by Mr Beckerman include watching for wrongly joined cases (and dealing with such cases as a contempt of court), ‘don’t be baffled by jargon’ (simply put as ‘if you don’t understand the case, then maybe the plaintiffs haven’t got one’) and “have all decisions published”.

For those of us keeping track of RIAA cases, the technological details are a little light. But then, Mr Beckerman is not professing to be an expert in p2p technology, nor technology in general. He is, after all, a lawyer not a techie, and could probably explain the tech side as well as I could the rules of disclosure. Instead, he presents a working knowledge that is simple to understand even for the most luddite of jurists. Indeed, as that is the target audience, there is more than a sprinkling of legal terms, but again, none too complex as to defy understanding.

To some, it might appear that the article, a substantial, but not overly weighty 8 pages (with another 2 for footnotes) is nothing more than a rehashing of material previously posted to his blog. However, the coherence and progression of the document means that it is of great use to someone who has just been targeted for litigation, or for their counsel. In this matter, it succeeds, perhaps unintentionally.

Perhaps most significantly, though, is that hundreds - if not thousands – of judges up and down the United States will be reading this, and will keep certain things in mind should a case come to trial in their court. It is entirely likely that many of the judges involved in cases already, were unaware of some of the cases and judgments, and that others have already ruled against practices that may be used in a case they’re involved in. Everything from admonitions for joining cases, to reasons why ex-parte motions should be examined closely.

This article may have done more to dampen the legal juggernaut that the RIAA has unleashed on the American people, than anything short of a Supreme Court victory, or federal legislation. It is another fine example of an entertainment industry having their claims published, and found to be contradictory.

Original here

Web Scout: Spinning through online entertainment and connected culture.

Revision3's web TV runs on star power


Patrick Norton and Veronica Belmont, hosts of "Tekzilla." (Photo credit: Dave Getzschman / For the Los Angeles Times.)

I've a feeling we’re not in Hollywood anymore.

But you might like it here too, Toto. This is Dogpatch, the bayside sliver of east San Francisco that’s home to the Internet TV start-up Revision3. Through the doors of this old brick warehouse and up the stairs, there’s a roomful of people who make a point of ignoring the old rules of the television business. Starting with the TV part. Revision3 is home to 19 original shows, 10 of which are filmed weekly in its on-site studio. But you won’t find any of them by flipping channels.

You see, here in Dogpatch, they’re setting television free — releasing the concept from its poison prison of glass and metal, so it can return to its native meaning: watching from anywhere.

And so far, people are. Revision3 was started in 2005 by Kevin Rose and Jay Adelson, the guys behind Digg.com, the popular site where users vote on the best news stories of the day. Rose co-hosts the show “Diggnation,” a weekly rundown of the site’s top stories, which Revision3 beams out to about 200,000 viewers per 40-minute episode. He has become a model for the kind of smart celebrity the technology scene loves — people who are entertaining while the camera’s rolling, and enterprising when it isn’t.

“What’s working are these host-driven shows,” said Revision3 Chief Executive Jim Louderback. “The ones where you’ve got an engaging host with a proven ability to aggregate social networks around them online, and who are great at talking about their passions.”

Revision3 owes that approach to another pioneering enterprise of which it’s a genetic descendant. The now-defunct cable network TechTV built a loyal audience earlier in the decade and minted many of the technology world’s best-known stars. A half-dozen TechTV alumni, including Rose and Louderback, currently fill Revision3’s roster.

But even with the overlap and the similar programming philosophy, it’s a lot different this time, said Patrick Norton, who got his television start at TechTV and now co-hosts Revision3’s popular techno-variety show “Tekzilla.”

“It’s incredibly expensive to launch a new cable channel,” Norton said. “Even if you do spend an enormous amount of money these days, you’re probably going to end up in the nosebleed sections of digital cable.”

“Our studio cost nothing by comparison,” Norton said of Revision3’s state-of-the-art, high-definition setup. “And by being online, we can target anyone with a broadband connection, which gives us a huge potential audience all across the United States without having to sign a single distribution deal.”

But Revision3’s biggest asset is its stable of Web personalities who — even if they’re not familiar to the general public — are ubiquitous in tech circles. Louderback points to a website called Twitterholic, which tracks the 100 most popular users on the messaging service Twitter.

[Twitter-torial (thanks Dave): The site allows users to accrue “followers.” Every time a user sends a short message, all of his or her followers immediately receive it. As the site has grown — there are reportedly over 200,000 users now — the higher-profile users began a kind of arms race to see who could recruit the largest possible Twitter-follower army. The result is that Twitterholic functions as a rough proxy for overall Internet fame. *Web readers: this was for the print audience -- I know you already know.]

Alex Albrecht and Kevin Rose host "Diggnation." (Photo credit: Randi Lynn Beach / For the Los Angeles Times)

Revision3 hosts occupy about a dozen of the top 100 Twitter spots. Rose reigns with 53,000 followers, edging out runner-up Barack Obama. Co-host Albrecht has 34,000 followers, while Veronica Belmont of “Tekzilla” is also in the top 10 with nearly 30,000.

In a culture where buzz, and the ability to generate it, is becoming one of the most valuable commodities, Revision3’s Twitter titans wield substantial influence. With a few keystrokes, they can put a new website on the map — or they can take one off.

Last week, Belmont pointed her followers to a video site she found interesting. “I took them down,” she said. All that influence had sent the site crashing to its doom. “Twice.”

Revision3 makes its shows available on a number of partner sites around the Web. This mass distribution tactic has become the industry’s preferred strategy — more platforms, more eyeballs. But in the fragmented online-video landscape, star power may be among the promotional forces that shine brightest and most constant.

“There’s still not one place to go to find the best new shows,” said Dina Kaplan, a co-founder of Blip.tv, which hosts a variety of online programming including shows from Revision3. “So what ends up happening is that your content is about 25% to 40% of the cause of your success — and the rest of it is all about how you market, market, market.”

All that marketing doesn’t stop at Revision3. The Web stars can aim their publicity fire hoses at whatever they feel like. Gary Vaynerchuk, who is host of the hit show “Wine Library TV,” which plays in a shortened format on Revision3, uses his uncorked personality to build a personal brand he sees as a never-ending work in progress.

“I want to create a world where you’re not branded one way. I want to be a social media expert, and a marketing guru, and a big-time wine guy, and a Jets fan, and a family guy, and a sensitive guy, and a wrestling maniac,” Vaynerchuk enthused.

Original here

Top 10 Most Pirated TV Shows on BitTorrent (wk30)

Written by Ernesto

TV shows are by far the most wanted files via BitTorrent, and according to some, it’s fast becoming the modern day TiVo. But what are all those people downloading?

The data is collected by TorrentFreak from a representative sample of BitTorrent sites and is for informational and educational reference only.

At the end of the year we will publish a list of most downloaded TV-shows for the entire year, like we did last December.

TV-shows such as “Lost” and “Heroes” can get up to 10 million downloads per episode, in only a week.

Top Downloads June 20 - July 27


Ranking (last week) TV-show
1 (1) Top Gear
2 (3) Stargate Atlantis
3 (2) Weeds
4 (4) Generation Kill
5 (5) The Daily Show
6 (new) Penn and Teller Bullshit
7 (8) In Plain Sight
8 (10) Burn Notice
9 (6) Psych
10 (9) The Colbert Report

Original here

Mozilla releases first Firefox 3.1 alpha

By Ryan Paul

Mozilla has announced that the first Firefox 3.1 alpha is now available for download. This release, which is codenamed Shiretoko, includes Gecko 1.9.1 and adds a handful of new features.

Firefox 3.1 alpha 1 includes some user interface enhancements like a new graphical tab selector that shows page previews and a smarter filtering system for the Awesome Bar. This release also offers some nice improvements for web developers, such as new CSS properties and selectors. The HTML canvas element also got a boost in alpha 1 with the introduction of support for the canvas text API.

Binary builds are available for all three major platforms, but users should proceed with caution, since it's an early alpha release and it's primarily intended for testers and developers. For additional details, see the official release notes.

Original here

RIAA Critic Beckerman Scores Judiciary's Ear

By David Kravets

New York attorney Ray Beckerman, an outspoken critic of the Recording Industry Association of America, has acquired the ear of thousands of federal judges nationwide.

The American Bar Association's "Judge's Journal" summer issue is publishing his lengthy paper, Large Recording Companies v. The Defenseless (.pdf). Beckerman, publisher of the blog Recording Industry vs The People, does an excellent job of explaining the finer legal points of the RIAA's litigation machine.

Beckerman, who defends people sued by the RIAA, chronicles RIAA litigation start to finish -- from the investigative stage, to how the RIAA acquires the name of the ISP account holder to the payment of a few thousand dollars that usually settles a lawsuit out of court.

The bulk of the article deals in highly legal matters concerning venue, jurisdiction, dismissal, discovery, confidentiality, legal fees and default judgments. It's a must-read for anybody who has ever been on a file-sharing network like Kazaa, and for anybody curious about an area of litigation that has ensnared more than 20,000 defendants.

Original here

Intel to lose its lead in chip manufacturing tech in 2009, sort of

By Theo Valich

Chicago (IL) – Intel is proud of its dominant position in semiconductor production technology, and especially of the fact that, for as long as we can remember, it has led the industry in terms of the smallest chip structures. Its 45 nm technology is still at least one year ahead of AMD's. But it appears that Intel will have to give up that lead next year, at least for a few months, when GPUs make their transition to 40 nm.

This was bound to happen sooner or later. After speaking with several of our sources at ATI (AMD GPG) and Nvidia, we were told that a 40 nm GPU manufacturing process is on the way for the first half of 2009. In fact, both companies are working on parts that should capture the spotlight at CeBIT 2009 in Hannover. Both low-end and mainstream products are ready to be manufactured at 40 nm and should be on display at the tradeshow.

It appears that TSMC’s previously announced $10 billion investment in manufacturing technology is yielding results already, since the company is now able to develop 45 nm and 40 nm processes at the same time. The next step for TSMC is either 32 nm or 30 nm - or below. Samsung is investing heavily in 30 nm, but that is for DRAM only.

Intel has 32 nm CPUs still in development at its research, development and production facilities in Hillsboro, Oregon. 45 nm Nehalem CPUs will be the focus at the upcoming fall IDF, but it is generally expected that prototype 32 nm processors will be first shown at the company’s spring developer forum in H1 2009. Production of the chips should begin early in H2 2009, with volume shipments beginning in late Q3 or early Q4. First chips should surface in commercial products in late 2009, while 32 nm will be a 2010 topic for the mainstream buyer.

However, by that time, millions of 40 nm GPUs will have shipped already and you can bet the farm on the fact that both AMD and Nvidia will be pitching that story to the media in the same way Intel did in previous years.

According to TSMC’s internal roadmap, the company will offer three different production nodes. CLN40G is the general purpose process and will be used by the GPU manufacturers. CLN40LP is the low-power process and will be used for the production of notebook derivatives of GPUs manufactured in the general purpose process. The third 40nm node is CLN40LPG, which is targeted at manufacturing chips for handheld devices. So expect Nvidia’s Tegra chips to shrink all the way down to 40 nm.

TSMC is likely to launch a 32 nm process in a similar timeframe as Intel. CLN32G is scheduled for a roll-out in Q4 2009, with a low-power version following about one quarter later. If TSMC is able to stick to its roadmap, we may see 32 nm GPUs in 2009. Intel is expected to launch its Larrabee cGPU on 45 nm in 2010 and move it to 32 nm as soon as possible.

As things shape up right now, GPUs will overtake CPUs for the first time in the history of IT in terms of production nodes. Of course, a lead in chip production technology is defined by many more factors than feature size alone, but the fact that GPU production nodes will surpass those of Intel's CPUs is a significant event.

The conversion of the complete GPU line-up from 55 nm and 65 nm to 40 nm, ahead of the CPU cycle, showcases how much more important the GPU has become, and it will attract more attention to the technology than before. GPUs were introduced by 3Dfx (big "D") and Nvidia trailing the CPU manufacturing processes of Intel and AMD by two to three generations. Now GPUs are about to leap ahead.

Kudos to TSMC for developing the 45 nm node and the 40 nm half-node die shrink at the same time.

Original here

Overclock world record: Q6600 2.4GHz run at 5.1GHz

Written by Devin Coldewey

What a ridiculous project! But how awesome would it be to be one of the hardcore system-building nerds they asked to do this? A couple of months ago, a French Tom’s Hardware-affiliated superteam got together to overclock an Intel Core 2 Quad Q6600 2.4GHz as far as it would go; they just put up the pictures and everything yesterday. They used liquid nitrogen cooling and a pretty serious-looking compressor to suck the heat right out of the thing, and ended up more than doubling the clock speed. For reference, it’s generally safe to overclock your stock hardware by about five percent, and even the real pros get maybe an extra thirty percent, at which point you’re risking a lot of errors, artifacts, and so on. If you’ve ever wondered what liquid nitrogen cooling looks like in motion, check out the video (in French).
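
If you want to sanity-check those numbers, here's a minimal Python sketch (mine, not the overclocking team's) that turns the clock speeds into the percentages being compared:

# Back-of-the-envelope overclock math using the figures quoted above.
def overclock_pct(stock_ghz, achieved_ghz):
    # Percentage increase over the stock clock speed.
    return (achieved_ghz - stock_ghz) / stock_ghz * 100

print(f"Q6600 record run: {overclock_pct(2.4, 5.1):.1f}% over stock")       # ~112.5%
print(f"Typical 'safe' overclock (~5%): {2.4 * 1.05:.2f} GHz")              # 2.52 GHz
print(f"Aggressive enthusiast overclock (~30%): {2.4 * 1.30:.2f} GHz")      # 3.12 GHz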

Original here

Low-end grudge match: Nano vs. Atom

By Joel Hruska

Introduction

The tech world has kept an interested eye on VIA's Nano since before the turn of the year, but the level of interest in the new processor has grown significantly in recent months, thanks in part to Intel's focus on the ultra-low-power/low-cost market. Over the past six months, VIA has found itself pushed from the perpetual twilight of an also-also-ran into a position of genuine competitive interest. The company is finally ready to sample Nano for performance testing, and I've had the opportunity to put the chip through its paces.

Atom vs. Nano: not a perfect match

In order to test VIA's new chip, I've benchmarked it against Intel's Atom. There's a lot of curiosity out there about how the two low-power processors stack up against each other, and this article attempts to satisfy that curiosity, but it's important to note that this is not an apples-to-apples comparison. According to Intel executive VP Sean Maloney, Atom is "built for low power and designed specifically for a new wave of Mobile Internet Devices and simple, low-cost PC's." As for Nano, VIA's whitepaper (PDF) states: "It [Nano] will initially power a range of ‘slim ‘n’ light’ notebooks." and "will also appear in ultra mobile mini-note devices and small form factor, green desktop systems for home and office use." In this case, we're benchmarking a Nano reference system at the upper end of VIA's product range. The L2100 CPU at the heart of the system is a single-core 1.8GHz processor, with a TDP of 25W.

Chip             Design type    Process   Frequency (MHz)   SMT   FSB (MHz)   L2 cache   TDP (W)
Intel Atom 230   In-order       45 nm     1600              Yes   533         512K       4
VIA Nano L2100   Out-of-order   65 nm     1800              No    800         1024K      25

While there is a certain degree of overlap between the two processors, it's limited to the relative upper end of Atom's target market and the relative lower end of Nano's. This might not seem so evident at the moment, given the limited number of Atom configurations Intel is currently selling on the DIY market (one), but the two products are focused in two different directions. There are other factors that cloud the comparison, including an early reference platform from VIA and a horribly mismatched processor+chipset combination from Intel, but I've done what I can to tease those differences out and present the two products from a variety of angles.

Performance summary

There are a number of different facets to consider when evaluating Atom vs. Nano, and that's a good thing for Intel. Were this simply a question of which CPU was faster, Nano would win, and by no small margin. Our benchmark results demonstrate that VIA's wunderkind is more than capable of competing in its target market; Nano beat the tar out of Atom in the majority of the tests we ran. The chip might have extended its lead further on a different platform; several tests indicated that the integrated S3 GPU was limiting total performance. Results are directly accessible from the links below. Anyone interested in the questionable effects of benchmark "optimization" should find the PCMark 2005 results of particular interest, while the DVD/HD content playback tests are one spot where the Atom + 945GC chipset pulls well ahead of Nano's integrated GPU.

The entire point of these platforms, however, is that they don't focus on raw performance to the exclusion of all else. Power efficiency is at least as important as raw speed these days, but how VIA and Intel rank in this area depends entirely on how we choose to measure performance-per-watt (ppw). If we only consider processor TDP, Atom wins by a landslide. It may lose most benchmarks in absolute terms, but it always remains competitive enough to easily win any power efficiency comparison. So, VIA wins absolute performance but Intel wins power efficiency, right?

Wrong. Superman has Kryptonite, Rogue can't touch people, and Atom, for all its super-low TDP, has been effectively hamstrung by the 945GC chipset. With a TDP of 22W, Intel's chipset draws nearly six times more power than the processor itself, a fact that's driven home when you realize that the tall heatsink + fan combination on the retail D945GCLF board is actually cooling the northbridge, rather than the CPU.


The power-hungry nature of its platform destroys any chance Atom currently has of establishing itself as a truly low-power alternative. Total system power draw is still quite low by desktop standards, but the D945GCLF's maximum load power of 59W is only about nine percent lower than that of VIA's reference motherboard. That narrow discrepancy isn't enough to offset VIA's sizeable performance advantage in many benchmarks, and the Nano ends up with a higher overall, platform-level performance-per-watt ratio than Atom in many of our benchmark tests.
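
To make that accounting difference concrete, here's a minimal Python sketch. The wattages are the figures cited in this review (Atom 230 TDP of 4W, Nano L2100 TDP of 25W, 59W max load for the D945GCLF, roughly 65W for VIA's reference board); the benchmark scores are hypothetical placeholders, since only relative results are discussed here, so treat the output as an illustration of the bookkeeping rather than measured data.

# Two ways to slice performance-per-watt: CPU TDP only vs. whole-platform power.
def perf_per_watt(score, watts):
    # Higher is better: benchmark score delivered per watt consumed.
    return score / watts

systems = {
    "Atom 230":   {"score": 100, "cpu_tdp": 4,  "platform_watts": 59},   # score is hypothetical
    "Nano L2100": {"score": 140, "cpu_tdp": 25, "platform_watts": 65},   # score is hypothetical
}

for name, s in systems.items():
    print(f"{name}: ppw (CPU TDP only) = {perf_per_watt(s['score'], s['cpu_tdp']):.1f}, "
          f"ppw (whole platform) = {perf_per_watt(s['score'], s['platform_watts']):.2f}")

Run with scores reflecting Nano's benchmark lead, the CPU-only ratio favors Atom by a wide margin while the platform-level ratio tips toward Nano, which is exactly the flip described above.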

The bottom line

Nano is an excellent step forward for VIA. It's by far the most compelling CPU the company has ever launched, and could potentially carve a spot for itself in its target market segments. VIA's mini-ITX reference platform is similarly impressive; the board's PCIe x16 slot opens the door for a variety of potential applications that the Atom reference platform can only dream about. Intel's D945GCLF may run just $75 for a 1.6GHz HyperThreaded Atom processor, but it's painfully obvious that the board was designed with an eye towards guarding Celeron sales, and the lack of expansion capabilities hurts Atom's overall attractiveness.

VIA's CN896 chipset may be a better overall fit for Nano, but the integrated Chrome9 HC graphics solution leaves much to be desired. While it proves marginally faster than Intel's GMA950 in some tests, it slumps badly when asked to decode much of anything. The built-in PCIe x16 slot goes a long way toward addressing this issue, but the availability of an expansion slot, in and of itself, does not compensate for lousy integrated video, even on a netbook-class solution.

There are too many long-term questions across too many areas to deem this a complete slam-dunk for VIA. Reviewer samples and reference platforms are great for publicity, but VIA has yet to demonstrate that it can ship Nano boards and chipsets in volume. The company has promised that mini-ITX Nano boards will be available in the retail channel by the end of the third quarter, so we should know in a few months if the company can make good on its promise of availability. The fact that Nano is a drop-in replacement for C7 could make the chip attractive to manufacturers with C7-based devices, but again, neither VIA nor its partners have announced plans in this area. A few words from HP confirming Nano as a basis for an upcoming refresh of the 2133 would do wonders for both Nano sales and VIA's reputation.

The largest potential barrier to Nano's long-term success, of course, is Intel. Santa Clara has made no secret of the fact that it believes MIDs, netbooks, and nettops are the future of the industry, and that it intends to offer an Atom that could fit inside any of these devices. Right now that might sound laughable, but Intel isn't kidding. The company's current retail Atom 230-based board might not be what you'd call compelling, but that doesn't mean that future products won't be. VIA's long-term success will be directly proportional to the number of design wins the company can gather for itself in an area Intel has announced it intends to dominate long term.

Testbed configuration

The following components were identical between both testbeds.

  • Acer AL2216W 22" 1680x1050 LCD
  • Seagate Barracuda 7200.9 250GB HDD
  • 2GB OCZ DDR2-1066 @ DDR533
  • Windows XP w/SP3 installed
  • Enermax 250W PSU

Our VIA reference board consisted of a VIA Nano combined with the CN896 northbridge. More information on the northbridge can be found over at the company's website, but the board we tested is a full implementation, with two DIMM slots and a single PCIe x16 slot. As for Intel's Atom, the company currently offers just one SKU on the retail market. Details on the Atom 230-based D945GCLF are available here.

It doesn't take Nancy Drew's insight to see that these are two very different products. The Nano board is full-featured for a mini-ITX product, with two DIMM slots and a PCIe x16 slot, while the Atom-based D945GCLF can scarcely be expanded at all. Intel makes a token gesture toward expansion with a single PCI slot, but the highly integrated nature of the board, combined with the extremely limited performance of PCI, leaves me wondering why the company bothered. As I'll explore, Atom's limited expansion does impact the board's attractiveness, even when judged solely by the standard of its target low-end market. Both systems were tested with 2GB of RAM, and both are limited to a single 64-bit memory channel.

As I mentioned earlier, the large heatsink+fan on top of the Atom board is cooling the chipset, not the CPU. The fan itself isn't audible unless you literally put your ear up to it. Neither the Atom nor the Nano testbed produced any noise above a simple operating hum unless a CD was spinning up in one of the drives. The largish fan on VIA's reference board is an integrated solution that covers both the northbridge and the CPU, with a separate heatsink for the southbridge.

I debated between Windows XP and Windows Vista, but eventually settled on XP as the better fit for this type of system. The LCDs I used both support a 1680x1050 resolution, but Worldbench 5 runs all of its tests at 1024x768 by default. This seemed a reasonable resolution for these systems anyway, so I kept it for all further performance testing. The one exception was HD content playback testing, where I pushed both monitors up to 1680x1050 when measuring DVD, HD DVD, and MPEG-2 performance.

As for the benchmarks themselves, I chose a suite of older tests that could measure system performance without requiring Vista or requiring more than 2GB of total RAM. Tests like Cinebench and PCMark aren't strictly representative of likely workloads, at least not in the netbook market, but they provide additional subsystem performance information. Ultimately, I think we'll see a new series of MID and netbook-specific tests emerge as this market matures.

One final note before we turn to benchmarks. Although I haven't broken the results out here, I ran a number of tests with HyperThreading disabled, in order to judge how much of a performance boost Atom gained from the feature. Back in the Netburst days, Intel targeted a ~20 percent performance increase with HT enabled over HT disabled in an appropriately threaded test, but that was years ago, and a very different architecture. Atom is an in-order architecture, and as such, is potentially vulnerable to stalling out. HT could potentially help the processor keep its pipeline full by seeding the execution units with additional uops.

Based on what I saw, HyperThreading plays a major role in keeping Atom data crunching. Exact results varied from application to application, but enabling HT often boosted performance by around 50 percent. Almost all of the Atom SKUs Intel has announced will offer HyperThreading; it seems to be an essential part of the secret sauce that keeps Atom (somewhat) competitive.

The other advantage of HT is the way it smooths system responsiveness. The Nano might be faster than the Atom, but the chip suffers from the same small stutters and tiny pauses that impact all single-core processors when the OS is busy doing other things. Thanks to HyperThreading, Atom feels smoother, and the system remains more responsive while engaged in other tasks. This effect is something of an optical illusion, since a system that finishes a task more quickly can move on more rapidly to do something else, but it does give Intel's new core a perceived advantage.

Worldbench 5

Before there was Worldbench 6.0 Beta 2 for Vista, there was Worldbench 5 for XP, and it's this older version of the test we're examining today. I've highlighted certain tests below; the Roxio Videowave test refused to run on either system, so there's no composite WB score listed for either platform.

VIA sweeps the first series of tests, save for Nero Express 6, where Atom pulls ahead. This is almost certainly a platform issue; Nero tends to be sensitive to southbridge performance, and Intel has often held an edge over its competitors in these types of benchmarks. Note that the gap between Atom and Nano can vary widely; this will be a recurring factor in other tests.

Another set of tests brings another set of wins for Nano, though Atom does come within 14 percent of VIA's new core in the OfficeXP benchmark. Of all the applications we've examined thus far, this and (possibly) Nero are the two most likely to be encountered by actual Atom or Nano users, and the two cores compare relatively closely here, though Atom does slip back quite a bit in WME9 encoding, despite that application's support for SMT.

Our final set of Worldbench results is another clean sweep for Nano, though we do see the impact of Atom's HT support in several tests. VIA wins both the WME9 and Mozilla 1.4 tests, as well as the multi-tasking test that combines the two, but it takes the chip 1.77x longer to complete the combined task than it did to finish the Mozilla 1.4 benchmark alone. Atom takes almost twice as long to finish its Mozilla test, but tossing a WME job on top of the browser only extends the test length by about 33 percent.
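
For anyone puzzling over how those multitasking penalties are derived, the arithmetic is simply combined run time divided by single-task run time. The sketch below uses made-up completion times chosen only to reproduce the ratios quoted above; the actual Worldbench timings are not reproduced in this write-up.

# Hypothetical completion times (in seconds), chosen only to mirror the ratios above.
def multitasking_overhead(single_task_s, combined_s):
    # How much longer the combined workload takes than the single task alone.
    return combined_s / single_task_s

nano_mozilla, nano_combined = 100, 177   # hypothetical: combined run takes 1.77x as long
atom_mozilla, atom_combined = 190, 253   # hypothetical: combined run takes ~1.33x as long

print(f"Nano: {multitasking_overhead(nano_mozilla, nano_combined):.2f}x its Mozilla-only time")
print(f"Atom: {multitasking_overhead(atom_mozilla, atom_combined):.2f}x its Mozilla-only time")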

Original here