Monday, February 16, 2009

Online thieves scam state of Utah out of $2.5 million

By Joel Hruska

States have been slashing funding allocations and contemplating tax increases as a means of balancing their budgets, which makes a recent revelation concerning the state of Utah's treasury all the more embarrassing. According to investigators, state officials recently uncovered evidence that some $2.5 million had been transferred from the state's coffers into various holding accounts. The scheme is thought to have originated in Africa, possibly in Nigeria, but is not the same sort of attack that's typically referenced when a person or article refers to a "Nigerian scam."

According to the Salt Lake Tribune, the chain of events leading to the theft was set in motion when one of the would-be thieves (or an associate) acquired a vendor number for the University of Utah's design and construction department. That information allowed the miscreants to forge documents, changing the bank account information for the account in question. Once the account was under new management, the criminals invoiced the state of Utah for various imaginary repairs and/or expenses with instructions to deposit the cash into the hacked account (a Bank of America account in Texas).

The state paid out $2.5 million before the bank finally started making inquiries as to why the account was seeing such a large amount of traffic. Of the $2.5 million transferred, the receiving bank was able to freeze about $1.8 million; the net loss at this point is around $700K.

There is no simple trail back to the perpetrators themselves; the thieves obscured their own records by providing false identification and addresses. The Texas account at Bank of America was reportedly opened by a man with a Minnesota license, and other individuals mentioned in the warrant are also from that state. The involvement and guilt, if any, of these individuals have yet to be determined, and it may be that the individuals in question were hooked through what would have appeared to be a standard "419 scheme" on their end.

Existing security measures at both the University of Utah and within the Utah state government obviously failed, but identifying exactly where the failure points were seems more important than pointing fingers at this stage of the investigation. Was the vendor number leak the result of spyware infestation, poor security, or a failure on the part of the university to recognize how vendor numbers could be used for illicit purposes? In this case, it's particularly important to determine whether the thieves essentially got lucky by infecting a random system with unusually valuable information, or if someone involved in the theft was operating on insider information.

Given that Utah is one of many states facing a massive budget shortfall, losing money to scammers doesn't look particularly competent, but once the particulars are known, the hole should be easy to plug. I suspect this was a one-off operation rather than the sign of a new wave of attacks, as it employed a degree of sophistication we just don't see in the bombardment of Nigerian scam e-mails that come pouring in, typed in capital letters.

That's cold comfort for the state employees who are about to come under a bright light, but the other 49 states probably don't have much to be concerned about.

Original here

Mozilla Bespin tries taking coding to the cloud

by Stephen Shankland

Mozilla Labs on Thursday unveiled a new open-source project called Bespin, a Web-based programming environment its developers hope will combine the speed and power of desktop-based development with the collaborative benefits of cloud computing.

Bespin 0.1 is only an "initial prototype framework that includes support for basic editing features," according to the site, but Mozilla has high hopes for the project. "We're particularly excited by the prospect of empowering Web developers to hack on the editor itself and make it their own," said Ben Galbraith and Dion Almaer in Mozilla's Bespin announcement.

Generally speaking, cloud computing moves tasks that once were on machines directly in front of a person to the Internet. Among the advantages for cloud-based applications are a more naturally shared environment and data that can be accessed from any networked machine. However, Web-based applications typically lack the responsiveness, polished user interfaces, and performance possible with local applications.

There are some intriguing possibilities here beyond the obvious ideas about a browser-based programming application. For example, what about integration with open-source software repositories? If it's flexible enough, Bespin could essentially act as a source code viewer that repositories such as SourceForge or Google Code could employ.

Mozilla set the following goals for Bespin:

• Ease of Use -- the editor experience should not be intimidating and should facilitate quickly getting straight into the code
• Real-time Collaboration -- sharing live coding sessions with colleagues should be easy and collaboratively coding with one or more partners should Just Work
• Integrated Command-Line -- tools like vi and Emacs have demonstrated the power of integrating command-lines into editors; Bespin needs one, too
• Extensible and Self-Hosted -- the interface and capabilities of Bespin should be highly extensible and easily accessible to users through Ubiquity-like commands or via the plug-in API
• Wicked Fast -- the editor is just a toy unless it stays smooth and responsive editing files of very large sizes
• Accessible from Anywhere -- the code editor should work from anywhere, and from any device, using any modern standards-compliant browser
A screenshot of Bespin in action.

(Credit: Mozilla)
Stephen Shankland covers Google, Yahoo, search, online advertising, portals, digital photography, and related subjects. He joined CNET News in 1998 and since then also has covered servers, supercomputing, open-source software, and science. E-mail Stephen.

Original here

What Programming Language Should I Learn?

Posted by robdiana in Programming

As I do my professional and personal work, I am always looking for the best tool for the job. In software development, there are several programming languages that can be used for a wide variety of purposes. I am often asked by people new to software development what the best language to learn is. They get confused when I ask them what they plan on doing. The reason is that people think there is going to be a best language for everything. However, everyone knows that there is no silver bullet. On the other hand, some languages are better suited to, or more widely used in, specific areas. So, with that idea in mind, I came up with a list.

Enterprise Software Development - Java is typically used in this space as people are moving many administrative applications to an intranet.

Windows Development - C# should be used for any Windows development; this includes anything interfacing with the Microsoft Office Suite. Don't tell me about POI for Java; I have used it, but the native libraries kick POI's ass.

Rapid web prototyping and anything WordPress - PHP is really good for rapidly prototyping how a web site should behave. It may even qualify as v1.0 of your site. It may not be a good long-term solution, and there are better options for large-scale development. It is also the main language for anything related to WordPress.

Web Prototype with a backbone - Python has quickly gained acceptance as the "next step" after PHP. Many current web applications use Python extensively. Adoption will continue as more services, like Google's App Engine, natively support Python.

General Web Development - (X)HTML, CSS, and JavaScript must be in your toolbox for any significant web development. If you try to remain standards-compliant (which you should), then you need to look at the XHTML standards.

Data Integration - XML and JSON are the main data interchange formats on the web and in corporate development. With XML, there are various syndication formats (likely the subject of another post) and other business format standards to review.
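
As a quick illustration (my own sketch, not from the original post), here is the same record expressed in both interchange formats using Python's standard library; the field names are made up for the example:

```python
import json
import xml.etree.ElementTree as ET

# One record expressed in both common interchange formats.
order = {"id": 42, "customer": "Acme", "total": 19.95}

# JSON maps directly onto native dicts and lists.
encoded = json.dumps(order)
decoded = json.loads(encoded)
assert decoded == order

# XML represents the same data as elements, attributes, and text.
root = ET.Element("order", id=str(order["id"]))
ET.SubElement(root, "customer").text = order["customer"]
ET.SubElement(root, "total").text = str(order["total"])
xml_text = ET.tostring(root, encoding="unicode")

parsed = ET.fromstring(xml_text)
assert parsed.find("customer").text == "Acme"
```

The JSON round-trip is one call each way, which is a big part of why it has caught on alongside XML for web APIs.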

Databases - SQL is critical to almost any application. If you learn standard SQL, you can translate it to almost any database product on the market, especially popular engines like Microsoft SQL Server, Oracle, DB2, and MySQL.
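
To make the portability point concrete, here is a minimal sketch using Python's built-in sqlite3 module; the table and data are invented for the example, but the statements themselves are plain standard SQL:

```python
import sqlite3

# Standard SQL like this runs nearly unchanged on SQL Server,
# Oracle, DB2, and MySQL; only the driver and connect call differ.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary REAL)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [("Ann", "eng", 90000), ("Bob", "eng", 80000), ("Cho", "hr", 70000)],
)

# Aggregation, grouping, and ordering are part of the standard core.
rows = conn.execute(
    "SELECT dept, COUNT(*), AVG(salary) FROM employees "
    "GROUP BY dept ORDER BY dept"
).fetchall()
# rows -> [('eng', 2, 85000.0), ('hr', 1, 70000.0)]
```

Learn the standard core first; each engine's proprietary extensions are easy to pick up afterward.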

Toolbox - Every programmer should be able to do more than just program in one language. In addition, there are many scripting tools that can be part of your toolbox and make you extra productive. Cygwin provides a Unix-like shell environment that you can install on Windows, and I cannot live without it. Unix scripting is very powerful when dealing with batch processing of files or even just interacting with the file system. Perl, the Pathetically Eclectic Rubbish Lister, is another language that can be used for web development, but it really shines when dealing with file and text processing.

I know I have ignored various tools and languages, but this is really just a starting point. In software development, it is always helpful to keep learning new things and new concepts. If you really want to stretch your mind, start working in Artificial Intelligence and programming in LISP, or do some logic programming in Prolog. If you feel really adventurous, take a look at Standard ML. I am not sure what it is really useful for, but it is a completely different language from most.

Original here

Microsoft Offers $250,000 Bounty For Worm Authors

Beset by malicious worms after failing to convince enough server administrators to take its out-of-band Security Bulletin, MS08-067, seriously, Microsoft (NSDQ: MSFT) is taking computer security to the streets: It has formed a cybersecurity posse to dismantle the Conficker/Downadup worm's infrastructure and has offered a $250,000 reward for information leading to the arrest and conviction of those responsible for the outbreak.

Microsoft warned last October that a vulnerability in its Server service could be exploited by a worm. Cybercriminals heard that warning and made the threat real, infecting as many as 9 million computers by mid-January. At that time, Qualys CTO Wolfgang Kandek estimated that between 25% and 30% of vulnerable systems remained unpatched.

And the problem continues more or less unabated today. Symantec said in the past five days it has seen an average of almost 500,000 infections per day with W32.Downadup.A and more than 1.7 million infections per day with W32.Downadup.B.

Jose Nazario, manager of security research for Arbor Networks, in a blog post on Thursday, called Conficker/Downadup a "savage Windows worm."

The total number of machines infected at any given time varies as a consequence of disinfection efforts. But rest assured that the number represents a very large botnet.

So it is that on Thursday, Microsoft announced a partnership with technology companies, academic organizations, and Internet infrastructure companies to fight the worm in the wild. Its partners in this worm hunt include ICANN, Neustar, VeriSign, CNNIC, Afilias, Public Internet Registry, Global Domains International, M1D Global, AOL, Symantec, F-Secure, ISC, researchers from Georgia Tech, Shadowserver Foundation, Arbor Networks, and Support Intelligence.

Together, the coalition is working to seize Internet domains associated with the worm.

"The best way to defeat potential botnets like Conficker/Downadup is by the security and domain name system communities working together," said Greg Rattray, ICANN's chief Internet security adviser, in a statement. "ICANN represents a community that's all about coordinating those kinds of efforts to keep the Internet globally secure and stable."

In a phone interview, Kevin Haley, director of security response at Symantec (NSDQ: SYMC), said that there had been a lot of independent efforts to deal with the worm. The time was right, he said, to tackle it as a community.

According to Symantec, researchers have reverse-engineered the algorithm used to generate a daily list of 250 domains that the worm depends on to download updates. Armed with that knowledge, the coalition is taking control of the domains registered through coalition partners and using them to log and track infected systems. The group also is investigating domains overseen by registrars that aren't part of the coalition, though it's not clear how much leverage can be applied in such cases.
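
Conficker's actual algorithm is not reproduced here, but the general shape of such a domain generation algorithm (DGA) can be sketched. The hashing scheme and TLD list below are hypothetical; the key property is that the list is derived deterministically from the date, so anyone who reverse-engineers the code can precompute and pre-register the same domains the worm will try:

```python
import hashlib
from datetime import date

# Hypothetical date-seeded domain generation algorithm (DGA) --
# NOT Conficker's real algorithm, just an illustration of the idea.
def daily_domains(day: date, count: int = 250) -> list:
    tlds = [".com", ".net", ".org", ".info", ".biz"]
    domains = []
    for i in range(count):
        # Seed each candidate on the date plus an index, so every
        # infected host computes the identical list for a given day.
        seed = f"{day.isoformat()}-{i}".encode()
        digest = hashlib.md5(seed).hexdigest()
        # Map hex digits to letters to form a plausible label.
        label = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:10])
        domains.append(label + tlds[i % len(tlds)])
    return domains

# Defenders running the same code for the same date derive the same
# 250 rendezvous domains and can register or sinkhole them first.
today = daily_domains(date(2009, 2, 16))
assert len(today) == 250
assert today == daily_domains(date(2009, 2, 16))  # deterministic
```

This determinism is exactly what lets the coalition seize or sinkhole the domains before the worm's controllers can use them.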

The worm won't be entirely stopped by such tactics; it also includes a peer-to-peer update mechanism. But it's a start.

Perhaps in recognition of the difficulty of getting help from registrars outside the coalition, particularly in countries with a tradition of tolerance for cybercrime, Microsoft said that residents of any country are eligible for its $250,000 reward. In many parts of the world, that kind of money will buy just about anything.

The last time the security community acted in unison like this was during the spring and summer of 2008, when several dozen companies and organizations came together to deal with the DNS vulnerability identified by security researcher Dan Kaminsky. But that was a bug fix rather than a worm-hunting posse.

Haley doesn't expect this sort of community policing of the Internet to happen more frequently, nor would he rule out further actions of this sort. "The groups that stepped in filled a void," he said. "As long as this is effective, we'll continue to look for opportunities."

Original here

10 Ways Microsoft's Retail Stores Will Differ From Apple Stores

Brennon Slattery

Artwork: Chip Taylor
Microsoft announced plans to open retail stores, hoping to boost visibility of many of its products and its brand. The move seems to be an effort to mimic the success that Apple has had with its retail stores. The news is just too tempting not to have some fun with. So here are some yet-to-be-officially-revealed details about the Microsoft stores.

1) Instead of Apple's sheer walls of glass, Microsoft's stores will have brushed steel walls dotted with holes -- reminiscent of Windows security.

2) The store will have six different entrances: Starter, Basic, Premium, Professional, Enterprise, and Ultimate. While all six doors will lead into the same store, the Ultimate door requires a fee of $100 for no apparent reason.

3) Instead of a "Genius Bar" (as Apple provides) Microsoft will offer an Excuse Bar. It will be staffed by Microsofties trained in the art of evading questions, directing you to complicated and obscure fixes, and explaining it's a problem with the hardware -- not a software bug.

4) The Windows Genuine Advantage team will run storefront security, assuming everybody is a thief until they can prove otherwise.

5) Store hours are undetermined. At any given time the store mysteriously shuts down instantaneously for no apparent reason. (No word yet on what happens to customers inside).

6) Stores will be named Microsoft Live Retail Store with PC Services for Digital Lifestyle Enthusiasts.

7) Fashioned after Microsoft's User Account Control (UAC) in Vista, sales personnel will ask you whether you're positive you want to purchase something at least twice.

8) The Xbox 360 section of the store will be organized in a ring -- which will inexplicably go red occasionally.

9) DreamWorks will design a scary in-store theme park ride called "blue screen of death."

10) Store emergency exits will be unlocked at all times so people can get in anytime they want even if the front doors are locked.

Original here

Sources: Windows 7 moving toward 2009 release

by Ina Fried

Microsoft is moving forward with plans to launch Windows 7 this year, although the company still refuses to publicly commit to that goal.

PC industry sources in Asia and the U.S. tell CNET News that they have heard things are on track to launch by this year's holiday shopping season, which has been Microsoft's internal target for some time.

Microsoft is also putting the finishing touches on a program to offer Vista buyers a free or low-cost update to Windows 7. That program could kick off as early as July, sources said.

The company has run such "technology guarantee" programs in the past, typically allowing each PC maker to set the exact rules, but essentially letting buyers after a certain date get a free upgrade to the next version. (TechArp has a post with even more details on Microsoft's planned Windows 7 Upgrade Program.)

In an interview at the Consumer Electronics Show in January, Microsoft senior VP Bill Veghte cautioned that the release still could be pushed into 2010, depending on customer feedback.

"I'm telling them that it could go either way," Veghte said in that January interview. "We will ship it when the quality is right, and earlier is always better, but not at the cost of ecosystem support and not at the cost of quality."

That remains the company's official position, although the wheels are spinning toward a release in time for Windows 7 machines to be sold this holiday season, PC industry sources tell CNET News.

The response to test versions of Windows 7 stands in stark contrast to the issues that dogged Windows Vista, which was a much more fundamental update to the operating system. Although Windows 7 adds things like an improved taskbar and snappier performance, the operating system shares most of the same underpinnings as Windows Vista.

Microsoft has reiterated that it plans just a single beta for Windows 7. That beta launched in January and Microsoft this week stopped offering downloads of the test version. The company has said it will have a near-final "release candidate" version, but has not said when that will come.

Earlier this month, Microsoft confirmed that it plans to sell at least six distinct versions of Windows 7, although it also said it will focus its efforts around two editions--Windows 7 Home Premium and Windows 7 Professional. (By way of comparison, Microsoft announced the different versions of Vista in February 2006 before ultimately making the code available to business customers in November 2006).

For those that can read Chinese, here is ZDNet Taiwan's earlier report on the subject.

ZDNet Taiwan's Agnes Kuang contributed to this report.

During her years at CNET News, Ina Fried has changed beats several times, changed genders once, and covered both of the Pirates of Silicon Valley. These days, most of her attention is focused on Microsoft. E-mail Ina.

Original here

Microsoft: 10,000 patents, $9 billion annually in research...and we get Vista?

Preston Gralla

In the last two weeks, Microsoft has touted its technological prowess, noting that it spends $9 billion annually in research and development, and was just awarded its 10,000th patent. Kudos to Microsoft for that...but shouldn't we get more than Vista out of it?

Last week, Steve Ballmer spoke at the Democratic Caucus retreat, and he noted that despite the economic downturn, Microsoft will spend $9 billion annually for research and development. Here's what he told the caucus:

Despite the tough economy -- I might even say because of the tough economy -- our company will continue to invest more than US $9 billion a year in R&D, because we think it's that R&D spending that will cause us to remain strong.

You can read his entire speech here.

Then this week, Microsoft made much of the fact that the company was just awarded its 10,000th patent. The Seattle Tech Report notes that it was U.S. Patent No. 7,479,950, for "Manipulating association of data with a physical object." It's for Microsoft's Surface table-top computer. The report adds that Microsoft was issued the fourth most patents of any U.S. company.

Here's the patent.

In addition, ars technica reports that

the Institute of Electrical and Electronics Engineers (IEEE) ranked Microsoft's patent portfolio first across all industries in terms of its power and influence for the second year in a row.

What does this all mean? Clearly, Microsoft has long committed itself to research and innovation, and it plans to keep investing for as long as the company exists. For that, it should be held up as a model.

But all that raises the question: with $9 billion spent annually on R&D, and 10,000 patents, couldn't the company do better than Vista? I'm one of Vista's few fans, but even I recognize that it's got plenty of shortcomings.

I'm also a fan of Windows 7. But is it $9 billion worth of operating system? Ten thousand patents worth?

I know that I'm being simplistic here. Microsoft has a tremendously large technology portfolio, and its research and patents are spread out across it. In addition, the nature of research is that much of what is learned does not go directly into products, and never will. So I hope that Microsoft continues spending the way it has on research.

But I also suspect that there's a missing link somewhere between research and actual products. Microsoft needs a better pipeline between its researchers and those who are on the front lines designing, coding, and releasing products. I'd like to see the company continue its research spending. But I'd also like to see that research turn into better operating systems.

Original here

Door shutting on Windows 7 beta

by Ina Fried

The clock is ticking for those who want to play around with the Windows 7 beta.

Microsoft issued a late reminder on Monday that people had only until midnight Pacific time to start downloading the operating system.

Those who started their download in time have until 9 a.m. PST Thursday to finish the process, Microsoft has said. Those who went to the site on Tuesday were able to get a product key, but not the code itself.

"We're sorry, but downloads are no longer available," Microsoft says when users click through from the download page.

The betta fish, the unofficial mascot of the Windows 7 beta.

(Credit: CNET News)

Although the beta version will cease being available to the general public, members of Microsoft's MSDN and TechNet developer programs will continue to have access to the code.

Microsoft CEO Steve Ballmer announced the Windows 7 Beta at the Consumer Electronics Show in January. After a slight hiccup, Microsoft made the code available on January 10.

The software maker has said the next test version of Windows 7 will be a near-final "release candidate" version, although it has not said when to expect that to arrive. Officially Microsoft has said that the final version of Windows 7 will come by the end of January 2010, although the company has been aiming to get it done in time to be on PCs that ship for this year's holiday-shopping season.

Original here

Microsoft, Red Hat team up on patent-free interoperability

by Matt Asay

For years, Microsoft has insisted that open-source vendors acknowledge its patent portfolio as a precursor to interoperability discussions. On Monday, Microsoft shed that charade and announced an interoperability alliance with Red Hat for virtualization.

The deal includes several key components, all related to virtualization:

  • Red Hat will validate Windows Server guests to be supported on Red Hat Enterprise virtualization technologies.
  • Microsoft will validate Red Hat Enterprise Linux server guests to be supported on Windows Server Hyper-V and Microsoft Hyper-V Server.
  • Once each company completes testing, customers with valid support agreements will receive coordinated technical support for running Windows Server operating systems virtualized on Red Hat Enterprise virtualization, and for running Red Hat Enterprise Linux virtualized on Windows Server Hyper-V and Microsoft Hyper-V Server.

Pretty straightforward, as interoperability should be, and driven by customer demand for Microsoft technologies running alongside Red Hat's, according to Mike Neil, general manager of Virtualization Strategy at Microsoft. The top Linux vendor partnering with Microsoft is a major win for customers.

Crucially, Red Hat's interoperability deal with Microsoft does not include any patent covenants, the ingredient that torpedoed Novell with the open-source community:

The agreements establish coordinated technical support for Microsoft and Red Hat's mutual customers using server virtualization, and the activities included in these agreements do not require the sharing of IP. Therefore, the agreements do not include any patent or open source licensing rights, and additionally contain no financial clauses, other than industry-standard certification/validation testing fees.

Red Hat has long argued that patent discussions only cloud true interoperability, which is best managed through open source and open standards.

While Red Hat has flirted with such interoperability before by joining with Microsoft in the somewhat toothless Vendor Interop Alliance, this is its first direct interoperability initiative with Microsoft.

What most people don't know is that Red Hat had been discussing interoperability initiatives with Microsoft for a year before Novell and Microsoft tied the knot, but Microsoft ultimately derailed the talks by trying to introduce a covenant not to sue over patents, similar to what it ended up negotiating with Novell. Red Hat rejected this unnecessary inclusion, left the bargaining table, and Microsoft connected with Novell to use interoperability as an excuse to attack open source.

On Monday, Red Hat and Microsoft together demonstrated that interoperability can exist independent of back-room dealings over patents. Microsoft has increasingly been forced to open its stance on patents by the European Commission anyway, proving Red Hat's resolute stance against patents was the right one. And this announcement suggests that Microsoft is maturing in its views on how to interact with open-source vendors.

It also suggests that Red Hat is maturing in its realization that it must interoperate with the old world of proprietary software even as it attempts to forge a new one of open-source software. Red Hat has long depended upon proprietary software: Red Hat Enterprise Linux's success has derived from its support for Oracle and other proprietary vendors.

Both Red Hat and Microsoft on Monday lowered their guns long enough for customers to win. They did so without encumbering interoperability with patents, which will be critical to ensuring that Microsoft can lower its guard further and welcome open-source solutions into the Windows fold as a full partner.

Follow me on Twitter at mjasay.

Matt Asay is general manager of the Americas and vice president of business development at Alfresco, and has nearly a decade of operational experience with commercial open source and regularly speaks and publishes on open-source business strategy. He is a member of the CNET Blog Network and is not an employee of CNET. Disclosure.

Original here

How Not To Make A Commercial Linux Distribution


I have nothing against commercial Linux distributions. As a matter of fact, my first Linux experience was a commercial version of SUSE 7 almost nine years ago. I remember it had 6 CDs in a very professionally made CD pack, and SUSE did a very good job at making the installation process as user friendly as possible at that time. (Before SUSE decided to go evil.) It's safe to say that the experience was good enough for me to justify paying for a Linux distribution.

Enter iMagic OS

It's not every day that you see an announcement of a new commercial Linux distribution. We obviously see a lot of Linux distributions popping up every few days, most of which are essentially just forks of some popular distribution out there. So what's wrong with iMagic OS that it's worth talking about?

-> The Name: Lindows, anyone? One of the lessons we learned from the Lindows saga is that you don't give an OS a name similar to that of a popular OS that most Linux users hate. I think very few will disagree with me that most die-hard Linux users are not fond of Windows or even Mac OS. To make matters worse, they have versions named iMagic OS X (I kid you not!) and iMagic OS Pro.

iMagic OS

-> Ubuntu Clone: If repackaging an Ubuntu release with a custom skin and pre-installed applications is all it takes to make a commercial Linux distribution, I want to do it too. So why not Linux Mint? By any stretch of the imagination, Linux Mint is not only a much better-looking Ubuntu fork but, out of the box, it has enough custom options that it wouldn't be overkill to declare it better than Ubuntu itself. iMagic OS is nothing more than a skin-job of Ubuntu, and from the looks of the screenshot (I am not going to pay for it and find out), it isn't a very good job either.

-> Three versions, three price categories: It seems the "OS X" and "Myth" versions are based on Ubuntu (and seem to be discontinued, yet still available for order). The "Pro" version seems to be based on Kubuntu and is the "flagship" product. Prices range from $49.99 to $79.99. One of these versions comes with CrossOver Pro (the big brother of Wine) to run MS products.

iMagic OS Pro $79.99

iMagic OS X $69.99

iMagic OS Myth $49.99

If Bill Gates knew about this, he would be proud, even though it falls short of the 6 editions of Windows 7.

-> License agreement: It has a license agreement that will make you blush. Not once but twice, it mentions explicitly that you can install iMagic OS on no more than 3 computers. You have to agree to these terms (here & here) in order to purchase this OS. According to its Wikipedia entry, "It features a registration system that when violated, prevents installation of the OS, as well as new software and withdraws updates and support."

Don’t get me wrong. I have nothing against commercial Linux distribution, even though some of you might do. But if you are going to make a commercial Linux distribution please by all means do not make it look like or name after every proprietary OS cliché out there and against everything which Linux and free software stands for.

Original here

World of Goo Linux Version is Ready!

The Linux version of World of Goo is finally ready for download! It’s available exclusively from our site, in three different packages depending on what your computer likes. (tar.gz, deb, rpm)

If you already got the game from our site, you can use your same download link to get the Linux version in addition to your Windows and Mac versions. A huge thanks to Maks Verver for getting this running on Linux, and to the beta testers for making sure it runs smoothly! The full game and demo versions are all available here.

Update: Yes, still DRM free! And works on 64 bit systems too.

Also, thanks to Ken Starks for helping to get the word out -- he's giving away 10 complimentary copies of the game to the first people he finds who have blogged about World of Goo on Linux. He also runs an interesting non-profit that builds computers for kids who can't afford them.

Update 2: We’ve been getting a lot of emails from people who purchased the game on Steam and other places who want access to the Linux version. At first we were simply sending people download links, but the volume of requests has increased significantly and we simply can’t keep up, so for now, we’re sorry, but the Linux version will only be available for free to those who purchased the game via our website.

Update 3: Happy Valentine's Day! Exactly one year since we put out our first preview; special thanks to those of you who have been with us from the beginning!

Update 4: It’s only been 2 days since the release of the Linux version and it already accounts for 4.6% of the full downloads from our website. Our thanks to everyone who’s playing the game on Linux and spreading the word. Here are a couple of nifty stats:

  • About 12% of Linux downloads are of the .rpm package, 30% are of the .tar.gz package, and 57% are of the .deb package.
  • More copies of the game were sold via our website on the day the Linux version was released than on any other day, beating the previous record by 40%. There is a market for Linux games after all :)
Original here

Linux Version of Chrome To Use Gtk+

posted by Thom Holwerda

A major complaint about Google's Chrome web browser has been that so far, it is still not available on anything other than Windows. Google promised to deliver Chrome to Mac OS X and Linux as well, but as it turns out, this is a little harder than anticipated, as Ben Goodger, Google's Chrome interface lead, has explained in an email. It has also been revealed which toolkit the Linux version of Chrome will use: Gtk+.

The decision to use native user interface toolkits on each platform has made it all the more difficult to deliver the Mac and Linux versions of Chrome. Several people wondered why Google didn't just use Qt from the get-go, which would've made the whole process a whole lot easier. Goodger explains that Google "[avoids] cross platform UI toolkits because while they may offer what superficially appears to be a quick path to native looking UI on a variety of target platforms, once you go a bit deeper it turns out to be a bit more problematic." Your applications end up "speaking with a foreign accent", he adds. In addition, Goodger claims that using something like Qt "limits what you can do to a lowest common denominator subset of what's supported by that framework on each platform."

As for the Linux version, Google initially thought that a Windows clone would be acceptable, since Chrome itself is already such a fast application. However, the people working on the Linux version of Chrome made a case for using Gtk+ instead, and Google went with that option. Since Chrome is open source, it could still be possible that a Qt version will be developed independently of Google, of course.

When it comes to the Mac version, Goodger explains that the plan there has been to develop a native version all along. "A Windows-clone would most definitely not be acceptable on MacOS X," Goodger says, "where the APIs for UI development are highly evolved and have many outstanding features. So that's always been the plan there."

The Mac version is coming along nicely, and Google hopes to deliver both the Linux and Mac versions somewhere in June.

Hopefully, they will also implement something like Firefox's NoScript extension because according to some users, the security model is still lacking.

Original here

Labels want $13 million from Pirate Bay as trial starts

The trial of The Pirate Bay's backers kicked off Monday in Sweden as the prosecutor laid out an opening statement and the labels asked for $13 million in fines. Meanwhile, the defendants were tweeting from the dock.

By Nate Anderson


The Pirate Bay's "spectrial" got underway in Sweden Monday morning as prosecutors laid out the charges. Appearing before a packed house of bloggers, press, and people dressed as pirates, prosecutor Håkan Roswall made his opening statement, charging The Pirate Bay with aiding in massive copyright infringement and profiting from its actions.

Three Pirate Bay defendants and Carl Lundström, a Swedish businessman who used to run Rix Telecom and is accused of being a Pirate Bay investor, were in the dock listening. Roswall painted the group as businesspeople out to make serious money from their operations, and he detailed the site's genesis and growth since its launch back in 2004.

Those who understand what The Pirate Bay is and how BitTorrent works won't find much new or shocking in Roswall's summary of the case; the question is simply whether creating a search engine and tracker service that traffics mainly in copyrighted content is illegal in Sweden or not.

The music labels did provide a bit of new information, however—specifically, the amount of money they want from The Pirate Bay. It turns out to be over $13 million (117 million kronor).

For a trial taking place in Sweden and broadcast only in Swedish, one of the remarkable aspects of the "spectrial" is the interest and involvement of people from around the world. With Swedish media outlets making the audio stream of the trial available, bilingual webheads have been translating and summarizing the day's action on the Web and—in a remarkable show of commitment to the cause—through hundreds of tweets.

In addition, The Pirate Bay defendants are themselves blogging, tweeting, and holding press conferences (one was held on Sunday; Swedish TV4 was banned for past "bad behavior"). They are intent on seeing the trial as mere spectacle and sideshow, a last gasp of the absurd from some dying industries, and one that will be demolished by people power.

In an editorial released this weekend "via the internets," The Pirate Bay wrote that the way this trial differs "from most earlier trials is that everything in and surrounding it will whirl round and round in diverse channels of communication; to be discussed, reinterpreted, copied and criticized. Every crack in their appeal will be penetrated by the gaze of thousands upon thousands of eyes on the internets, in all the channels covering the trial. Old cliches from the antipiracy lobby wont stick. You won’t be able to say stuff like, 'you can’t compete with free' or 'filesharing is theft' without a thousand voices making fun of you."

Despite the glib tone and tough words, the defendants face up to two years in jail and potentially massive fines. They may consider the trial to be little more than spectacle, but if so, it's a spectacle with real consequences.

The defendants claim not to be worried. Tweeting from within the trial today, The Pirate Bay's programmer Peter Sunde wrote, "How the hell did they think this was going to be something else than EPIC FAIL for the prosecution? We're winning so hard."

Ludvig Werner, the boss of IFPI's local Swedish chapter, had a somewhat different perspective: The Pirate Bay is about keeping money out of creators' hands and putting it into Pirate Bay pockets. "Copyright exists to ensure that everyone in the creative world—from the artist to the record label, from the independent film producer to the TV programme maker—can choose how their creations are distributed and get fairly rewarded for their work," he said in a statement.

"The operators of The Pirate Bay have violated those rights and, as the evidence in Court will show, they did so to make substantial revenues for themselves. That kind of abuse of the rights of others cannot be allowed to continue, and that is why these criminal proceedings are so important for the health of the creative community."

Original here

Court Tweets, Pirate Flags and Free Candy

Written by Ernesto

As reported earlier today, social media came alive during the court session as prosecutor Håkan Roswall gave a tedious presentation of the history of the tracker, various companies, revenue streams, ad sales, and how he will “prove” it is all connected.

It was remarkable to see how thousands of people were following and contributing to an ongoing stream of information on the Internet. Through live blogs (in Swedish and translated), Twitter, live audio from inside the court and live video from outside, the coverage was massive.

The hashtag #spectrial was the most searched term on Twitter, The Pirate Party’s servers went down, and it was nearly impossible to get access to the site collecting the various streams of information.

One of the defendants even contributed, as Peter Sunde (aka brokep) wrote on Twitter: “Might this be the first twitter from within a court case? It must be a #spectrial.” This might indeed be one of the first tweets from a defendant in court.

As the court went in recess for lunch, the gathering outside grew, in spite of the cold February day in Stockholm. Pirate Party flags marked the street corner, a band played and candy was handed to passers-by while being told that “sharing is caring”.

Pirate Music Outside the Court

In the crowd we found Christian Engström, Vice Chairman of the Swedish Pirate Party, which is currently heading towards winning seats in the European Parliament in June’s elections.

“This is a political trial,” he told TorrentFreak. “Firstly, the trial itself is political because the prosecutor wrote a memo in 2005 and said that it wasn’t possible to prosecute from the evidence. This trial only occurs because of political pressure from the United States on the then Minister of Justice.”

The trial, as Engström explains, is more than just a prosecution of The Pirate Bay. It is a question of the future of communication.

“Should the Internet be a place where everyone can communicate or should it not? That’s the question of this trial, and no court can answer that question. Even if The Pirate Bay would be freed all the way through the court system, the problem isn’t solved. The Copyright Lobby will demand more restrictions and tougher laws and the only way to protect social media culture in the long run is to work politically.”

Pirate Bay Supporters (thanks Rick)

When asked if the trial had any significance at all, Engström told TorrentFreak that he finds it incredibly fun as a spectacle.

“Hollywood knows how to stage a show, I’ve got to hand them the credit for that. And I think that’s very positive because it means that for the following weeks, there will be lots of media focus on these important issues.”

After the lunch break, the proceedings continued as well as the coverage online. On Twitter Sofia did an outstanding job translating the audio feed into English, but she was just one of many. It is truly remarkable how many people committed themselves to covering the trial. For now we still live in a society where information is open and free.

Original here

The Pirate Bay Trial - First Day in Court

Written by Ernesto

The day started this morning at 08:30, with Pirate Bay founders Gottfrid Svartholm Warg (aka Anakata), Peter Sunde Kolmisoppi (aka Brokep) and Fredrik Neij (TiAMO) arriving at the court with the S23K bus. The bus will operate as their press-center in the weeks to come. Outside the court were several Pirate Bay supporters waving Pirate Flags.


pirate bay bus

The trial began roughly half an hour later. Prosecutor Håkan Roswall read out the charges, which can best be summarized as “commercial copyright infringement”. The plaintiffs are Warner Bros, MGM, EMI, Columbia Pictures, 20th Century Fox, Sony BMG and Universal. Lundström’s lawyer pointed out that the prosecutor may have drawn up some charges incorrectly. Interestingly, Lundström is the only one of the defendants with two lawyers, one of whom is a copyright expert.

Fredrik, Gottfrid and Peter stated their defense. They all pleaded not guilty.

Roswall then went on to present the claims of the media outfits, and described how The Pirate Bay works, with a little bit of history. He went on till the lunch break, but meanwhile Rick Falkvinge of the Pirate Party couldn’t resist accessing The Pirate Bay site from his seat in the courtroom.

The prosecution said that TPB was aimed at Swedish users until late 2004, when Fredrik had contact with Carl Lundström. They say Lundström helped them develop the project by donating funds and resources to enable the growth of the site.

The prosecution suggested that The Pirate Bay was a commercial organization, with Carl Lundström as a shareholder and financier of the company.

They also said that The Pirate Bay investigated the possibility of moving to Argentina after concerns over changes to Swedish copyright law during 2005. The prosecution claimed there were plans with Carl Lundström to set up a company in the British Virgin Islands.

Discussion ensued over the advertising on The Pirate Bay site, and the involvement of one Daniel Oded and companies Random Media and Transworld Advertising.

Lots of Press (thanks Rick)


Following the lunch break, proceedings continued with prosecutor Håkan Roswall failing to start up his computer. For several minutes, listeners of the live audio could hear mouse-clicks as Roswall, who earlier claimed to be an expert on computer crimes, tried to get his PowerPoint presentation on the screen. He was eventually ordered by the judge to stick to his papers and continue.

Information was presented about various movie, music and game downloads co-ordinated by The Pirate Bay before the raid in 2006. Roswall further discussed the total number of seeds and peers on the tracker, all part of the evidence that was previously gathered by the plaintiffs.

During the afternoon, Peter Sunde sent a message: “How the hell did they think this was going to be something else than EPIC FAIL for the prosecution? We’re winning so hard.” Sunde also pointed out that the prosecutor was having difficulty working out the difference between megabits and megabytes.
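The distinction Sunde is pointing at is a simple factor of eight, since a byte is eight bits. A minimal sketch of the conversion (the 600 figure here is just an illustrative input, matching the tracker bandwidth TorrentFreak reported elsewhere in its coverage):

```python
# Confusing megabits (Mbit) with megabytes (MB) skews any traffic
# estimate by a factor of eight: 1 byte = 8 bits.
BITS_PER_BYTE = 8

def mbits_to_mbytes(mbits: float) -> float:
    """Convert a rate in megabits/s to megabytes/s."""
    return mbits / BITS_PER_BYTE

# e.g. a tracker pushing 600 Mbit/s moves 75 MB/s, not 600 MB/s
print(mbits_to_mbytes(600))  # 75.0
```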

The case was adjourned around 4pm, and will continue tomorrow morning.

Original here

Why Facebook Is for Old Fogies

By Lev Grossman

Facebook is five. Maybe you didn't get it in your news feed, but it was in February 2004 that Harvard student Mark Zuckerberg, along with some classmates, launched the social network that ate the world. Did he realize back then in his dorm that he was witnessing merely the larval stage of his creation? For what began with college students has found its fullest, richest expression with us, the middle-aged. Here are 10 reasons Facebook is for old fogies:

1. Facebook is about finding people you've lost track of. And, son, we've lost track of more people than you've ever met. Remember who you went to prom with junior year? See, we don't. We've gone through multiple schools, jobs and marriages. Each one of those came with a complete cast of characters, most of whom we have forgotten existed. But Facebook never forgets. (See the best social networking applications.)

2. We're no longer bitter about high school. You're probably still hung up on any number of petty slights, but when that person who used to call us that thing we're not going to mention here, because it really stuck, asks us to be friends on Facebook, we happily friend that person. Because we're all grown up now. We're bigger than that. Or some of us are, anyway. We're in therapy, and it's going really well. These are just broad generalizations. Next reason.

3. We never get drunk at parties and get photographed holding beer bottles in suggestive positions. We wish we still did that. But we don't. (See pictures of Denver, Beer Country.)

4. Facebook isn't just a social network; it's a business network. And unlike, say, college students, we actually have jobs. What's the point of networking with people who can't hire you? Not that we'd want to work with anyone your age anyway. Given the recession--and the amount of time we spend on Facebook--a bunch of hungry, motivated young guns is the last thing we need around here.

5. We're lazy. We have jobs and children and houses and substance-abuse problems to deal with. At our age, we don't want to do anything. What we want is to hear about other people doing things and then judge them for it. Which is what news feeds are for.

6. We're old enough that pictures from grade school or summer camp look nothing like us. These days, the only way to identify us is with Facebook tags. (See pictures of a diverse group of American teens.)

7. We have children. There is very little that old people enjoy more than forcing others to pay attention to pictures of their children. Facebook is the most efficient engine ever devised for this.

8. We're too old to remember e-mail addresses. You have to understand: we have spent decades drinking diet soda out of aluminum cans. That stuff catches up with you. We can't remember friends' e-mail addresses. We can barely remember their names.

9. We don't understand Twitter. Literally. It makes no sense to us.

10. We're not cool, and we don't care. There was a time when it was cool to be on Facebook. That time has passed. Facebook now has 150 million members, and its fastest-growing demographic is 30 and up. At this point, it's way cooler not to be on Facebook. We've ruined it for good, just like we ruined Twilight and skateboarding. So git! And while you're at it, you damn kids better get off our lawn too.

Original here

Google, the great destroyer of value?

by Matt Asay

In a recent series entitled "The Future of Newspapers," Wall Street Journal managing editor Robert Thomson made some provocative (but insightful) comments about the Web's effect on journalism and the newspaper business.

One comment in particular stands out:

Google devalues everything it touches. Google is great for Google, but it's terrible for content providers, because it divides that content quantitatively rather than qualitatively. And if you are going to get people to pay for content, you have to encourage them to make qualitative decisions about that content.

Google PageRank supposedly makes qualitative distinctions between content by measuring quantitative links to content, but in reality it doesn't work that way--not enough of the time, anyway.

I can see this from my own posts: sometimes I want to find a previous post of mine among the thousands I've written. So I start digging through Google using keywords that I think will unearth the post. What I end up finding much of the time are my most popular posts related to those keywords, and often not the actual content I'm seeking. Given that some of my best content hasn't necessarily been the most linked-to, I struggle to find it.

Even so, Thomson points out one area in which the Web actually has the potential to accelerate revenue potential for content, reminding his audience that the "beauty of the Web is that you can repurpose (content) many times" and therefore "generate revenue several times over." The key is figuring out how to monetize that content, repurposed or otherwise.

While I think advertising is one way to manage monetization of content, I think there's something more profound and more closely linked to the abundance of Web content. I don't know what that is, but I suspect someone smart will unearth it soon. It needs to take into account the short shelf-life of content--even good content--but also the critical importance of original source material, as Nick Carr recently wrote.

Perhaps we can figure out ways to put a premium on original content--journalism--and then pay lower rates for add-on commentary like this blog?

Follow me on Twitter at mjasay.

Matt Asay is general manager of the Americas and vice president of business development at Alfresco, and has nearly a decade of operational experience with commercial open source and regularly speaks and publishes on open-source business strategy. He is a member of the CNET Blog Network and is not an employee of CNET. Disclosure.

Original here

Woman Sues Microsoft Over XP Downgrade Charge

Elizabeth Montalbano, IDG News Service

A woman has filed a class-action lawsuit against Microsoft over a US$59.25 charge for downgrading her Windows Vista PC to XP.

In a suit filed in the U.S. District Court for the Western District of Washington in Seattle, Los Angeles resident Emma Alvarado is asking that Microsoft return the fee she paid for downgrading a Lenovo PC with the Windows Vista Business OS preinstalled to Windows XP Professional. Alvarado purchased the PC on June 20, 2008, according to the suit.

Alvarado also is inviting others who have paid fees to downgrade to XP to join the suit (PDF) and is requesting refunds for them as well.

Many customers who purchased PCs with Vista installed opted to downgrade to XP because they weren't happy with Vista's "numerous problems," according to Alvarado's suit.

"As a result, many consumers would prefer to purchase a new computer preinstalled with the Windows XP operating system or at least not preinstalled with the Vista operating system," according to the filing.

The suit goes on to accuse Microsoft of using its "market power to take advantage of consumer demand for the Windows XP operating system" by requiring people to buy Vista PCs and then charging them to downgrade to the OS they really want.

This action violates Washington state's Unfair Business Practices Act and the Consumer Protection Act, according to the suit.

Microsoft spokesman David Bowermaster said the company has not been served with the lawsuit, so it would be premature to comment about it.

When Microsoft released Vista to consumers on Jan. 30, 2007, it gave people the option to downgrade to XP if they weren't satisfied with the new OS.

As a result of overall dissatisfaction with Vista, Microsoft had to extend the amount of time it allowed original equipment manufacturers and custom system builders to sell PCs with XP preinstalled. The company also is facing a class-action suit in the same court over the "Windows Vista Capable" sticker program that let customers know a PC could run Windows Vista. Customers said they found the program misleading.

While the damages that could be awarded in the suit would likely not be a large sum for a multibillion-dollar company, the suit brings up a larger question of whether Microsoft will allow Windows 7 users to downgrade to XP.

Microsoft so far has not said publicly whether it will, and no one from the company was available for immediate comment Friday. Vista, being the OS released before Windows 7, would be the logical choice for a downgrade from Windows 7. However, given customers' dissatisfaction with Vista, Microsoft could offer an XP downgrade as well.

Al Gillen, an analyst with research firm IDC, said it would be a "very risky thing" for Microsoft to eliminate downgrade rights with Windows 7. Refusing to give customers an option when they're not happy with a new version of the Windows client, he said, would alienate Microsoft's customer base.

Original here

Facebook's New Terms Of Service: "We Can Do Anything We Want With Your Content. Forever."

By Chris Walters

This post has generated a lot of responses, including from Facebook. Check them out here.

Facebook's terms of service (TOS) used to say that when you closed an account on their network, any rights they claimed to the original content you uploaded would expire. Not anymore.

Now, anything you upload to Facebook can be used by Facebook in any way they deem fit, forever, no matter what you do later.* Want to close your account? Good for you, but Facebook still has the right to do whatever it wants with your old content. They can even sublicense it if they want.

You hereby grant Facebook an irrevocable, perpetual, non-exclusive, transferable, fully paid, worldwide license (with the right to sublicense) to (a) use, copy, publish, stream, store, retain, publicly perform or display, transmit, scan, reformat, modify, edit, frame, translate, excerpt, adapt, create derivative works and distribute (through multiple tiers), any User Content you (i) Post on or in connection with the Facebook Service or the promotion thereof subject only to your privacy settings or (ii) enable a user to Post, including by offering a Share Link on your website and (b) to use your name, likeness and image for any purpose, including commercial or advertising, each of (a) and (b) on or in connection with the Facebook Service or the promotion thereof.

That language is the same as in the old TOS, but there was an important couple of lines at the end of that section that have been removed:

You may remove your User Content from the Site at any time. If you choose to remove your User Content, the license granted above will automatically expire, however you acknowledge that the Company may retain archived copies of your User Content.

Furthermore, the "Termination" section near the end of the TOS states:

The following sections will survive any termination of your use of the Facebook Service: Prohibited Conduct, User Content, Your Privacy Practices, Gift Credits, Ownership; Proprietary Rights, Licenses, Submissions, User Disputes; Complaints, Indemnity, General Disclaimers, Limitation on Liability, Termination and Changes to the Facebook Service, Arbitration, Governing Law; Venue and Jurisdiction and Other.

Make sure you never upload anything you don't feel comfortable giving away forever, because it's Facebook's now.

(Note that as several readers have pointed out, this seems to be subject to your privacy settings, so anything you've protected from full public view doesn't seem to be usable in other ways regardless.)

Oh, you also agree to arbitration, naturally. Have fun with that.

Update: Several Facebook groups have formed to protest the new TOS:
"People Against the new Terms of Service (TOS)"
"FACEBOOK OWNS YOU: Protest the New Changes to the TOS!"
"Those against Facebook's new TOS!"

Update 2: Facebook founder Mark Zuckerberg has posted a response on the Facebook blog. A crude summary: "trust us, we're not doing this to profit from you, it's so we are legally protected as we enable you to share content with other users and services." His point, I think, is that there are interesting issues of ownership and rights clearance when you're dealing with content shared in a social network:
Still, the interesting thing about this change in our terms is that it highlights the importance of these issues and their complexity. People want full ownership and control of their information so they can turn off access to it at any time. At the same time, people also want to be able to bring the information others have shared with them-like email addresses, phone numbers, photos and so on-to other services and grant those services access to those people's information. These two positions are at odds with each other. There is no system today that enables me to share my email address with you and then simultaneously lets me control who you share it with and also lets you control what services you share it with.

Update 3: I just found this clarification posted earlier this afternoon on The Industry Standard. It was emailed to them by a Facebook representative and seems to confirm that your privacy settings trump all else:
We are not claiming and have never claimed ownership of material that users upload. The new Terms were clarified to be more consistent with the behavior of the site. That is, if you send a message to another user (or post to their wall, etc...), that content might not be removed by Facebook if you delete your account (but can be deleted by your friend). Furthermore, it is important to note that this license is made subject to the user's privacy settings. So any limitations that a user puts on display of the relevant content (e.g. To specific friends) are respected by Facebook. Also, the license only allows us to use the info "in connection with the Facebook Service or the promotion thereof." Users generally expect and understand this behavior as it has been a common practice for web services since the advent of webmail. For example, if you send a message to a friend on a webmail service, that service will not delete that message from your friend's inbox if you delete your account.

Original here

News from The Pirate Bay Press Conference

Written by enigmax

Just hours ago The Pirate Bay and Piratbyrån held a joint press conference at the Museum of Technology in Stockholm. It was broadcast live on the web, and Pirate Bay co-founders Peter Sunde and Gottfrid Svartholm spoke at length. Here is a breakdown of some of the key points.

The joint press conference was held mainly in Swedish, with very little English. The media present had applied for invitations, and some representatives of the media had already been banned from attending by The Pirate Bay. Those in attendance were told that they should be courteous, which they were.

Sitting at the top table from left to right were Rasmus Fleischer of Piratbyrån, Sara Sajjad of Piratbyrån, Gottfrid Svartholm Warg (aka Anakata), Peter Sunde (aka Brokep) and Magnus Eriksson of Piratbyrån. Fredrik Neij (TiAMO) and the fourth defendant Carl Lundström were not at the table.

Spectrial Press Conference (pic credit)


First off, they said that the whole case can best be described as a theater play and that all the people involved are potential actors. They vowed to make a reality TV show of the whole trial; in fact, Rasmus Fleischer has already written the prologue for the show. It’s a Spectrial, and they are happy to play along. As soon as the cameras stopped flashing, the panel took questions from the media present.

The Pirate Bay team said they expect the case to drag on, implying legal appeals, and were defiant that no matter what happens to them, the site will continue. “What are they going to do? They have already failed to take the site down once. Let them fail again,” said Gottfrid. It isn’t the site that is facing the courts, noted Peter Sunde. “It has its own life without us,” he said.

The Pirate Bay team said that although they face huge financial claims, they weren’t going to be intimidated, with Gottfrid declaring, “I already have more debt in Sweden than I will ever be able to pay off. I don’t even live here. They are welcome to send me a bill. I will frame it and put it on the wall.”

Peter went on to explain there was no basis for the massive financial claims. “It does not matter if they require several million or one billion. We are not rich and have no money to pay,” he said. “They won’t get a cent.”

When asked if it was ok to download media without paying for it, Peter deemed the question to be “uninteresting” and said he was tired of hearing it.

A member of the media then posed this question: “Do you feel like defendants, or defenders of technology?”

Peter responded: “I think it is something in between actually. We have a personal liability for this, we have a personal risk which has some impact on our feelings. But definitely it’s not defending the technology, it’s more like defending the idea of the technology and that’s probably the most important thing in this case - the political aspect of letting the technology be free and not controlled by an entity which doesn’t like technology.”

Gottfrid added that the prosecutor of the case seems to focus a lot on the individuals in the case. “At least one fourth of the evidence is character assassination of the people involved,” he said.

Peter went on to explain that when he was arrested, the police didn’t immediately start questioning him about the site, but rather about his motivation. “When I had my only hearing with the police the first question was if I wanted to explain my ideology and my politics, not if I was involved in The Pirate Bay, which kind of sets the tone for all of this.”

The site’s finances were brought up, with the pair saying they started it and keep it going through advertising revenue, although the pair don’t make any money themselves.

A reporter from the BBC asked what the ongoing maintenance costs of The Pirate Bay amount to. Peter responded, “So, the costs for Pirate Bay, I actually don’t have any numbers for it. We use quite a lot of bandwidth and we have to buy new servers every other week.”

Gottfrid said that the tracker itself uses an average of 600 megabits per second of bandwidth, which increases further at weekends. He also revealed that their current hardware has to be replaced once a year and is currently estimated to be worth $120,000; it is therefore depreciating at a rate of $10,000 each month.
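Gottfrid's monthly figure is just straight-line depreciation over a one-year lifespan. A quick sketch of the arithmetic, using the estimates he quotes:

```python
# Straight-line depreciation of the hardware figures Gottfrid quotes:
# a server fleet worth about $120,000, fully replaced once a year.
hardware_value_usd = 120_000
lifespan_months = 12

monthly_depreciation = hardware_value_usd / lifespan_months
print(monthly_depreciation)  # 10000.0 -> the $10,000/month he cites
```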

Towards the end of the conference the pair were asked for their assessment of the way they have been handled by the press.

“Well, that’s a very interesting question,” said Peter. “There are some members of the press who we don’t like so we didn’t invite them today and they are very mad at us.” Peter didn’t mention them by name at this point, but they were Aftonbladet, Metro and TV4. He explained the problem he has with these publications:

“They are just interested in doing something spectacular instead of actually discussing the issues. The media that are not invited today are basically the media that have not been negative, but lying instead and keeping things from the public.”

“We don’t have a problem with negative press,” Peter continued. “There are a lot of people in this room who don’t like us and we don’t really care about that as long as they discuss the issues. But I would say that most of the press have been very good towards us actually, in discussing more and more the issues surrounding The Pirate Bay instead of focusing on us as persons, which is what we actually want.”

The trial starts tomorrow and of course TorrentFreak will keep everyone updated, but in the meantime, a thought-provoking comment from Peter:

“I do not believe The Pirate Bay will be a major player in five years. But I think BitTorrent technology will improve. File sharing will always exist. I think people will tire of the debate.”


Do We Need a New Internet?


Two decades ago a 23-year-old Cornell University graduate student brought the Internet to its knees with a simple software program that skipped from computer to computer at blinding speed, thoroughly clogging the then-tiny network in the space of a few hours.

The program was intended to be a digital “Kilroy Was Here”: just a bit of cybernetic fungus that would unobtrusively wander the net. However, a programming error turned it into a harbinger of a darker cyberspace, more of a mirror for all of the chaos and conflict of the physical world than a utopian refuge from it.

Since then things have gotten much, much worse.

Bad enough that there is a growing belief among engineers and security experts that Internet security and privacy have become so maddeningly elusive that the only way to fix the problem is to start over.

What a new Internet might look like is still widely debated, but one alternative would, in effect, create a “gated community” where users would give up their anonymity and certain freedoms in return for safety. Today that is already the case for many corporate and government Internet users. As a new and more secure network becomes widely adopted, the current Internet might end up as the bad neighborhood of cyberspace. You would enter at your own risk and keep an eye over your shoulder while you were there.

“Unless we’re willing to rethink today’s Internet,” says Nick McKeown, a Stanford engineer involved in building a new Internet, “we’re just waiting for a series of public catastrophes.”

That was driven home late last year, when a malicious software program thought to have been unleashed by a criminal gang in Eastern Europe suddenly appeared after easily sidestepping the world’s best cyberdefenses. Known as Conficker, it quickly infected more than 12 million computers, ravaging everything from the computer system at a surgical ward in England to the computer networks of the French military.

Conficker remains a ticking time bomb. It now has the power to lash together those infected computers into a vast supercomputer called a botnet that can be controlled clandestinely by its creators. What comes next remains a puzzle. Conficker could be used as the world’s most powerful spam engine, perhaps to distribute software programs to trick computer users into purchasing fake antivirus protection. Or much worse. It might also be used to shut off entire sections of the Internet. But whatever happens, Conficker has demonstrated that the Internet remains highly vulnerable to a concerted attack.

“If you’re looking for a digital Pearl Harbor, we now have the Japanese ships streaming toward us on the horizon,” Rick Wesson, the chief executive of Support Intelligence, a computer consulting firm, said recently.

The Internet’s original designers never foresaw that the academic and military research network they created would one day bear the burden of carrying all the world’s communications and commerce. There was no one central control point and its designers wanted to make it possible for every network to exchange data with every other network. Little attention was given to security. Since then, there have been immense efforts to bolt on security, to little effect.

“In many respects we are probably worse off than we were 20 years ago,” said Eugene Spafford, the executive director of the Center for Education and Research in Information Assurance and Security at Purdue University and a pioneering Internet security researcher, “because all of the money has been devoted to patching the current problem rather than investing in the redesign of our infrastructure.”

In fact, many computer security researchers view the nearly two decades of efforts to patch the existing network as a Maginot Line approach to defense, a reference to France’s series of fortifications that proved ineffective during World War II. The shortcoming in focusing on such sturdy digital walls is that once they are evaded, the attacker has access to all the protected data behind them. “Hard on the outside, with a soft chewy center,” is the way many veteran computer security researchers think of such strategies.

Despite a thriving global computer security industry that is projected to reach $79 billion in revenues next year, and the fact that in 2002 Microsoft itself began an intense corporatewide effort to improve the security of its software, Internet security has continued to deteriorate globally.

Even the most heavily garrisoned military networks have proved vulnerable. Last November, the United States military command in charge of both the Iraq and Afghanistan wars discovered that its computer networks had been purposely infected with software that may have permitted a devastating espionage attack.

That is why scientists armed with federal research dollars, working in collaboration with the industry, are trying to figure out the best way to start over. At Stanford, where the software protocols for the original Internet were designed, researchers are creating a system to make it possible to slide a more advanced network quietly underneath today’s Internet. By the end of the summer it will be running on eight campus networks around the country.

The idea is to build a new Internet with improved security and the capabilities to support a new generation of not-yet-invented Internet applications, as well as to do some things the current Internet does poorly — such as supporting mobile users.

The Stanford Clean Slate project won’t by itself solve all the main security issues of the Internet, but it will equip software and hardware designers with a toolkit to make security features a more integral part of the network and ultimately give law enforcement officials more effective ways of tracking criminals through cyberspace. That alone may provide a deterrent.

This is not the first time a replacement has been proposed for the current Internet. For example, modern Windows and Macintosh computers already come equipped to support a new Internet protocol known as IPv6 that would fix many of the shortcomings of the current IPv4 version. However, because of cost, performance and compatibility questions it has languished.
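Because modern operating systems already ship with IPv6 alongside IPv4, checking whether a given machine supports the newer protocol is straightforward; a minimal sketch using Python’s standard socket module (no particular network configuration is assumed):

```python
import socket

# Report whether this Python/OS stack was built with IPv6 support.
print("IPv6 supported:", socket.has_ipv6)

# If supported, an IPv6 (AF_INET6) socket can be created alongside
# the traditional IPv4 (AF_INET) kind.
if socket.has_ipv6:
    s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    s.close()
```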

That has not discouraged the Stanford engineers, who say they are on a mission to “reinvent the Internet.” They argue that their new strategy is intended to allow new ideas to emerge in an evolutionary fashion, making it possible to move data traffic seamlessly to a new networking world. Like the existing Internet, the new network will almost certainly have no one central point of control, and no one organization will run it. It is most likely to emerge as new hardware and software are built into the router computers that run today’s network and are adopted as Internet standards.

For all those efforts, though, the real limits to computer security may lie in human nature.

The Internet’s current design virtually guarantees anonymity to its users. (As a New Yorker cartoon noted some years ago, “On the Internet, nobody knows you’re a dog.”) But that anonymity is now the most vexing challenge for law enforcement. An Internet attacker can route a connection through many countries to hide his location, and may be operating from an Internet cafe account purchased with a stolen credit card.

“As soon as you start dealing with the public Internet, the whole notion of trust becomes a quagmire,” said Stefan Savage, an expert on computer security at the University of California, San Diego.

A more secure network is one that would almost certainly offer less anonymity and privacy. That is likely to be the great tradeoff for the designers of the next Internet. One idea, for example, would be to require the equivalent of drivers’ licenses to permit someone to connect to a public computer network. But that runs against the deeply held libertarian ethos of the Internet.

Proving identity is likely to remain remarkably difficult in a world where it is trivial to take over someone’s computer from half a world away and operate it as your own. As long as that remains true, building a completely trustable system will remain virtually impossible.
