Tuesday, August 26, 2008

Cloud computing: A catchphrase in puberty

How Google and Amazon will take your money and step on your dreams

By Ted Dziuba

Fail and You

It's been called a lot of things: utility computing, grid computing, distributed computing, and now cloud computing. You can come up with any CTO-friendly name you like, but they all mean the same shit: Renting your quickly depreciating physical assets out because your software company is out of ideas for computer programs.

Amazon's EC2 was likely the brainchild of a mid-level ops director who overbought for a data center and had to come up with a way to save his own ass. Use a free, open source project like Xen for virtualization, give it a sunshine-up-the-ass name like Elastic Compute Cloud, and start pulling in all those venture capital dollars like Cisco and Sun did during the first dotcom catastrophe. Fuck me, give that man a raise.

Unfortunately, Bezos and company are a day late and a buck short. This time around, we're working with substantially less money and substantially more developer incompetence.

A Cloud Is Easier To Draw On A Whiteboard Than A Grid

EC2 is very popular with the Web 2.0 crowd, which is strange, considering the hurdles that these Javascript all-stars need to overcome. The first, and presumably most difficult, is that Amazon wants money in exchange for their services. That's a stark realization for a budding young social network developer: Web 2.0 runs on cash, not hugs. Who would have thunk it?

Once you're past that, there's the matter of reliability. In my experience with it, EC2 is fairly reliable, but you really need to be on your shit with data replication, because when it fails, it fails hard. My pager once went off in the middle of the night, bringing me out of an awesome dream about motorcycles, machine guns, and general ass-kickery, to tell me that one of the production machines stopped responding to ping. Seven or so hours later, I got an e-mail from Amazon that said something to the effect of:

There was a bad hardware failure. Hope you backed up your shit.

Look at it this way: at least you don't have a tapeworm.

-The Amazon EC2 Team

Datacenter hardware will bend you over your desk every now and then - no matter who owns it. If it's yours, though, you can send some poor bloke down to the server room in the wee hours of the morning and cattle-prod constant status updates out of him. As a paying EC2 customer, all you're entitled to is basic support, which amounts to airing your grievances on a message board and hoping that somebody at Amazon is reading. Being the straight-up gangster that I am, I luvz me some phone-screamin', and I just can't get that kind of satisfaction from Amazon.

Of course, I could pay more for extended support, but it would be nice if the fucking thing just worked.
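
Strip out the profanity and there is one piece of real advice in that experience: treat every instance as disposable and keep a copy of anything you care about somewhere else. Here is a minimal sketch of that habit in Python, with a placeholder data directory and backup host, and no particular AWS tooling assumed:

    import shutil
    import subprocess
    import time

    DATA_DIR = "/var/lib/myapp"          # hypothetical application data directory
    BACKUP_HOST = "backup.example.com"   # placeholder off-instance destination

    def snapshot_and_ship():
        # Archive the data directory under a timestamped name...
        stamp = time.strftime("%Y%m%d-%H%M%S")
        archive = shutil.make_archive("/tmp/myapp-%s" % stamp, "gztar", DATA_DIR)
        # ...and copy it off the doomed hardware. scp is just one option; the
        # point is that the copy does not live on the same instance.
        subprocess.run(["scp", archive, "%s:/backups/" % BACKUP_HOST], check=True)

    if __name__ == "__main__":
        # Run this from cron (or any scheduler) so a dead instance costs you
        # hours of data rather than all of it.
        snapshot_and_ship()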

What You Looking At, Google? You Want A Piece Of This?

While I'm running my mouth off here, I might as well take a swing at Wonka's Chocolate Factory.

Google App Engine launched with great fanfare from the Python community. "Finally," they said, "somebody has figured out how to make Python scale." The thought is that any developer will be able to run his Twitter-Facebook mashup on the same framework that Google uses to run their apps. Infinite, magical scalability that you don't have to think about, data storage that you don't have to manage, and a language that's easy to program. Sounds great!

That's all well and good, but something tells me that the Google search engine (you know, the thing that makes money) isn't written in Python, making this just a proper beat off for the web programming community. I have further evidence. I have yet to see a program more impressive than a task and time manager running on the Engine. Killer app, indeed.
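
For reference, a toy task manager on the Python SDK that App Engine shipped at the time looks roughly like the sketch below: one datastore model, one request handler, and Google handles the hosting, storage and caching. This is a hedged reconstruction of that era's webapp and db modules, not code from any particular app.

    from google.appengine.ext import db, webapp
    from google.appengine.ext.webapp.util import run_wsgi_app

    class Task(db.Model):
        # Datastore model: no schema migrations, no DBA, no config to blame.
        title = db.StringProperty(required=True)
        done = db.BooleanProperty(default=False)

    class MainPage(webapp.RequestHandler):
        def get(self):
            # List every task; the datastore handles the query.
            for task in Task.all():
                self.response.out.write("%s\n" % task.title)

        def post(self):
            # Persist a new task and bounce back to the list.
            Task(title=self.request.get("title")).put()
            self.redirect("/")

    application = webapp.WSGIApplication([("/", MainPage)], debug=True)

    if __name__ == "__main__":
        run_wsgi_app(application)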

Google App Engine offers a developer all of the things that he would look down his nose at an ops manager to provide: data storage, web hosting and caching. Web developers are too busy worrying about the app to figure out why the database is running slow. No, it couldn't be a grotesquely complex query anywhere in my code. It's a database problem. The DBA must have fucked something up in the config. Yeah, that's it. If those DBAs weren't always down at the pub, we could get some real work done around here.

I do have to give both Google and Amazon some credit, though. Both noticed that the only ones to make any real money off of the California gold rush were the outfitters who sold mining equipment.

Cloud Computing's Next Form: Green Tech

As time goes on and venture capitalists get pitched, this technology will continue to change names to mask its stagnation. The next time around, it will be pitched as a "green" technology. Why ruin the environment with your data center? You can run a social media website and still love the earth.

Energy-efficient computers powered by sunshine. This will be an instant hit. There will be greenhouse gas output dashboards with neat little Ajax widgets. You'll have calculators to figure out how much to pay for carbon offsets each month. Don't believe me? Follow the money. "Green" technology is the most efficient, modern way to capitalize on liberal guilt. You also get to pass it off as altruism. Combine that with a web development community that runs on self-satisfaction and you've got a recipe for profit. Best of all, you can squeeze money out of an investor for this by making him feel ashamed to be a person of means.

What started as a noble cause has finally finished its devolution into a racket.

No matter what the name, you, the developer, will still be dealing with reliability and accountability. Using someone else's infrastructure for your application will forever be a business risk, but it sounds so much less so with a cuddly name. Your CTO will fall for the next cycle pretty easily. The compunction he feels for his latest data center build-out will outweigh the downsides of an external dependency.

Al Gore even said so.

Original here

Mozilla: Web apps faster with Firefox 3.1

Firefox 3.1 will run many Web-based applications such as Gmail faster through incorporation of a feature called TraceMonkey that dramatically speeds up programs written in JavaScript, Mozilla said Friday.

JavaScript has been very broadly used to add pizzazz or flexibility to Web pages over the years, but in recent years, it's also become the plumbing for many rich Internet applications. However, because JavaScript has been hobbled by pokey performance, Web-based applications often struggled to work as responsively as "native" software running directly on PCs, and programmers writing Web applications have often turned to other options, such as Adobe Systems' Flash and Flex.

Now Mozilla hopes to change the balance of power in JavaScript's favor.

"TraceMonkey is a project to bring native code speed to JavaScript," said Mike Shaver, Mozilla's interim vice president of engineering, adding that JavaScript performance nearly doubles compared to Firefox 3.0, based on the SunSpider test of JavaScript performance. That speeds up many basic tasks, but it also brings image editing and 3D graphics into JavaScript's abilities, he said.

On Thursday, Mozilla programmers built TraceMonkey into the latest developer version of the open-source Web browser, and it will appear in the next released test version, which likely will be the first beta of Firefox 3.1, Shaver said. Firefox 3.1 is due in final form by the end of the year, though Mozilla is willing to let the schedule slip a bit, if necessary.

TraceMonkey dramatically improves the speed of many JavaScript operations. (Credit: Mozilla)

JavaScript execution speed can make surfing the Web snappier, so naturally, it's a key part of the resurgent browser wars between Microsoft's Internet Explorer, Mozilla's Firefox, Apple's Safari, and Opera. "We're as aware as anybody that the market is competitive again," Shaver said.

The SunSpider JavaScript test shows a boost of 83 percent, according to programmer and JavaScript pioneer Brendan Eich, who has worked on TraceMonkey and blogged about it on Friday. However, that speed test is an artificial benchmark that is an imperfect reflection of actual JavaScript applications such as Yahoo's Zimbra e-mail software.

Another illustration of TraceMonkey speed is a video of photo editing. Contrast and brightness adjustments take about 100 milliseconds instead of more than 700.

Shaver discussed TraceMonkey on his own blog too.

TraceMonkey explained
TraceMonkey's name is a cross between SpiderMonkey, Mozilla's current engine for interpreting JavaScript code, and a technique called tracing developed at the University of California at Irvine by Andreas Gal and others. Gal is TraceMonkey's lead architect, Shaver said.

TraceMonkey is what's called a just-in-time compiler, one type of technology that solves the problem of converting programs written by humans into instructions a computer can understand.

Most software that runs on people's computers is already compiled in advance into what's called a binary file, but JavaScript usually is interpreted line by line as it runs, a slower process. "We're getting close to the end of what you can do with an interpreter," Shaver said.

A just-in-time compiler, though, creates that binary file on the fly as the code arrives--when a person visits a new Web page, and the browser encounters JavaScript, for example.

TraceMonkey concentrates only on translating selected high-priority parts of software, though. By tracing and recording JavaScript program execution, TraceMonkey finds loops of repeated activity where programs often spend a lot of time. These loops of actual software behavior then are compiled into native instructions the computer can understand.

In contrast, some compilers translate the entire program, a burdensome process that involves mapping all possible paths the computer can take through the code and trying to figure out which are most important. Tracing technology, based on the actual execution of the program, concentrates only on the areas that actually occupy the computer.
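
The difference is easier to see in a toy. The sketch below is purely an illustration in Python and bears no resemblance to TraceMonkey's internals: interpret a loop normally, and once it has run enough iterations to count as hot, generate and execute a specialized function for the recorded loop body.

    HOT_THRESHOLD = 50  # arbitrary threshold for this toy; real engines tune it

    def interpreted_sum(n):
        # Baseline: every iteration pays full interpretation overhead.
        total, i = 0, 0
        while i < n:
            total += i
            i += 1
        return total

    def traced_sum(n):
        # Interpret until the loop is hot, then "compile" the recorded body
        # into a real function and let it finish the work.
        total, i = 0, 0
        while i < n:
            if i == HOT_THRESHOLD:
                src = (
                    "def trace(total, i, n):\n"
                    "    while i < n:\n"
                    "        total += i\n"
                    "        i += 1\n"
                    "    return total\n"
                )
                namespace = {}
                exec(src, namespace)          # stand-in for emitting native code
                return namespace["trace"](total, i, n)
            total += i
            i += 1
        return total

    assert interpreted_sum(1000) == traced_sum(1000) == sum(range(1000))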

"It lets us focus our optimization energy on the parts of the program that matter most," Shaver said.

That concentration means that TraceMonkey doesn't require a lot of memory or a slow-loading plug-in, Shaver said. And it also means that it's good for mobile devices, one of Mozilla's main focuses for browser development.

There's still a lot of work to be done in improving Web-based applications, though. Mozilla's next priority is improving the DOM--the document object model elements of Web browsers that are in charge of drawing and manipulating the Web page overall.

Although TraceMonkey currently is built into the new developer version of Firefox 3.1, it's disabled by default to begin with. "We did that because we want to get wider feedback," Shaver said.

Also in Firefox 3.1
Other significant changes will arrive in Firefox 3.1, Shaver said.

One is support for threading by JavaScript programs. Threads are instruction sequences, and newer multicore processors are able to run multiple threads simultaneously. Software support for that will mean JavaScript programs can execute some tasks in the background better, Shaver said.

Another is the built-in ability to play music encoded with the Ogg Vorbis format and video encoded with the Ogg Theora format. These formats, while not nearly as widely used or as supported as rivals such as MP3, are free from proprietary constraints such as patents, Shaver said, and therefore can be added to an open-source project such as Firefox.

"We're excited to bring unencumbered, truly open-source video to the Web," Shaver said. The support also works on all operating systems Firefox supports.

Mozilla will start encouraging Firefox users more actively to move to the current version soon. In about the next two weeks, Firefox 2 users will start getting messages to upgrade to version 3, Shaver said.

Currently, when copies of Firefox 2 check Mozilla servers to see if there's an update, the servers don't say to move all the way to version 3, so users must manually update.

"We're looking at doing that in the next two weeks," Shaver said. "The majority of users are still on Firefox 2."

Original here

Nerrivik - Beta 1 of Amarok 2.0 released!


The Amarok team is proud to announce the first beta version of Amarok 2, codenamed Nerrivik, released after days of hard work during this year's Akademy in Belgium. It contains a considerable amount of improvements over the previous alpha versions, bringing Amarok one step closer to the 2.0 release.

Thanks again for sending bug reports and patches, and for giving feedback. Amarok's development is heading towards 2.0 at an incredible rate; let's keep it that way!

Please be aware that the database schema has changed since alpha 2; if you have used any previous version of Amarok 2, you will need to delete collection.db in Amarok's settings folder (often in ~/.kde4/share/apps/amarok or ~/.kde/share/apps/amarok).
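
If you would rather not hunt for the file by hand, a few lines of Python will check the usual locations and remove it; this is a convenience sketch, not an official Amarok tool:

    import os

    # The two settings folders mentioned above; older setups used ~/.kde.
    CANDIDATES = [
        os.path.expanduser("~/.kde4/share/apps/amarok/collection.db"),
        os.path.expanduser("~/.kde/share/apps/amarok/collection.db"),
    ]

    for path in CANDIDATES:
        if os.path.exists(path):
            os.remove(path)
            print("removed", path)
        else:
            print("not found:", path)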

The highlights of this first beta version are the scripting interface, AFT, new artwork and of course many bugfixes. The scripting interface has matured and script authors are encouraged to explore the new possibilities QtScript offers. The scripting interface might still change, but those changes will be minor. Amarok File Tracking (AFT) helps you keep your playcounts, ratings and a lot of other file-related information that Amarok stores in its database, even if you move files around in your file system. It has now been brought back to Amarok 2 and will be improved even more in future releases. To find out more about AFT you can read Jeff Mitchell's blog entry about it. The interface was improved a lot by new artwork provided by Nuno Pinheiro and a new splash screen by Wade Olson.

Some of the changes since alpha 2:

Features

  • Inline editing of tracks in the Collection is now possible.
  • Album moves can be undone
  • Grouped albums can be moved in the playlist by dragging the album header
  • Track moves in the playlist can now be undone
  • Gapless playback.
  • New "fuzzy" bias type, which matches values loosely.
  • Collection Setup automatically expands to show selected directories. (BR 123637)
  • Tag editing and file deletion for MTP devices
  • Add toolbox to context view
  • Allow selecting multiple playlist items.
  • Implement "Move to collection" functionality in file browser.
  • Saving/loading of biased playlists.
  • Improved script console
  • Set items in directory selector to partially checked when relevant. patch by Sebastian Trueg
  • Album is now added to the playlist when clicked in Albums applet.
  • Trigger play/pause when middle-clicking systray icon. (BR 167162)
  • New start flag --multipleinstances allows running multiple instances of Amarok.
  • Full cover support for Nepomuk collection
  • Search local collection for albums to show in the album applet when playing non local content
  • Context view state is saved on exit and restored on start up.
  • New functions available to the scripting interface, under Amarok.Info.

Changes

  • New filename scheme widget in the Organize Collection dialog.
  • New layout of the main toolbar using the new graphics.
  • Greatly reduced memory usage when using dynamic playlists.
  • Reworked layout and more intuitive interface in the Guess Tags from Filenames dialog.
  • New artwork by Nuno Pinheiro and Wade Olson
  • Better zooming animation in the context view
  • Better usage of the available space in the context view.
  • Show url in the playlist if track has no name. patch by Edward Hades (BR 167171)

Bugfixes

  • Fix crash when dragging media from an external source (or the file browser) to the playlist (BR 169035)
  • Fix crash when opening the setting dialog (BR 169215)
  • Many fixes to the behavior of the playlist when dragging things around.
  • Don't pop up multiple cover search dialogs when cancelling search in the Cover Manager (BR 167462)
  • Amarok would not respect the user's changes in the cover search dialog.
  • Amarok would submit tracks to Last.fm regardless of whether the user chose to enable scrobbling.
  • OSD translucency works now. (BR 166567)
  • Use name based sorting of tracks without a track number (fixes sorting in shoutcast and cool streams services)
  • Don't try to scan the whole $HOME on first startup.
  • Don't pop up the OSD after changing Amarok settings. (BR: 168197)
  • Fix crash when exiting while collection scan was running. (BR 167872)
  • Automatically re-authenticate connection if the Ampache server has logged us out. (BR 166958)
  • Status bar now allows shrinking the main window beyond its width and does not enlarge the main window by itself. Patch by Daniel Molkentin (BR 166832)
  • Submit tracks to Last.fm also when playing Last.fm Radio. (BR 164156)
  • Check if the file is writable before allowing the tags to be edited in SqlMeta. ( BR 122797 )
  • Properly insert items dragged from the collection view. (BR: 166609)
  • Don't remove all the tracks in the group when removing the first. (BR: 167251)
  • Only increment playcount if we've played more than half of the song. (BR 121587)
  • Added protection against endless looping when a playlist contains only unplayable tracks.
  • Missing default playlist does not produce error message now. (BR 167385)
  • Fixed playlist bias drop-down box showing multiple empty and duplicate entries. (BR 167153)
  • Fixed the "Toggle Main Window" shortcut. (BR 167218)
  • Script manager can now stop scripts which use Qt bindings.
  • Fix crash when calling GetCaps from the DBus Player interface
  • Update album applet on track change. (BR 167256)

Packages are available through your package manager for most Linux distributions and through the KDE-on-Windows installer on Windows.

If you want to help us get Amarok 2.0 ready for final release, drop by #amarok or #rokymotion on freenode. We will appreciate any kind of help, from translating and writing documentation to writing code and promoting.

Enjoy beta 1 and watch as the wolf grows :-)
The Amarok 2 FAQ addresses some of the questions you might have about Amarok 2.

Original here

That password-protected site of yours - it ain't

By Dan Goodin in San Francisco

It's one of the simplest hacks we've seen in a long time, and the more elite computer users have known about it for a while, but it's still kinda cool and just a little bit unnerving: A hacker has revealed a way to use Google and other search engines to gain unauthorized access to password-protected content on a dizzying number of websites.

While plenty of webmasters require their visitors to register or pay a fee before viewing certain pages, they are typically more than eager for search engine bots to see the content for free. After all, the more search engines that catalog the info, the better the chances of luring new users.

But the technique, known as cloaking, has a gaping loophole: if Google and other search engines can see the content without entering a password, so can you. Want to read this forum (http://forums.inkdropstyles.com/index.php?showtopic=4227) from the InkDrop Styles website? You can, but first you'll have to enter a user name and password. Or you can simply type "cache:http://forums.inkdropstyles.com/index.php?showtopic=4227" into Google. It leads you to this cache (http://209.85.141.104/search?hl=en&q=cache%3Ahttp%3A%2F%2Fforums.inkdropstyles.com%2Findex.php%3Fshowtopic%3D4227&btnG=Google+Search&aq=f), which shows you the entire thread.
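
The whole trick amounts to building a query string. A few lines of Python reproduce it, using the forum URL from the article and Google's ordinary search endpoint:

    import urllib.parse
    import webbrowser

    def google_cache_query(page_url):
        # Prefix the page with "cache:" and hand it to Google's search endpoint,
        # exactly as you would type it into the search box.
        return "http://www.google.com/search?" + urllib.parse.urlencode(
            {"q": "cache:" + page_url}
        )

    url = google_cache_query("http://forums.inkdropstyles.com/index.php?showtopic=4227")
    print(url)
    # webbrowser.open(url)  # uncomment to pull up the cached copy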

The technique yields plenty of other restricted forums, including those here (http://209.85.141.104/search?hl=en&q=cache%3Ahttp%3A%2F%2Fpznetworks.com%2Findex.php%3Fshowtopic%3D3355%26view%3Dgetlastpost&btnG=Google+Search&aq=f), here (http://209.85.141.104/search?hl=en&q=cache%3Ahttp%3A%2F%2Fwww.chillcorner.com%2Findex.php%3Fshowtopic%3D1156&btnG=Google+Search&aq=f) and here (http://supex0.co.uk/xforums/index.php?showtopic=16).

Those in the know have been using the trick for years, but a hacker who goes by the handle Oxy recently made this post (http://hackforums.net/showthread.php?tid=25040) that shares the technique with the world at large. It reminds us of a similar approach for accessing restricted sites that involves changing a browser's user agent to one used by search engine bots.
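
The user-agent variant is just as short. The sketch below presents a Googlebot-style User-Agent string when fetching a page; whether the site then hands over its cloaked content depends entirely on how carelessly it identifies crawlers, and the forum address shown is a placeholder:

    import urllib.request

    # A Googlebot-style User-Agent string; sloppy cloaking checks look no further.
    BOT_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

    def fetch_as_bot(url):
        req = urllib.request.Request(url, headers={"User-Agent": BOT_UA})
        with urllib.request.urlopen(req) as resp:
            return resp.read()

    # Example (placeholder address):
    # html = fetch_as_bot("http://forum.example.com/index.php?showtopic=123")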

The hack is one example of the security problems that result from the practice of cloaking. Robert Hansen, the web security guru and CEO of secTheory (http://sectheory.com/), recently alerted us to the compromised blog (http://www.blakeross.com/) of Blake Ross, the co-founder of the Mozilla Firefox project who recently went to work for Facebook. For more than a month, unknown miscreants have been using his site to host links to sites pushing diet pills and other kinds of drugs.

Thanks to some JavaScript magic, users who visit the site never see evidence of the compromise; the links are cloaked. But the image below shows what happens when JavaScript is disabled.


We've contacted Blake about his website, but haven't yet received a response. Cleaning up the site ought to be as easy as updating his badly out-of-date version of WordPress. Addressing the shadowy world of cloaking will take a bit more work.

Original here

Thousands of personal records lost each month

Thousands of computer records containing personal information about members of the public are being lost every month – with the rate of loss increasing, new figures reveal.

By Patrick Sawer and Melissa Kite

More than 160 "significant" incidents of confidential data being misplaced by councils, central government and businesses have been reported to the Information Commissioner's Office (ICO) since last November.

Each case represents the potential loss of information about thousands of individuals.

The revelations follow the loss last week of confidential records and sensitive intelligence relating to tens of thousands of criminals. Scotland Yard is investigating how a memory stick containing the information, taken from the Police National Computer (PNC), went missing from a private consultancy firm.

In the six months between November 2007 and April 2008, the ICO was notified of 94 data breaches. In the following two months there were a further 66.

Critics say it shows that organisations have done little to improve their data protection procedures following the scandal last October, when two HM Revenue and Customs (HMRC) CDs containing the entire child benefit database of 25 million families went missing.

In fact, another set of new figures reveals that security breaches at HMRC itself are now running at ten a day, not all of which are reported to the Commissioner. Ministers have admitted in a parliamentary answer that overall security has got worse at HMRC since the department lost the two CDs.

Parliamentary answers obtained by the Conservatives show that between 1 October 2007 and 24 June 2008 there were 1,993 security breaches at HMRC, more than ten every working day.

Before the datagate scandal, between October 2006 and September 2007 there were 2,709 breaches - around 8 per working day.

Philip Hammond, shadow Treasury secretary, said: "The public will rightly ask how this Government can claim to be taking data security seriously, when the number of breaches at the Revenue has actually increased following the lost discs fiasco."

Of the incidents reported to the Commissioner, 44 occurred in the private sector. But, together, local councils, government departments and the NHS were responsible for 95 breaches, with other public sector bodies such as housing associations reporting a further 21.

The breaches include the loss or theft of laptops, loss of paper records and removable disks and breaches of website security.

Organisations are not required by law to report all losses, and the actual number is thought to be far higher.

The Information Commissioner Richard Thomas issued a stern rebuke to company chief executives and civil servants in the wake of the new figures and the latest loss of data, from the PNC.

"It is particularly disappointing that the HMRC breaches have not prevented other unacceptable security breaches from occurring," he said.

Referring to the latest incident, in which contractors at PA Consulting Group decoded previously encrypted information from the PNC and placed it on the memory stick, which was subsequently lost, Mr Thomas added: "It is deeply worrying that after a number of major data losses and the publication of two government reports on high profile breaches of the Data Protection Act, more personal information has been reported lost.

"The data loss by a Home Office contractor demonstrates that personal information can be a toxic liability if it is not handled properly and reinforces the need for data protection to be taken seriously at all levels. It is vital that sensitive information, such as prisoner records, is held securely at all times."

The missing police data contains the personal details and intelligence notes on 33,000 serious offenders, dossiers on 10,000 'priority criminals' and the names and dates of birth of all 84,000 prisoners in England and Wales. It is also understood to include the names of informers who now fear they could be at risk of reprisals.

Ministers had promised to tighten the security of confidential data and the latest loss will prove hugely embarrassing, particularly as it involved information which originated from the Home Office.

Jane Kennedy, the Treasury minister, said the breaches from HMRC arose from "a wide range of different circumstances." She added: "Such security breaches reflect potential weaknesses reported by staff and not actual thefts or losses."

Original here

Your printer is lying to you

Out of ink? Already? When Farhad Manjoo's Brother printer abruptly stopped zipping out prints, he began to wonder if the printer wasn't simply lying that it was out of toner in order to trick him into buying more before he needed it. The prints hadn't been fading at all, but the printer simply refused to go on without a new cartridge.

No fool, Manjoo turned to the web for a solution: He saved his 60 bucks and instead found a simple fix. By covering up a sensor on the side of the toner cartridge with a piece of electrical tape he tricked the printer into thinking the cartridge was full. Well, not so much trick as convince: Manjoo says his printer's been going strong ever since, eight months and hundreds of pages down the road, pumping out perfect pages.

Printers of both the laser and inkjet variety are notorious not just for requiring expensive replacement cartridges but for trying to get you to replace them well before you need to. It's epidemic in the industry, to the point where class action lawsuits have been filed against Epson and Hewlett-Packard over the trickery.

In his story for Slate, Manjoo helpfully digests most of the conventional and unconventional wisdom for getting extra life out of a toner or inkjet cartridge, from vigorously shaking your laser toner to de-clump it, to digging into the menus to find options for overriding "cartridge check" features. Because there are so many printer models out there, Manjoo ultimately recommends a web search for specific advice, but FixYourOwnPrinter is a good place to start.

Saving a tree may be good, but saving a whole bunch of oil and keeping a cartridge full of chemicals out of the landfill: Even better.

Original here

Linux and sex battle it out in Utah

Posted by Matt Asay

Even though the data is apparently a bit screwy, I was still really proud to see Utah emerge as the top state for "Linux" searches on Google.

The data also shows that Cubans prefer "Linux" to "sex," which is almost certainly not true, but I think there may actually be something to Utah's strong affinity for Linux, at least as it relates to searches for "sex" on Google.

In Utah, we already know about sex, so we don't have to spend a lot of time searching for it. I have four kids. I should probably be searching for "birth control" before I search for "sex." :-)

With many people getting married in their early 20s, especially in Mormon-filled Utah County, we even know where to find it. Just look at how a city like Provo (comparatively many Mormons) fares compared with Salt Lake City (comparatively few Mormons).

Provo is the place for "Linux." Salt Lake City? It still wants "sex."

But why would Utah have a much stronger interest in Linux than every other state? The article I cited above suggests that it's due to Novell's presence in the state, and that may be, especially in Provo, where Novell has an R&D center. But I would then expect much the same of Raleigh, N.C., where Red Hat is based. Nope. "Sex" is still king in Raleigh.

Only if you go north to Massachusetts does Red Hat's affection for Linux over sex show through.

In fact, if you look at the data, Westford (where Red Hat's R&D center is located) shows an overriding concern with Linux, while Waltham (where Novell is based) splits its time between sex and Linux.

Humorously, if you add "Microsoft" to the mix, Red Hat Westford cares more about Microsoft than Linux, but only by a small margin, while Waltham? Let's just say it's seriously got Microsoft on the mind. :-)

So what does it all mean? Absolutely nothing. But it's still great to claim Utah as the home of Linux in the United States. We'll take the honor. It's a nice diversion from that other search term.

Original here

Red Hat, Fedora servers infiltrated by attackers

By Ryan Paul

Linux distributor Red Hat has issued a statement revealing that its servers were illegally infiltrated by unknown intruders. According to the company, internal audits have confirmed that the integrity of the Red Hat Network software deployment system was not compromised. The community-driven Fedora project, which is sponsored by Red Hat, also fell victim to a similar attack.

"Last week Red Hat detected an intrusion on certain of its computer systems and took immediate action," Red Hat said in a statement. "We remain highly confident that our systems and processes prevented the intrusion from compromising RHN or the content distributed via RHN and accordingly believe that customers who keep their systems updated using Red Hat Network are not at risk."

Although the attackers did not penetrate into Red Hat's software deployment system, they did manage to sign a handful of Red Hat Enterprise Linux OpenSSH packages. Red Hat has responded by issuing an OpenSSH update and providing a command-line tool that administrators can use to check their systems for potentially compromised OpenSSH packages.
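
Red Hat's own checker is the authoritative tool, but the underlying idea is simple enough to sketch: ask rpm which OpenSSH packages are installed and compare them against a blacklist of tampered builds. The blacklist entry below is a placeholder, not taken from the real advisory.

    import subprocess

    # Placeholder entry; the genuine list of tampered builds comes from
    # Red Hat's advisory, not from this sketch.
    BLACKLISTED = {
        "openssh-0.0.0-0.tampered.el5",
    }

    def installed_openssh_packages():
        cmd = [
            "rpm", "-q",
            "--queryformat", "%{NAME}-%{VERSION}-%{RELEASE}\n",
            "openssh", "openssh-server", "openssh-clients",
        ]
        out = subprocess.run(cmd, capture_output=True, text=True).stdout
        return [line for line in out.splitlines()
                if line and "not installed" not in line]

    for pkg in installed_openssh_packages():
        print(pkg, "TAMPERED?" if pkg in BLACKLISTED else "ok")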

Key pieces of Fedora's technical infrastructure were initially disabled earlier this month following a mailing list announcement which indicated only that Fedora personnel were addressing a technical issue of some kind. Fedora project leader and board chairman Paul W. Frields clarified the situation on Friday with a follow-up post in which he indicated that the outage was prompted by a security breach.

Fedora source code was not tampered with, he wrote, and there are no discrepancies in any of the packages. The system used to sign Fedora packages was among those affected by the incursion, but he claims that the key itself was not compromised. The keys have been replaced anyway, as a precautionary measure.

"While there is no definitive evidence that the Fedora key has been compromised, because Fedora packages are distributed via multiple third-party mirrors and repositories, we have decided to convert to new Fedora signing keys," he wrote. "Among our other analyses, we have also done numerous checks of the Fedora package collection, and a significant amount of source verification as well, and have found no discrepancies that would indicate any loss of package integrity."

Assuming that Red Hat and Fedora are accurately conveying the scope and nature of the intrusion, the attacker was effectively prevented from causing any serious damage. Red Hat's security measures were apparently sufficient to stave off a worst-case scenario, but the intrusion itself is highly troubling. Red Hat has not disclosed the specific vulnerability that the intruders exploited to gain access to the systems.

Like the recent Debian openssl fiasco, which demonstrated the need for higher code review standards, this Red Hat intrusion reflects the importance of constant vigilance and scrutiny. When key components of open source development infrastructure are compromised, it undermines the trust of the end-user community. In this case, Red Hat has clearly dodged the bullet, but the situation could have been a lot worse.

Original here

Hackers Crack into Red Hat

John Fontana, Network World

Red Hat confirmed Friday that hackers compromised infrastructure servers belonging to the company and the Fedora Project, including systems used to sign Fedora packages.

In the Fedora breach, company officials said they had "high confidence" the hackers did not get the "passphrase used to secure the Fedora package signing key." Regardless, the company has converted to new Fedora signing keys.

Red Hat's Fedora project leader Paul Frields made the announcement Friday on the fedora-announce-list with the subject line "Infrastructure Report." When contacted, Red Hat officials pointed to Frields' announcements as the company's official statement.

In the Red Hat compromise, the intruder was able to sign a small number of OpenSSH packages relating to Red Hat Enterprise Linux 4 (i386 and x86_64 architectures only) and Red Hat Enterprise Linux 5 (x86_64 architecture only).

As a precaution, Red Hat released an updated version of those packages, a list of tampered packages and a script to check if any of the packages are installed on a user's system.

"This is a significant issue and they have to work to address it," says Jay Lyman, an open source analyst with The 451 Group. "These are some of the growing pains of a distribution becoming more complex. They are building more and more into their operating systems, and with that comes more complexity and more challenges. But what I think is most important here is the response."

Red Hat first hinted at a problem on Aug. 14, when Frields wrote that the Fedora infrastructure team was investigating an issue that could result in some service outages. The message was followed up on Aug. 16 saying the team was "continuing to work on the problem."

By that time, there were grumblings and rumors online and in discussion groups that internal systems may have been hacked, which indeed was the case, and was confirmed Friday by Red Hat.

In his announcement, Frields said changing the Fedora signing keys could require "affirmative steps" from every Fedora system owner or administrator, and said, if needed, those steps would be made public.

Frields also said that, based on checks of Fedora packages and source code, the company did not think packages had been compromised: "at this time we are confident there is little risk to Fedora users who wish to install or upgrade signed Fedora packages."

The Fedora Project released alpha code for Fedora 10, its next version, earlier this month.

On the Red Hat side, the company issued an OpenSSH update and guidance on how users can protect themselves.

The company said it was "highly confident" that the Red Hat Network, an internal system that makes updates and patches available to its customers, was not compromised by the hacker.

The company, however, said it was issuing its alert for those who "may obtain Red Hat binary packages via channels other than those of official Red Hat subscribers."

Frields also made it clear that the effects of the intrusions on Fedora and Red Hat were not the same, and that Fedora packages are signed with keys different from those used to sign Red Hat Enterprise Linux packages.

Original here

Web Scout: Spinning through online entertainment and connected culture.

Social Status: Digg's badwithcomputer talks shop

Digg user badwithcomputer. (Photo credit: Dashiell)

We have all heard from the pioneers of social media. Interviews with Kevin Rose on Digg, Biz Stone on Twitter and Mark Zuckerberg on Facebook are a dime a dozen.

But you, online reader, were Time's person of the year in 2006. You are what keeps social media fresh and worth reading (well, some of the time).

As part of a new Web Scout series, we talk to you -- well, maybe not you, per se, but the users out there who spend hours per day contributing content and building an almost celebrity status on their platforms of choice.

First up is Digg user badwithcomputer, who has consistently been a top 10 submitter to the social news website for the past year since he opened his account.

On Digg, his "real name" is Henry Hill, which actually turns out to be a homage to the 1990s flick "Goodfellas." Outside the virtual world, the Los Angeles resident is Dashiell, a 21-year-old student at Pitzer College, a liberal arts school in Claremont.

Badwithcomputer's beginnings on Digg were much like that of anyone just getting into the site. He would submit stories he thought were interesting and nobody seemed to agree with him. His submissions would get only a couple of votes, or Diggs, and then fall off the map. Dashiell chronicles his humble beginnings in our instant-message conversation.

badwithcomputer: One day I got lucky with some video I found on Break and was instantly hooked on the inexplicable nerdy joy of seeing something I submitted become popular. I haven't looked back.

LA Times: I know what you mean.

BWC: Yeah, and that's the problem. Most users just throw their hands up and leave a million comments about how broke the system is without taking a look at their activity on the site. Consistently submit quality content that is of interest to a wide range of people and things will eventually start rolling.

Or not, and that's the often irritating problem with Digg for a lot of people. But what can you do?

You could try talking to other Diggers. Dashiell keeps in contact with top users, like Zaibatsu, the third all-time submitter, and head honcho MrBabyMan. And their chats take place through more traditional means, not using Digg's "shout" feature, an internal system for communicating and sharing stories of which Dashiell is not a fan.

BWC: I haven't talked to MrBabyMan in a while but we have each other's screen names. Every now and then i get the pleasure of a brief phone chat with Zaibatsu where we talk shop and do a cursory catchup. MakiMaki is an android from the future and I'm worried that if I talk to him he'll drain all my Digg power, like Shang Tsung style.

MakiMaki, by the way ... never sent a single shout and he is hitting the front page more than anyone else these days. Shows you how much you can rely on the shout system if you think you can register over night and just start shouting to a thousand friends.

In addition to those, Dashiell says he respects Digg users jaybol and Brian Cuban, brother of billionaire entrepreneur Mark.

Unlike many of Digg's top users, Dashiell doesn't have a slew of RSS feeds he is subscribed to. He searches for Diggable content the same way the Internet's earliest adopters found links to post on Usenet message boards: by surfing his favorite websites, which include Kotaku, Gawker, Break and Funny or Die. E-mail solicitations probably aren't the best idea for "helping" him find your content.

BWC: Sometimes I get IM's or emails from sites basically just wondering how they can get something on the front page. Every now and then i'll get a real suspect email that is just straight up offering money in exchange for a front page submission and I just forward those to abuse@digg.com.

It just comes with the territory of being on the top 50 digg users list.

Badwithcomputer is notorious for his funny, edgy, eye-grabbing headlines. We asked him to pick out some of his all-time favorites, but of all the ones he mentioned, THIS IS HOW I MAKE BREAD is the only story suitable for publication due to its lack of obscenity.

BWC: That was a great submission just because you rarely see something with all caps get popular, but it totally fit in this case.

As trends and the site's community rapidly change, Dashiell seems to be ahead of the curve. His rate of front-page hits -- nearly three every day for the last 30 days -- isn't slowing down, and he will likely be a major player on the site for a very long time.

"I got 99 problems but a Digg ain't one," he wrote.

-- Mark Milian

Original here

Intel's Future: Real Transformers and Power by Wi-Fi

Sharon Gaudin, Computerworld

The intelligence gap between man and machine will largely close by the year 2050, according to Intel Corp.'s chief technology officer, who last week reiterated that point during a keynote address at the Intel Developer Forum.

At the IDF event in San Francisco, Intel CTO Justin Rattner said that the chip maker's research labs are working on human-machine interfaces and looking to foster big changes in robotics and in the ability of computers to interact with humans. He specifically pointed to work that Intel is doing on wireless power and on developing tiny robots that can be programmed to take on the shape of anything from a cell phone to a shoe or even a human.

"The industry has taken much greater strides than anyone ever imagined 40 years ago," Rattner said. "There is speculation that we may be approaching an inflection point where the rate of technology advancements is accelerating at an exponential rate, and machines could even overtake humans in their ability to reason in the not-so-distant future."

Just last month, Rattner, who also is a senior fellow at Intel, made similar comments in an interview with Computerworld, saying that perhaps as early as 2012, the lines between human and machine intelligence will begin to blur. The intelligence gap should become awfully narrow within the next 40 years, he added, predicting that by 2050, computing will be less about launching applications and more about using systems that are inextricably woven into our daily activities.

In that same vein, Rattner talked about programmable matter during his IDF speech. He explained that Intel researchers are working to figure out how to harness millions of miniature robots, called catoms, so they could function as shape-shifting swarms.

"What if those machines had a small amount of intelligence, and they could assemble themselves into various shapes and were capable of movement or locomotion?" he said. "If you had enough of them, you could create arbitrary shapes and have the assembly of machines that could take on any form and move in arbitrary ways."

The basic idea is that the catoms, which one day should be about the size of a grain of sand, could be manipulated with electromagnetic forces to cling together in various 3D forms. Rattner said that Intel has been expanding on research work done by Seth Goldstein, an associate professor at Carnegie Mellon University.

"We're actually doing it for real," Rattner said. He added that Intel started "at the macro scale," with catoms that were "inches across." The robots had microprocessors associated with them and could attract or repel one another via electromagnetism or the use of electrostatic charges, according to Rattner. "It's programmable matter," he said.

During his speech, Rattner showed off millimeter-scale 3D catoms and said that electronics could be embedded inside the miniature robotic spheres.

Jason Campbell, a senior researcher at Intel, said in an interview that the development and use of catoms will change the way people interact with computers and other devices in significant ways.

"Think of a mobile device," Campbell said. "My cell phone is too big to fit comfortably in my pocket and too small for my fingers. It's worse if I try to watch movies or do my e-mail. But if I had 200 to 300 milliliters of catoms, I could have it take on the shape of the device that I need at that moment." For example, the catoms could be manipulated to create a larger keypad for text messaging. And when the device wasn't being used, Campbell said, he could command it "to form its smallest shape or even be a little squishy, so I can just drop it in my pocket."

Campbell envisions that each catom would have a computer processor and some form of memory. Four years ago, he thought it would take 30 to 50 years for this kind of technology to be realized. Now, though, he estimates that the time it will take is much closer to 10 years.

Both Campbell and Rattner said the biggest obstacle will be figuring out how to make the micro-bots think like a swarm. Instead of sending individual directions to each catom, one set of instructions will have to be sent to make the catoms work together, so each one takes the correct position to create the desired 3D shape. But both were optimistic that it will happen, eventually.
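
One crude way to picture "one set of instructions for the whole swarm" is a shared rule that every unit evaluates for itself. The toy below is purely illustrative and has nothing to do with Intel's actual catom designs: each simulated catom derives its own slot on a ring from the same broadcast shape description.

    import math

    def target_position(catom_id, total, radius=10.0):
        # Every catom runs this identical rule; its id alone determines its spot.
        angle = 2 * math.pi * catom_id / total
        return (radius * math.cos(angle), radius * math.sin(angle))

    # Broadcast one instruction ("form a ring of radius 10") to a dozen catoms.
    swarm = [target_position(i, 12) for i in range(12)]
    for i, (x, y) in enumerate(swarm):
        print("catom %2d -> (%6.2f, %6.2f)" % (i, x, y))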

"Sometime over the next 40 years, this will become everyday technology," Rattner said in an interview before his speech. And could catoms actually take human form? "Sure," he said. "Why not? It's an interesting thing to speculate on."

Wireless Power

Another technology that Rattner said will change the way users deal with computers is wireless power. Imagine, he said, being able to take your laptop, cell phone or music player into a room and have them begin to charge automatically. What if it could be done in a certain area of an airport or at your office desk? No more power cords. No more need to find a place to plug in.

Working off of principles proposed by MIT physicists, Intel researchers have been working on what they're calling a Wireless Resonant Energy Link. During his keynote, Rattner demonstrated how a 60-watt light bulb can be powered wirelessly and said that doing so requires more power than would be needed to charge a typical laptop.

"Wouldn't it be neat," he said in the interview, "if we could really cut the cord and not be burdened with all these heavy batteries, and not worry if you have the charger? If we could transmit power wirelessly, think of all the machines that would become much more efficient."

Joshua Smith, a principal engineer at Intel, said in a separate interview that the company's researchers are able to wirelessly power the light bulb at a distance of several feet, with a 70 percent efficiency rate -- meaning that 30 percent of the energy is being lost during the power transfer.
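
That efficiency figure translates directly into how much the transmitter has to supply. A quick back-of-the-envelope check on the numbers quoted above:

    # 60 W delivered at 70 percent end-to-end efficiency.
    bulb_watts = 60.0
    efficiency = 0.70

    input_watts = bulb_watts / efficiency      # what the transmitter supplies
    lost_watts = input_watts - bulb_watts      # the 30 percent that goes missing

    print("transmitter supplies ~%.1f W, of which ~%.1f W is lost" %
          (input_watts, lost_watts))
    # transmitter supplies ~85.7 W, of which ~25.7 W is lost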

Even so, "it's a big step," said Smith. Within a few years, he envisions having laptops that recharge themselves via a wireless connection if they're within 10 feet of a base station.

"You could certainly imagine integrating it into existing computer equipment," Smith added. "You'd have power hot spots in your house or office. Where you have a wireless hot spot, you could [also have a power hotspot] and get data and power there. That's the vision."

Original here

Opinion: Why Google has lost its mojo -- and why you should care

By Preston Gralla

(Computerworld) Google has gone from innovative upstart to fat-and-happy industry leader in what seems like record time. Put simply, the search giant has lost its mojo. That's good news for Microsoft, and it could affect how you use Google's cloud computing services.

Google looks as if it's on top of the world right now, holding an ever-increasing lion's share of the search market. So why do I think it's lost its mojo? Let's start with the way it treats its employees. Google's largesse has been legendary -- free food, liberal maternity and parental leave, on-site massages, fitness classes and even oil changes.

But according to a recent New York Times article, those days may be gone. Google recently doubled the price of its company-run day care, and when employees grumbled, top execs dismissed their concerns, according to the Times. The newspaper reported that Google co-founder Sergey Brin ignored the parents' concerns and complained that he was tired of employees who thought they were entitled to benefits such as "bottled water and M&Ms."

The article's author, Joe Nocera, concludes, "Google has shown that it thinks about day care the same way every other company does -- as a luxury, not a benefit. Judging by what's transpired, that's what Google is fast becoming: just another company."

Another example: Google employees have started deserting the company. In one of the strangest turnarounds, Sergey Solyanik, who was development manager for Windows Home Server at Microsoft before he left for Google, abandoned Google to return to Microsoft -- and he blogged about it. Solyanik is not alone; plenty of other Googlers have headed for the exits as well.

Need more evidence that the mojo is gone? Consider this: Google's stock price has plummeted about 34% from more than $740 per share in November 2007 to about $490 early last week. That's even worse than the overall market: The Nasdaq fell 16% and the Dow 17% in the same period. Once a company's stock price follows the market rather than setting its own course, its innovative days are often behind it.

Even if Google has lost its mojo, why should you care? It won't make your searches any less effective, will it?

No, your searches won't suffer. But Google has its eyes on bigger things than search, notably your IT department. It's looking to displace Microsoft with hosted services like Google Apps, Gmail and Google Docs.

When Solyanik left Google, he had this to say about Google services such as Gmail and Google Docs: "There's just too much of it that is regularly broken. It seems like every week 10% of all the features are broken.... And it's a different 10% every week -- the old bugs are getting fixed, the new ones introduced."

Worse yet, he warned that Google's engineers care more about the "coolness" of a service than about the service's effectiveness. "The culture at Google values 'coolness' tremendously, and the quality of service not as much," Solyanik said.

All this is clearly very good news for Microsoft. Microsoft has already lost the search market to Google. If Google ever gets a serious foothold in IT, Microsoft is in trouble.

So what does it mean for you? If you're thinking of making the jump to Google hosted services, look beyond the magic of the brand name. Instead, take a hard look at the services it's trying to sell you, and evaluate Google the same way you would any other vendor.

And the next time you use Gmail, Google Calendar or Google Docs, take a close look at the service's logo. You'll notice the word beta there, even though some of those services have been around for several years; Gmail, for example, was launched in 2004. If Google is really ready for IT prime time, shouldn't it move its software out of the beta cycle?

Preston Gralla is a Computerworld contributing editor and the author of more than 35 books, including How the Internet Works and Windows Vista in a Nutshell. Contact him at Preston@gralla.com.

Original here

Google's food perks on the chopping block

There's no such thing as a free dinner. A worker at Google tells us the company is taking evening meals off the menu: "Google has drastically cut back their budget on the culinary program. How is it affecting campus? No more dinner. No more tea trolley. No more snack attack in the afternoon." The changes will be announced to Googlers on Monday. Workers at the Googleplex will remain amply fed, with free breakfast and lunch -- dinner will be reserved for geeks only -- but it's still a shocking cutback.

Last year, when we aired the mildest speculation about Google cutting back on free food, commenters were outraged. Google has long milked its cafeterias for their publicity value; company executives have crowed about the company's resistance to recessions and its commitment to coddling its employees. Founders Larry Page and Sergey Brin even promised shareholders they'd add perks, rather than cut them.

In 2004, they wrote:

We provide many unusual benefits for our employees, including meals free of charge ... We are careful to consider the long term advantages to the company of these benefits. Expect us to add benefits rather than pare them down over time. We believe it is easy to be penny wise and pound foolish with respect to benefits that can save employees considerable time and improve their health and productivity.

What went wrong? For one, Google handed its restaurants over to an outside management company, Bon Appétit, which runs many Valley corporate cafeterias. The change did not go well, with Google and Bon Appétit constantly clashing — even over minor things, like whether kitchen workers could use Google's foosball tables. Star executive chefs like Nate Keller and Josef Desimone left. Desimone, who was recruited by Facebook, took many chefs with him.

The departures left Google's kitchens understaffed even as it undertook an expansion of its cafes to Alza Plaza, an office complex close to the Googleplex it acquired last year. Bon Appétit simply didn't have the staff to keep offering dinner, and Google didn't want to foot the bill to hire more.

Could this all stem from a change of heart by Google's formerly perk-crazy founders? Sergey Brin is said to have complained about employees' overweening sense of entitlement to "bottled water and M&Ms," a comment company flacks denied he made. Regardless of what Brin precisely said, it makes sense that he'd rethink his generosity. Brin has made his billions already. Spending to keep his employees motivated at startup levels won't pay off. Tightening the belt to keep profit margins high? That, and not free dinners, will preserve Brin's outlandish wealth.

The savings from cutting dinner, as well as some snacks, should be substantial. By one estimate, Google spends $7,500 a year on food per employee. But the phrase "per employee" is used loosely here. Employees often took their spouses and children out to dinner at the cafes, or wrapped up food to take home for the family — on shareholders' dime.

Fine, some Googlers abused the perk. Even then, consider the message Google is sending to employees: Go home and have dinner with your families. What will Thunder Parley, Google's self-appointed in-house food critic, say? This is a slashing of benefits Google executives can't sugarcoat.

Original here

Google: "No Trespassing" signs won't stop Street View

By David Chartier

Google has once again managed to stir up debate about the existence of privacy in our highly connected culture as it argues that the "No Trespassing" sign in your front yard is just for show.

Back in April, a Pittsburgh couple sued Google for driving up a private road, taking pictures of their house, then posting them for the world to see with Google Maps' Street View feature. At the time, Google argued that pictures and other details of the house were already on the Internet due to its previous sale listing. The company was also quick to point out that the couple compounded the attention drawn to the photos it took by filing a lawsuit instead of using Google's tools for requesting a removal of the images.

The debate centered around an "opt-in or opt-out" conundrum. In this media-rich society where anyone can snap a photo with a free mobile phone and instantly share it with the world, is it the responsibility of companies like Google to prevent unauthorized content from leaking onto its services? Or is an (ideally) effective set of content removal policies and tools enough, and we all just have to roll with the punches as they come?

More complaints surfacing against Google's persistent Street View cars appear to side with the former view, especially when Google is funding the data collection and has control over how it's done.

"It isn't just a privacy issue; it is a trespassing issue with their own photos as evidence," Betty Webb, a Humboldt County, CA, resident told The Press Democrat. Webb and residents of other counties like Sonoma are complaining now that Google's drivers have flat-out ignored over one hundred private roads, "No Trespassing" signs, and at least one barking watchdog in their quest to photograph roads and homes.

A Google spokesperson advised that users upset about images they believe shouldn't be on Street View can use the "report inappropriate image" link. "After verification, the image will be removed or a clearly identifiable face will be blurred," the spokesperson said. "If found to be inappropriate or sensitive, the image will be removed permanently. We act quickly to review and act upon imagery that users have requested to be blurred or taken down."

Another spokesperson told The Press Democrat that company policy is "to not drive on private land." Google apparently tries to hire local drivers who will know their territory and have a better chance of knowing what areas and roads are off limits to the public. One anonymous Street View driver, however, told the paper that he was simply told to "drive around" and collect images.


Some people wish Google Street View would not capture certain scenes

Google also claims that "turning around in a private driveway while photographing the exterior of a home is not a substantial intrusion."

In defending itself against the aforementioned Pittsburgh couple's lawsuit, the company has argued that "complete privacy does not exist." At various times, the company has stated that the existence of things like satellite photography means that Street View photography is just icing on the cake. Plus, UPS drivers and strangers needing to turn around are allowed to pull into people's driveways; why not Google?

Google has seen unbridled success throughout the construction of its empire on the Web, largely because of the Web's fundamental nature. It's a public forum on which individuals and businesses willingly place websites, and search engines like Google are the gatekeepers for finding those sites. Sometimes, a site owner doesn't want their creation to be indexed by Google, and there are very simple tools for turning away its indexing bots. Now site owners may still invite whomever they choose to visit the site and grant authorized access through various methods. But when these tools are put in place, Google doesn't crawl the site, and its owner can sleep a little better knowing that Google respected their digital privacy.
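
The "very simple tools" in question are presumably the robots exclusion standard: a plain robots.txt file at the root of a site, which well-behaved crawlers, Googlebot included, consult before indexing. As a minimal sketch (the rules and URLs below are invented for illustration), Python's standard library can parse such a file and report what a compliant bot would be allowed to fetch:

    # Minimal sketch of the robots.txt opt-out and how a compliant crawler
    # reads it. The rules and URLs are invented examples.
    from urllib import robotparser

    rules = [
        "User-agent: Googlebot",
        "Disallow: /",          # ask Google's crawler to stay out entirely
        "",
        "User-agent: *",
        "Disallow: /private/",  # everyone else: only /private/ is off limits
    ]

    parser = robotparser.RobotFileParser()
    parser.parse(rules)

    print(parser.can_fetch("Googlebot", "http://example.com/index.html"))     # False
    print(parser.can_fetch("SomeOtherBot", "http://example.com/index.html"))  # True
    print(parser.can_fetch("SomeOtherBot", "http://example.com/private/x"))   # False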

In the real world, things like private roads and trespassing signs serve the same purpose as the tools Google provides for turning away its indexing robots; they are opt-out mechanisms from an earlier age. Forcing people to build a private road, erect a sign, and then still use some online tools to have the pictures pulled (after already being available to the world) seems unduly burdensome on a common-sense level, and it has little to do with whether a stranger pulls into your driveway simply in order to turn around. What the courts will conclude, however, remains in doubt. A Google spokesperson reiterated for Ars the search giant's position that the Pittsburgh couple's lawsuit is "without merit" and called the couple's decision to head to court "unfortunate."

As more of these Street View complaints bring Google's practices into the spotlight, it's becoming clear that people will cling tenaciously to their privacy, whether or not Google believes it to be an illusion.

Original here

Uncovering The Dark Side of P4P

Written by Ernesto

P4P is touted as the new and improved P2P. The technology has the potential to lower bandwidth costs for ISPs and speed up downloads for P4P-enabled filesharing clients. There is a dark side to this new technology, though. The strong anti-piracy connections are fuel for conspiracy theorists, and Net Neutrality might be at stake.

Earlier this week, researchers from Yale University and The University of Washington presented the latest findings from their P4P research. P4P is a new technology that could make any filesharing application (including BitTorrent) cheaper for ISPs, as it tries to connect to local peers as much as possible. Local traffic is cheaper for ISPs and reduces the load on the network. In addition, P4P-enabled filesharing clients will download files faster than regular clients.

In theory this is a great idea. However, P4P requires collaboration between the developers of filesharing clients and ISPs, which might be a problem. Indeed, most P2P companies TorrentFreak talked to are not that excited about the initiative, but they won’t say so out loud and are playing along for the time being.

There might even be a darker side to the project, as the P4P working group includes some prominent members of the entertainment industry and well-known anti-piracy lobbyists. Besides that, we argue that the technology is likely to slow down transfers for people on ISPs that don’t end up supporting it, raising serious Net Neutrality issues.

Let’s start off by looking at the mission statement of the P4P working group, which was founded last year. One of the key objectives of the group, quoted from their official mission statement (pdf) is as follows (emphasis added).

[to] Determine, validate, and encourage the adoption of methods for ISPs and P2P software distributors to work together to enable and support consumer service improvements as P2P adoption and resultant traffic evolves while protecting the intellectual property (IP) of participating entities

It might of course be that the P4P group included this objective to cover their asses. However, we have our doubts. For now, the technical specs give no reason to believe that the new technology will support piracy filters or other anti-piracy measures. But, when you consider that the MPAA, NBC Universal and several other representatives from the entertainment industry are members of the working group, this might very well be suggested in the next phase of the project.

One might wonder: why is the MPAA involved in all this? Obviously their agenda is to stop copyright infringement, so we have every reason to believe that they will try to steer P4P in this direction as well. This would not be a big surprise, really. The P4P working group was founded by The Distributed Computing Industry Association (DCIA), a collaboration of the entertainment industry, ISPs and P2P companies. The purpose of the DCIA is clear, as we can read on their website (emphasis added):

Our number one priority clearly is the elimination of copyright infringement and, because DCIA advocates the commercial development of distributed computing (as opposed for example to trying to stop it), our key strategy centers on proliferating legitimate commercial services to displace unauthorized media file sharing currently being conducted by consumers on a massive scale.

This shows the P4P working group from a whole other perspective, doesn’t it? We have no doubt that the researchers involved have the best of intentions, and that they really want to develop a new technology that benefits P2P users and ISPs. We also believe, however, that the MPAA and the other rights holders who are part of the project will push their agenda sooner or later.

The DCIA collaboration is an initiative from Hollywood’s big shots and several of the larger technology corporations. Back in 2002, both sides got together and decided that it would be a good idea to start a working group to keep an eye on future technological developments. Below, we quote a paragraph from one of the original letters (pdf) discussing the matter, signed by the CEOs of the MPAA, Walt Disney, Sony Pictures, AOL Time Warner, Vivendi Universal, Metro-Goldwyn-Mayer, Viacom and News America (emphasis added).

We thus propose the establishment of a new high level working group, independent or as part of an existing process, to find technical measures that limit unauthorized peer-to-peer trafficking in movies, music and other entertainment content.

And so the DCIA was born, which later started the P4P working group. We will leave it up to readers to decide whether this is a serious threat or not; we will find out sooner or later anyway.

There is one other “dark” aspect of P4P we want to mention though, something that hasn’t been reported elsewhere, even though it can have some very negative consequences for P2P users.

By looking at the latest P4P research report, we come to the conclusion that P4P might slow down the downloads of people who use non-P4P clients, or those who are on an ISP that doesn’t support P4P. This is because P4P users will be more likely to share with local peers, while regular P2P users share with everyone (note that both can be in the same swarm). This goes against Net Neutrality principles, although this depends on how one defines Net Neutrality.

Since P4P prioritizes local traffic, P4P users will share less with users who do not use the technology. This will affect both the upload and the download side, but the data in the report seems to suggest that the give-and-take ratio is worse when P4P is enabled: P4P users take relatively more from other ISPs than they give back (mild leeching). This is most likely facilitated by the fact that upload speeds tend to be slower than download speeds.
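
To make that mechanism concrete, here is a deliberately simplified sketch of locality-biased peer selection. It is not the actual P4P protocol or any real client's code, just an illustration of how weighting same-ISP peers more heavily shifts traffic away from peers on other networks:

    # Hypothetical sketch of locality-biased peer selection, not the P4P spec:
    # peers on the caller's own ISP are chosen several times more often, so less
    # upload capacity is offered to peers on other networks.
    import random
    from dataclasses import dataclass

    @dataclass
    class Peer:
        address: str
        isp: str  # in P4P this locality hint would come from ISP-supplied data

    def pick_peers(candidates, my_isp, count, local_bias=3.0):
        """Draw `count` peers, weighting same-ISP peers `local_bias` times higher."""
        weights = [local_bias if p.isp == my_isp else 1.0 for p in candidates]
        return random.choices(candidates, weights=weights, k=count)

    # Toy swarm: half the peers are on ISP-A, half on ISP-B.
    swarm = [Peer(f"10.0.0.{i}", "ISP-A" if i % 2 else "ISP-B") for i in range(1, 21)]
    print([(p.address, p.isp) for p in pick_peers(swarm, my_isp="ISP-A", count=5)])

A regular client in the same swarm keeps picking from the full list uniformly, which is why the report's give-and-take ratios shift once some peers start preferring their neighbours.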

Let’s conclude by saying that the researchers from Yale University and The University of Washington came up with a promising technology that could potentially speed up P2P downloads, at least for some users. Getting ISPs and filesharing developers to embrace this new technology will not be easy, though. ISPs will surely be motivated, as it will save them money. However, we’re not so sure that BitTorrent client developers (and others) will adopt it so easily, since it might degrade performance on non-P4P ISPs.

The largest threat (as usual) might come from the anti-piracy lobby, as they will probably push for content filters or other anti-piracy measures. They haven’t done this so far, but to us this seems to be inevitable.

Original here

Human exoskeleton suit helps paralyzed people walk


By Ari Rabinovitch

HAIFA, Israel (Reuters) - Paralyzed for the past 20 years, former Israeli paratrooper Radi Kaiof now walks down the street with a dim mechanical hum.

That is the sound of an electronic exoskeleton moving the 41-year-old's legs and propelling him forward -- with a proud expression on his face -- as passersby stare in surprise.

"I never dreamed I would walk again. After I was wounded, I forgot what it's like," said Kaiof, who was injured while serving in the Israeli military in 1988.

"Only when standing up can I feel how tall I really am and speak to people eye to eye, not from below."

The device, called ReWalk, is the brainchild of engineer Amit Goffer, founder of Argo Medical Technologies, a small Israeli high-tech company.

Something of a mix between the exoskeleton of a crustacean and the suit worn by comic hero Iron Man, ReWalk helps paraplegics -- people paralyzed below the waist -- to stand, walk and climb stairs.

Goffer himself was paralyzed in an accident in 1997 but he cannot use his own invention because he does not have full function of his arms.

The system, which requires crutches to help with balance, consists of motorized leg supports, body sensors and a backpack containing a computerized control box and rechargeable batteries.

The user picks a setting with a remote control wrist band -- stand, sit, walk, descend or climb -- and then leans forward, activating the body sensors and setting the robotic legs in motion.
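
Purely as an illustration of that interaction (this is not Argo's firmware, and the tilt threshold is an invented number), the control flow might look roughly like this: the wristband sets the mode, and a forward lean past a threshold triggers the corresponding movement.

    # Hypothetical sketch of the interaction described above; not ReWalk's real
    # control software. A wristband selects the mode, and a forward-lean reading
    # from a torso tilt sensor starts the corresponding movement.
    from enum import Enum

    class Mode(Enum):
        SIT = "sit"
        STAND = "stand"
        WALK = "walk"
        CLIMB = "climb"
        DESCEND = "descend"

    LEAN_THRESHOLD_DEG = 10.0  # invented value, purely for illustration

    def control_step(selected_mode: Mode, torso_tilt_deg: float) -> str:
        """Decide what the leg motors do in one control cycle."""
        if torso_tilt_deg < LEAN_THRESHOLD_DEG:
            return "hold position"  # no forward lean yet: stay put
        return f"execute {selected_mode.value} gait"

    print(control_step(Mode.WALK, torso_tilt_deg=4.0))   # hold position
    print(control_step(Mode.WALK, torso_tilt_deg=15.0))  # execute walk gait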

"It raises people out of their wheelchair and lets them stand up straight," Goffer said. "It's not just about health, it's also about dignity."

EYE CONTACT

Kate Parkin, director of physical and occupational therapy at NYU Medical Centre, said it has the potential to improve a user's health in two ways.

"Physically, the body works differently when upright. You can challenge different muscles and allow full expansion of the lungs," Parkin said. "Psychologically, it lets people live at the upright level and make eye contact."

Iuly Treger, deputy director of Israel's Loewenstein Rehabilitation Centre, said: "It may be a burdensome device, but it will be very helpful and important for those who choose to use it."

The product, slated for commercial sale in 2010, will cost as much as the more sophisticated wheelchairs on the market, which sell for about $20,000, the company said.

The ReWalk is now in clinical trials in Tel Aviv's Sheba Medical Centre and Goffer said it will soon be used in trials at the Moss Rehabilitation Research Institute in Pennsylvania.

Competing technologies use electrical stimulation to restore function to injured muscle, but Argo's Chief Operating Officer Oren Tamari said they will not offer practical alternatives to wheelchairs in the foreseeable future.

Other "robot suits", like those being developed by the U.S. military or the HAL robot of Japan's University of Tsukuba, are not suitable for paralyzed people, he said.

(Editing by Jeffrey Heller and Mary Gabriel)

Original here

How to Travel at a Million Files a Minute


HOWARD BLOOM has a need for speed.

A self-described Web addict, Mr. Bloom, a 65-year-old author who lives in New York, would rather be unconscious than offline. “If my computer’s down and I can’t use the Internet, I sleep,” he says. “I do not have motivation to be alive that day.”

But Mr. Bloom’s problem is not power failures; it is slowdowns.

“The Web is agonizingly and inconsistently slow,” he says. “The whole thing needs to be faster.”

In the beginning, the Internet was fast enough for most. Checking e-mail and reading a Web site did not require scads of bandwidth, and any broadband connection was more than enough.

But a funny thing happened in the last few years: Many people became power users.

People started buying albums from iTunes. They started downloading episodes of “Mad Men.” And they watched endless videos on YouTube. All this added activity calls for a faster Internet.

When that day comes, there will be much rejoicing. But until then, there are things users can do to make sure their connections are as fast as possible. Here are a few ways:

GET THE RIGHT CONNECTION Consumer broadband is split between two competing technologies: digital subscriber line, or DSL, and high-speed cable. Depending on how many providers are in your area, you may have a choice. Generally speaking, cable is faster.

But — and there is always a “but” — cable speeds can be affected by how many people are online at once and even by how close your connection is to your local broadband source.

There is also a new competitor to cable and DSL. Verizon has been introducing a new fiber optic service called FiOS, which is faster than DSL. It rivals, and may exceed, cable’s fastest speeds. At this point, FiOS is only available in some areas (for a listing, see verizon.com/fios).

Even with cable and DSL, there are choices of speeds. Most service providers offer more than one tier of Internet access. Pay a higher monthly fee, and you will have faster maximum download and upload speeds.

There is a catch. What is advertised as a maximum is rarely reached in practice. Many Internet service providers do not configure their services to the maximum speed because they are configuring them for shared networks. Their aim is to make sure most users are satisfied, not to cater to the small group that wants the fastest possible speeds.

For example, raising your maximum download threshold from 1.5 megabits per second to 10 megabits per second (the tiers offered by Time Warner Cable) should have an overall positive effect on your browsing and downloading.
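
If you are curious how close your connection actually gets to its advertised tier, a quick and admittedly unscientific check is to time a large download and convert the result to megabits per second. Here is a rough Python sketch; the URL is a placeholder, so substitute any large file hosted near you:

    # Rough throughput check: time a download and convert bytes per second
    # into megabits per second. The URL below is a placeholder, not a real file.
    import time
    import urllib.request

    TEST_URL = "http://example.com/largefile.bin"

    def measure_mbps(url, max_bytes=5_000_000):
        start = time.time()
        with urllib.request.urlopen(url) as response:
            data = response.read(max_bytes)  # read up to roughly 5 MB
        elapsed = time.time() - start
        return (len(data) * 8) / elapsed / 1_000_000  # bits / seconds / million

    print(f"Approximate download speed: {measure_mbps(TEST_URL):.1f} Mbit/s")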

UPDATE YOUR COMPUTER The second link in an Internet connection is your computer itself. Web sites today demand more from your PC than they did even two years ago, so some upgrades may be in order.

The single most effective thing you can do to improve your computer's performance is to add more random access memory, or RAM. If your machine has less than 2 GB of RAM, you should add more. Fortunately, adding RAM is not terribly expensive (an additional gigabyte costs around $40) and is fairly easy to do (you might need a small screwdriver and, at most, five minutes). For an instructional video on how to install RAM, go to cnettv.cnet.com and search for "adding RAM." The first search result will show you how.

Aside from hardware upgrades, some other tweaks can eke out some extra speed from your PC. The more programs you leave open and running, the more your computer has to keep track of, so close applications you are not using.
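
If you are not sure how much memory your machine already has before buying more, a couple of lines of Python will tell you. This sketch assumes the third-party psutil package is installed (pip install psutil):

    # Check installed RAM before deciding whether an upgrade is worthwhile.
    # Assumes the third-party psutil package is installed.
    import psutil

    total_gb = psutil.virtual_memory().total / (1024 ** 3)
    print(f"Installed RAM: {total_gb:.1f} GB")
    if total_gb < 2:
        print("Less than 2 GB: adding more memory should noticeably help.")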

If you’re connecting to the Internet through a wireless connection, limiting the number of users who share the wireless signal can help.

It also just may be a good time to invest in a new computer. If your PC’s central processing unit is running at anything less than 1.5 gigahertz, you will have a tough time keeping up with graphics-heavy Web sites.

For Windows users, viruses can also be a major bottleneck for online connections. If you don’t have it already (and you really should have it already), antivirus software from companies like McAfee or Symantec will keep your PC clear of unwanted programs.

TWEAK YOUR BROWSER Another player involved in Internet speed is the browser you use to navigate the Web. Choosing the right browser has become pretty simple: Most experts recommend Firefox, which you can download free from mozilla.com/firefox.

Firefox’s open-source architecture means it has been tested and tweaked by far more people than proprietary browsers like Internet Explorer from Microsoft. Firefox also uses less of your computer’s memory, freeing it up to handle other tasks. (Microsoft says it will release an upgrade in August that will increase the speed of Explorer.)

But Firefox’s real advantage is its collection of user-generated add-ons. These are small, free modifications to the Firefox browser that can do many things (like change the browser’s appearance, help manage content and integrate third-party search features).

If you’ve ever noticed that a site is slow to load because of graphics-heavy ads, you can install the Adblock plug-in, which eliminates ads from your browser (blocking ads has benefits beyond improving speed — cleanliness and tranquillity are two that come to mind).

Sites that use a lot of animation (known as Flash animation) can also be slow; Firefox has another plug-in, called Flashblock, that allows you to turn the Flash portions of a site on or off. For these reasons, Macintosh users may also want to download Firefox. While Apple's Safari browser is quick (and far less susceptible to viruses), it does not work with any of these add-ons.

Another simple thing to do is to periodically clear your browser’s cache (what Microsoft calls Temporary Internet Files). Frequently visited pages are stored there for quick access, but things can also get bogged down.

More advanced users may want to adjust some of the more esoteric settings of their browsers. To simplify the process, the Web site Broadband Reports has a “tweak tester” that will suggest settings to modify (dslreports.com/tweaks). Speedguide.net offers a similar tweaker, called TCP Optimizer, as a download for Windows 2000 or XP users (speedguide.net/downloads.php).

Extending beyond browser settings, Google has a free application in beta, or test mode, for Windows users called Google Web Accelerator, available at webaccelerator.google.com. The application aims to speed up the way pages load by employing a variety of strategies (compressing data, storing frequently visited pages and others). Using Web Accelerator means that all your surfing is routed through Google’s computers, but if you’re comfortable with that (all of your surfing already goes through your I.S.P., remember), the application should improve your site browsing (it will not, however, improve download speeds).

All these applications cost nothing, and all are based on proven methods. Enough free downloads are available that anyone charging for a miracle remedy should be looked at askance.

“I do not recommend any software that claims to boost Internet speed, especially ones that cost money,” says Justin Beech, founder of Broadband Reports. “They are selling snake oil.”

Original here