
Monday, August 4, 2008

Historic Victory for Net Neutrality

Comcast tried to stop it. Telecom-funded politicians tried to discourage it. Big Media tried to de-legitimize it. But nothing could stop the people-powered movement to hold Comcast accountable for illegally blocking Internet content. Today, the FCC issued a punishment that has Network Neutrality opponents cringing and the rest of us popping champagne.

In a landmark decision, FCC Chairman Kevin Martin and Commissioners Michael Copps and Jonathan Adelstein approved a bipartisan “enforcement order” that would require Comcast to stop blocking and publicly disclose its methods for manipulating Internet traffic.

Tests by the Associated Press and others showed that Comcast blocked users’ legal peer-to-peer transmissions by sending fake signals that cut off the connection between file-sharers. Today’s decision follows a months-long FCC investigation, launched in response to a complaint from Free Press and Public Knowledge urging the federal agency to stop Comcast’s blocking.

In response to the victory, Josh Silver, Free Press executive director, said: “Comcast’s history of deception and continued blocking shows brazen contempt for the online consumer protections established by the FCC. We commend Chairman Martin and Commissioners Copps and Adelstein for standing up for internet users and working across party lines to protect free speech and the free market.”

Martin Stands with the People

Traditionally a friend to industry, even Martin couldn’t deny Comcast’s culpability.

In his statement this morning, Martin compared Comcast’s blocking practices to a post office discriminating against mail: “Would you be OK with the post office opening your mail, deciding they didn’t want to bother delivering it, and hiding that fact by sending it back to you stamped ‘address unknown – return to sender?’” he asked. “Or if they opened letters mailed to you, decided that because the mail truck is full sometimes, letters to you could wait, and then hid both that they read your letters and delayed them?”

Despite the cable giant’s Hail Mary effort to shame and pressure Martin into bending to its will, Martin stuck by the wisdom that allowing ISPs to block and discriminate against online content is bad for America.

Today he stands with the public. Activists, bloggers, consumer advocates and everyday people lobbied the FCC relentlessly to punish Comcast. The people who use and love the Internet have successfully brought Comcast to justice for trying to stifle consumer choice on the open Internet.

The Fight Continues

This precedent-setting victory sends a powerful message to phone and cable companies that breaking Net Neutrality rules will not be tolerated. And it marks a milestone in the fight to preserve a free and open Internet – and the first time the FCC has enforced the people’s right to see and hear what they want on the Internet without blocking or slowing down content.

This victory is monumental. But the fight to safeguard Net Neutrality is far from over.

Commissioner Copps recognized the struggle ahead, and called for the FCC to adopt a principle that commits the FCC to a policy of network openness. “A clearly stated commitment of nondiscrimination would make clear that the Commission is not having a one-night stand with net neutrality, but an affair of the heart and a commitment for life,” he said in a statement.

Already, more than 1.6 million people have contacted the FCC and Congress to protect Net Neutrality. The calls, petitions and e-mails must not stop. Now is the time to flood our policymakers with the message that we demand an open and free Internet now and always.

Original here

Majority of banking websites found insecure

A new study from the University of Michigan has found that more than 75 percent of banking websites are not completely up to snuff when it comes to security.

The study looked at 214 financial institution websites and focused on both design flaws and improper security practices. None of these flaws represent catastrophic security issues, but many could allow for easier access to your password and user name should a malicious hacker come calling.

The flaws studied included the following:

Insecure Login System

Nearly half of the banks examined had "secure" login systems on insecure web pages that did not use the SSL protocol. Failure to use SSL, the study says, opens the door to a "man in the middle" attack, in which login details can be intercepted, for example when a user is accessing the site wirelessly. The study notes that most banks secure the internal portions of their sites, but many leave the login page unsecured.
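
A quick way to spot this flaw yourself is to check whether a login page answers over plain HTTP instead of redirecting to HTTPS; a minimal sketch, using a hypothetical bank URL:

# Does the login page load over plain HTTP rather than redirect to https://?
curl -sI http://www.examplebank.com/login | head -n 1
# An "HTTP/1.1 200 OK" here, instead of a 301/302 redirect, is a warning sign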

Putting Contact Info on an Insecure Page

The biggest flaw of the bunch (55 percent failing the test): A similar attack to the above could simply let a hacker change the phone number listed on the contact info page, redirecting customers to a phony call center ready to snap up their user name and password.

Redirecting Outside the Bank Without Warning

When users are directed to third-party services (like, say, bill payment sites), the bank doesn't warn them of the change. A user may not know whether what he's seeing is trustworthy or not.

Using Social Security Numbers or Email Addresses as User IDs

These are simple things to guess or find out, especially email addresses. Banks should allow users to create a custom user name and enforce a policy against weak passwords, but 28 percent of the banks tested did not.

Emailing Secure Information Insecurely

Things like password resets and financial statements should be sent securely: Passwords, for example, should never be sent as plain text, yet 31 percent of banks failed this test.

The full study (10 pages, PDF link) can be reviewed here. Specific sites failing the various tests were not revealed. Also note that the study was performed back in 2006 (the results are only being published now), so things may have improved since the original analysis.

Original here

The biggest military hacker of all time did his work over a 56k modem

Gary McKinnon, a British computer expert, claims he's just fascinated with UFOs. Using his home computer and a modem — how WarGames! — he infiltrated military networks and accessed thousands of computers trying to find evidence of alien contact. Now caught and having lost an appeal with the British courts, he's awaiting extradition to the United States to stand trial, accused of the "biggest military hack of all time." The full list of his computer-exploiting prowess:

Using his own computer at home in London, McKinnon hacked into 97 computers belonging to and used by the U.S. government between February 2001 and March 2002.

McKinnon is accused of causing the entire U.S. Army's Military District of Washington network of more than 2,000 computers to be shut down for 24 hours.

Using a limited 56-kbps dialup modem and the hacking name "Solo," he found that many U.S. security systems used an insecure Microsoft Windows program with no password protection.

He then bought off-the-shelf software and scanned military networks, saying he found expert testimonies from senior figures reporting that technology obtained from extra-terrestrials did exist.

At the time of his indictment, Paul McNulty, U.S. Attorney for the Eastern District of Virginia, said: "Mr. McKinnon is charged with the biggest military computer hack of all time."

If found guilty, McKinnon could be jailed for 70 years and fined as much as $1.75 million.

Original here

Homeland Security: We can seize laptops for an indefinite period

Posted by Declan McCullagh

The U.S. Department of Homeland Security has concocted a remarkable new policy: It reserves the right to seize laptops taken across the border for an indefinite period of time.

A pair of DHS policies from last month say that customs agents can routinely--as a matter of course--seize, make copies of, and "analyze the information transported by any individual attempting to enter, re-enter, depart, pass through, or reside in the United States." (See policy No. 1 and No. 2.)

DHS claims the border search of electronic information is useful to detect terrorists, drug smugglers, and people violating "copyright or trademark laws." (Readers: Are you sure your iPod and laptop have absolutely no illicitly downloaded songs? You might be guilty of a felony.)

This is a disturbing new policy, and should convince anyone taking a laptop across a border to use encryption to thwart DHS snoops. Encrypt your laptop, with full disk encryption if possible, and power it down before you go through customs.
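
For travelers running Linux, here is one hedged sketch of that idea: keep sensitive files on a LUKS-encrypted partition and lock it before reaching the border (the device name /dev/sda3 is an assumption for illustration):

# One-time setup: encrypt a spare partition (assumed here to be /dev/sda3)
cryptsetup luksFormat /dev/sda3
cryptsetup luksOpen /dev/sda3 secure
mkfs.ext3 /dev/mapper/secure
mount /dev/mapper/secure /mnt/secure
# Before customs: unmount, lock, and power the machine all the way down
umount /mnt/secure
cryptsetup luksClose secure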

Here's a guide to customs-proofing your laptop that we published in March.

It's true that any reasonable person would probably agree that Customs agents should be able to inspect travelers' bags for contraband. But seizing a laptop and copying its hard drive is uniquely invasive--and should only be done if there's a good reason.

Sen. Russell Feingold, a Wisconsin Democrat, called the DHS policies "truly alarming" and told the Washington Post that he plans to introduce a bill that would require reasonable suspicion for border searches.

But unless Congress changes the law, DHS may be able to get away with its new rules. A U.S. federal appeals court has ruled that an in-depth analysis of a laptop's hard drive using the EnCase forensics software "was permissible without probable cause or a warrant under the border search doctrine."

At a Senate hearing in June, Larry Cunningham, a New York prosecutor who is now a law professor, defended laptop searches--but not necessarily seizures--as perfectly permissible. Preventing customs agents from searching laptops "would open a vulnerability in our border by providing criminals and terrorists with a means to smuggle child pornography or other dangerous and illegal computer files into the country," Cunningham said.

The new DHS policies say that customs agents can, "absent individualized suspicion," seize electronic gear: "Documents and electronic media, or copies thereof, may be detained for further review, either on-site at the place of detention or at an off-site location, including a location associated with a demand for assistance from an outside agency or entity."

Outside entity presumably refers to government contractors, the FBI, and the National Security Agency, which can also be asked to provide "decryption assistance." Seized information will supposedly be destroyed unless customs claims there's a good reason to keep it.

An electronic device is defined as "any device capable of storing information in digital or analog form" including hard drives, compact discs, DVDs, flash drives, portable music players, cell phones, pagers, beepers, and videotapes.

Original here

How to Create The Ultimate Windows XP Installation CD/DVD


Kudos to my friend Rado for writing such a great guide on how to create the ultimate Windows XP Installation CD/DVD. This tutorial will guide you through the process of creating an unattended Windows installation CD with the latest hotfixes, drivers, DirectX, IE7, WMP11, Office 2007 and any other software that you would like to include on the CD. Remove useless components, apply tweaks and system hacks for the highest possible performance and productivity.

Don’t miss the video version of this tutorial, where you can see the whole process of creating a fully customized, unattended installation CD and the result tested and running in a VMware virtual machine.

Requirements:

- nLite 1.4
- .NET Framework 2.0
- WinXP CD

Steps:

1. Copy the entire content of your WinXP CD in some local folder.

2. Install .NET Framework 2.0 and nLite.

3. Choose your preferred language on the first screen.

4. Tell nLite where your WinXP files are. Select “Browse” and navigate to the local folder you created in step 1.

5. Choose what you want to do. You can choose only one operation, all of them, or any combination. For example, you can choose to create an ISO and skip the rest.

6. Service Pack: Slipstream a Service Pack into the installation. Just download SP2 for WinXP and nLite will do the rest. If you integrate SP2 you will not need SP1 because SP2 supersedes it.


7. Hotfixes, Addons and Update Packs: Add hotfixes and/or update packs to your installation. Any addons for nLite that you add here will also be installed silently during Windows Setup.



How to get all hotfixes after SP2:

* Use RyanVM Post SP2 Update Pack.
* Install WinXP. Run Windows Update and write down all the required hotfixes. Download them manually from Microsoft.
* Windows Updates Downloader - let this program download all hotfixes for you.
* Information about the hotfix releases can be found at MSFN, TheHotfixShare and SoftwarePatch


* Windows Media Player 11: Download WMP11 Integrator and the WMP11 installer. Use WMP11 Integrator to slipstream WMP11 into your WinXP CD before making any changes with nLite. Once WMP11 is slipstreamed you can proceed with nLite.

* Internet Explorer 7.0: Just download IE7 and slipstream the .exe with nLite.

* Office 2007: Work in progress!

* Addons: Download more than 350 addons for nLite from WinAddons. These .cab addons will be installed silently during Windows Setup.

Video Tutorial : Slipstream Windows Media Player 11

Video tutorial: Slipstream Internet Explorer 7 with nLite

Video tutorial: Integrate addons with nLite

8. Drivers: Integrate drivers into the installation. Browse to some .inf files and nLite will do the rest. Thanks to http://driverpacks.net/ you can create an installation CD with drivers for almost any piece of hardware. During setup, Windows will use only the drivers it requires and will completely ignore the others. Unused drivers won’t be copied to your hard drive.


Video tutorial: Integrate drivers with nLite

9. Components: Select the components you want to remove from the installation. Make sure to read the short info before removing components, especially those in red. Check the Components Removal Example to get an idea of the most important components you should not remove.


By clicking on Advanced you will be given the opportunity to keep some specific files. For example, you can remove “Command Line Tools” but preserve ping.exe, ipconfig.exe, etc., which are part of this component, by adding them to the Keep Box.

Power users can ignore the Compatibility Wizard.

10. Unattended: Set personal settings, such as users, CD key and regional settings, in advance so you don’t have to enter them during the installation. Here you can also add Windows themes.

Video tutorial: Unattended installation CD with nLite
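
Under the hood, nLite stores these choices in a standard unattended answer file (WINNT.SIF) on the finished CD. A minimal sketch of what such a file contains; every value below is a placeholder, not a real setting:

; WINNT.SIF (fragment) - generated by nLite; all values here are examples only
[Unattended]
UnattendMode=FullUnattended
[UserData]
ProductKey=XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
FullName="Your Name"
ComputerName=MYPC
[RegionalSettings]
Language=0409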

Adding themes: are you using some neat Windows theme and want it on your CD as well? Here is an example of how to slipstream Luna Element 4 and set it as the default theme:

Video tutorial: Integrate themes with nLite

11. Options: You can pretty much ignore the General tab and go directly to Patches:

* Maximum unfinished simultaneous connections (TCP/IP patch): Set it to 100 or 1000 for max P2P performance.

* USB Port Polling Frequency (Hz): Increase it for smoother USB mouse movement. Not for wireless mice or any other USB device; use with caution! Works on the Logitech MX, MS IntelliMouse Explorer 3, Razer Viper and possibly others.

* Unsigned Themes Support (Uxtheme Patch): Set it to enabled and you will be able to use 3rd-party themes (from DeviantArt, for example).

* SFC (Windows File Protection): Set it to disabled to stop the automatic recovery of replaced or deleted system files and folders. Although it might sound like a useful feature, it’s highly recommended to disable it. The duration of your installation will be reduced drastically.

Video tutorial: Patch Windows with nLite

12. Tweaks: this is pretty much self-explanatory. Apply your favorite registry tweaks and configure Windows Services. Once Windows is installed, all your tweaks will be applied; no need for post-install tuning.


It’s possible to configure the services as well.
Here you will find an excellent Windows Services Guide.


13. Bootable ISO: we’re almost ready. Once the ISO is created, just burn it to CD or test it in a virtual machine. You can burn the ISO with nLite or with your favorite CD/DVD burning application, such as Nero. It’s recommended to use rewritable media (CD-RW or DVD-RW) to avoid wasting discs in case you’re not happy with your WinXP copy and want to create another one.


Video tutorial: Create a Bootable CD with nLite

Video tutorial: Burn ISO files with Nero

Video tutorial: Test ISO in VMware Workstation 5.5 – shareware

Video tutorial: Test ISO in Virtual PC 2004 – freeware

Video tutorial: Test ISO in VirtualBox 1.5 – freeware

I hope you enjoyed the guide. Once again, I’d like to thank Rado @ WinAddons for allowing me to post it here and for coming up with such neat stuff.

Original here

A practical experience: Fedora vs Ubuntu

Linux is out there. In the case of some highly specialized distributions, Linux is WAY out there. Thankfully there are a number of solid distros that make installing and using Linux as your every-day OS fairly painless.

So … Which Linux distribution is right for you?

That ultimately depends on your particular needs. There are countless variations from which to choose. Distro Watch is a good resource for looking over the different distributions available. I’ve found that two distributions in particular seem to fit my needs. Perhaps they’ll fit yours as well. Ubuntu and Fedora are both stable and mature distributions targeted at the desktop. Both distributions ship with a similar array of pre-configured software packages, and both Fedora and Ubuntu default to the Gnome desktop with Compiz.

Ubuntu is based on Debian GNU/Linux. Debian is my favorite distribution: it’s rock solid and fairly universal, running on just about every architecture. There is no proprietary software shipped with Debian, and that’s the reason I don’t use it on my notebook: I require proprietary drivers for my Wi-Fi and audio.

Ubuntu takes the Debian core and makes it much easier to install and configure. In many cases Ubuntu installs properly with almost no user configuration. The Synaptic package manager provides easy access to a wide variety of software, including proprietary and closed source apps which cannot be shipped as part of the distribution due to licensing and copyright issues.

Fedora (formerly Fedora Core) is based on RedHat, which was my first (successful) experience with Linux. RedHat also gave us the RPM (RedHat Package Manager) which made installing software relatively painless for the first time. Fedora doesn’t provide simplified access to a repository of third-party and proprietary software.

What’s the difference?

If you want to listen to MP3 audio, you’re going to need access to a proprietary codec. The package manager in Ubuntu will allow you to download and install these codecs. With Fedora you can either purchase the codecs from Fluendo, or find them on your own.
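
On Ubuntu, for example, MP3 playback is a single package away; a sketch (the exact package name has varied between releases):

# Ubuntu: the "ugly" GStreamer plugin set includes an MP3 decoder
sudo apt-get install gstreamer0.10-plugins-ugly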

All that being said, I find that both Ubuntu and Fedora provide a stable and aesthetically pleasing desktop with the most useful applications pre-installed and ready to use. What made the difference for me was that Ubuntu is somewhat less cohesive than Fedora, in the same way that Linux as a whole is less cohesive than BSD, even though both are free and open source. That’s not necessarily a negative statement, just an observation. RedHat Enterprise Linux works closely with Fedora to determine what community software is stable and mature enough to be included in the fully supported commercial OS.

That relationship doesn’t officially exist between Ubuntu and Debian, although I’m sure popular applications and drivers do make their way to Debian from the Ubuntu community.

So? You didn’t really answer the question, Ubuntu or Fedora?

If you want an easy-to-use desktop that can replace Windows, you probably want Ubuntu. If, on the other hand, you’re looking for a stable and secure desktop and can live without proprietary media codecs, then I suggest Fedora.

Original here

7 Uses of GParted Live


I’ve been using GNU Parted to slice and dice my disks in preference to fdisk for almost as long as I’ve been using Linux. We all fill up our hard drives from time to time, but thanks to GNOME’s GParted, rearranging disk partitions isn’t as terrifying as it used to be. In fact, armed with a GParted Live CD, there’s a swathe of disk-space fiddling jobs I can tackle without gnawing my fingers to the bone:

  1. When you’ve filled up your root partition, you can’t resize it while you’re booted from it. Reboot your machine from the GParted Live CD, and tinker even with your root partition.
  2. In the olden days, I used to keep /home in a separate partition, so that I could change distros (or install a new release from CD without giving all my trust to the new Update Manager upgrade button) by wiping the root partition without touching any of my personal files in /home. Use the GParted Live CD to shrink your root partition and create a new /home; see the sketch below. Don’t forget to move the contents of your old /home directories before changing /etc/fstab!
  3. When your VMWare virtual disk fills up, power it down and run vmware-vdiskmanager -x 12Gb Vista.vmdk to allocate some more space to the disk. In order to add the new space to an existing disk partition, boot VMWare into GParted Live and allocate the new unused space.
  4. When you’ve persuaded a friend to try Linux, as long as you promise they can still keep Windows around in case they decide to go back: don’t give them a slow Live CD, make some room for a new partition at the start of their drive for a full install.
  5. When your friend asks you to put things back how they were, delete the Linux partition, and add the freed space back to their main Windows partition.
  6. Stealing back some space from that unused Vista partition, to make room for keeping more mp3’s in Linux.
  7. Recycling the wasted disk space from a crashy old version of Windows ME into a bigger swap partition for Ubuntu.

Joking aside, it’s insanely helpful to have a GParted Live disk in your pocket when something like this comes up.
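
To make item 2 concrete, here is a minimal sketch of the /home migration, assuming the new partition created with GParted is /dev/sda3 and that you format it as ext3 (both are assumptions for illustration):

# Back in Linux after resizing with the GParted Live CD
mkfs.ext3 /dev/sda3                  # create a file system on the new partition
mkdir /mnt/newhome
mount /dev/sda3 /mnt/newhome
cp -a /home/. /mnt/newhome/          # copy everything, preserving ownership
echo '/dev/sda3 /home ext3 defaults 0 2' >> /etc/fstab
umount /mnt/newhome
mv /home /home.old && mkdir /home    # keep the old copy until you're sure
mount /home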

Hopefully it goes without saying that you need to have current backups of all the partitions you want to move or resize before you let loose with any partition manager.

Original here

State of the LinuxWorld

Linux is beginning to find its legs as the foundation in many different technologies and in the process is fueling a feedback loop that is helping accelerate the operating system's popularity.

As more and more people contribute from areas such as mobile, data center power management, and real-time technologies, innovations are arriving rapid-fire, and when folded into the Linux kernel they provide benefits across a wide spectrum.

For example, power management features for the data center are being tapped to help extend battery life in Linux-based mobile devices.

The evidence of the cooperation will be on display at next week's LinuxWorld conference in San Francisco.

(Disclosure: IDG, the parent company of both Network World and PC World, also operates LinuxWorld.)

The conference is expected to draw 10,000 attendees to nearly 100 sessions and 200 exhibitor booths. In addition, there is a mini-conference on Mobile Linux, the Linux Garage that will highlight the latest embedded-Linux gadgets, an install fest to benefit San Francisco-area schools, an open source voting demonstration and the annual Penguin Bowl, which will pit teams dedicated to mobile Linux against teams dedicated to server Linux.

"When you look at how people use technology -- embedded systems, mobile computing, mobile internet devices, servers, super computing -- in almost every aspect of technology Linux is emerging as the dominant platform," says Jim Zemlin, CEO of the Linux Foundation.

Of course, Windows still enjoys healthy unit-shipment leads on servers and client systems.

But Zemlin says as Linux use has increased it is fueling a positive feedback loop due to its community development roots.

"When a Wall Street trading application developer uses real-time Linux or when the Defense Department is creating real-time technology for robust embedded defense systems, that same technology gets contributed back to the Linux kernel and it might benefit mobile phone developers by offering the tools to create more stability."

While the feedback loop isn't new, Zemlin says it is getting rocket fuel from the growing legions of Linux developers.

In the past two years, he says, 3,200 developers have contributed to the Linux kernel. In one year alone, 1,762 unique kernel contributions were logged and there are 2,000 lines of code written every day.

The Linux kernel has a release every two and a half months and a new Linux distribution release every six months.

“We are seeing this incredibly unique cross-pollination of innovation,” Zemlin says.

Bill Weinberg, an analyst and consultant with LinuxPundit, and the chair of the LinuxWorld Mobile conference, says the discussion goes beyond just Linux as a platform. "We've had a lot of hand-wringing around fragmentation in the past," he says.

This year, Weinberg has added a track on applications, which have been a historical weak spot for the operating system. "How do you create applications for mobile and embedded Linux, how do you go to market with Linux systems, how are they received by the ecosystem, how do ISVs actually make money with apps, and how do operators roll out new services and deploy apps to support their business models," said Weinberg.

Motorola will talk about the LiMo (Linux Mobile) Foundation, which began 18 months ago, and Intel will detail its mobile Atom Processor and Moblin.org, which is focused on creating Internet-centric mobile applications. A panel will convene to discuss how the two can interact and interoperate.

Weinberg also is augmenting the discussion with a track to cover cross-over topics such as virtualization in embedded systems. He says virtualization provides the functional separation that allows embedded application developers a choice of platform depending on what they are trying to accomplish.

"There is no single platform that has a single code base that covers as many different kinds of applications and niches as Linux does," says Weinberg.

Some analysts say Linux has without a doubt become a more mainstream solution.

"Linux is expanding its presence in other workloads as it continues to hold down key success areas in Web and infrastructure roles," says Al Gillen, an analyst with IDC.

"Customers are increasingly using it for business-critical workloads."

Original here

Ext3, ReiserFS & XFS in Windows thanks to coLinux

If you ever needed to access your ext3, ReiserFS or XFS partitions from Windows, wanted to use one of your favorite file systems via FUSE, or had an idea to mount an image of your hard drive, then this article is for you. This is a how-to describing what to do if you want Windows to handle file systems in a similar way as Linux does.

For our task we will use coLinux, a modified Linux kernel that can be executed as an application or a service in the Windows environment. The project’s web page is http://www.colinux.org/.

The installation procedure was tested with the stable version of coLinux (as of 28.06.2008), on 32-bit Windows. Other operating systems might require some modifications.

In short, we install coLinux on a Windows machine, ensure access to disk partitions, and export all the mounted file systems using Samba.

Windows Vista users have to run the commands (e.g., cmd.exe, setup) via the “run as …” context menu, making sure that they are running in administrator mode.

coLinux's terminal

Get ready for the installation

  1. Download the program from the project’s website (v0.7.3), and install it in the C:\coLinux directory
  2. Edit the connection settings of the virtual Ethernet card installed by coLinux (TAP-Win32 Adapter V8 (coLinux)). In the TCP/IP settings, set: IP address: 192.168.37.10, Subnet Mask: 255.255.255.0
  3. Download the Ubuntu-7.10.ext3.2GB.7z image from the project’s webpage, and extract it to C:\coLinux
  4. The swap file comes together with the image, so the next step can be omitted
  5. (Optional) If really needed, this command will create a 128MB swap file: fsutil file createnew c:\coLinux\swap128.fs 134217728 (one also has to run mkswap in Linux, and make sure that there is a corresponding line in fstab)
  6. Rename the files: Ubuntu-7.10.ext3.2gb.fs -> ubuntu.fs, swap128.fs -> ubuntu-swap.fs
  7. Copy example.conf to ubuntu.conf
  8. Edit ubuntu.conf, inserting these in the proper places:
    cobd0="c:\coLinux\ubuntu.fs"
    cobd1="c:\coLinux\ubuntu-swap.fs"
    mem=32
    eth0=slirp
    eth1=tuntap
  9. Create ubuntu-start.cmd with the following content: colinux-daemon.exe -t nt @ubuntu.conf
  10. Run ubuntu-start.cmd
  11. Login as root with the default “root” password
  12. Change the root password (passwd)
  13. Run editor /etc/network/interfaces and add:
    auto eth1
    iface eth1 inet static
    address 192.168.37.20
    network 192.168.37.0
    netmask 255.255.255.0
    broadcast 192.168.37.255
  14. ifup eth1
  15. ping 192.168.37.10 (from Linux to Windows); it should work now
  16. editor /etc/apt/sources.list (replace gutsy with hardy in all the paths)
  17. aptitude update
  18. aptitude safe-upgrade
  19. aptitude install samba openssh-server mc fuse-utils
  20. apt-get clean
  21. editor /etc/fuse.conf and remove the # at the beginning of the user_allow_other line
  22. editor /etc/ssh/sshd_config change PermitRootLogin to “no”
  23. Add a new user: adduser user1 (from now on, you can ssh to this account)
  24. adduser user1 fuse (this allows the user to use FUSE)
  25. /etc/init.d/ssh reload
  26. Check that you are able to establish an ssh connection to 192.168.37.20 from Windows
  27. Halt Linux: halt

How to mount the file system?

  1. Search for the partition you want to mount (see http://colinux.wikia.com/wiki/Partitions). In my case, it is \Device\Harddisk2\Partition2.
  2. Edit ubuntu.conf again, and insert the following:
    # partition to be mounted
    cobd2="\Device\Harddisk2\Partition2"
  3. Run ubuntu-start.cmd
  4. Login as root
  5. mkdir /media/cobd2
  6. Add /dev/cobd2 /media/cobd2 ext3 defaults 0 0 to fstab (it is an ext3 partition in my case).
  7. mount /media/cobd2

How to share file systems via Samba?

  1. Should you want to share a whole file system with user1, you must give him read and write permissions (chown, chmod, …).
  2. After setting the permissions, add the following at the very end of /etc/samba/smb.conf:
    [my data]
    path = /media/cobd2
    valid users = user1
    read only = no
  3. Add the user to Samba’s password database: smbpasswd -a user1 (Samba does not use the system accounts by default)
  4. /etc/init.d/samba reload
  5. In Windows, type the address \\192.168.37.20 and log in as user1 using the password generated by smbpasswd. You can map this file system to a “letter” in your OS

the view of an exported file system
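
You can also map the share to a drive letter from the Windows command line; a sketch using the share name from the smb.conf example above:

rem Map the coLinux Samba share to drive Z:
net use Z: "\\192.168.37.20\my data" /user:user1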

FUSE-based file systems

To use the FUSE file systems (e.g., sshfs, encfs, …), you have to mount them using the allow_other option (which should be enabled in /etc/fuse.conf). How to mount:

encfs $WHAT $WHERE -- -o allow_other
sshfs $SERVER:$PATH $WHERE -o allow_other
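
For instance, mounting a remote home directory over ssh (hostname and paths here are hypothetical):

# Mount a remote directory via sshfs; allow_other makes it visible to Samba too
mkdir -p /mnt/remote
sshfs user1@myserver:/home/user1 /mnt/remote -o allow_other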

More details

  1. To make connecting to coLinux easier, add its IP address to WINDOWS\system32\drivers\etc\hosts: 192.168.37.20 colinux
  2. If you don’t want to start coLinux manually, it can be installed as a service: colinux-daemon.exe --install-service colinux @ubuntu.conf
  3. When running coLinux as a service, you can use the following commands to control it: net start colinux and net stop colinux
  4. You can also set this service to start automatically. By doing so, you will get access to the mounted file systems immediately after the OS starts (provided that the disk was mapped to be automatically mounted). One can access coLinux via a console (all of them are installed in C:\coLinux) or via ssh.
  5. If you want to have the same data under Windows and Linux, you have to sync the UID numbers (in /etc/passwd) of the user who handles the files under Linux and coLinux.

Service settings

Stop the service

Performance

This solution does not provide outstanding performance, but the benefits compensate for it. I’ve obtained transfers of up to 5MB/s on an Athlon XP 2000+ with a SATA disk and an ext3 file system. Any ideas on how to improve these results are welcome.

Original here

10 icon sets to customize your GNU/Linux desktop

BWS_Icons


Tango mine


gOS icons



Mac_OS_X_Leopard_for_Debian


Adrix


KDEmod


icomity


Yellow crystal clear


iOS

Any icon set you’d like to see here? Don’t hesitate to share!

Original here


An Introduction to AIR

AIR (Adobe Integrated Runtime) is a wrapper around a set of technologies that enables developers to build rich Internet applications that deploy on the desktop. Applications are created using a mixture of JavaScript, HTML, and Flash. The resulting application is delivered to end users in a single package and rendered using the WebKit HTML engine.

AIR applications don't look like what I think of when I think "web application". To me they look and feel like desktop applications. This surprised me a little because when I first heard about AIR I assumed that it would be similar to Prism from Mozilla. I thought that it would be yet another way for established websites to deliver their content directly to users' Desktops. In a way this is true, especially when you see AIR applications like the eBay Desktop, but developers have not restricted themselves to simple website-to-the-desktop applications. The AIR Marketplace, a central repository of AIR applications, has applications for Time Tracking, eLearning, Video conferencing, micro-blogging, news reading, media playing, and on and on and on.

Because AIR applications are built using existing standards (HTML, JavaScript, Flash), they are cross-platform by default (well, almost — more on that later). AIR's goal is to be a true write once, run anywhere environment.

Getting AIR

The Linux alpha version of AIR can be downloaded from http://labs.adobe.com/technologies/air/.

The download is a file with a '.bin' extension. To install it, you first have to make the file executable. This can be done by opening a terminal window and navigating to where you downloaded the file and then issuing the following command:

chmod +x adobeair_linux_a1_033108.bin

Once the file is executable, start the simple installation process from the command line with:

./adobeair_linux_a1_033108.bin

The installer should run and ask you to accept the AIR license agreement.

The Adobe AIR license agreement

After clicking on "I Agree" the installer will ask you where you would like it installed. By default it suggests installing itself to your /opt directory. Assuming you keep the default, after the installation has finished there will be a new "/opt/Adobe AIR" directory that contains the "Adobe AIR Application Installer" program and other miscellaneous files. That's right, you just installed an installer. On Ubuntu, after installation I also had a new entry in my Applications > Other menu.

AIR install completed

Using AIR

Now that AIR is installed, you can download and install AIR applications. These applications are delivered in the form of .air files. These files are really .zip files containing everything the AIR Application Installer needs to install the application on to your computer.
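
Because the package format is just ZIP, you can peek inside an .air file from a terminal before installing it (the filename below is hypothetical):

# List the contents of an AIR package - it is an ordinary ZIP archive
unzip -l SomeApplication.air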

AIR users on MacOS and Windows can install applications by clicking on handy "Install Now" buttons on the web pages of the applications they want to install. The alpha version of AIR for Linux can't make use of this feature, so you have to manually download the .air file and then open it with the AIR Application Installer. Firefox can be configured to automatically open .air files with the AIR Application Installer, so for many applications, this isn't a problem because there is usually a direct link to the .air file somewhere on the install page. Some application web sites don't provide the link though, leaving you to do a "View Source" on the web page and search for ".air" to find the file to download.

During the installation of an AIR application, the Installer will first display an "Are you sure?" message letting you know the name of the application, the filesystem and network permissions the application will have, and whether or not the identity of the publisher of the application can be verified.

AIR Application Install Warning

Click on the install button and the installer will ask where you want the application installed (/opt is the default), whether or not to start the application after installation is complete, and if you want it to create a Desktop launcher for the application.

AIR Application Install

After you click on the Continue button the installer takes the cross-platform .air package and translates the app so that it will run on your system as if it was a traditional application. This install process basically involves the AIR Application Installer unzipping the .air file to the location you specified and then creating a binary executable that loads and runs the application.

Example Applications

Adobe provides links to several example applications, with full source code, at http://labs.adobe.com/technologies/air/samples/. These applications are good examples of the type of things that AIR can do. My favorites of the bunch are MapCache, RoadFinder, PixelPerfect, and ScreenBoard.

MapCache is a Desktop version of Yahoo Maps. The nifty feature it has is the ability to drag the map you are currently looking at to the desktop where it is saved as a .png file. Just grab the "Drag Me" button and drop it on the Desktop.

AIR Example Application: MapCache

The RoadFinder application is not very useful, but it is fun. What it does is show you both Google Maps and Yahoo Maps side by side. When you drag or zoom on one, the other does too. By doing this you can readily see how these two map services differ. This can be especially useful if you are trying to find an address in a new development and either Google or Yahoo Maps has not been updated with the street you are looking for.

AIR Example Application: RoadFinder

PixelPerfect is a fun little application whose sole purpose is to act as a Desktop measuring stick. You can drag the edges to make it any size you wish, or click in the top left corner to get a drop down menu that lets you set it to a few standard sizes. You can drag it around your screen to measure whatever you want . . . ok, it's not very useful, but I could see it used by developers when prototyping user interfaces.

AIR Example Application: Pixel Perfect

The ScreenBoard app is my favorite of the example applications. What this application does is draw a transparent rectangle over your entire screen. You can then draw on the screen using a variety of virtual "markers". You can use the ones supplied, or create your own. Transparency support in your window manager is required for this application to work because without it, you'll just see a big black rectangle covering your entire desktop.

AIR Example Application: ScreenBoard

Issues with the Linux alpha

The MacOS and Windows versions of the AIR runtime are further along in their development than the Linux version, so there are several issues present that I hope will be addressed before it is formally released.

One very annoying issue I came across was with AIR's sound output: it caused Rhythmbox, Banshee, and other media players on my system to be unable to play sound, or even to crash, if an AIR application was running when they were launched. I was able to work around this to an extent by starting my music player first and then launching my AIR application(s). When I did that, my AIR applications would not play any sound, but at least I could listen to my music.

Another issue I ran into was that several applications, the eBay Desktop I mentioned earlier being one of the more prominent examples, refuse to install or run with the Linux version of the AIR runtime. This is what I meant when I said that AIR is not fully cross-platform. As the Linux version progresses towards its official release, compatibility should increase to the point where all AIR applications will run, but right now some of the more advanced ones are off-limits. It's annoying, but that's the nature of alpha-level, not-yet-feature-complete software.

Probably the ugliest issue I ran into was that if you are not running a compositing window manager like Compiz, then AIR applications will be surrounded by an ugly black rectangle at best, and will be completely unusable at worst. Here are some examples of the same application first with compositing and then without.

With Compositing: An AIR application with compositing turned on.

Without Compositing: An AIR application with compositing turned off.

Most — but not all — AIR applications are like the one above. They do not use the default window manager's windows. Instead they use their own custom skins. Some of these skins rely heavily on transparency to look good and be functional, and without a compositing window manager you're out of luck. I don't know that there is a way around this, so if you would like to use AIR, you should use a compositing window manager. On Ubuntu, I just enabled "Normal" under System > Preferences > Appearance > Visual Effects and transparency worked fine.

Here is where you enable compositing in Ubuntu.

Final Thoughts

There are a lot of application frameworks out there. You have the traditional ones like Qt and GTK, newer ones like Mono (aka .NET), and now there's a new breed of frameworks like AIR that aim to integrate our Desktops with the Internet in ways that would have been unheard of just a few years ago.

The choice of whether or not to develop using AIR is an exercise best left to developers. I imagine that web developers who have never programmed in C or any of the other traditional computer languages will find that AIR provides an easy road to building Desktop applications by using languages and technologies they already know. This is a very good thing, in my opinion.

For the rest of us, I think AIR applications are well worth checking out. They can be a mixed bag at times, especially because of the alpha nature of AIR on Linux, but the best of them are every bit as good as their traditional Desktop counterparts.

Original here

Setting up LAMP on FreeBSD

By Martin Münch

Setting up a LAMP server is a common task for systems administrators, and FreeBSD is one of the most reliable and stable operating systems available. You can swap out the L in LAMP with F for FreeBSD to build a fast and reliable Web server.

In this article I assume FreeBSD is already installed. If not, make sure you download the latest stable production version of FreeBSD and run the installer. I recommend choosing the MINIMUM option at the installer screen to quickly install only the most basic and necessary things.

To install applications on FreeBSD, use the ports files. Ports are plain text files that know where to download source code, so that the software will be compiled on your computer. This way you can change settings (including or excluding specific modules) as you want, and the software will fit perfectly to the specifications of your computer. First, you have to make sure that the latest ports files are installed. If you've never installed the ports, issue portsnap fetch extract in the shell; otherwise, issue portsnap fetch update. This will download the latest ports files. After a bunch of messages that show you what files have been downloaded, you're ready to go.

Apache

Next you need to compile and install Apache, the Web server itself, using commands like those below. After changing to the right location (the first command), the second command brings up a configuration screen where you can change settings. You might want to enable IPv6 support or activate the proxy module, but the standard settings are usually fine. After you have accepted the settings, Apache will automatically be compiled and installed. The last three lines make sure Apache and the required modules start automatically with the operating system:


cd /usr/ports/www/apache22/
make config install distclean
echo 'apache2_enable="YES"' >> /etc/rc.conf
echo 'apache2ssl_enable="YES"' >> /etc/rc.conf
echo 'accf_http_ready="YES"' >> /etc/rc.conf && kldload accf_http

Once Apache is installed properly, you must configure your server. First, enable SSL support and create the certificate and key files. The SSL key file is your private file for changing the password and restoring certificates. The SSL certificate file is the certificate itself, which will be used to assure visitors' Web browsers that your server is the server they want to talk to. By default, the SSL certificate file is /usr/local/etc/apache22/server.crt, and the SSL key file is /usr/local/etc/apache22/server.key. You can check or change this by searching for SSLCertificateFile or SSLCertificateKeyFile, respectively, in /usr/local/etc/apache22/extra/httpd-ssl.conf. Since version 2 of Apache, the main configuration file is divided into several extra files in /usr/local/etc/apache22/extra/. This makes it easier to find specific options and reduces the size of the main configuration file. If you don't find an option in the main configuration, you should check the extra files.

Now you need to change to the right location and generate the key file. With that key, you can generate a certificate-signing request, which tells a certificate authority to sign your key. You can either send a request to an authority such as VeriSign, or sign it yourself. If the certificate is signed by a professional authority, it will cost money, but assure visitors that this Web server definitely belongs to you and not somebody else. Self-signing the certificate will cause a warning to appear in visitors' browsers when they enter your site that the certificate is self-signed, but will cost nothing at all. The following code shows you how to self-sign the certificate:


cd /usr/local/etc/apache22/
openssl genrsa -des3 -out server.key 1024
openssl req -new -key server.key -out server.csr
openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
chmod 0400 server.key server.crt

The key and certificate files are generated and in the right place with the proper permissions. However, you still need to configure some things. You have to make sure the server administrator's email address is set correctly by searching for ServerAdmin in /usr/local/etc/apache22/httpd.conf. DocumentRoot specifies where the Web documents are located; set it to /srv/www/01 on your server. Letting users host their own private Web content can cause some harm, so disable it by commenting out Include etc/apache22/extra/httpd-userdir.conf. Finally, enable SSL support by activating Include etc/apache22/extra/httpd-ssl.conf. In /usr/local/etc/apache22/extra/httpd-default.conf, disable ServerSignature to prevent the server from showing more information than it has to. Make sure the server-status and the server-info sections in /usr/local/etc/apache22/extra/httpd-info.conf are commented out. The less information others have about the Web server, the better it is for the security staff.

In /usr/local/etc/apache22/extra/httpd-vhosts.conf, set the directory for every SSL connection to the server. Note that lawrencium is the name of the server in this example; you should change this to the name of your own server:


NameVirtualHost *:443

<VirtualHost *:443>
    ServerName lawrencium
    ServerAlias lawrencium.ipc.net
    DocumentRoot /srv/www/02/

    <Directory "/srv/www/02/">
        Order allow,deny
        Allow from all
        AllowOverride None
    </Directory>

    SSLEngine On
    SSLCertificateFile /usr/local/etc/apache22/ssl.crt/server.crt
    SSLCertificateKeyFile /usr/local/etc/apache22/ssl.key/server.key
</VirtualHost>

You now have one directory (/srv/www/01) for all connections on port 80, and one directory (/srv/www/02) for all connections on port 443.

PHP

At this point, the Web server is ready to serve static documents. However, most Web sites contain dynamic PHP content, such as forums, chats, and galleries.

PHP installation is quick and easy. Compile and install the PHP package itself and the PHP extensions and make sure that the Apache module is compiled when you install PHP v5:


cd /usr/ports/lang/php5
make config install distclean
cd /usr/ports/lang/php5-extensions
make config install distclean

To make Apache serve PHP sites, you have to tell it how to handle PHP files. Add the following entries to /usr/local/etc/apache22/httpd.conf directly after all the LoadModule lines:


AddType application/x-httpd-php .php
AddType application/x-httpd-php-source .phps

Add index.php as the directory index:

<IfModule dir_module>
    DirectoryIndex index.php index.html index.htm
</IfModule>

PHP includes a recommended configuration file that is secure for most purposes. Copy it into place, then disable allow_url_fopen (which allows scripts to operate on remote FTP/HTTP resources just like local files) by setting allow_url_fopen = Off in the new php.ini, because it can become harmful when used incorrectly:

cp /usr/local/etc/php.ini-recommended /usr/local/etc/php.ini
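
To confirm that Apache now hands .php files to PHP, one quick check (a sketch: the path follows this article's layout, and the rc.d script name assumes the apache22 port):

# Drop a test page into the port-80 document root and restart Apache
echo '<?php phpinfo(); ?>' > /srv/www/01/index.php
/usr/local/etc/rc.d/apache22 restart
# Browse to http://localhost/ and look for the PHP info table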

MySQL

PHP is now installed and configured. However, most PHP applications use databases as well. MySQL, a database system, is stable, open source, and doesn't cost a penny.

Compile and install MySQL with SSL support and add an entry to /etc/rc.conf to start the MySQL server automatically with the operating system:


cd /usr/ports/databases/mysql51-server
make install WITH_OPENSSL=yes
make distclean
echo 'mysql_enable="YES"' >> /etc/rc.conf

Set a root password (p3Df1IsT in the commands below). Note that because you're specifying the password on the shell, it is stored in the shell history (e.g., ~/.bash_history or ~/.histfile, depending on which shell you used), so for security reasons clearing the shell history is a good idea, especially if the root account is shared:


/usr/local/etc/rc.d/mysql-server start
mysqladmin -u root password p3Df1IsT
mysql -u root -p
rm /root/.history

Now remove all anonymous accounts by typing the following commands at the MySQL command prompt after you've logged in. The fourth command gives you a list of users without passwords; you can either set each password or delete the users. The last command changes the name of the default root account to mmu002. Changing the root account to an account of your choice is a good idea in case someone wants to try to get your root password. Typically a cracker tries the user name root and some default or dictionary passwords. In this case the default root account does not exist, which makes it a lot harder to break in. Be sure to choose a name not everybody could guess; things like your name or your dog's name are bad examples:


use mysql
DELETE FROM user WHERE user="";
FLUSH PRIVILEGES;
SELECT * FROM user WHERE Password="";
UPDATE user SET user='mmu002' WHERE user='root';

FreeBSD doesn't create a MySQL configuration file by default, so you have to do this yourself by creating /etc/my.cnf. The configuration below changes the default port to 29912 and, with bind-address, allows connections only from 127.0.0.1 (i.e., localhost). The last option, safe-show-database, shows users only the databases they actually have read and write access to; without it, MySQL would show all databases to all users:


[client]
port=29912
[mysqld]
port=29912
bind-address=127.0.0.1
skip-name-resolve
safe-show-database
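
A quick way to confirm that the new settings took effect (the user name follows the rename above):

# Connect over TCP on the non-standard port as the renamed admin account
mysql -u mmu002 -p -h 127.0.0.1 -P 29912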

This article could end here, but it would be unforgivable to not mention phpMyAdmin in an article about LAMP.

phpMyAdmin

phpMyAdmin makes database administration a lot easier. It is used so frequently that it's almost a standard. You need to install it and set the links. In the commands below, we set up http://localhost/phpMyAdmin to access phpMyAdmin (that is, we link the installed phpMyAdmin directory in wwwroot), then use a configuration skeleton as the default configuration, and make sure the secret passphrase (which will be used to encrypt passwords), the root user, and the root password are set corresponding to your MySQL options:


cd /usr/ports/databases/phpmyadmin
make config install distclean
ln -s /usr/local/www/phpMyAdmin /usr/local/www/apache22/data
cd /usr/local/www/phpMyAdmin && cp config.sample.inc.php config.inc.php
vim config.inc.php
$cfg['blowfish_secret'] = 'kJ76Fgeak98h6thjd6';
$cfg['Servers'][$i]['controluser'] = 'root';
$cfg['Servers'][$i]['controlpass'] = 'p3Df1IsT';

Your new multifunctional FreeBSD server is now installed, configured, secured, and ready to go. When managing a server, keep a few things in mind. First, keep the server up-to-date. FreeBSD offers great tools to keep the FreeBSD kernel, the FreeBSD user space, and all installed applications on it up-to-date and secure. An obsolete server is a security risk. Second, make sure you read the configuration files and the man pages when changing settings, reconfiguring applications, or if you just want to know what a specific command or file is there for.

Your server can now host static Web pages and dynamic Web pages, such as forums, chats, and picture galleries, securely, and you have phpMyAdmin to help you configure the databases that often play a central role in Web hosting.

Martin Münch studies computer science at the University of Tromsø, Norway.

Original here

IBM Prepares to Fight off Microsoft

John Fontana, Network World

IBM/Lotus Thursday hit back at Microsoft's boast that it plans to steal 5 million Notes customers this year by detailing a new 300,000-seat licensing deal with an Asian company and strong interest in Notes from emerging markets.

Last week, Microsoft's COO Kevin Turner told financial analysts that his goal is to have the company's messaging and collaboration software displace 5 million Notes seats this year. Turner also said Microsoft has replaced 8 million seats of Notes in the past two years.

It was another shot in a messaging and collaboration war that has been going on between the two for nearly 20 years. In the late 1990s, the two jousted using e-mail seat-count numbers that were often inflated if not outright dubious.

"It is very difficult to tell what Microsoft is talking about when they talk about numbers of seats or costs because they shove so much into their environment, but I do know we have been engaging against them and winning," says Bob Picciano, general manager of Lotus Software.

IBM/Lotus seems to be doing a better job of integrating current messaging and collaboration tools with next-generation tools like social networking.

In June at the Enterprise 2.0 conference, the two squared off on stage around social software (Lotus Connections vs. SharePoint) with IBM/Lotus showing its Connections tools as "the clear winner across the board," according to Mike Gotta, an analyst with the Burton Group who moderated the session. Gotta in his blog later chastised Microsoft, saying it "did a poor job of showing and explaining why business and/or technical decision-makers should consider SharePoint as a credible solution to meet the social computing needs of an organization."

A month later Microsoft's Turner lit into IBM/Lotus, which is now on the offensive and detailing what it calls strong fiscal second-quarter sales of Notes/Domino 8. The platform, which shipped a year ago, features a modular client architecture that can be customized as the front end for component-based applications.

The company says an Asian firm, which executives said would be named at a later date, will license 300,000 seats of Notes, as well as Lotus Symphony, IBM's open source suite of productivity applications.

IBM/Lotus says the deal is its largest ever in Asia.

IBM also listed a number of foreign companies that chose Notes over Microsoft, including Max New York Life, Reliance Industries, Vedanta, and Aviva in India; GD Development Bank, Johnson Electric, HKG Environ Protect, CED, DL Cosco Shipyard in China; Affin Bank and Trakando in Singapore; and Russian Railways in Russia.

It did not provide seat numbers.

IBM/Lotus also reported that in the fiscal second quarter it recorded its largest client win in North America: 150,000 seats in a "big six" accounting firm.

Like the Asian deal, IBM would not name the company, but IBM executives said Lotus Notes, Sametime, Connections, IBM Lotus Quickr and WebSphere Portal were picked over the Microsoft collaboration portfolio that included Exchange and SharePoint.

The battle is heating up as Microsoft's SharePoint is garnering the lion's share of coverage despite a number of issues corporate users face when considering the platform.

IBM/Lotus has been feeling the heat from SharePoint.

In May, the company released IBM Lotus Quickr Content Integrator, which provides wizards and templates for moving content en masse to Quickr from SharePoint sites. Lotus is betting the tool will help keep users on its content management platform and away from Microsoft, which could use SharePoint as the hook to get users to switch to its entire portfolio of messaging and collaboration tools.

As part of its most recent announcement, IBM/Lotus noted other companies that have recently picked Lotus Notes and other Lotus software over competitors, including Colgate-Palmolive, Ineos of Belgium, the U.S. Federal Aviation Administration, NutraFlo, Dutch Railways, Rohm and Haas, Imerys and the Salvation Army.

New Lotus Notes 8 customers were listed as CFE Compagnie d'Enterprises of France, Virginia Commonwealth University, Winsol International, the U.S. General Services Administration, the U.S. Internal Revenue Service, Standard Insurance, New York Life, Kentucky Baptist Convention, Verizon, Publishers Printing, Hyatt Hotels, Union Pacific and Nationwide Insurance.

Original here

New technique to compress light could open doors for optical communications

Optics researchers succeeded previously in passing light through gaps 200 nanometers wide, about 400 times smaller than the width of a human hair. A group of UC Berkeley researchers led by mechanical engineering professor Xiang Zhang devised a way to confine light in incredibly small spaces on the order of 10 nanometers, only five times the width of a single piece of DNA and more than 100 times thinner than current optical fibers.

"This technique could give us remarkable control over light," said Rupert Oulton, research associate in Zhang's group and lead author of the study, "and that would spell out amazing things for the future in terms of what we could do with that light."

Just as computer engineers cram more and more transistors into computer chips in the pursuit of faster and smaller machines, researchers in the field of optics have been looking for ways to compress light into smaller wires for better optical communications, said Zhang, senior author of the study, which will be published in the August issue of Nature Photonics and is currently available online.

"There has been a lot of interest in scaling down optical devices," Zhang said. "It's the holy grail for the future of communications."

Not only would compressed light make possible smaller optical fibers, but it could lead to huge advances in the field of optical computing. Many researchers want to link electronics and optics, but light and matter make strange bedfellows, Oulton said, because their characteristic sizes are on vastly different scales. However, confining light can actually alter the fundamental interaction between light and matter. Ideally, optics researchers would like to cram light down to the size of electron wavelengths to force light and matter to cooperate.

The researchers run into a brick wall, however, when it comes to compressing light into spaces smaller than its wavelength. Light doesn't want to stay confined in a space that small, Oulton said.
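As a rough rule of thumb (our gloss, not a figure from the article), the diffraction limit caps how tightly conventional optics can confine light:

    d ≈ λ / (2n)

where λ is the light's vacuum wavelength and n is the refractive index of the surrounding material. For telecom-band light near 1,550 nm in a semiconductor with n ≈ 3.5, that works out to roughly 220 nm, consistent with the 200-nanometer gaps mentioned above; squeezing light into 10-nanometer spaces means beating this limit by more than an order of magnitude.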

Researchers have squeezed light beyond this limit using surface plasmonics, in which light binds to electrons so that it can propagate along the surface of a metal. But the waves can travel only short distances along the metal before petering out.

Oulton had been working on combining plasmonics and semiconductors, where these losses are even more pronounced, when he came up with an idea for simultaneously confining the light strongly and mitigating the losses. His theoretical "hybrid" optical fiber consists of a very thin semiconductor wire placed close to a smooth sheet of silver.

"It's really a very simple geometry, and I was surprised that no one had come up with it before," Oulton said.

Oulton ran computer simulations to test the idea. He found that not only could the light be compressed into spaces just tens of nanometers wide, but in the simulation it traveled nearly 100 times farther than it would by conventional surface plasmonics alone. As the wire approaches the metal sheet, the researchers found, the light waves are trapped in the gap between them rather than moving down the center of the wire.

The research team's technique works because the hybrid system acts like a capacitor, Oulton said, storing energy between the wire and the metal sheet. As the light travels along the gap, it stimulates the build-up of charges on both the wire and the metal, and these charges allow the energy to be sustained for longer distances. This finding flies in the face of the previous dogma that light compression comes with the drawback of short propagation distances, Zhang said.
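The analogy can be made loosely quantitative (our gloss, not the paper's math). A parallel-plate capacitor with voltage V across a gap of width d holds an electric field E = V / d and an energy density

    u = ε E² / 2 = ε V² / (2 d²)

so for a given voltage, narrowing the gap concentrates the stored energy roughly as 1/d². In the hybrid waveguide, the charges induced on the wire and on the silver sheet play the role of the capacitor plates, pinning the light's energy in the narrow gap between them.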

"Previously, if you wanted to transmit light at a smaller scale, you would lose a lot of energy along the path. To retain more energy, you'd have to make the scale bigger. These two things always went against each other," Zhang said. "Now, this work shows there is the possibility to gain both of them."

Even though the current study is theoretical, the construction of such a device should be straightforward, Oulton said. The problem lies in trying to directly detect the light in such a small space - no current tools are sensitive enough to see such a small point of light. But Zhang's group is looking for other ways to experimentally detect the tiny bits of light in these devices.

Oulton believes the hybrid technique of confining light could have huge ramifications. It brings light closer to the scale of electrons' wavelengths, meaning that new links between optical and electronic communications might be possible.

"We are pulling optics down to the length scales of electrons," Oulton said. "And that means we can potentially do some things we have never done before."

This idea could be an important step on the road to an optical computer, a machine where all electronics are replaced with optical parts, Oulton said. The construction of a compact optical transistor is currently a major stumbling block in the progress toward fully optical computing, and this technique for compacting light and linking plasmonics with semiconductors might help clear this hurdle, the researchers said.

Original here

Warning Sign: Metered Broadband Already a Hassle

We’ve argued before that metered access is a boneheaded idea that is bad for innovation, bad for Microsoft and Google, and ultimately bad for you. Until today, the idea seemed like an eventuality, not an immediate reality. But then NBC and TonicTV launched a new service that lets you download video from the Olympics and watch it offline. Right next to the installation instructions was this “important” note:

That’s the first warning I’ve seen about a particular service not being recommended for folks with metered broadband access. But the real bummer? That is just a taste of things to come — especially if you’re a fan of video services like Hulu.

We’re not even talking P2P throttling, just straight video consumption. In fact, P2P isn’t even a huge deal for networks anymore (but not because of that slap on the wrist the FCC gave Comcast). DSLReports writes that as of June “AT&T traffic was about 1/3 Web (non video/audio streams), 1/3 Web video/audio streams, and 1/5 P2P.” Those audio and video streams — that’s Hulu and YouTube. And as they provide more content at higher quality, those streams are only going to increase.

If metered access becomes standard, there will come a day when you spend less time watching videos, and more time counting the number of videos you watched to avoid going over your cap.
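A back-of-the-envelope sketch in Python makes that bookkeeping concrete (every figure here is hypothetical, not from the article):

    # How much viewing fits under a metered cap? All numbers are
    # assumptions chosen purely for illustration.
    cap_gb = 250.0       # hypothetical monthly transfer cap, in GB
    gb_per_hour = 1.0    # rough size of an hour of streamed video

    hours = cap_gb / gb_per_hour
    shows = hours * 60 / 45  # assuming a typical 45-minute show

    print("About %.0f hours (~%.0f shows) a month before the cap" % (hours, shows))

And that budget has to cover everything else on the connection too, not just video.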

You have been warned.

Original here

MPAA: Don't limit our ability to close analog outputs

By Matthew Lasar

The Motion Picture Association of America took its crusade for selectable output control (SOC) to the next level on Thursday, responding to critics in the FCC's proceeding on the matter. The MPAA's July 31 filing takes particular exception to suggestions that the agency lift its prohibition on SOC on a two-year trial basis, and makes it clear that the group won't take kindly to other limitations, either. If consumers want to see movies on TV earlier than they appear on DVD, the MPAA says, they had better be willing to allow movie studios to remotely shut down some cable box outputs.

No trial period

In early June, the FCC granted the MPAA a proceeding on its waiver request. SOC lets video distributors shut down the analog outputs on cable boxes to block the so-called "analog hole" that the MPAA fears movie pirates can easily exploit. This security, the group argues, will in turn encourage Hollywood studios to partner with cable companies and release early-run studio films to TV, with the guarantee that the movies will pass only over protected digital links such as those that use HDCP.

"The Petitioners' theatrical movies are too valuable in this early distribution window to risk their exposure to unauthorized copying," MPAA wrote to the FCC. "Distribution over insecure outputs would facilitate the illegal copying and redistribution of this high value content, causing untold damage to the DVD and other 'downstream' markets."

Now MPAA also warns that a "calendar-based restriction" on SOC would be impractical, "and fail to provide the regulatory certainty" that movie studios will need to negotiate with cable companies for the fast transfer of early-run movies to TV.

Vague hysteria

Ars construed from this language the possibility that MPAA wants SOC in order to limit the future ability of consumers to copy or record early-run movies when they appear on TV. I even went a few rounds with several representatives of the trade association in a recent interview, which, not surprisingly, drew different reactions across the blogosphere.

"What the MPAA is clearly trying to do here is start releasing movies on TV before they're available on DVD," declared Techdirt's Mike Masnick in a commentary on the exchange, "but wants to do so in a way that users won't be able to record on their DVRs (though, they hardly come out and say that)."

On the other hand, Content Agenda's Paul Sweeting takes me to task for making the MPAA come off as "vaguely hysterical (or worse)." Sweeting points out that at present it's pretty difficult to make permanent copies of VOD/PPV fare. He predicts that the early HD VOD offerings that the studios would like to release will similarly come with 'copy-never' or 'display only' flags.

"The use of SOC by the studios would not deny consumers a right they presumptively have, or a capability they currently enjoy," Sweeting concludes. "The issue for the studios is whether unprotected outputs could be used to record the early-release content in ways that are not currently permitted and then use that recording as the source for additional unauthorized copies."

Uncheck our authority

MPAA's latest filing does not focus on this debate, but on the conditions that various commenters have proposed for the waiver. The trade group wrangles with two parties that express concern that cable companies not be allowed to use SOC in an unsupervised fashion: the Digital Transmission Licensing Administrator (DTLA) and TiVo.

DTLA helps coordinate digital copy protection standards for the so-called "5C" manufacturing group (Toshiba, Intel, Matsushita, Sony, and Hitachi). It is skeptical of the plan and warns that SOC "cannot be left to the unfettered discretion of content owners and MVPDs [cable companies]. Such unchecked authority places far too much power in the hands of content owners, to the potential detriment of all other equally-important stakeholders."

DVR maker TiVo extends this argument to propose specific limits on the waiver. The company writes that whatever new service comes out of MPAA's proposal, it should not be able to disable any protected digital outputs approved by CableLabs, the cable industry's R&D group. "Consumer electronics manufacturers such as TiVo have made significant investments and brought innovative devices to market in reliance on the standards created by CableLabs," TiVo suggests.

MPAA says that there is "no demonstrated public interest need" for this. "For the new business model the Waiver would make possible, Petitioners and MVPDs should have the flexibility to use the technologies that are best suited to serve the needs of their mutual customers, while balancing the need to protect their content," the trade association writes.

TiVo also asks that if the MPAA receives its SOC waiver, it be limited to a 120-day period "between theatrical release and home media release." No again, MPAA insists, arguing that different movies have different release patterns, based on their popularity. "There is no compelling need to establish an arbitrary, fixed window for the proposed new Services," MPAA writes. "In fact, there are compelling marketplace statistics that demonstrate such a regulatory limitation is unnecessary."

Down the analog hole

The MPAA's filing also responds to the comments of Public Knowledge and seven other organizations. PK's filing expressed skepticism that the "analog hole" problem really requires this waiver.

"Evidence which the MPAA has relied on in the past to demonstrate the dangers of the 'analog hole' is unreliable and inapposite," the groups charge. "In the complete absence of evidence, there is no reason to believe that additional, costly, restrictive technologies are needed."

MPAA answers, in so many words, that the fears of its own member studios make the need for the waiver self-evident. "The fact that almost no movies are made available to MVPDs pre-DVD release is clear and convincing evidence that the analog hole is an impediment to the early window release of high-value content," the MPAA concludes. The association has pressed its Petition for Expedited Special Relief on behalf of Paramount Pictures, Sony Pictures, Twentieth Century Fox, Universal City Studios, Walt Disney Studios, and Warner Brothers.

Original here