Tuesday, October 14, 2008
Oh, don't act so surprised. A refresh of Apple's long-in-the-tooth MacBook Pro line was pretty much the only sure thing slated for today's event, and Apple certainly delivered. As for looks, you probably know the score by now: chiclet keyboard, Air-inspired aluminum stylings, and a glossy screen that's flush with a new iMac-like black bezel (there's no non-gloss option for the purists out there). What's new is confirmation of a multi-touch glass trackpad, which suspiciously rids the computer of a single mouse button and adds some new gestures like app switching. Apple's also put in some effort on slimming down the computer, bringing it down to a mere 0.95 inches thick (though at 5.5 pounds it's a hair heavier than the original), but much of the real excitement happens under the hood. There's a new internal structure, that rumored "brick" of aluminum that makes the new Pro thin and strong while leaving room for the real goodies: the specs. Apple's using NVIDIA's new 9400M GPU + chipset 1-2 punch for integrated graphics, supplemented by a switchable 9600M GT discrete graphics chip for heavy lifting, and pumping out those graphics over a Mini DisplayPort connector, if you'd like to supplement the LED-backlit screen. As expected there's an SSD option, with the drive accessible underneath the battery. The 15.4-inch base model retails for $1999, with a 2.4GHz Core 2 Duo processor, 2GB of DDR3 RAM and both GPUs. Step up to $2499 and you get a faster CPU, 4GB of RAM and a 320GB HDD. The 17-inch MacBook Pro comes in a similar configuration with a 2.6GHz processor, starting at $2799, but sans the redesign and GPU love. Hit the jump for a breakdown of the configurations.
Shares of Apple are on the rise after Bernstein Research upgraded the Mac maker and said a new MacBook priced at $900 would broaden the company's potential notebook customer base by 50 percent in terms of both units and revenue.
"We are upgrading Apple to Outperform - while reducing our target price from $175 to $135," analyst Toni Sacconaghi wrote in a research note to clients. "We believe that the stock is overly discounted, that Apple's short-term financials are likely to remain relatively healthy despite economic weakness, and that the company's longer term growth story remains intact."
Sacconaghi turned a particular focus to Mac growth, which he said is the "biggest wildcard among Apple investors today." He said that even if the global PC market remains flat in 2009 and Apple's share gains slow by 25 percent, the company would still see approximately 13 percent Mac growth.
"We feel confident that Apple will be a share gainer, as the company continues to expand distribution and purchase intention remains high," the analyst wrote. "Perhaps most importantly, we expect Apple to lower price points to address a much broader market at some point over the next year."
To this end, Sacconaghi pointed to a recent internal analysis which revealed that a MacBook priced at $900 would expand Apple's addressable notebook market by nearly 50 percent on a revenue basis, and 67 percent in terms of units. Should rumors of an $800 MacBook prove true, it would broaden the company's addressable market by 69 percent in terms of revenue, the study found.
While such moves would undoubtedly pressure gross margins, the analyst notes that the company already factored this into its forecasts when it guided gross margins down 150 basis points for fiscal 2009 even given the expected positive impact from iPhone sales.
"Apple's cost structure has high variable costs, creating less earnings downside risk than many investors may realize," he added. "Given its extensive use of contract manufacturing, Apple's COGS (Cost of Goods Sold) are nearly entirely variable, and operating expenses relative to gross margins are low; the upshot is that Apple's earnings per share suffers less to a given revenue reduction than many of its peers."
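Sacconaghi's leverage argument can be made concrete with a toy model. All of the figures below are mine and purely illustrative, not from the Bernstein note; the point is just that when most costs scale with revenue, a sales shortfall drags costs down with it, so earnings fall less steeply than at a peer with a heavier fixed cost base.

```python
# Toy illustration of operating leverage: mostly-variable costs
# (the "apple-like" case) vs. a heavier fixed cost base ("peer").
# Both start from identical baseline earnings; all numbers invented.

def earnings(revenue, variable_cost_ratio, fixed_costs):
    """Operating earnings = revenue - variable costs - fixed costs."""
    return revenue - revenue * variable_cost_ratio - fixed_costs

baseline = 10.0  # say, $10B of quarterly revenue
companies = {
    "apple-like": dict(variable_cost_ratio=0.70, fixed_costs=1.0),
    "peer":       dict(variable_cost_ratio=0.40, fixed_costs=4.0),
}

for name, co in companies.items():
    base = earnings(baseline, **co)
    hit = earnings(baseline * 0.9, **co)  # a 10% revenue shortfall
    print(f"{name}: earnings fall {100 * (base - hit) / base:.0f}% "
          f"on a 10% revenue drop")
```

With these made-up numbers, the mostly-variable company gives up 15 percent of its earnings on a 10 percent revenue miss, while the fixed-cost peer gives up 30 percent.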
In the short term, Sacconaghi said predicting Apple's share price and direction may prove difficult given a number of factors, which could lead to fluctuations between $75 and $135. In particular, he said the company's upcoming revenue guidance for the December quarter could apply new pressure on shares. The Street is looking for sales just shy of $11 billion for the three-month period, but given the company's traditional practice of providing conservative estimates, management could wind up guiding $1 billion below expectations.
Also complicating matters is the difficult compare that exists between the December quarter of 2007 and the December quarter of 2008, namely expectations of a more than 20 percent fall-off in iPod revenues, a tougher consumer spending environment, and the absence of software revenue generated by last year's Leopard launch.
Looking a bit further down the line, the Bernstein analyst said he's confident Apple's secular growth story remains intact. He expects Macs to continue to grow at least 9-10 percent annually, and said Apple TV holds the potential to "act as the centerpiece of the digital home, and could ultimately morph into a capable set-top box replacement."
In the meantime, he believes the company holds a "unique opportunity" to convert its iPod install base -- estimated at 120 to 130 million -- to iPhones.
Shares of Apple were trading up $8.40 (or 8.69 percent) to $105.20 amid a broader market upswing.
An update to Best Buy's inventory system, noted over on our Backpage blogs (RSS), includes six new models with prices in line with today's offerings. There is, however, a question of whether Best Buy is making assumptions, as is sometimes the case, or acting on advance knowledge from Apple.
I've long been an admirer of OpenOffice.org, the free, open-source office suite that's a serious alternative to pricey products such as Microsoft Office. It strikes me as a no-brainer to at least try it when you're in the market for an updated productivity suite, because it costs you nothing but your time.
But I'm amazed when I run across people who are hesitant to give it a try, even when they're just as hesitant to shell out big bucks for Microsoft's product.
To be clear: If you are considering buying a commercial office suite, don't do it until you have given OpenOffice.org a shot. That's particularly true of the new version, which was released today. OpenOffice.org 3.0 is a significant upgrade and, again, is completely free.
If, after using it for a while, you don't think it meets your needs, you can always uninstall it and buy a commercial product. But I suspect most users will find it's more than adequate for their needs, and the price can't be beat. Taking the time to give it a test drive could save you a bundle.
The final version of OpenOffice.org 3.0 is available at the main OpenOffice.org site. There are versions for Windows, Macintosh, Linux and Unix users. You can read the release notes for details about what's new.
This is a particularly interesting release for Macintosh users. In the past, running OpenOffice.org on the Mac required X11, a Unix windowing system. This is the first version of the suite that runs natively on the Mac. (There has long been a separate native-Mac project called NeoOffice, based on the OpenOffice.org source code, but it tends to lag behind in features.)
If you are a cross-platform user who works in more than one operating system, you'll appreciate that OpenOffice.org 3.0 has nearly identical interfaces in the Windows, Mac, Unix and Linux flavors. For example, here's how the Windows version of Writer, the word processor, looks in Vista:
And here's what it looks like in Leopard (OS X 10.5.5):
OpenOffice.org launches from a single "Welcome" screen on the Mac. The single-window model is available in Windows, too, but there are individual shortcuts for the word processor, spreadsheet, presentation manager, database and other tools as well. From this one window, you can start any kind of document, or launch an existing one.
Version 3.0 can open dozens of document types, including the newer OpenXML formats used by Office 2007 in Windows and Office 2008 on the Mac, such as .docx from Word. However, while OpenOffice.org 3.0 can read these formats, it can't write to them. Instead, it can save to the previous Office formats, such as Word's older .doc. This makes it a great choice for opening Office 2007/2008 documents that may be sent to you, even if you don't have Microsoft's newer suite.
Past versions of OpenOffice.org have had some compatibility issues with complex Office documents, particularly in the spreadsheet and presentation manager. I've only been playing with OpenOffice.org 3.0 for a couple of days at this writing, but so far I haven't found any big issues with how it renders Office documents.
However, Mac users should note that OpenOffice.org 3.0 won't open or write documents generated by Apple's iWork Suite. It's one of the few common formats not supported here.
For the most part, version 3.0 is snappy and robust. It seems to be a little faster on the Windows platform than on the Mac, which may have to do with the fact that this is the first native-Mac release. Still, on the Mac, it launches faster than most of the Office 2008 applications.
The polish and new capabilities of OpenOffice.org 3.0 make this a winner. Even if you're a die-hard user of a commercial suite, you owe it to yourself to give this new release a try. You literally have nothing to lose.
Update: OpenOffice.org's Web site is overwhelmed by demand for the software. If your browser times out when you try to access the site, keep trying.
If you think there are a lot of phishing scams cramming your e-mail in-box now, just wait--fraudsters have more tricks up their sleeve.
That's the message from McAfee Security Journal, due out Monday. Most of the articles deal with ways in which scammers use social engineering -- not hacking -- to dupe people into downloading malicious software to their computers or giving out their personal information, passwords, and bank account details to malicious Web sites.
One of the more interesting articles is titled "Vulnerabilities in the Equities Markets."
There have been headlines about people scamming the equities market by circulating false news in the hopes that stocks will move up or down (the false report that Apple's Steve Jobs had a heart attack being just the latest). What about investors losing or winning based on security news events?
It's already happening, writes Anthony Bettini, a senior manager at McAfee Avert Labs.
He notes that Microsoft's stock price tends to go down on "Patch Tuesday," the day it issues its monthly batch of security fixes, and when it issues an advance notification of the security bulletins for the month. Then on "Exploit Wednesday," which is the day after "Patch Tuesday," there is, on average, an uptick in the stock price.
"This is probably because institutional investors or market makers feel Microsoft was oversold the day before because of the bad news and that, in reality, Microsoft's value as an investment was only negligibly affected," he writes. "Note that this trend has been consistent during the past three years and continues today."
There's nothing really scary about that. But the notion that stock price fluctuations are occurring after vulnerability and patch announcements could give rise to more serious threats. "What would happen if a person built up a short position in a major software company and posted a handful of vulnerabilities with exploits to the Full Disclosure mailing list?" Bettini writes, before speculating on the legal consequences of such an action.
"It is possible people are already using zero-day threats for financial gain, not simply for embedding them within password-stealing Trojans but for taking short or options positions in equities and derivatives," he writes. "It's clear that spammers have figured out ways to profit from securities markets: we have received lots of penny-stock spam."
Another article in the McAfee Security Journal deals with the prevalence of spam and phishing attempts that piggyback on news events to grab the attention of people. For instance, malware writers exploited the broad interest in the Olympic Games to distribute e-mails that dropped malicious software on the recipient's computer that creates a back door for remote attacks, according to an article titled "A Prime Target for Social Engineering Malware."
There also has been a jump in the number of malicious programs posing as updates or software from security vendors, writes Elodie Grandjean, a virus researcher for McAfee Avert Labs in France. The programs lure people into downloading malicious software that, instead of protecting the computer, infects it with malware and interferes with legitimate security software. Such "scareware" has prompted Microsoft and the attorney general of Washington to file lawsuits.
Ben Edelman, assistant professor at the Harvard Business School, writes about the problem of incorrectly typing a Web address. "Typosquatting" is the practice of registering domains that are very close to popular Web site domains in order to get traffic from people who make a spelling error or typo in the URL address bar. The Web sites that appear when you make such a wrong turn on the Internet could have malware on them, but more likely are just making money off ads.
The most popular domain for typosquatting, spawning 742 offshoots, is "freecreditreport.com," followed by "cartoonnetwork.com," "youtube.com" and "craigslist."
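The mechanics behind those hundreds of offshoots are simple to sketch. Here's a minimal, illustrative take on how squatters enumerate candidate domains: single-edit typos of a target name (a dropped letter, a doubled letter, two neighbors swapped). This is my own toy example, not code from Edelman's study; real registrants also mine adjacent-key and phonetic mistakes.

```python
# Generate simple single-edit typo candidates for a domain name.
# Purely illustrative of the typosquatting practice described above.

def typo_variants(name):
    variants = set()
    for i in range(len(name)):
        variants.add(name[:i] + name[i + 1:])        # omit a letter
        variants.add(name[:i] + name[i] + name[i:])  # double a letter
        if i < len(name) - 1:                        # swap two neighbors
            variants.add(name[:i] + name[i + 1] + name[i] + name[i + 2:])
    variants.discard(name)  # the correct spelling isn't a typo
    return variants

# Even this crude generator yields dozens of registrable near-misses:
print(len(typo_variants("youtube")), "candidates")
```

Multiply a list like this across every popular site and every top-level domain, and the scale of the practice becomes obvious.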
However, lawsuits against typosquatters are making the practice less desirable, Edelman writes. Microsoft has received more than $2 million in typosquatting settlements, he says.
The report is on McAfee's Web site.
Microsoft is expected to be handing out pre-betas of Windows 7 to devs at WinHEC and PDC soon, and it looks like it's settled on an official name for its next-gen OS -- ahem, Windows 7. Yep, the code name is the real name, which is a first for Windows. According to Mike Nash on the Vista blog, the company went with Windows 7 because it "just makes sense" as the seventh release of Windows, and MS doesn't want to come up with a new "aspirational" name like Vista -- it "doesn't do justice" to the goal of staying "firmly rooted" in the ideas of Vista. Which probably explains why it looks so much the same. Sure, call it whatever you like, just get it out the door on time, okay?
At the beginning of September, Google launched a trial version of its Chrome web browser, opening up another front in its war against Microsoft. Now that the dust has settled on the launch, it's time for an update.
Google claims that Chrome loads pages faster and more securely than rival browsers, which should encourage internet users to consider making the switch.
But take a look at the graph above (click to enlarge) - sourced from web analytics company GetClicky. It shows that after its launch to a frenzy of news coverage, Chrome peaked with a 3.1% share of the browser market. Since then it's been in steady decline, down to just over 1.5%. And it looks like it will stay that way.
Over the same period, Microsoft's Internet Explorer has seen next to no dent in its market share, which remains virtually unchanged at around 57.2%.
It's still early days of course, but it seems Google has a job on its hands if Chrome is ever to rival Firefox, let alone Internet Explorer. It's possible that most consumers are simply unaware that Google's browser exists. There was a clear frenzy of Chrome-related interest early in September, but that has dwindled away now as the figure from Google Trends to the left shows.
For a time the browser was even featured on the famously minimalist Google search page which must have boosted downloads, but it has since disappeared. Quite why is anybody's guess. But if Google is serious about getting the message across, we can expect to see many more comic strips in the months to come.
Colin Barras, online technology reporter
The music industry is changing. While the record labels are desperately trying to protect the revenue stream from album sales, a new generation of artists is starting to realize that they are better off when they give away their music for free. By now, we’re all familiar with the industry’s view, but what drives these artists?
Giving away music for free might not sound like a very solid business model to most people, but it is. Most artists make most of their money from concerts and merchandise, not album sales. Even more so, the key to success is the fans, and what better way to introduce people to your music than by giving it away for free?
A whole new generation of artists, most of whom grew up with Napster, Limewire and BitTorrent, are starting to utilize the power of filesharing networks. This year alone, thousands of albums were released online for free, and this number is growing at an increasing rate. The possibilities are endless. Some artists use sites like Jamendo, others go for mainstream BitTorrent sites like The Pirate Bay and Mininova, and yet another group prefers niche BitTorrent communities such as What.cd.
On What.cd, one of the larger music communities with over 60,000 members, artists have found a particularly successful outlet. In fact, the free albums are quite popular, and often among the most downloaded. The music-minded members, of which quite a few are artists themselves, are very appreciative of every new album. This August a compilation CD was released with tracks from 19 artists who uploaded their music to the site. This CD, titled "The What CD", is the most active torrent of all time on the tracker.
At TorrentFreak we have now reached a point where we can no longer mention all the artists that give away their music for free. While it was a rather exceptional thing to do three years ago, it has become mainstream today. It is, however, worth talking to one of this new generation of bands and artists who decide to share their music at no cost.
The Pragmatic is such a band. Today, the five-member band, which was founded in 2006, has released the album 'Circles' on BitTorrent and Rapidshare. André, one of the band members, who plays an analog synthesizer from the early 80s, explained to us why they chose to give away their music for free.
“With this first release we really wanted to try out giving it out for free and just see what happens,” he said. “Bands like Radiohead and NIN come out and release stuff for free and have success, but that’s largely because of their already established careers. They’ve built that up the traditional way and they’ve reaped the rewards of that, but their success in file-sharing is more of a perk of that status.”
“Growing up, every musician dreamed of that big shiny record deal, but I don’t think it’s relevant anymore. Labels have had to sober up and re-think what their roles are. It used to be about music, and I think file-sharing has brought that to their attention. By releasing it for free, I guess we could be losing money, but in the long run I think we’re (hopefully) making fans.”
Like most people his age, André is part of a generation that grew up with file-sharing. It is part of the music industry now, and it exposes people to more music than they would ever hear on mainstream radio. It is probably not what the RIAA wants to hear, or will ever admit, but music is more popular than ever thanks to file-sharing. André agrees, and told TorrentFreak:
“Fans go to shows, buy merch and support bands for all the right reasons. I think that our generation grew up with an almost insatiable need for more and more music. I know I did. I’ve downloaded lots of albums I loved and bought physical versions. I’ve downloaded plenty of albums I hated and deleted. I can’t begin to count how many bands I know and love because of Napster/Soulseek/Bittorrent. File-sharing was never really about stealing music, it was about finding music you loved.”
“Labels will complain and sue their very core audience just to make a dollar. I can’t blame them, it’s the way they’ve built their company. Change scares them, especially when they don’t control it. I honestly believe that I wouldn’t be a musician today if Napster hadn’t appeared. I think Napster fostered the incredible current musical culture and nobody gives them credit for it. I find it very hard for an upcoming artist to get any exposure without being willing to promote their music on p2p networks.”
The clash between artists and labels, and the ever increasing piracy statistics, are forcing the big labels to rethink their business models. Nowadays, BitTorrent has the power to promote artists based on their music, not on the advertising budget. It is hard to deny that the music labels are in a crisis; music itself, however, is more alive than ever before.
TorrentFreak is proud to present the first episode of ‘TorrentFreak TV’, a recap of some of the best, most interesting or remarkable stories from the wonderful world of BitTorrent. The show is directed by none other than Andrej Preston, who some people might remember as the founder of the legendary Suprnova.org.
The plan is to release a new episode every other week. The episodes will be posted here on TorrentFreak, and in the near future on TorrentFreak.tv, with an iTunes compatible RSS feed.
We’re all very excited about this new project, and we hope to see many more episodes in the future. Below, Andrej himself (aka Sloncek) will introduce people who contributed to TorrentFreak TV.
The idea for the show came when Ernesto asked me if I was willing to produce some webisodes for him. Of course I wanted to do it, since I am trying to get an undergrad major in producing. :)
The first step was to get a team together, and I knew straight away who I would want to be the host of it. So, I asked my friend and classmate Ashley Hardy, who did not really need to think twice about this.
Then, I needed somebody to do the introduction and graphics for the show, and my good friend Micky Smeds, who is an Animation student, stepped in. He also pointed me towards the people who could create the music and sound FX needed for the show - LJUDAFABRIKEN. Just when I thought I had most of the 'crucial' people, I remembered I needed somebody to write the script. Luckily, my friend Krista Steinberger jumped in at the last second.
With the episodes I want to bring most of TorrentFreak's news to people who might not read it for whatever reason, or who just prefer the 'watching' format. The whole show is meant not to be too 'serious', since the things it covers can get really dull pretty fast if said the wrong way.
Like every pilot, this one has many mistakes. We learned a lot from it, but it would still be great if viewers could send us an email and tell us what exactly is interesting to them and so on, so that we can get better with every episode.
The episode can be downloaded from Mininova as well. For tips, comments or suggestions, feel free to contact the crew at firstname.lastname@example.org.
The continued improvements in lithography have been the driving force that has upheld Moore's Law. Shorter wavelengths, better lenses, and adaptive optics have all contributed to this success story, which has allowed commercial chips to be fabricated with 45nm feature sizes, with 32nm in testing. This process has succeeded beyond everyone's wildest expectations, but that does not mean that future feature reductions that rely on the same basic technology are guaranteed. This week, researchers published an adapted maskless lithography technique that may provide an alternative path to sub-32nm features.
The maskless part is pretty important. Normal lithography creates an image of the features that will be etched into a chip by imaging a negative mask. This mask is big, expensive, and can take quite a while to make. When you are just trying to get a chip design right, each little change means a new mask. You can imagine that this might increase the development cost somewhat, while also slowing circuit development.
Maskless lithography gets around the problem by writing the chip directly. This can be done by milling the silicon with a beam of electrons or ions. Electrons and ions can be focused to very small spots, meaning that features just a few nanometers in size can be written, but this is a seriously slow way to write an entire chip. Running many beams in parallel would speed the process up. Unfortunately, electrons and ions are charged, so the beams interact with each other—parallel beams don't focus, and don't go where they are directed.
Light, on the other hand, doesn't interact so strongly with itself, making it the perfect candidate for parallel beam maskless lithography. Unfortunately, it doesn't focus so well—a really good lens will focus light to, at best, a quarter of the wavelength. Fortunately, there may be another option from the field of nanophotonics: something called a plasmon, and lenses built from plasmons.
A plasmon is created by the interaction between light and the sea of electrons in a metal. The light causes the electrons to oscillate and, when conditions are right, they oscillate as a group, creating extremely large electric fields. These fields are exactly like the light field that created the plasmon, except that they extend just a hundred or so nanometers from the metal surface. A lens, based on plasmons, can be created by a set of concentric metal rings. The fields from the plasmons in each ring act in such a way as to create a tightly focused spot of light. In principle, these lenses could focus light tightly enough to create features about five to ten nanometers in size.
There is, of course, a catch—the distance between the lens and the focal point is about 20nm. This is a bit of a headache, because it means that the lens has to move accurately over a silicon wafer while maintaining a distance of just 20nm between it and the wafer. This is precisely the step forward described in the new paper.
The paper actually describes a bit more than that. The problem isn't so much maintaining the separation, but rather maintaining the separation while moving the lens with any speed. Since we only have a limited array of lenses, they have to move over the wafer in order to write the whole circuit; to write with enough speed, they need to move fast. Feedback electronics could maintain the lens at the correct height, but their response time is too slow.
To get around this problem, the researchers used airflow over and under the lens to maintain its height. Basically, if the lens lifts too high, the air pressure under the lens falls, forcing it back down. Likewise, if it sinks too low, the air pressure increases, forcing it back up. The response time is governed by the mass of the lens, which is very light, allowing it to respond very quickly.
In the actual apparatus, the airflow is provided by spinning the wafer under an array of 16 lenses at 2000 revolutions per minute. The write position is controlled by translating the lenses from the center of rotation outwards to the edge of the wafer and flashing laser light into the lenses at appropriate times—getting the sequencing right for a complicated structure must be quite tricky, but that is what computers are for. Now, since the rotation rate is constant, the speed of the wafer under the lens depends on where the lens is, which means that the write speed and control over the height vary as the lenses move outwards. The researchers showed that they could maintain the lens within 20nm of the surface over the full speed range, and importantly, keep the lens nearly flat as well (within a few millidegrees).
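The speed variation is just geometry: at a fixed rotation rate, the linear speed under a lens is v = 2πr × RPM/60, so a lens near the edge sees the surface rush past far faster than one near the center. A quick back-of-the-envelope calculation (the radii here are my own illustrative picks, not figures from the paper) shows the range the height control has to tolerate:

```python
import math

RPM = 2000  # spin rate quoted in the article

def linear_speed_m_per_s(radius_mm):
    """Surface speed under a lens at the given radius on the spinning wafer."""
    return 2 * math.pi * (radius_mm / 1000.0) * RPM / 60.0

# Illustrative radii, from near the spindle out toward a wafer's edge:
for r in (5, 20, 40):
    print(f"r = {r:2d} mm -> {linear_speed_m_per_s(r):5.2f} m/s")
```

At these sample positions the surface speed spans roughly 1 to 8 m/s, which is why maintaining a 20nm gap across the whole radius is the hard part.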
It is important to emphasize that there are still some big hurdles to overcome. First of all, for this to compete with the speed of commercial lithography, the number of lenses will need to be of the order of 1000. Second, they used a small glass wafer to test this on. A 12-inch silicon wafer will be quite another story. Third, the feature resolution was 145nm, which is far from ideal.
I worry about the last part the most. Once feature sizes get below 32nm, the silicon wafer doesn't look very smooth anymore. In fact, it rather resembles downtown Manhattan. Obtaining features that are smaller, or even about the same size, as the depth variations in the silicon surface is going to be very challenging for any lithographic technique.
That said, this work has a lot going for it. No nasty vacuums, no horrible discharge lamps, no complicated optics train, and no expensive mask. Instead, only a laser, a spindle, some precision (OK, a lot of precision), and a complicated computer program are required.
Dan Wallach is headed to Austin Wednesday to tell the state senate about all the risky business associated with the computers Texas uses to count its votes. (As in, the ones we'll be using Tuesday, November 4 to pick the next president.) Back in June, Wallach testified before the Texas House Committee on Elections about the dangers of ES&S machines, the e-voting computers used by Texas.
“All of these voting machines were vulnerable to what we call ‘viral attacks,’” Wallach tells Hair Balls.
"If you have enough access to [one computer] to be able to get out a screwdriver and monkey around without anybody looking then what you could do is you could replace the software inside the one voting machine," he says. (So, if you hear any clanking in the booth next to you, please notify an official.) Wallach says it's more likely it would be a poll worker after or before the election who would get the type of access needed, but once one computer is corrupt, it doesn't take long for all of them to be.
“I compromise one voting machine and then all the voting machines get brought back to the election warehouse,” he says. “Then my evil voting machine talks to the [main] machine that’s tabulating and getting all that stuff and then it hijacks that machine and now it’s evil.” And from there it’s a bad-apple-bunch scenario.
Wallach says there are ways of detecting these types of problems, but they’re not always successful. For one of his Rice classes, Wallach uses Hack-A-Vote, a fake voting computer similar to the ones used in Texas, and tells a group of students to wreak havoc on the system. Then another group of students inspects the machine for possible viruses.
"Many of the subtle hacks escape detection," he says. These subtle hacks could result in anything from votes being deleted or added to votes not being counted at all. To date, Wallach says there have been no reports of these kinds of problems in real elections.
“There is also no evidence to suggest the absence of an attack like this having been attempted, because if somebody was successful, you’d never know,” he says. “That’s not the sort of thing that gives you warm fuzzies.”
But hacking vulnerabilities aren’t Wallach’s only beef with voting computers. “In terms of technologies we have available today, the best technologies we have involve paper,” he says. “These electronic machines we use in the state, they generate no paper record so if they misbehave you have no way of either detecting it or correcting it.”
So, um, don't forget to vote and once you voted, don't forget who you voted for, because this one isn't going to remember. — Dusti Rhodes
Mr. Blurrycam reveals the updated MacBook Pro, $899 laptop model shows up in Apple inventory systems
Well, maybe -- we're not calling it official until Steve pulls the cloth off himself tomorrow morning. Still, there's no denying the similarities between this image and all those other case leaks we've seen, and the list of specs we've been given matches up as well -- that "metal and glass" enclosure now houses an NVIDIA GPU, but no FireWire 400, and video-out is apparently through a connector "more compact" than MicroDVI. We'll find out soon enough -- oh, and just to amp up expectations, Boy Genius says he's confirmed the existence of an $899 part number in Apple's retail systems. Counting down...
Update: Our source just hit us with another pic, this time from the side -- it's after the break. We're also told that there's not one, but two NVIDIA GPUs inside.
Update 2: Our source just hit us again to say that it's two full-on NVIDIA GPUs -- sounds like a hybrid SLI setup to us, which is pretty wild. Wilder still, they say the MacBook and 17-inch MacBook Pro aren't getting refreshed tomorrow, which we find hard to believe, but we'll see when we see.