Friday, February 7, 2014
Chromebox
The Asus Chromebox is a very small, fanless, headless box that will go on sale in March for just $180. There’s a 1.4GHz Celeron 2955U (Haswell) CPU, paired with 2GB of RAM and 16GB of flash storage. On the connectivity side of things, there’s Gigabit Ethernet, HDMI, 4x USB 3.0, DisplayPort, 802.11n WiFi, Bluetooth 4.0, and an SD card slot. Graphics-wise, the HDMI and DisplayPort are powered by the Celeron 2955U’s integrated GPU — which won’t be playing any high-res games, but it should be more than enough for full-screen HD-video playback. AnandTech reports that there will be Core i3 and Core i7 Chromeboxes as well, which will presumably push the pricing up.
Labels:
Asus Chromebox,
Chromebox
Wednesday, February 5, 2014
Can sound kill you? The short answer is “yes”
150 decibels is usually considered enough to burst your eardrums, while the threshold for death is usually pegged at around 185-200 dB.
A passenger car driving by at 25 feet is about 60 dB; standing next to a jackhammer or lawn mower is around 100 dB; a nearby chainsaw is 120 dB. Generally, 150 dB (eardrum rupture) is only reached if you stand really close to a jet aircraft during take-off or you’re near an explosive blast.
If you wanted to intentionally kill someone with a sonic weapon, there isn’t a whole lot of research on how you’d actually go about doing it. The general consensus is that a loud enough sound could cause an air embolism in your lungs, which then travels to your heart and kills you. Alternatively, your lungs might simply burst from the increased air pressure. (Acoustic energy is just waves of varying sound pressure; the higher the energy, the higher the pressure, the louder the sound.) In some cases, where there’s some kind of underlying physical weakness, loud sounds might cause a seizure or heart attack — but there’s very little evidence to suggest this.
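For a rough sense of scale (my own back-of-the-envelope math, not from any study), sound pressure level converts to actual pressure as follows, where 20 µPa is the reference threshold of human hearing:
p = 20 µPa × 10^(dB/20)
At 150 dB that works out to around 630 Pa; at 194 dB it reaches roughly 101,000 Pa, which is one full atmosphere. At that point the trough of each wave is a near-vacuum, which is why estimates of the lethal threshold cluster in the 185-200 dB range.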
So, there you have it: Sound can kill you, but not in the standing-in-front-of-a-giant-speaker-stack-at-a-gig sense.
Labels:
Acoustic energy,
Can sound kill you,
sonic weapon,
yes
Sunday, February 2, 2014
New top-level domains could change how we surf the internet
A big-name start-up called Donuts Inc. is finally getting a chance to test its great theory of the internet: URLs matter, and if Donuts just happens to get rich along the way, so be it. That’s the sales pitch, as the company rolls out the first seven of potentially hundreds of new top-level domains (TLDs), insisting that the new approach will change the way we use the internet. The new TLDs (sometimes called generic TLDs or gTLDs), which became available this week through dozens of domain sales companies, are: .bike, .clothing, .guru, .holdings, .plumbing, .singles, and .ventures. That .com of yours seems a little dull now, eh?
Labels:
New top level domains,
surf the internet,
TLDs,
top-level domains,
URLs
Saturday, February 1, 2014
Carbon fiber 3D printer
Carbon fiber is one of the most prized construction materials available to a parts designer. It is also among the most expensive, due to the painstaking, tedious process of molding or winding it. If carbon fiber could somehow just be printed, it would be quite miraculous. To the delight of makers everywhere, the first 3D printer for carbon fiber was unveiled this week at the SolidWorks World 2014 conference in San Diego.
The company that makes the printer, MarkForged, claims its machine can produce parts with a higher strength-to-weight ratio than 6061-T6 aluminum. 6061 with a T6 temper is certainly not the strongest aluminum flavor going — and probably not the material chosen for the bulk of Ford’s new all-aluminum truck — but it’s the most commonly used aluminum, and still pretty tough stuff. In a rather surprising move, MarkForged’s founder, Gregory Marks, has named his new creation the “Mark One.” The machine prints either a 1.75mm filament via fused filament fabrication (FFF) or a 4mm composite filament via composite filament fabrication (CFF), using quick-change extruder heads, and users also have the choice of printing with fiberglass, PLA (polylactic acid), or nylon.
Labels:
3D PRINTER,
CARBON FIBER,
GREGORY MARKS,
MARK FORGED,
MARK ONE,
SAN DIEGO
Friday, January 31, 2014
How to install SteamOS in VirtualBox
Late on Friday, December 13, Valve released the first public version of SteamOS. It is essentially just a version of Debian 7.1 (Wheezy) that has Steam pre-installed.
It’s well worth installing SteamOS in VirtualBox. That’s what I did — and I’ve written up the process below, if you feel like doing the same.
Grab the latest version of VirtualBox and install it (it may take some time). Get SteamOSInstaller.zip from Valve. Download ISO Creator and install it.
Extract SteamOSInstaller.zip into its own folder. Open ISO Creator. You can name the ISO whatever you like, just make sure you save it in a sensible location. Select the folder that you extracted the zip file to. Hit Start and wait a minute or two while the ISO is created.
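If you’re on Linux or a Mac rather than Windows, the stock genisoimage tool should do the same job as ISO Creator. Something like the following, where SteamOSInstaller/ is whatever folder you extracted the zip into (I haven’t tested this exact invocation against Valve’s installer, so treat it as a sketch):
genisoimage -o SteamOS.iso -r -J SteamOSInstaller/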
Now open VirtualBox. This bit is somewhat complex with lots of little steps and gotchas, so be careful. Create a New virtual machine. Give it any name. Type = Linux. Version = Debian (64 bit). Click Next. Pick any amount of memory (1GB is sensible if you’re just going to fool around; 4GB if you want to try out DOTA 2 or something). Accept the default options on the next few pages of the wizard, and choose “Dynamically allocated” when prompted. Pick a hard drive size of around 50GB.
Once you’ve created the virtual machine, select it on the right hand side and enter Settings. Click System and select Enable EFI. Click Display and select Enable 3D Acceleration. Slide the Video Memory slider up to 128MB. Click Network and select Bridged Adapter from the Attached to drop-down. Click USB and use the + icon on the right to add your USB keyboard and mouse (if applicable).
Finally, head to Storage, click the optical disc icon under Controller: IDE, then hit the optical disc icon on the right hand side (see image). Click Choose A Virtual CD/DVD Disk File, then find the ISO file that you made earlier. Click OK to return to the main VirtualBox interface.
If you receive an error at this point, it’s probably because you haven’t enabled virtualization in BIOS. Enabling virtualization is beyond the scope of this how-to, but if you Google the name of your motherboard and “how to enable virtualization” it’s pretty easy.
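Incidentally, if you’d rather script the VM setup than click through the GUI, VBoxManage can do everything above from the command line. Here’s a rough equivalent of the steps just described, not an official recipe; the VM name, file paths, and bridge adapter name (eth0) are placeholders to adjust for your system:
VBoxManage createvm --name "SteamOS" --ostype Debian_64 --register
VBoxManage modifyvm "SteamOS" --memory 4096 --firmware efi --accelerate3d on --vram 128
VBoxManage modifyvm "SteamOS" --nic1 bridged --bridgeadapter1 eth0
VBoxManage createhd --filename SteamOS.vdi --size 51200
VBoxManage storagectl "SteamOS" --name "SATA" --add sata
VBoxManage storageattach "SteamOS" --storagectl "SATA" --port 0 --device 0 --type hdd --medium SteamOS.vdi
VBoxManage storagectl "SteamOS" --name "IDE" --add ide
VBoxManage storageattach "SteamOS" --storagectl "IDE" --port 0 --device 0 --type dvddrive --medium SteamOS.iso
(USB pass-through for your keyboard and mouse is easier to configure in the GUI.)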
Start the SteamOS machine!
Now, click Start and pray. If all goes to plan, you’ll be greeted with a prompt that looks like the image above. At the 2.0 Shell> prompt, type the following:
FS0:\EFI\BOOT\BOOTX64.EFI
If you can’t type the backslash (\) for some reason (I couldn’t), change your system’s keyboard to US layout, then use the On-Screen Keyboard app to type the \. Press Enter, and you should be greeted with the first sign that you’re installing SteamOS.
From this screen, press Enter to begin the automated install. Don’t worry about the WILL ERASE DISK! warning — VirtualBox prevents SteamOS from changing anything on your local filesystem.
The installation process is automatic and takes a few minutes.

Once the automated install is complete, the system will reboot and you’ll be greeted with the above screen. Select the second option, recovery mode. The system will boot up and you’ll end up at a Linux command prompt.
Install VirtualBox Guest Additions
So that SteamOS is actually usable as a virtualized OS (clipboard sharing, shared folders, better mouse pointer integration), you must now install VirtualBox’s Guest Additions. If the Guest Additions disc isn’t already attached, insert it first via the VM window’s Devices menu. Then, from the command prompt, type the following commands, pressing Enter after each one.
mount /media/cdrom
sh /media/cdrom/VBoxLinuxAdditions.run
This will take a few moments to install. Then type
reboot
and press Enter.
Almost there…
This time around, don’t touch the GRUB bootloader and your system will automatically boot into a graphical interface — SteamOS! Well, almost. You’ll be greeted with a login prompt. Keep Default Xsession selected. The username and password are both steam.
For some reason, the Return To Steam icon on the desktop doesn’t work; you need to click Activities in the top left, then Applications, and scroll down to Steam. The Steam app will update, and then you’ll be greeted with the usual login prompt. Log in, hit Big Picture in the top right corner… and voila! You now have a (virtualized) Steam Machine!
From this point on, you’re pretty much on your own. I haven’t explored SteamOS much yet, but to be honest it doesn’t look like there’s much to discover: Right now, I think it’s just Debian with Steam pre-installed. It’s probably a good idea to have SteamOS installed now, though, so that you can take a look at exciting features — such as local game streaming — when they’re rolled out in 2014.
Labels:
DEBIAN,
INSTALL STEAMOS,
STEAMOS,
VIRTUALBOX
Laptops for engineers
Along with gamers, engineers pose one of the toughest design challenges for laptop makers. Engineering applications crave memory, graphics horsepower, and large screens — all hurdles in designing stylish, lightweight laptops. The result is a necessary tradeoff between performance and convenience. While not every engineer will make the same compromises, there are a few laptops that stand out for use by engineers, depending on their specific needs.
So what is the best laptop for an engineer? Here are two great options, one of which should get the job done for you.
Lenovo ThinkPad W540

Sager NP9570
Often the words “portable” and “mobile” are used interchangeably. Not with the Sager NP9570. This beast of a machine is essentially a portable desktop, but not what you’d call a mobile computing device. Not as well-known as the big brand names, Sager has a reputation for creating machines with amazing performance. For those who want maximum power, the Sager NP9570 is an amazing laptop. Available with CPUs up to the Intel Core i7-4960X Extreme Edition (typically a desktop processor) running at 3.6GHz (4.0GHz Turbo), this laptop’s raw performance is its defining characteristic.

The 1080p display isn’t as sexy as the ultra-high-resolution versions available on other machines, but it has an unmatched three hard drive bays, tons of ports, 7.1-channel sound, and the choice of two powerhouse 4GB or 5GB Nvidia GeForce 770M or 680M video cards in SLI. This selection makes it the top graphics performer in anything short of a full-on desktop.
The downside of this machine, not surprisingly, is the size and weight. At 12 pounds, it is a beast in more ways than one. Part of the weight is the 17-inch display that covers 90% of the NTSC color gamut, but the high-powered, near-desktop components drive most of the rest.
Labels:
INTERESTINTECHNOLOGY,
LAPTOPS,
LENOVO THINKPAD W540,
NEW TECHNOLOGY,
SAGER NP9570
Thursday, January 30, 2014
The very first transistor
The operation of a transistor is entirely based on a class of materials known as semiconductors, which chemists and engineers have known about since the mid-1800s. In 1833, Michael Faraday noted that silver sulfide decreased in electrical resistance when heated (metals usually increase in resistance when heated). In 1880, Alexander Graham Bell used selenium — a semiconductor that produces electricity when it’s struck by light — to transmit sound over a distance with his Photophone device.
Real analysis of semiconductors didn’t begin until the 1920s, though, when scientists tried to work out why and how there was a class of materials that appeared to be metals, but behaved very differently from normal metals. With World War II, and the advent of radar and other radio technologies, semiconductors started to become very serious business. It wasn’t until Bell Labs turned its attention to semiconductors after the war, though, that we finally started to learn about and control their properties.

A diagram showing the first germanium amplifier, using water as an electrolyte. When a potential is applied by the wire on the left, increased current flows across the right-hand circuit.
Specifically, Walter Brattain, John Bardeen, and William Shockley of Bell Labs decided to investigate the bulk and surface properties of silicon and germanium. Through a series of experiments, the researchers discovered that applying a small amount of electricity to the surface of a piece of germanium could increase the flow of electricity through a second circuit that was also connected to the germanium — in other words, an amplifier. The earliest germanium amplifiers used liquid electrolytes, which would dry up, or were only capable of switching at low frequencies. Then, on December 23, 1947, gold contacts were used instead of an electrolyte — and thus the first transistor was born.
South Korea to spend $1.5 billion on 5G mobile network that’s ’1,000 times faster’ than 4G
South Korea, eager to keep its title as the world’s most connected nation, has announced that it will spend $1.5 billion (1.6 trillion won) on rolling out a next-generation mobile 5G network. What actually constitutes a 5G network is a bit nebulous at the moment, but the general consensus is that it would be somewhere between 10 and 1000 times faster than current 4G LTE networks, with download speeds in the region of 100-1000 megabytes per second — fast enough to download a TV episode or movie in a couple of seconds. The South Korean government hopes that by getting ahead of the curve, national chaebols such as Samsung and LG — which represent a significant portion of the country’s entire GDP — will have a competitive edge when other nations get around to rolling out 5G.
What exactly is 5G? That’s a very good question, and as it stands there’s no definitive answer. The ITU, the organization that defines telecoms standards, has only just started looking at the problem and is years away from a 5G specification. Right now, 5G is essentially a catch-all term for the next generation of very-high-speed mobile networks that are being developed by various research groups around the world. There have been very few actual real-world 5G tests. In mid-2013, Samsung set up a “5G” wireless link capable of 1Gbps (125MB/sec) over two kilometers (1.2 miles) — but there are scant details on how Samsung actually pulled it off, and even fewer on whether the test was actually applicable to mobile applications (it sounds like Samsung used a big van full of batteries and antennae).
If things move quickly, we might have a 5G standard by 2015 or 2016. The South Korean roadmap, following the government’s $1.5 billion investment, wants to see a trial 5G deployment in 2017, and a commercial service by the end of 2020. The main barrier to developing a 5G standard, though, is finding a suitable way of pushing gigabits of data through the air without interfering with other wireless services and without burning through the battery. (Read: World’s fastest wireless network hits 100 gigabits per second, can scale to terabits.)
Generally, as we’ve moved through the generations (1G, 2G, 3G, 4G), most of the speed boosts have come from utilizing larger swaths of bandwidth. LTE-Advanced, the fastest 4G spec out there, uses 100MHz of prime wireless spectrum (2G only used 200kHz by comparison). While we can rely on new MIMO and modulation tech to get us some of the way to 5G, we’re ultimately going to need more bandwidth — and down in the cellular sweet spot (700-2100MHz), there just isn’t that much bandwidth available. As a result, 5G may have to use higher frequencies (30-90GHz) where there’s a lot more free space — but the millimeter-wave frequencies have their own problems (they are very rapidly attenuated by obstacles) that will need to be worked around. (Read: The wireless spectrum crunch, illustrated.)
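To see why bandwidth is the main lever, consider the Shannon-Hartley limit, which caps the capacity of any radio channel:
C = B × log2(1 + SNR)
With illustrative numbers of my own choosing, say LTE-Advanced’s 100MHz of bandwidth and a healthy 20dB signal-to-noise ratio (a factor of 100), the ceiling works out to 100,000,000 × log2(101) ≈ 666Mbps. Capacity scales linearly with bandwidth but only logarithmically with signal quality, so squeezing out another 10-100x for 5G means finding proportionally more spectrum, not just building cleverer radios.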

Wednesday, January 29, 2014
iOS in the Car video shows improved navigation
It’s been over half a year since Apple announced iOS in the Car integration with over a dozen automakers, and we’ve not heard much from Cupertino about this feature since then. Today, a video leaked that shows off exactly what the interface currently looks like, and it’s quite promising. It’s still just an emulation running in OS X, but it does give us a solid idea of what we can expect from 2014’s in-dash user experience.
In this YouTube video, a developer named Steven Troughton-Smith walks us through the “iOS in the Car” interface with a build designed for iOS 7.0.3. While the build shown here is limited in scope, it does showcase some very valuable information. For example, it does support touchscreens and multiple resolutions, but it doesn’t support third-party apps or multitasking. Interestingly, it doesn’t seem to have a virtual keyboard interface either. Instead, voice recognition will serve as the main form of input. Dictation is still handled in the cloud, so hopefully Siri won’t collapse under the pressure.
Of course, these are still early days for Apple’s in-car integration. The software shown in this demonstration is unfinished, and automakers still need to ship iOS-ready models. Even so, The Verge points to a very interesting screenshot provided by an iOS developer by the name of Denis Stas. In this screenshot, he shows the emulator running with the iOS 7.1 beta, and it looks substantially different. The user interface has been thoroughly polished, and the aesthetic better matches the look of Apple’s current UI motifs. If the rest of the software has seen as much work as the user interface, a public release might be in the cards in the next couple of months.
We know that traffic, directions, music, and messaging are all going to be available with Apple’s system, but what about third-party apps? Many of us spend hours in the car every single day, so customized apps and notifications would be welcome additions. It’s also worth noting that Apple’s lackluster mapping solution can’t be swapped out for Google Maps here, and that could turn off a number of wary travelers. If Apple wants iOS in the Car to gain traction in the long run, third-party apps are absolutely a must. How long will we have to wait for Apple to take the hint?
Frankly, the competition is heating up from all sides. Microsoft and Ford have been working together for years with Ford Sync, and Google is teaming up with Audi and Nvidia to move Android directly into the car market. Apple isn’t the only smartphone game in town, and automakers are even working to cut out the need for smartphone integration with 4G-enabled cars. Apple certainly has its work cut out for it, and each day that goes by without an iOS in-car solution makes it even more difficult for Apple to reach new users. If iOS in the Car has any hope of success, Apple needs to release it as soon as humanly possible.
Wednesday, January 22, 2014
Duke’s metamaterial superlens could finally allow for long-range wireless charging
Wireless power transmission is fantastic in theory — but in practice, due to some pesky laws of physics, the range of transmission is extremely limited. Yes, you can now charge your phone by putting it on a wireless power transfer plate, but that plate still needs to be plugged into the wall, and — more importantly — you still have to remember to put your phone on the plate. What would really make wireless power transmission (WPT) useful is if you could power a device that’s still in your hands or pocket. By using metamaterials, Duke University has discovered a way of wirelessly transmitting power over much greater distances, bringing us tantalizingly close to a utopian society where mobile devices never run out of battery.
Wireless power transmission, or more accurately resonant inductive charging, is not a new phenomenon. If you want to read about it in detail, we have an explainer that you should read. In short, though, inductive charging works like this: You have two loops of wire, you run current through one loop, and that current induces a resonant electric field in the other loop. This process, as you can imagine, isn’t very efficient — but if the loops are “tuned” to each other (like a radio antenna), and they’re aligned within a few millimeters, you might reach a transmission efficiency of 80% or so.
The big problem, though, is distance. The effective range of a WPT system is dictated by the diameter of the loop. At distances greater than the diameter, efficiency drops off very, very quickly. As you can imagine, for mobile devices such as your smartphone, there’s a hard limit on how big the loop can be. This is why you have to put your phone on a plate, rather than the plate beaming power across the room to the phone in your pocket.
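The physics behind that drop-off is worth a quick sketch (this is standard near-field behavior, not anything specific to the Duke paper): the magnetic field of a small loop falls off with the cube of distance, and transferred power scales with the square of the coupling between the coils.
B ∝ 1/d^3 (near field of a current loop)
P ∝ k² ∝ 1/d^6 (coupled power at distance d)
A sixth-power law means doubling the distance costs you a factor of roughly 64 in power, which is why conventional WPT dies so quickly beyond the coil diameter.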
Enter Duke University’s breakthrough, which uses a metamaterial superlens to create a WPT system that can transmit power up to 12 times the diameter of the two coils (which are 2cm in this case). A superlens (a lens that goes beyond the diffraction limit), by using materials with properties not found in nature (metamaterials), can circumvent the diameter-transmission-distance limitation. (Read: The wonderful world of wonder materials.) In this case, the metamaterial is fashioned out of lots of little copper loops (pictured above), arranged in a 3D pattern (top photo). If you want to read the math behind this metamaterial superlens, hit up the research paper (it’s open access): doi:10.1038/srep03642 – “Magnetic Metamaterial Superlens for Increased Range Wireless Power Transfer.”
At just under a foot, we’re not yet talking about a system that can wirelessly recharge devices across a room, but it’s a big step in the right direction. If you had a 5cm loop in the back of your phone, and a matching metamaterial superlens transmission coil, the range would be 60cm, two feet. With some tweaking — different materials, better manufacturing, beamforming — perhaps the superlens can reach even farther. Who knows, maybe in a few years, coffee tables and office tables everywhere will be outfitted with wireless charging units that keep your phone, tablet, and laptop permanently charged.
Tuesday, January 21, 2014
Please, Microsoft, don’t put Windows XP to sleep on April 8 – the world isn’t ready yet!
On April 8, 2014, more than twelve years after it was first released, Windows XP will finally breathe its last breath and die — officially, anyway. From that date, Microsoft will no longer support the venerable OS, meaning instability bugs and security vulnerabilities will go forever unpatched. With Windows XP’s desktop market share still around 30%, and many enterprises still months or years away from upgrading to Windows 7/8, these unsupported and insecure machines represent a serious risk to the health and security of the internet and other high-tech infrastructure. If just a single zero-day vulnerability is found after April 8, it will never be fixed. There’s no telling what damage cybercriminals might sow with such an exploit.
It’s important to note that the Windows XP EOL/EOS (end of life/end of support) has been a long time coming. We’ve known since June 2008 that Microsoft would withdraw paid assisted support, security updates, and non-security hotfixes for Windows XP in April 2014. There will also be no further updates to online technical documentation. While this is obviously an issue from a security perspective, the larger issue is compliance — if you manage personal data (which is basically every big company), there are industry and federal regulations (PCI, Sarbanes-Oxley, HIPAA, etc.) that you need to comply with. Using a non-supported operating system, and thus dangerously exposing your client database to hackers, is a compliance no-no.
According to Net Applications, Windows XP still had a 29% share of the desktop market at the end of December 2013. Realistically, most big western enterprises and institutions have probably already upgraded to Windows 7. The bulk of the 29% probably consists of China’s infamous love affair with pirated copies of Windows XP, and a lot of mom-and-pop desktops and netbooks. Windows 7 only came out four years ago, and the widely reviled Windows Vista came before that. When you factor in the slowing pace of the PC market, and the small performance gains from new hardware, it’s not hard to believe that there’s a bunch of Windows XP machines still floating around. (Read: PC obsolescence is obsolete.)
The other area where Windows XP still rules supreme is in legacy systems. For large institutions, such as banks, upgrading from a legacy (and often bespoke) system is time consuming, expensive, and dangerous. As a result, there are banks, airline companies, and other huge enterprises that still have back-end systems that are much older than Windows XP. Case in point: According to Bloomberg Businessweek, 95% of the 420,000 ATMs (cash machines) in the USA run Windows XP. Come April 8, 2014, if a serious security flaw is found in Windows XP, the banks will be on their own to defend against increasingly high-tech criminals. (Read: ATMs running Windows XP robbed with infected USB sticks.) The banks do have plans to upgrade these machines, but it will take time — probably a few years, if not more.
It’s hard to get a fix on the total number of desktop PCs in the world, but it’s somewhere between one and two billion. At 29% market share, that’s roughly 300-580 million Windows XP machines; a botnet of epic proportions could be fashioned if a suitable zero-day vulnerability were found. I guess we should be glad that Microsoft has an excellent reputation for taking down botnets, eh?
Anyway, the point is, if you have a friend or family member who’s still running Windows XP, help them upgrade to Windows 7 as soon as possible. In case you were wondering, Office 2003 also has the same EOL/EOS date — but unless you’re in the habit of opening random email attachments, it’s much less of a potential security risk.
Monday, January 20, 2014
Windows 9 coming in 2015
To distance itself from the Windows 8 snafu, Microsoft’s next major update — Threshold — will reportedly skip Windows 8.2 and jump straight to Windows 9. Windows 9 is expected to arrive in April 2015, with internal sources saying that Windows 9 will make good on many of the Windows 8 features that caused such cruel and unusual distress to Desktop users. The Start menu is expected to make its illustrious return, and you should be able to run Metro apps on the Desktop in windows. Microsoft is still on schedule to release Windows Phone 8.1 and a service/feature pack for Windows 8.1 at the Build conference in April.
This latest information comes from the ineffable and surprisingly handsome Paul Thurrott, who usually has pretty accurate sources when it comes to Microsoft leaks. We had previously heard about Threshold, but at the time we thought Microsoft would stick with the Windows 8 naming scheme. The move to Windows 9 signals that Microsoft is looking to make significant changes. Windows 8 is almost completely characterized by the maligned Metro Start Screen. We would be surprised if Windows 9 did not change the primary interface in some way, so that it’s visually distinct from Windows 8 — so that users know it isn’t, ewww, Windows 8. Windows 9 might even boot straight to the Desktop by default — at least on laptop and desktop PCs, anyway. (Read: How to bring back the Start menu and button in Windows 8.)
Windows 9 is also expected to feature Metro 2.0 — some kind of maturation of the current Metro design language that dominates the Windows 8 Start Screen and apps. It’s not immediately clear what Metro 2.0 will be exactly, but part of it appears to be the ability to run Metro apps in separate windows on the Desktop. Presumably, if Metro apps are going to be on the Desktop, they will also gain the ability to be controlled with a mouse and keyboard. (Navigating current Metro apps with your keyboard is unpleasant to say the least.) Windows 9 may also feature complete cross-platform app compatibility between Windows 9, Windows Phone 8.1, and the Xbox One — but really, it’s too early to tell at this point.
Thurrott’s other interesting tidbits revolve around April’s Build conference, which occurs a couple of weeks after the company finishes its huge internal reorganization. While the conference will be mostly focused on Windows Phone 8.1 and the Xbox One, there will apparently be a “vision announcement” for Windows 9 — something that we haven’t seen since 2003, when Microsoft unveiled Longhorn (which later became Vista). During Sinofsky’s reign, Microsoft’s Windows division was incredibly secretive — this Windows 9 keynote probably won’t be quite as crazy and freewheeling as the olden days, but Microsoft hopes that it will be enough to begin the process of healing the wounds left by Windows 8.
Of course, now that I mention Longhorn, it’s impossible to ignore the parallels between Vista and Windows 8. Both were victims of Microsoft's long and slow development cycle: Slow and bloated Vista arrived just as netbooks were taking off, and Windows 8 — though its heart was almost in the right place — was a couple of years too late. Hopefully the successor to Windows 8 will be as good as Vista’s successor. Microsoft kind of needs a miracle for Windows Phone 8.1, too — if you think that adoption of Windows 8 has been bad, it’s even more anemic on the smartphone side of the equation. The next 12-18 months will be very important for Microsoft: It must either field a compelling OS and ecosystem for smartphones and tablets, or it runs the risk of fading into consumer obscurity.
Sunday, January 19, 2014
ESA’s Rosetta spacecraft is about to complete its decade-long mission by landing on a comet
The European Space Agency’s (ESA) Rosetta spacecraft has spent the last decade winding its way around the Solar System. It has been slingshotting around planets and picking up speed to complete its mission — to be the first spacecraft ever to land on a comet. In just a few days, Rosetta is scheduled to wake up from its long sleep and get ready to make history.
Rosetta was launched way back in March of 2004 with the aim of landing on the 4km-long comet 67P, which it will rendezvous with near the orbit of Jupiter. But why has it taken so long to get out there? Like most comets, 67P has built up a lot of speed over eons of swinging around the Sun. In order to successfully land on its surface, Rosetta needed to match its course and speed almost exactly. It would have been profoundly costly in fuel to do that with a rocket alone, so the craft has spent these last 10 years lining up four separate gravitational assists from Earth and Mars.
The length of the mission required the solar-powered spacecraft to conserve energy, so it has been in hibernation mode for the last 18 months as it closed in on its target. On January 20, Rosetta is set to wake up. If the ESA doesn’t get the all-clear signal from the $1.36 billion probe in a matter of hours, the mission could be over after 10 years of planning. Once it comes online, Rosetta will get its bearings and use thrusters to control its spin to better soak up power with its solar panels.
Rosetta is scheduled to catch up to the comet in May and should begin scanning the surface shortly thereafter. The probe itself isn’t going to be making the landing — it carries a small lander called Philae for that. The landing element of the mission isn’t taking place until November 2014, when Rosetta will release Philae and allow it to float toward the surface. To prevent the lander from bouncing off, it will fire two harpoons into the surface as it lands.
Scientists believe that comets could have delivered some of the organic molecules to the early Earth that led to the formation of life (panspermia). Comets like 67P likely contain those same ingredients from the formation of the planets, picked up as they formed in the extreme outer Solar System billions of years ago. Having the chance to examine these frozen fossils more closely could tell us a great deal about how life arose and the Solar System formed.
Despite a number of flybys and the 2005 Deep Impact collision study, we still know very little about the molecules present in comets — we don’t even know what the consistency of the surface will be like when Philae lands. Before you ask, no, Deep Impact wasn’t the first landing on a comet — a 23,000 mph collision doesn’t count as a landing. Rosetta and Philae both contain instruments to characterize the structure and composition of the comet. They will measure the levels and forms of hydrogen, carbon, nitrogen, and oxygen locked in the icy interior.
Rosetta could open up a new era of understanding our place in the universe when it wakes up in a few days. Of course, that’s only if it does wake up.
Saturday, January 18, 2014
Dell XPS 8700 desktop with 1080p touch monitor
Desktops are dead to some, but for the rest of us they offer an amount of performance and expansion for the dollar that simply can’t be matched by a laptop. While enthusiastic power users may prefer to build their own machine, it’s tough to beat the value of a good pre-built PC deal.
As our deal hunters have found time and again, if you happen to need a full PC setup, not just a monitor or a desktop separately, the bundle deals offer the best value. You typically get the desktop, monitor, keyboard, mouse, and of course all the benefits of a pre-built PC, like warranty and OS. Our hot deal today is an offer we haven’t come across before, pairing a high-end PC with a multi-touch monitor.
The XPS 8700 is no stranger to our deal listings here, offering a strong combination of performance, features, and design at a very competitive price. Many of the XPS 8700 deals we feature easily beat a custom-built PC for value and this one is no different. The system included in this bundle is well equipped with a quad-core Haswell Core i7, 12GB of RAM, 1TB hard drive, GeForce GT 635 graphics, Wireless-N, Windows 8, and a host of other standard features.

For a display, you get the 21.5-inch S2240T, a sleek piece of kit with 10-point multi-touch and a unique hinge that allows it to lie at the perfect angle for touch interaction. It serves up 1080p resolution on a gorgeous IPS LCD panel, ensuring extra-wide viewing angles and rich colors. Paired with the Windows 8 OS, open-minded users will come to enjoy the added level of interaction that touch offers (I know, it sounds crazy, but I went through it myself recently).
The XPS 8700 desktop is covered by a two-year warranty that includes in-home service and 24×7 phone support covering both hardware and software. The monitor likely comes with its own one-year warranty that includes advance exchange to avoid costly shipping bills, though Dell isn’t clear about separate warranty coverage.
This bundle normally runs $1469, but a hefty instant discount knocks that down to just $1000 with free shipping. Check this out over at Dell.com and step into a complete and powerful new computer system.
Click here to start at Dell.com. Total $469 instant savings applied automatically. This deal ends January 21 or sooner.
Friday, January 17, 2014
Soft robotic gripper
Robots are getting faster and stronger all the time, but for all the power and cunning engineering behind those servos and actuators, the subtle elegance of picking up an object is still a tricky thing in robotics. Designing a conventional grasper that pinches together and lifts items is the traditional approach, but it might not be the best. A new type of vacuum-powered gripper called the Versaball is now commercially available.
This device was originally devised a few years ago as part of a collaboration between researchers at Cornell, University of Chicago, and iRobot (with funding from DARPA). The Versaball is able to grab hold of things using a property called jamming transition. The Versaball is a serious piece of robotics gear, but the concept can be illustrated with regular household items.
Small granular materials that are packed into a rubber housing like the one at the end of the Versaball have an almost fluid consistency. If you press the bulbous shape down on something, the material flows around it. Jamming transition is applied when the air is evacuated from the rubber membrane. This causes the granular particles to be pulled closer together and act more like a solid, thus locking the object into place. It’s the same phenomenon you might have experienced with vacuum-packed goods like coffee grounds — they’re hard as a rock until you open the package. In fact, an early version of the Versaball used coffee grounds in a regular balloon.
The Versaball, sold by Empire Robotics, is much more advanced than a party balloon filled with coffee, but the principle is the same as it ever was. The super-durable rubbery bulb is pressed into an object, allowing the granular material to fill in around it and solidify when the air is pulled out (in under 0.5 seconds). The business end of the Versaball has a beanbag-like consistency and comes in three-inch and six-inch varieties. Assuming a solid grip, the Versaball can lift up to 20 pounds.
This approach is appealing because it has what roboticists call high error-tolerance. When dealing with a mechanical grasper, the operator has to very carefully align the device to pick something up. Jamming transition is just more forgiving — the head doesn’t have to be right on the money to attach firmly to something, and the surface doesn’t have to be smooth or regular. For systems relying on robot vision, this is even more desirable.
Empire Robotics sees the Versaball having applications in robots, of course, but also in assisted living, prosthetics, and handling hazardous material. The first batches of Versaball kits are expected to ship later this month.
Monday, January 13, 2014
5K TVs
With 4K UHD TVs among the most visible stars of CES, including Sony’s new X-tended dynamic range models, of course someone had to up the ante with a 5K set. Toshiba was showing off this model, which doesn’t have any more vertical resolution than a 4K set but is wider, allowing full-screen display of cinematic content (which generally has a wider aspect ratio than normal wide-screen displays).
Saturday, January 11, 2014
Photosynth 3D
When Blaise Agüera y Arcas (then of Microsoft Research, but now at Google) demonstrated Photosynth at TED 2007, it was an immediate hit, and it has since become one of the most-watched and most-discussed tech demos of all time. While the original iteration of Photosynth was certainly cool, the new version — Photosynth 3D — will blow your mind.
The Photosynth project was started almost a decade ago for the purpose of “reinventing the whole enterprise of photography for ordinary people,” as Agüera y Arcas says. The first iteration of Photosynth, released in 2007, stitched together thousands of photos — cobbled together from all over the web — to create a seamless 2D image that you can explore. It was basically a clever computer vision algorithm combined with a “gigapixel” panorama builder/viewer. While the underlying tech was undoubtedly cool, it was the slickness of the interface that really wowed people. The original Photosynth video is embedded at the end of the story.
The new version, Photosynth 3D, takes that clever computer vision algorithm and incredible interface slickness into the third dimension. Photosynth can now take a bunch of photos and turn them into four different 3D views: Spin, Panorama, Walk, and Wall. The best way to demonstrate these four views is to watch the video, or to play around with some of the embedded synths below.
As you can see, all of these views are very similar to the original Photosynth, but now it’s also possible to move in space, rather than just panning and zooming a 2D plane. The 3D Panorama and Wall views are actually very similar to the original Photosynth, but the addition of 3D parallax makes it feel like you’re actually there, or that you’re watching a video.
Spin is a new mode that basically turns the panorama inwards, towards an object. Instead of turning on your feet to shoot a panorama of a scene, a Spin view is created by walking around a subject and taking dozens of photos. The Walk view, as the name implies, is basically a series of photos captured while you walk forward, and stitched together to create a 3D space. For all four modes, remember that when you stop the camera, you have full access to the original high-res images — it’s still like a gigapixel panorama in that regard. (Read: Autodesk Catch: Make a 3D print of anything, just by walking around it with a camera.)
Technologically, while the original Photosynth used computer vision to align a large number of images in two dimensions, Photosynth 3D uses the spatial gap between each image to generate 3D models of the objects in each scene. Then, depending on your position in the scene, textures (which have been cut out of the original photos) are overlaid on those objects. It’s fairly ingenious, and the new, mega-slick Photosynth viewer really adds to the experience. If you get a chance, try hitting the “c” or “m” keys while in the new Photosynth viewer; C reveals the 3D interpretation for each image, while M shows you the (scarily accurate) reconstructed path taken by the camera.
The future of Photosynth
As exciting as the original Photosynth was, we never really saw the tech come to fruition. In theory it is built into Bing Maps, allowing it to bring up geo-tagged synths, but it never really hit the critical mass required. For the most part, the Photosynth website seems to be Yet Another Gigapixel Panorama repository. (Read: Ricoh Theta: The first camera that can take spherical 360-degree panoramas.)
With these new 3D views, though, it’s easy to see the parallels with competing services such as Google’s Street View — especially when you consider that the Photosynth team moved from Microsoft Research to the Bing Maps department a few years ago. For now Photosynth 3D is just a tech preview, but hopefully Microsoft can find a way to bring it to the mass market. The tech is simply too cool to keep hidden away in the vaults. Copyright issues aside, imagine if Microsoft just left a few hundred Photosynth servers running in the background, joining up all of the photos on Flickr and Facebook to create a 3D panorama of the entire world…