The world of Virtual Reality has seen a dramatic increase in popularity in recent years. The technology people have been waiting for has finally arrived, and it comes in the form of a head-mounted display (HMD). There are many brands of HMD, ranging in their ability to achieve total immersion. The lowest-end forms of VR use a smartphone and a pair of lenses, like Google’s Cardboard:
The cheapest versions of VR all use the same lens-enclosure method of delivering VR. Users are limited to the apps they can find in their phone’s app store, which are buggy at best. Still, if you’re unsure whether you want to buy a more immersive HMD, this is a great way to get an idea of what you’d be buying. The real immersion begins when the display and the technology inside are designed specifically for VR gaming.
The best VR experience that still keeps your wallet happy comes from the Samsung Gear VR, but it requires that you already own a recent Samsung Galaxy smartphone:
At $60, the Samsung Gear VR packs more intricate technology than the Google Cardboard, allowing for a better experience. You could also add the Gear 360, which allows for “walk around the room” immersion, for $350, but if you find that price point reasonable you may be better off in high-end territory. The Gear VR has its own app store with games designed for it.
If you don’t have a Galaxy smartphone but you do have a PlayStation, you may be interested in what Sony has been working on: their VR HMD, the PlayStation VR. At $400, the PSVR connects to your PlayStation for use with VR-enabled games. The PSVR is meant to be used with the PlayStation Move controllers, which will add another $100 to your total. A Sony executive says PC compatibility for the PSVR may be in their future.
The high-end forms of VR include the Oculus Rift and HTC Vive:
These HMDs are designed with PC games in mind. They provide an experience far superior to the cheap options, but at a high price: $599 for the Rift and $799 for the Vive. The Vive includes two hand controllers which give the user virtual hands for interacting with VR objects. Oculus is working on a similar device, the Oculus Touch, which is available for pre-order as of October 2016.
Many companies are investing in virtual reality and creating their own devices to compete with the front-runners. It is expected that the VR market will expand much further, especially once the price point of the high-end HMDs comes down. Virtual Reality is in a state of great potential; the applications of these headsets go well beyond gaming. The military is interested in them for training purposes. Educators can use them to teach students. Doctors can use them to treat psychological conditions. I have no doubt that Virtual Reality will eventually become part of our everyday lives.
This year, Samsung and Apple both released a new generation of devices. If you don’t have a particular operating system preference and photography is your thing, then this article is for you.
The Samsung Galaxy S7 and S7 edge both have the same cameras, with the following specifications: Dual Pixel Auto Focus 12 MP rear camera, F1.7 aperture, UHD 4K (3840 x 2160) video recording at 30 fps, and a flash on the rear camera.
Dual Pixel Auto Focus was introduced on smartphones for the first time with these Samsung Galaxy devices. All of the pixels in the camera’s sensor are used for both phase detection and sensing light, whereas in previous smartphone cameras fewer of the pixels were used for phase detection and autofocus.
Aperture is the opening of the lens, measured in F-stops; these numbers correspond to the size of the opening. A smaller F-stop means a larger opening in the lens, and a larger F-stop means a smaller opening. With an aperture of F1.7, the 7th-generation Galaxy devices have the largest aperture of any smartphone to date. This lets the camera take in more light, resulting in better low-light photos.
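The relationship between f-stop and light gathering can be checked with quick arithmetic: for a fixed focal length, aperture area (and hence the light gathered) scales with the inverse square of the f-number. The F2.0 comparison value below is just an illustrative assumption, not a spec from either phone.

```python
# Light gathered is proportional to aperture area, which for a fixed
# focal length scales as (1 / f_number) ** 2.
def light_ratio(f_small: float, f_large: float) -> float:
    """How much more light an f_small lens gathers than an f_large lens."""
    return (f_large / f_small) ** 2

# The Galaxy S7's F1.7 versus a hypothetical F2.0 lens:
print(round(light_ratio(1.7, 2.0), 2))  # ~1.38x the light
```

Even a small-looking change in f-number compounds, which is why phone makers chase tenths of a stop.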
The rear camera on the Samsung devices records in 4K resolution, which is the resolution newer consumer TVs display in.
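To unpack the “4K” label, a quick calculation shows what 3840 x 2160 means in megapixel terms, and why such a frame fits comfortably within a 12 MP sensor:

```python
# UHD "4K" frame size in pixels and megapixels.
width, height = 3840, 2160
pixels = width * height
print(f"{pixels:,} pixels = {pixels / 1e6:.1f} MP")  # 8,294,400 pixels = 8.3 MP
```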
Unlike the seventh-generation Galaxy devices, which share identical cameras, the iPhone 7 and 7 Plus have slightly different cameras, though with many similarities.
The iPhone 7 camera boasts the following features:
12 MP rear camera with F1.8 aperture
Digital zoom up to 5x
Optical image stabilization
Panorama (up to 63 megapixels)
Sapphire crystal lens cover
Backside illumination sensor
Hybrid IR filter
Autofocus with Focus Pixels
Tap to focus with Focus Pixels
Live Photos with stabilization
Wide color capture for photos and Live Photos
Improved local tone mapping
Body and face detection
Auto HDR for photos
Auto image stabilization
Video recording:
4K video recording at 30 fps
1080p HD video recording at 30 fps or 60 fps
720p HD video recording at 30 fps
Optical image stabilization for video
Quad-LED True Tone flash
Slo‑mo video support for 1080p at 120 fps and 720p at 240 fps
Time‑lapse video with stabilization
Cinematic video stabilization (1080p and 720p)
Continuous autofocus video
Body and face detection
Take 8-megapixel still photos while recording 4K video
In addition to these features, the iPhone 7 Plus also features a telephoto lens with an F2.8 aperture. 2x optical zoom and digital zoom up to 10x are also available.
F1.8 is a slightly smaller aperture than the F1.7 on the 7th-generation Samsung devices, but the difference is very small. The additional telephoto lens and optical zoom on the iPhone 7 Plus make it capable of taking better pictures at a distance.
This information about Apple devices and any further specifications can be found on their website at http://www.apple.com
Digital and optical zoom both accomplish the same job; they just do it in different ways. Optical zoom is based on the lens itself: different parts of the lens move to zoom and focus, which is why smartphone cameras have limited optical zoom. Digital zoom is entirely computational, similar to zooming in on an image you might find on Google: the processing unit crops the frame and scales it back up, which is why fine detail is lost at high digital zoom.
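A toy sketch of how 2x digital zoom works, assuming a simple nearest-neighbour upscale (real phone image processors use far more sophisticated interpolation): the centre of the frame is cropped and stretched back to full size.

```python
# 2x digital zoom on a tiny grayscale "image" stored as a list of rows.
def digital_zoom_2x(image):
    h, w = len(image), len(image[0])
    # Crop the central half of the frame in each dimension...
    crop = [row[w // 4:w // 4 + w // 2] for row in image[h // 4:h // 4 + h // 2]]
    # ...then stretch it back to full size with nearest-neighbour sampling.
    return [[crop[y // 2][x // 2] for x in range(w)] for y in range(h)]

frame = [[x + 10 * y for x in range(8)] for y in range(8)]
zoomed = digital_zoom_2x(frame)
# The zoomed frame is still 8x8, but each source pixel now covers four,
# which is exactly where the lost detail goes.
```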
Overall, both manufacturers make very capable cameras. The information is available on their websites and here for you to compare. For me, the decision would ultimately come down to operating system and user-interface preference.
When it comes to portable devices aimed at a college-going audience, few products can really compare to the sleek and powerful MacBook Air and Surface computers, each fulfilling a similar role by the design of Apple and Microsoft respectively.
While both computers are excellent, they’re quite difficult to choose between. Both are offered at similar sub-$2,000 price points, and both are designed with portability and aesthetics as major goals. However, there are a number of key differences that can help with the decision when purchasing one of these machines.
Interface and Form Factor
The form factors of each device are strikingly different, with some variation depending on the specific model purchased. The MacBook Air comes in 11- and 13-inch variants, with the 13-inch boasting some spec increases to boot. Surfaces, however, are a little more varied: if you’re looking for the newest devices on the market (which I would personally recommend), you’re essentially deciding between the Surface Pro 4 and the Surface Book.
While the Surface Pro 4 is essentially a tablet computer with an optional attachable keyboard, much like an iPad, the Surface Book is much more of a dedicated laptop-style device. Many people will prefer this style, as the more robust keyboard makes typing a much more pleasurable experience, yet the simplicity of the tablet experience might draw some to choose the Surface Pro 4 instead. Each device rings in at a similar size, the Surface Pro 4 having a slightly smaller 12.3-inch screen compared to the Surface Book’s 13.5-inch.
Either way, both Surface devices present one striking difference in interface: the touch screen. A touch screen is a valuable tool that increases ease of use and productivity, especially in environments where a stable desk is unavailable. Furthermore, each device comes with a stylus, useful for things such as drawing diagrams and signing documents conveniently.
The difference between the Surface and the MacBook Air essentially boils down to what you’re looking for. If you want the more traditional laptop experience, sacrificing the utility of a touch screen in exchange for a slightly more portable device, the MacBook Air may be what you’re interested in. However, if a tablet-style hybrid device is more your style (with the Surface Pro 4 landing much more on the side of tablet than the Surface Book), Surface devices may be worth looking into. Either way, you’re getting an excellent portable workstation to fit whatever needs you may have.
When it comes to internal hardware, the Apple and Microsoft options are surprisingly similar. Both the MacBook Air and the Surface can be configured with a variety of processors: the MacBook allows either an i5 or a much beefier i7, while the Surface Pro 4 also offers the less powerful Core m3 and i3 processors; the Surface Book, however, is locked to the previously mentioned i5 and i7, just like the MacBook.
For general use, an i5 is really all the average person needs. However, if you plan on doing any sort of gaming on these machines (which is not recommended, due to the lack of a dedicated graphics card in any of the machines, the only exception being the much higher-end Surface Books), an i7 could be worth the extra money.
Basically, the m3 and i3 are basic processors capable of doing most anything the average user would need, perhaps lagging behind a bit when it comes to multitasking. The i5 is a much more capable chip for this, and if you really need the extra juice, the i7 will certainly get the job done.
Memory and storage are another important aspect of these devices. The MacBook Air can be configured with up to 512GB of extremely speedy flash-based storage, as well as up to 8GB of memory. Unless you’re someone who keeps literally thousands of photos on their computer, that should definitely be enough storage for the average user. Furthermore, 8GB of memory should definitely be enough, and will only begin to slow you down in the most demanding multitasking scenarios, such as rendering video for an editing project.
Both Surface devices have very similar configurations, with the Surface Pro 4 ranging from 4GB of memory to 16GB, while the Surface Book is locked to either 8GB or 16GB. Internal storage is much the same story: the Surface Pro 4 can handle up to 256GB of storage (half that of the MacBook), while the Surface Book can take an impressive 1TB of the same flash-based storage as the MacBook.
What this boils down to is that, depending on how much you need, the Surface Book could be your best option for mass storage. If 8GB of memory just isn’t enough for you, or you have over 500GB of files that need storing, the higher configurations of the Surface Book may be your only option, as the MacBook Air tops out at 8GB of memory and 512GB of storage.
However, for most people, I would say that each device is about equivalent in terms of storage and memory. I wouldn’t let this bother you too much when picking your device, as external drives are always a way to expand storage, and more than 8gb of memory really isn’t necessary for most users.
To conclude, there’s one more category of discussion that needs to be touched upon: price.
Both the Surface and the MacBook Air can be had for under $2,000, with the Surface Pro 4 and MacBook Air both available (in minimum configurations) for just under $1,000.
MacBook Airs range from about $900 for a minimum-configuration 11-inch model up to about $1,200 for a 13-inch model armed with 8GB of memory, 512GB of storage, and a powerful i7 processor.
Surfaces, however, range quite a bit. You can get a minimum-configuration Surface Pro 4 for about $900, just like the MacBook, the difference being that the Surface Pro 4 can be configured up to an $1,800 machine.
If you’re interested in a Surface Book, expect to pay about $1,200 for the cheapest configuration, with options ranging up to a shocking $3,000 for the model with a 1TB solid-state drive built in.
Whichever device you get, they all fulfill the same basic role: a sleek, powerful, portable device with productivity in mind. If I were buying, I’d either go for the $1,200 MacBook Air configured with an i7 processor and 8GB of memory, or the $1,200 Surface Book. While that Surface Book configuration means settling for an i5 instead of an i7, the addition of a touch screen and stylus definitely wins back the lost value.
Albert Einstein famously disparaged quantum entanglement as “spooky action at a distance,” because the idea that two particles separated by light-years could become “entangled” and instantaneously affect one another ran counter to classical physics and intuitive reasoning.

All fundamental particles have a property called spin, an intrinsic angular momentum with an orientation in space. When spin is measured, either the measurement direction is aligned with the spin of the particle (classified as spin up) or it is opposite the spin of the particle (classified as spin down). If the particle’s spin is vertical but we measure it horizontally, the result is a 50/50 chance of being measured spin up or spin down. Likewise, different angles produce different probabilities of obtaining spin up or spin down. The total angular momentum of the universe must stay constant, so entangled particles must have opposite spins when measured in the same direction.

Einstein’s theory of relativity was centered on the idea that nothing can move faster than the speed of light, yet these particles appeared to be communicating instantaneously to ensure opposite spins. He surmised that all particles were created with a definite spin regardless of the direction they were measured in, but this theory proved to be wrong. Quantum entanglement is not science fiction; it is a real phenomenon which will fundamentally shape the future of teleportation and computing.
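The angle-dependent probabilities described above follow a simple rule for spin-1/2 particles: the chance of measuring “spin up” at an angle theta from the spin axis is cos²(theta/2). A minimal sketch:

```python
import math

# Probability of measuring spin up when the measurement direction is
# rotated by theta (radians) from the particle's spin direction.
def p_spin_up(theta: float) -> float:
    return math.cos(theta / 2) ** 2

print(p_spin_up(0.0))           # aligned: certain spin up (1.0)
print(p_spin_up(math.pi / 2))   # perpendicular: the 50/50 case (~0.5)
print(p_spin_up(math.pi))       # opposite: certain spin down (~0.0)
```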
What is a “mechanical” keyboard and what is different about it that sets it apart from the $10 keyboard that you’ve been using? How are different mechanical keyboards different? Should you buy one? Great questions, with somewhat tricky answers.
What makes a keyboard “mechanical”?
Most keyboards you encounter nowadays are rubber-dome or membrane keyboards. The membrane is underneath each key, so when you press the key down, the membrane depresses and makes contact with another membrane on the base of the keyboard. When these membranes contact, the keyboard gets a signal that a key has been pressed and sends that information to the computer.
Now, the difference between that and a mechanical keyboard, is that instead of a membrane being depressed, a key on a mechanical keyboard depresses a physical switch, and when that switch is pressed, a signal gets sent to the computer.
The main difference between these types of keyboards, as you can tell, is the physical switch being depressed vs. the membranes contacting each other that tells the computer when a key has been pressed.
For the most part, nearly all rubber-dome keyboards feel the same and give little tactile feedback; that is, you don’t know exactly how hard you have to press a key for it to register on your computer. Mechanical keyboards use a variety of key switches that all feel different and give different levels of tactile feedback. When you feel the tactile feedback on a mechanical keyboard, you know you’ve registered a keypress on the computer.
Cherry MX mechanical switches:
Nearly all mechanical keyboards use switches made by Cherry, and they are typically denoted by the color of the switch. The most common switches are Blue, Green, Brown, Clear, Black, and Red. Switches have different levels of force, measured in grams (g), needed to depress the key, as well as different levels of tactile feedback that they give. Some switches give strong tactile and audible feedback for keypresses, while others give almost none unless the key is pressed all the way in.
Cherry MX Blue (Tactile Click)
If you’re an old-school computer user, MX Blue switches may remind you of the clicky keyboards of the 1980s. The Blue switch has both strong tactile feedback and a loud “click” when you actuate the key, making it a quite popular choice for typists; however, the loud clickiness makes it somewhat of a nuisance in shared workspaces. It has an actuation force of 50g, making it a somewhat stiff switch.
Cherry MX Green (Tactile Click)
Green switches are very similar to Blue switches, but have a much higher actuation force, sitting at 70g. This makes them much stiffer than blue switches. Greens still have the loud click and tactile feedback similar to blues.
Cherry MX Brown (Tactile Bump)
The MX Brown switches have softer tactile feedback than MX Blue switches, and no loud click. With tactile feedback but no loud click, they are often considered a middle ground between the Blue and Black switches, and provide an option for both typing and gaming. Brown switches have an actuation force of 45g, making them one of the lighter switches.
Cherry MX Clear (Tactile Bump)
MX Clear switches are similar to Brown switches, with a stronger actuation force (65g) and a slightly stronger tactile bump. Again, these are good middle-ground switches for both gaming and typing, and a good choice if you like a stiffer key.
Cherry MX Black (Linear)
A big difference between tactile switches mentioned above and linear switches such as the Black and Red switches is that with linear switches, there is no tactile feedback until the key is pressed all the way down (called “bottoming out”). For all other switches so far, you have tactile feedback telling you when your keypress is registered on the computer. With Black and Red switches however, the keypress can register without any tactile feedback.
Black switches have a high actuation force of 60g, making stray keypresses less likely. Black switches are commonly used by gamers who need accurate keypresses.
Cherry MX Red (Linear)
MX Red switches are very similar to Black switches, but with a lower actuation force, sitting at 45g. These switches are smooth all the way down, with no tactile bump or click other than when the key bottoms out. These switches are commonly used by gamers who need fast, rapid keypresses.
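For quick reference, the switch specifications above can be collected into a small lookup. The values are the nominal actuation forces quoted in this article:

```python
# Cherry MX switch summary, as described above.
SWITCHES = {
    "Blue":  {"force_g": 50, "type": "tactile click"},
    "Green": {"force_g": 70, "type": "tactile click"},
    "Brown": {"force_g": 45, "type": "tactile bump"},
    "Clear": {"force_g": 65, "type": "tactile bump"},
    "Black": {"force_g": 60, "type": "linear"},
    "Red":   {"force_g": 45, "type": "linear"},
}

def at_or_below(max_force_g):
    """Switches light enough for a given actuation force, sorted by name."""
    return sorted(n for n, s in SWITCHES.items() if s["force_g"] <= max_force_g)

print(at_or_below(50))  # ['Blue', 'Brown', 'Red']
```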
Should you switch to a mechanical keyboard?
Mechanical keyboards are quality products that last longer than normal membrane or rubber-dome keyboards, and that build quality is reflected in the price. Many will run you upwards of $100, but for most people that price is well justified. So, should you get one? The answer really depends on your personal preference and experience. Reading about all these different switches means little until you try typing on a mechanical keyboard; there is a huge difference between watching animations of what the switches do and actually feeling what it’s like to type or game on one. The bottom line: go somewhere you can try out keyboards with different switches, and see which one you like. Everybody’s preferences differ when it comes to typing, and certain keyboards may fit yours better than others.
DDR4 memory has been on the market for some time now and looks to be a permanent successor to DDR3L and older DDR3 memory. However, other than being one more DDR than DDR3, what is the difference between the old and the new? There are a few key differences, some good and some bad. This article will give a broad overview of the costs and benefits of DDR4 memory.
Increased Clock Speed
Every component in your computer has to have a clock; otherwise the seemingly endless sequences of ones and zeros would become jumbled and basic logic functions would be impossible. Memory, though it performs no logical work on the data living in its vast arrays of cells, still uses a clock to govern the rate at which that data is read, written, and refreshed. With faster clock speeds, DDR4 can be written to and read from far faster than DDR3 or DDR3L. This is a big advantage for anyone with a blazing-fast processor held back by the speed at which it can read and write memory. However, users looking to purchase a laptop with DDR4 memory may not experience any noticeable speed increase over DDR3.
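The “double data rate” in DDR means two transfers per clock cycle, so peak bandwidth is easy to estimate. The sketch below assumes a single 64-bit (8-byte) memory channel; the module speeds shown are typical examples, not a statement about any particular machine.

```python
# Peak theoretical bandwidth of one 64-bit DDR memory channel.
def peak_bandwidth_gb_s(io_clock_mhz: float) -> float:
    transfers_per_s = 2 * io_clock_mhz * 1e6  # two transfers per clock
    return transfers_per_s * 8 / 1e9          # 8 bytes per transfer

print(peak_bandwidth_gb_s(800))   # DDR3-1600: 12.8 GB/s
print(peak_bandwidth_gb_s(1200))  # DDR4-2400: 19.2 GB/s
```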
Lower Power Consumption
With new versions of computer components, manufacturers can often boast about improved power efficiency, and with DDR4 this is the case! Older DDR3 memory sticks require about 1.5 volts to run; new DDR4 sticks run off about 1.2 volts. That may not seem like a big difference, but only a minuscule amount of the power drawn is actually needed to store data, so much of the excess is turned into heat. Anyone who has spent a few hours playing video games on a laptop knows just what excess power consumption feels like when it’s going into one’s legs. A hot computer doesn’t just cause mild discomfort; transistors, as non-ohmic devices, are impeded in their ability to switch electric current on and off when they get hot. That means less ability to perform the mathematical functions at the base of all computing and, therefore, a slower computer! Less power consumption means less waste heat, a cooler machine, and a faster computing experience.
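The voltage drop matters more than it looks, because dynamic power in CMOS circuits scales roughly with the square of the supply voltage. This is a rule of thumb, not an exact figure for any particular DIMM:

```python
# Rough CMOS rule of thumb: dynamic power scales with voltage squared.
v_ddr3, v_ddr4 = 1.5, 1.2
ratio = (v_ddr4 / v_ddr3) ** 2
print(f"DDR4 at 1.2 V draws about {ratio:.0%} of the voltage-related power of DDR3")
```

So a 20% voltage reduction works out to roughly a third less voltage-related power, which is where the cooler-running machine comes from.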
Moore’s Law has two components to it: computing power will double roughly every 18 months, and the cost of existing computer components will be halved in the same amount of time. Sometimes we reap the benefits of the halved cost; sometimes we don’t. At the moment, purchasing DDR4 memory for a new computer is a costly endeavor. DDR4 memory can work more than two times faster than DDR3, but there is a considerable cost premium. This is to be taken into consideration when choosing whether or not to make the leap into DDR4: is the improved speed and efficiency worth the price? That question lies well beyond the scope of this humble article.
With modern computers, we enjoy an unprecedented level of flexibility. Computers now are more modular than ever. However, just because different components can fit together without being modified, that does not mean they will work together. DDR4 needs, with only a few exceptions, brand-new, top-of-the-line components to work. This means that if you are to purchase or build a computer using the fast new memory, you need a fast new CPU and a fast new motherboard. For those of you with no interest in building a computer, you will be paying upfront for a laptop or desktop fitted with the latest version of everything. This further raises the cost of a machine fitted with DDR4 memory.
So What’s the Deal?
With all new things, there are costs and benefits. With DDR4, yes, you will experience faster read and write speeds and overall faster computing, but it will come at a cost. For people who use their computers for browsing the internet and word processing, there will be little noticeable difference. However, for avid users of applications such as Photoshop and Final Cut Pro, DDR4 can yield a substantial speed increase.
Ultimately, it is up to the user whether or not they want to take the leap into the new realm of faster read and write speeds. Yes, you will get to have a blazing-fast computer that you can brag to your friends about, but it will come at a cost. You also run the risk of spending more money without getting much more speed if you mostly use memory-non-intensive programs. However, if you are like this humble IT guy, and spend much time video and photo editing and want a computer that is not going to start hissing when you open Photoshop, then DDR4 is the memory for you!
You finally sat down to start that paper you’ve been putting off, hit the power button on your laptop and nothing but a folder with a question mark shows up. Or maybe you just got back from the library and just want a relaxing afternoon online. However, when you wake up your computer, all you see is a black screen and text reading “Boot device not found.”
When diagnosing a computer that won’t boot, there are a few diagnostic tests you can run to narrow down the cause, and they vary depending on what kind of computer you have. For any manufacturer, the first step is determining whether the computer turns on at all. With laptops, check whether any lights come on; make sure the battery is seated correctly and try plugging in the power adapter (using a known-good wall outlet). If the computer still shows no signs of life, the problem is usually some kind of power failure. It could be as simple as a dead battery, solved by charging with a known-good power adapter; failing that, the most likely cause is a failed motherboard or main logic board.
The other common hardware point of failure is the hard drive. Here Windows PCs and Macs give different errors: Macs boot to a folder with a question mark, while Windows machines can show a number of different screens depending on the manufacturer and how old the machine is, usually something along the lines of “Boot device not found.”
The last point of failure is the operating system itself. If the operating system has been corrupted, it can cause any number of errors on startup; on Windows machines this usually results in a blue screen of death. To fix this, the hard drive usually needs to be wiped and Windows reinstalled (after making sure your files are backed up). Macs, on the other hand, have a few built-in recovery options, the most useful being disk first aid. Holding down Command-R while the machine boots brings up the recovery boot options.
Regardless of what happens when you try to turn on your computer though, there is always a solution to fix any problems that might happen. Determining where the point of failure is can be the difficult part. Once you know that, it’s much easier to make a decision about fixing the computer.
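The diagnostic flow in this section can be sketched as a tiny decision function. The symptom strings and categories below are illustrative labels, not an official repair procedure:

```python
# A rough triage of the boot-failure symptoms discussed above.
def triage(powers_on: bool, symptom: str = "") -> str:
    if not powers_on:
        return "power: dead battery, bad adapter, or failed motherboard"
    if symptom in ("question-mark folder", "boot device not found"):
        return "storage: likely hard drive failure or corruption"
    if symptom in ("blue screen", "boot loop"):
        return "software: OS corruption; back up files, then repair or reinstall"
    return "unknown: further diagnostics needed"

print(triage(True, "boot device not found"))
```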
Whether you just want to project your laptop screen onto a bigger monitor, or you’re buying a new monitor for your desktop, the search for a monitor, like any other component, is riddled with tech jargon that is often difficult to understand. This article is designed to give buyers a quick guide about the differences between TN and IPS, the two main monitor types of today’s world.
A Little Background on Monitors
Back in the not-so-distant past, the CRT, or Cathode Ray Tube, was the standard monitor type. A CRT receives its image as an analog signal over the cable. The cathode, or electron gun, sits in the monitor’s tapered back and fires electrons corresponding to the signal received from the cable. Closer toward the screen, a set of anodes directs those electrons onto the RGB layer of the actual screen. While these monitors were state of the art once upon a time, they don’t have much of a place in today’s world since the invention of LCD screens, which have become the standard for today’s monitors.
LCDs, or Liquid Crystal Displays, don’t suffer from the same drawbacks as CRTs. For one, they use far less power. CRTs also tend to be harsher to stare at, and lack the fine-grained brightness controls of modern monitors. Additionally, LCDs are much clearer than CRTs, allowing a more accurate image to be displayed. Modern LCD monitors use a two-layer system of LED lights and an LCD screen: the LEDs act as a “backlight” that makes the image much brighter than the otherwise fairly dark LCD, while the LCD layer handles color production and the actual recreation of the image. LCD monitors now use digital connections such as HDMI or DisplayPort, and therefore can transmit data faster.
Now that we know a little about monitor history, let’s move on to the difference between TN panels and IPS panels.
TN, or Twisted Nematic panels, use a ‘nematic’ kind of liquid crystal that twists to pass light through, corresponding to the signal transmitted. The main advantage of TN panels is speed. TN panels can take advantage of something called an “active 3D shutter,” which in essence allows them to display up to twice as much information as other types of panels. Additionally, the response time of TN panels is much quicker than IPS, though it is possible to find faster IPS panels: a TN panel’s response time is roughly 2 ms (milliseconds), and can go as low as 1 ms. Another benefit of TN panels is that they are generally cheaper than their IPS equivalents. The fast response time and low cost make these monitors quite popular in the gaming community, as well as the general consumer market, as gamers experience less delay when an image is rendered. Additionally, TN panels allow for a higher refresh rate, going as high as 144Hz, though once again, it is possible to get IPS monitors with similar specs, just for more money.
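To put those response times in context, here is the per-frame time budget at common refresh rates (simple arithmetic, not a benchmark of any particular panel):

```python
# Time available to draw one frame at a given refresh rate.
def frame_time_ms(refresh_hz: float) -> float:
    return 1000.0 / refresh_hz

for hz in (60, 144):
    print(f"{hz} Hz: {frame_time_ms(hz):.2f} ms per frame")
# At 144 Hz each frame lasts about 6.94 ms, so a 1-2 ms TN response
# fits comfortably inside the budget.
```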
The major downside of TN panels is that they lack 100% accurate color reproduction. If you’re browsing Facebook, it’s not very important. However, if you’re doing color sensitive work perhaps for a movie or a photo edit, then TN panels may not be the right monitor for you.
The main differences between IPS (In-Plane Switching) and TN panels, as touched on above, are price and color reproduction. IPS monitors are generally preferred by those in the professional rendering industry, as they portray the colors of images more accurately. The downside is that they are more expensive, though it is quite possible to find affordable IPS monitors, with prices ranging from $150 all the way up to thousands of dollars.
IPS monitors work by aligning their liquid crystals parallel to the screen instead of perpendicular to it, which in addition to allowing better color reproduction has the benefit of excellent viewing angles, while TN panels can often discolor when viewed from any relatively extreme angle. In essence, IPS panels were designed to address the flaws of TN panels, and are therefore preferred by many, from the average consumer to the professional editor.
Don’t let the benefits of IPS panels ruin your opinion of TN panels, though, for TN panels are still fantastic for certain situations. If you’re just sitting in one place in front of your computer, and absolutely perfect color reproduction isn’t really important to you, then TN is the way to go, especially if you’re trying to save a little on your monitor purchase.
To summarize, TN panels have a better response time, as well as a cheaper price tag, while IPS panels have better viewing angles and color reproduction for a little extra cash. Whatever your choice of type, there are a plethora of excellent monitors for sale across the internet, in an immense variety of sizes and resolutions.
If you own a computer, chances are you have a lot of important data stored on there. It may seem safe and sound, but tragedy could be waiting to strike. Data loss from a failed hard drive is an all too common but preventable problem that could happen to anyone. So, how do you prevent it?
Most computer storage is on a hard disk drive, which consists of a series of spinning disks, or platters, on which data is stored, and a moving arm, the read-write head, which reads and writes data. The platter motor spins the platters at 5,400 rpm or more (sometimes up to 15,000 rpm), and the head motor moves the read-write head over the platters. The hard drive is one of the only moving parts left in the modern computer, and as such is one of the most vulnerable to damage. Always avoid dropping or shaking your computer, especially while it is on; doing so could cause the parts in the hard drive to bump together (literally, your computer crashing).
Unfortunately, sometimes hard drives fail through no fault of the owner. One way a hard drive can fail is if the files on it become corrupt. This can be caused by an interrupted operating system update or by malware. When this happens, your computer may continually try to reboot, or display errors when starting up. Whatever the case, most data can usually be recovered by doing what is called an archive reinstall. This process can repair or overwrite damaged system files. Any member of the 5 College community experiencing this problem can bring their computer to our repair center to get an archive installation done. Just stop in to the Help Center and we can help decide if that is necessary.
Another issue that can be more serious is mechanical failure. This means that the hard drive is not spinning or the read-write head is unable to move properly. When this happens, it can be very difficult to recover any data, because there is a risk of causing physical damage to the platters where the data is stored. This problem is often accompanied by strange noises coming from your computer, in addition to failure to boot. Generally, this requires a professional data recovery service to retrieve files, and can be expensive.
The best way to prevent data loss from a failed hard drive is to keep backups. Although a failure isn’t always preventable, that doesn’t mean you have to lose your data. An external hard drive can be a great way to keep dated copies of files so you can restore any file to a specific version. Important files can also be kept on a CD or flash drive. These are not suitable for all your files, since they have limited space, but they are also less prone to failure.
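The dated-copies idea is simple enough to script. Here’s a minimal Python sketch (the file and folder names are just examples) that copies a file to a backup location with a timestamp in its name, so older versions are never overwritten:

```python
import shutil
from datetime import datetime
from pathlib import Path

def dated_backup(src, backup_dir):
    """Copy src into backup_dir with a timestamp, keeping older versions."""
    src = Path(src)
    backup_dir = Path(backup_dir)
    backup_dir.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y-%m-%d_%H%M%S")
    dest = backup_dir / f"{src.stem}_{stamp}{src.suffix}"
    shutil.copy2(src, dest)  # copy2 also preserves modification times
    return dest
```

Run something like this on a schedule (or by hand before big edits), and each run leaves behind a new, dated copy you can roll back to.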
One of the best ways to back up data is to use a cloud storage service such as Google Drive or Dropbox. Since the files are stored by the service, you don’t have to worry about losing the flash drive or mechanical failure. All you need to access your files is an internet connection. And, all UMass students, faculty, and staff get access to unlimited storage on both Google Drive and Box. Both of these services can be used not just to store your files, but also access and share them anywhere.
When you’re stuck troubleshooting a problem in Linux, whether it’s a full installation or a recovery USB, it can be useful to know some commands to give you more information about the machine. We’ll start with commands that you might use for troubleshooting from a recovery USB.
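If you’d rather collect that kind of information from a script instead of typing commands one by one, a short Python sketch (standard library only; the exact fields gathered here are just a starting point) can pull the basics:

```python
import platform
import shutil

def machine_summary():
    """Collect basic facts about the machine for troubleshooting notes."""
    info = {
        "system": platform.system(),    # e.g. "Linux"
        "release": platform.release(),  # kernel version on Linux
        "machine": platform.machine(),  # architecture, e.g. "x86_64"
    }
    # Disk usage of the root filesystem, reported in whole gigabytes.
    usage = shutil.disk_usage("/")
    info["disk_total_gb"] = usage.total // 10**9
    info["disk_free_gb"] = usage.free // 10**9
    return info

for key, value in machine_summary().items():
    print(f"{key}: {value}")
```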
Virtual reality has long been a dream of gamers everywhere. The next level of immersion into a fictional world will bring players themselves into the game, instead of simply showing it on a screen. The idea of being ‘plugged in’ to a different reality has been used in fictional films like The Matrix and TV shows like Fringe, but that’s all these realities have been – fiction.
For the past few years, virtual reality projects have been popping up and growing in complexity and immersion. There are a few different ideas about how it should be done; here we will take a look at some of the most well-known virtual reality projects.
One of the first major virtual reality projects, the Oculus Rift is arguably the most recognizable name in the industry so far. Originally announced in August 2012, the Oculus Rift started as a Kickstarter campaign that raised $2.4 million. In March 2014, Facebook bought the Oculus VR company for $2 billion. Oculus Rift devices have been seen at numerous gaming and technology expos, such as PAX, E3, and SXSW, as development kit platforms for many indie games. The Oculus Rift Development Kit has gone through two iterations and has been used for development for the past three years.
The Oculus Rift boasts a 1080×1200 resolution per eye, a 90Hz refresh rate, and a 110-degree field of view. The consumer edition of the device is approaching its release in Q1 2016.
Initially, it was little more than a virtual reality development kit exclusive to developers and game studios. The company had been distributing Development Kits since its Kickstarter campaign. Today, the Oculus Rift is preparing for its consumer launch, and some preorders have already been shipped.
The Oculus Rift is generally considered the most premium of current VR projects. The manufacturing process for the Rift involves hundreds of custom parts and tracking sensors. The project has been praised for being one of the most sleek and seamless VR devices, and is also notable in its progress in one of the biggest challenges in the VR industry today: VR interaction.
We are a long way away from virtual reality experiences that would let the user naturally move through or touch things in the environment. Many projects leave the user stationary, able only to look around; some, including the Oculus Rift, allow users to move using a gamepad. Oculus, however, has also made progress of its own in VR interaction. The Oculus Touch is a pair of ergonomic controllers featuring buttons, joysticks, and triggers that also track hand movement. The Oculus Touch complements the Oculus Rift and is currently available for developers.
The Oculus Rift will need to be run by a very powerful computer, since it is so graphically intensive. Their website recommends a machine with:
CPU: Intel i5-4590 equivalent or greater
GPU: NVIDIA GTX 970 / AMD R9 290 equivalent or greater
OS: Windows 7 or newer
2x USB 3.0 ports
1x HDMI 1.3 video output
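To see why the bar is set so high, a quick back-of-the-envelope calculation using the per-eye resolution and refresh rate quoted earlier shows how many pixels the GPU has to fill every second:

```python
def pixels_per_second(width, height, refresh_hz, eyes=2):
    """Total pixels a VR headset asks the GPU to render each second."""
    return width * height * refresh_hz * eyes

# Oculus Rift: 1080x1200 per eye at 90 Hz.
rift = pixels_per_second(1080, 1200, 90)             # 233,280,000
# For comparison, an ordinary 1080p monitor at 60 Hz.
monitor = pixels_per_second(1920, 1080, 60, eyes=1)  # 124,416,000
```

That’s roughly 233 million pixels per second, nearly double what a 1080p/60Hz monitor demands, and the headset can’t drop frames without breaking immersion.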
Dell, Alienware, and ASUS have already announced lines of Oculus-ready high performance PC towers, starting at around $950-$1000.
The Oculus Rift Consumer Edition is scheduled to hit the market in Q1 2016. It will cost $599 and include removable headphones (allowing the user to use their own), an Xbox One controller for Windows, and an LED camera stand used to track head movement; the Oculus Touch controllers will be sold separately.
Originally announced in September 2014, the Samsung Gear VR was developed by Samsung in collaboration with Oculus. The device itself is not a complete virtual reality experience; the most recent revision needs a Samsung Galaxy S6, S6 Edge, or Note 5 to be plugged into it by Micro USB to act as the display and processor. The headset itself contains only the lenses and its own accelerometer (the phone’s built-in accelerometer is not accurate enough to deliver a premium VR experience).
The Samsung Gear VR is currently one of the most popular consumer-grade virtual reality headsets because of its low price; the headset itself only costs $100. The phone, of course, is separate, but many Gear VR users already use an S6 device as their personal smartphone.
The Gear VR features a small trackpad and button on the right side of the headset, allowing for limited VR interaction capability.
However, you do get what you pay for. The display’s immersion is only as good as the device powering it, which is usually 60Hz or less, and there are no built-in headphones; you have to plug them into the phone and deal with the headphone wire. Graphics are usually prerendered and not as detailed as tethered VR devices that rely on a PC tower for active rendering.
Google Cardboard is the cheapest of the consumer-level options for virtual reality.
It is essentially a build-it-yourself Gear VR. Like the Gear VR, it is powered entirely by the smartphone, but unlike the Gear VR, it relies on the phone’s built-in accelerometer, and there is no headstrap, so you have to hold the device up to your eyes while using it. The headset itself is, as the name implies, nothing but a folded cardboard container with a pair of convex lenses inside.
Google Cardboard is easy to make at home, and its website gives instructions on how to find the parts necessary and put them together. There are many manufacturer variations on Google Cardboard that are built in different ways and available for purchase and assembly.
The headset fits any phone up to 6″ and Cardboard apps are available for iOS, Android, and Windows Phone.
The HTC Vive, announced in March 2015, is a virtual reality headset being developed in partnership between HTC and Valve. The device is part of Valve’s larger effort to expand the Steam platform into more areas – including other projects such as the Steam Controller, Steam Link, Steam Machines, and SteamOS, all part of the Steam Universe.
The headset is tethered to a PC, but it is still meant to be moved around in; its tracking system is known as Lighthouse. The device contains more than 70 sensors, including a MEMS gyroscope, an accelerometer, and laser position sensors. The headset comes with two Lighthouse base stations that sweep the room with lasers, which the headset’s sensors pick up to determine its position. A front-facing camera tracks static and moving objects in front of the user, allowing the device to warn the user before hitting an obstacle, like a wall.
Valve has released SteamVR APIs to everyone under the label OpenVR, allowing developers to create virtual reality environments with or without the use of Steam.
The Vive Developer Edition is available now for free for certain developers, and it comes with SteamVR Controllers, a pair of one-handed controllers similar to the Oculus Touch but based on the concave trackpads of the Steam Controller. No word yet on a Consumer Edition.
Microsoft’s HoloLens platform is a little different from the other virtual reality headsets we’ve seen; it’s more like Google Glass than the Oculus Rift. Instead of showing you a completely different world, the HoloLens captures the setting around you and superimposes ‘holograms,’ in a sort of ‘mixed reality.’ You still see what’s in front of you, but you can see and interact with non-real figures as if it’s all right in front of you.
Users can interact with the holograms through eye movements, voice commands, and hand gestures. The device uses an array of video cameras and microphones, along with an inertial measurement unit (IMU) containing an accelerometer, a gyroscope, and a magnetometer. A ‘light engine’ sits atop the lenses and projects light into a diffractive element that then reflects into the user’s eyes, creating the illusion of holograms.
The most impressive part of the HoloLens is its integration. The device needs no wires nor external processing power. It is completely untethered, allowing the user to move freely through their environment. The headset houses the battery and all of the processor systems inside. It contains a holographic processing unit (HPU) that takes in the information from the environmental sensors and creates the holograms. The holographic display is presented with an optical projection system.
The Development Edition will begin shipping in Q1 2016 and will cost $3000. There is no word yet of a consumer edition.
One of the key pieces of hardware inside your computer is the hard drive. You may have also heard it called the hard disk or sometimes (incorrectly) the memory.
If you imagine your computer as a human body, your hard drive could be described as the long-term memory of the body. It is where data gets permanently stored for later use.
There are two types of hard drives that you will see frequently: standard hard disk drives (HDDs) and solid-state drives (SSDs). The traditional hard drive is much more common and uses a magnetic arm to write data along a series of spinning disks. Solid-state drives use a series of interconnected flash memory chips to store data. We will get into why you would choose one over the other later in this article.
Why Would You Replace It?
Your (standard) hard drive is one of the few moving parts inside your computer (the others usually being your cooling fans and your CD drive). Because of this, standard hard drives are often one of the first parts to fail in a computer, and tend to do so after 3-5 years.
Oftentimes computer issues such as slowness or failure to boot are caused by an older hard drive beginning to fail.
What Replacement Hard Drive Should I Buy?
The hard drive that you should buy to replace your failing one depends on the way you use your computer. There are several factors to take into account.
The first important factor is size. Ask yourself: how much space do you use on your hard drive? Though HDDs range from just a few gigabytes to several terabytes (1 TB = 1024 GB), the most common sizes currently are 500GB and 1TB. If you use your computer to simply browse the Internet and do basic schoolwork, 500GB will be more than enough for you. However, if you use your computer to store large amounts of media files, or if you play many video games, you should buy a 1TB hard drive.
The second question that you should ask yourself is whether you want an SSD or an HDD. SSDs offer many distinct advantages over a traditional hard drive. They are significantly faster, and tend to be more durable than traditional drives, as they have no moving parts. Replacing a traditional hard drive with an SSD is one of the simplest ways to speed up your computer. Unfortunately, SSDs are much more expensive than HDDs for the same amount of space. A 500GB HDD often costs close to the price of a 128GB SSD.
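A handy way to frame that price gap is cost per gigabyte. The prices in this Python sketch are illustrative placeholders, not quotes:

```python
def cost_per_gb(price_dollars, capacity_gb):
    """Dollars paid per gigabyte of storage."""
    return price_dollars / capacity_gb

# Illustrative numbers only: a $50 500GB HDD vs. a $60 128GB SSD.
hdd = cost_per_gb(50, 500)   # $0.10 per GB
ssd = cost_per_gb(60, 128)   # about $0.47 per GB
```

At these example prices, the SSD costs several times more per gigabyte, which is exactly why pairing a small SSD with a large HDD is a popular compromise.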
One way to overcome the cost issue presented by SSDs is to install more than one hard drive. In many cases, it makes sense to install a small SSD, which would host your operating system and your most frequently used programs, and to use an HDD for most of your data and media.
How Do I Replace It?
On Windows PCs, especially desktops, hard drives are often relatively simple to replace. Guides can be found on YouTube, as well as at https://www.ifixit.com/Device/PC .
Alternatively, you can come to the IT User Services Help Center, located in the LGRC low rise. We can take a hard drive you bring us (or we can sell you one of our own), and we can replace it for a small service fee (usually about $50).
Between my job as a consultant here and my travels using other people’s computers, I have developed only one irrevocable, unavoidable, instinctual pet peeve. No, it’s not 42 toolbars in a browser, non-activated copies of Windows, or even vicious malware. I can deal with those. What bothers me the most is a dirty computer: a grimy, sticky, slightly yellowed keyboard with a splattered screen.
Please understand that, just like door knobs, steering wheels, and phones, your keyboard will be handled more than anything else in your entire day. If more than one person uses the same computer, then there are two people who have effectively swapped any grime, dirt, sickness, or whatever else you call it. Normally, I’d be fine with this cesspool of microorganisms, but I’ve seen from experience how much more I and others get sick when in huge, dense populations for a long time. So, it’s better safe than sorry to clean your computer, and rest assured that there’s one less thing that will get you sick.
1.) Cleaning your Keyboard and Mouse
Cleaning a keyboard is easy. I try to do my personal ones about once every few weeks, but for shared keyboards I’d say go for once a week or even once a day. The best method I’ve found is using certified keyboard wipes, which will not harm any electronic components in the keyboard. Trust me, do not just use any wet disinfecting wipe; I’ve lost a favorite wireless keyboard this way, and I’d hate to imagine what one would do to a laptop. You can just take a wipe and go gently across the keyboard with it. If there are any problem spots, just add a little elbow grease.
You can do the same thing with a computer mouse and trackpad; just be sure to use only the recommended wipes.
2.) Clean your Screen
Screens are tricky. Matte screens, as much as I would like to clean them, are almost impossible to get clean, as there will always be streaks left over. You can try using “dry wipes”, but results may vary. My advice: don’t get your screens dirty.
Touch screens and the screens on most MacBooks (i.e., “glass” screens) are different. For these, you can use approved isopropyl alcohol wipes, which come in convenient little packages that you buy in a box of 20 or so. You simply open the package, unfold the wipe, and wipe. You should be done when the wipe is dry. You can also use any other approved isopropyl screen cleaner. To get rid of streaks, use a microfiber cloth afterwards.
Protip: While the laptop is powered off, rest it on its top while cleaning, so the screen is on the desk and the body of the laptop is against your body. This will eliminate stress on the hinges of the screen.
Again, do not use any non-certified wipes, as they may cause damage.
We take for granted the lonely audio paradise that is our personal set of headphones, but there will come a time when you have an insatiable need to let everyone around you experience the re-released album from an obscure 1976 Uzbekistani progressive rock band whether they want to or not. When that time comes, you are going to need speakers. Real, audio-producing, jaw-dropping speakers. Unfortunately, you don’t have nice speakers, you just have okay ones. Your speakers are mediocre at best. You just have two little itty-bitty speakers that have been jammed into your laptop. I’ll let you in on a secret: if you are using those, you are not getting the most out of your music. You are doing whatever music you are listening to a disservice. The sound is distorted, muffled, and just bad. Right now, you should be feeling a little embarrassed and certainly bummed out. But rest assured, that’s okay! I know the feeling. We all need laptops, and they can only fit speakers that are so big. But there is one thing you can do that will make you, everyone around you, and your obscure music feel much, much better: get good speakers. Or just get any speakers. Anything other than laptop speakers.
Today, we are going to fix that by going through some basics on which kinds of speakers you can buy, and what you should look for in each. I’m going to walk you through a few examples, ranging from simple portable speakers that require no setup, to complex ones that you’ll need a pair of wire strippers for. While doing that, I’m going to keep a few things in mind, particularly what the perfect set of speakers for you should do:
1.) Speakers should not cost you lots of money
2.) Speakers should be able to do what you bought them for
3.) Speakers should sound pretty good
Now, all of those make sense, right? The first is maintaining a budget. While it’d be nice to have a pair of Klipsch speakers, I really don’t have a couple grand to put down for them, so instead I looked for something that would only cost me a few hundred. Next is keeping functionality in mind. Again, while I’d like a 1 “Jigawatt” stereo system with 6-point surround sound and active bass, that’s not something I could have outside during an impromptu cookout on the beach. But what does “sound pretty good” mean? Sound quality is a funny thing. What may sound good to one person may be nails on a chalkboard to someone else. It depends on a lot of factors: the quality of the audio you play, the acoustics of the room, whether there is any ambient sound around, or even whether the track is equalized or not. Now, you can get into a lot of technical talk about the frequency ranges of tweeters and the power of amps. But unless you are a dedicated audiophile (more on that later), none of those really mean anything. So honestly, when I say speakers should sound good, I mean speakers should sound good to you. The most important thing when buying speakers is to test them yourself. Get a high-quality track that you love, preferably one that isn’t just a drumming bass line or soft high notes, but a nice mix. Also, try to get something that uses real instruments, unaltered vocals, or lays off constant sampling. If it doesn’t sound fuzzy or faded, buy it. Be sure to test different systems to get a wide sense of what they all sound like as well.
With that all out of the way, let’s get down to business. We’ll quickly define some tiers of speakers based on who would want a certain type of speaker, using the three criteria above.
The College Wanderer: Bluetooth Speakers
These are the hip new thing out there. No bigger than a water bottle, Bluetooth speakers are ultra portable, can easily cost between $100 and $300, and actually sound decent. I recommend this kind of speaker to anyone who is on the move, outside, or finds themselves working in different places all the time. Perfect for the college kid who tends to be all over the place. They are easy to connect to with any laptop or smartphone. Surprisingly, I have yet to listen to a Bluetooth speaker that I didn’t like or wasn’t immediately impressed with.
Despite how good they sound, there are some issues with Bluetooth speakers to be aware of. These run on battery, so unless plugged in, they will not last forever. The bass will usually be lacking, but that really shouldn’t bother you. Finally, please be careful with these, as they can be easily stolen or lost due to their small size and portability.
The Desktop Commander: Desktop Speakers
If you are one of those college kids who doesn’t leave your room that much, hopefully because you have way too much to study for, you’ll want to look into some desktop speakers. These simply plug into your computer’s headphone jack and a power outlet, and don’t really have any bells and whistles. If you are really ambitious, you could even spring for a setup with more than two speakers, or even a separate subwoofer. For a dorm, these are perfect: they are small, easy to pack up when you leave, and not something to worry about getting stolen. Logitech is the biggest brand you’ll typically see advertised, and their speakers are pretty good for what you pay for.
However, while these may fit your needs, you may lose some audio quality. Also, you may have to invest in an auxiliary audio cord to connect your speakers.
The Audio-Übermensch: The Stereo System
Most college kids, recent grads, and office workers would have stopped and settled for less with their audio equipment, but not you. You seek the holy grail of audio equipment: the stereo system. This is the final frontier of audio. You will have something to supply the sound, a receiver to direct, distort, and amplify it, and multiple speakers of different sizes to play it. You are now an audiophile, a lover of good sound equipment; someone who will always take quality to the next level. Your system will deliver high quality sound and most certainly knock your socks off.
If this is you, you need a stereo system. Unless you are buying for a TV surround system, you only really need two bookshelf speakers, and maybe a subwoofer. You will need speaker wire to run between the speakers and the receiver, but it really isn’t a hard process. On the back of the receiver, there will be corresponding plugs for each speaker, typically left, right, and bass. There may be more or fewer depending on the size of the receiver and how much sound you want.
The worst part about this kind of system is the price. You might only spend $300 for the entire system, or you could go all out and drop a month’s paycheck on one speaker. However, you shouldn’t be opposed to digging through local ads or yard sales to get some good equipment for next to nothing. I’ve seen perfectly good working systems selling for a third of what they could be worth, just because they are used. You might want to avoid this kind of system if you don’t have much room or have very thin walls.
The Weirdo in the Corner: Alternatives to Large Receivers
So here’s the scenario: you don’t want small desktop speakers, you don’t need a Bluetooth speaker, and you don’t have the room or cash for a big stereo system. What do you do? None of these are a good option, and you don’t want to settle for less. Don’t let the man tell you what to do!
What you want to do is look into smaller, low powered receivers. Excluding speakers (remember, used ones are okay here), one of these bad boys will always be under $100. One of my favorites, one that I’ve used personally for years is the Lepai Receiver. The company has no other products I know of, but the receiver they make is wonderful. It’s maybe 2in x 3in x 4in. You can fit it on a bookshelf, attach two speakers to it with wire, get an aux cord, and boom, decent audio with very little setup. It’s a totally strange arrangement, but it gets the job done.
Smartwatches are a type of wearable that is starting to rise in popularity. They have been around for a while with devices like the Samsung Gear and Sony SmartWatch, but no one really bothered to buy one. In 2012, however, a Kickstarter campaign called Pebble appeared, which got people to start thinking more seriously about buying a smartwatch. More recently came the Android Wear devices, starting off with the Samsung Gear Live, LG G Watch, and Moto 360, and now to be released is the Apple Watch. Throughout the release of these smartwatches, everyone has been trying to figure out how a smartwatch should function. Here we’ll look at how each smartwatch currently works.
So far the features of a smartwatch include apps, changeable watch faces, and the ability to display notifications that are from your smartphone.
Pebble, at $100, is currently the cheapest smartwatch on the market. Although it isn’t packed with fancy features such as a color touch screen or speakers, it does its job as a smartwatch. The focus of Pebble is the ability to see your phone’s notifications on your wrist. Pebble even stores recent notifications so you can look back at them if you accidentally dismiss them. On top of that is a great variety of apps that can be used on the Pebble, such as weather, calendar, remote controls for apps on your smartphone, games, and much more. Pebble works with both iPhones and Android devices. The Pebble can last up to 7 days on one charge, making it the longest-lasting smartwatch to date. Overall, it’s a simple smartwatch that isn’t packed with the fancy features of other smartwatches, but it definitely does its job and lives up to the “smart” part of being a smartwatch.
Pebble Time is an upcoming watch featuring a new OS called Timeline, which streamlines information by letting users scroll through items such as weather, news, etc. The Pebble Time also features a 64-color display that doesn’t compromise battery life, and a microphone for dictating replies to messages.
Android Wear has recently emerged onto the scene, offering color touchscreen displays, integrated Google Now, and, for some devices, circular displays. Android Wear watches range from $199 to $300 and work with most devices running Android 4.3 and up. Android Wear is the same Android that runs on phones and tablets, but with a different UI designed to work on a watch. This enables apps that run on Android devices to potentially run on Android Wear devices. Even so, there is a growing number of apps made specifically for Android Wear, including a music player that plays music straight from the watch as long as it’s paired to a Bluetooth speaker or headset. Android Wear shows the notifications from your phone, which can be easily dismissed on either device with a swipe. It can also display various information through Google Now’s card system. Android Wear is fairly new and flexible, so even if there currently isn’t much it can do, it can adapt to be better in the future.
Apple Watch is a new smartwatch from Apple. It is projected to cost $350 and comes in many different models that feature different materials and designs for both the case and the band. Along with a touch screen, the Apple Watch has a button and a digital crown (or dial) on the right side. The Apple Watch connects only to devices running iOS 8 and up, or in other words, the iPhone 5 and newer. The Apple Watch features notifications with the ability to send quick replies or dismiss them with a gesture, the ability to answer calls and talk using the watch, and, of course, apps that work independently or collaboratively with your iPhone.
To borrow a definition from MIT, affective computing is computing that relates to, arises from, or deliberately influences emotion or other affective phenomena. Anything that recognizes, analyzes, simulates, or interacts with human emotion generally falls under this term. Does it sound a little far-fetched for your computer to understand your emotions? Well, it may not be. There is already software that can understand emotion based on facial expression or voice intonation. In fact, a quick search should give you many, many different programs that do this.
There are a lot of humans on our planet – over 7 billion humans, in fact. And with a lot of people, comes a lot of waste. The average “life expectancy” of an electronic device (computers, phones, tablets, TVs) is only about three years. Of course we throw away our devices when they break, but most of this e-waste is due to upgrades and replacements. Do we need to upgrade our phone every couple of years? Probably not. Do we upgrade it anyway? Of course.
Google’s Chromecast is currently the best-selling electronics product on Amazon, and there’s a good reason why. People have been looking for the quickest and cheapest way to get content from the services they subscribe to onto their TVs. Although there are gaming consoles and set-top boxes that achieve this, Chromecast is a little different. It’s a simple HDMI dongle, similar in shape to a USB drive. All it needs to operate is USB power, which most TVs can already provide via their USB service port, and a connection to a wireless network, which in the age of everything wireless, many people are apt to have.
Ever since the era of the Xbox 360 and PS3, and on through the consoles just recently released, how we play games (and store them) has been revolutionized. The consoles of today have internal storage, usually hundreds of gigabytes, similar to how a computer handles storage.
By looking at a handful of the popular consoles, starting with the fourth generation, we’re going to see how technology has changed over the years – how we’ve built bigger and bigger pieces of software, and more and more efficient storage technologies.
Interested in circuits and programming? The Arduino is a good place to start. The Arduino is essentially a microcontroller on a ready-to-use board, programmed in high-level Arduino code similar to C/C++. The brain of the Arduino is the microcontroller (the black, rectangular chip), a device that, when programmed, allows one to automate processes electronically; e.g., it can be used to interpret and respond to data read in from external sensors, or to turn motors at a certain speed for some amount of time.
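The read-sensors-and-respond loop at the heart of microcontroller programming can be sketched in plain Python (real Arduino code would be C-like, and the temperature readings and threshold here are invented for the example):

```python
def fan_speed(temperature_c, threshold_c=30, max_speed=255):
    """Map a temperature reading to a motor speed (0-255),
    ramping up linearly once the threshold is crossed."""
    if temperature_c <= threshold_c:
        return 0
    # Reach full speed 20 degrees above the threshold.
    over = min(temperature_c - threshold_c, 20)
    return int(max_speed * over / 20)

# The loop a microcontroller runs forever, here over canned readings.
readings = [25, 31, 40, 55]
speeds = [fan_speed(t) for t in readings]  # [0, 12, 127, 255]
```

On a real board, the canned readings would come from an analog input pin, and the computed speed would drive a motor via a PWM output.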
For Thanksgiving this year I am most thankful for solid-state drives. I had an HP notebook with an HDD (Hard Disk Drive) and recently bought a refurbished Lenovo laptop with an SSD (Solid State Drive). It is a great deal faster than my old computer. Though my new computer also has a better processor and more RAM, part of the significant difference in speed has to do with now having an SSD. It is much more expensive than the hard disk drive counterpart, but it was well-worth the cost because I value speed more than disk space.
Welcome back! For those of you who are first time readers, this is a series dedicated to covering the different aspects related to building your own computer. This week I will be going over CPUs, and discussing what you should have in mind when choosing which to get.
Ultrabooks are the future of the PC industry because they offer an affordable, lightweight, and quality computing experience. New technologies are enabling companies like ASUS, Acer, Apple, Samsung, Dell, and HP to produce relatively inexpensive ultrabooks with exceptional battery life and performance. This article will examine some of these new innovations that unlock the potential of ultrabooks.
If you are planning on building your own computer, whether it is for a high-end workstation or a custom gaming rig, you will definitely be looking at different video cards. This post will cover the overwhelming variety of video cards available, and hopefully answer some questions that you might have.