Category Archives: Hardware

Setting Roam Aggression on Windows Computers

What is Wireless Roaming?

Access Points

To understand what roaming is, you first have to know about the device that makes the feature necessary.

If you are only used to household internet setups, the idea of roaming might be a little strange to think about. In your house you have your router, which you connect to, and that’s all you need to do. You may have the option of choosing between the 2.4GHz and 5GHz bands, but that’s about as complicated as it gets.

Now imagine that your house is very large, let’s say the size of UMass Amherst. It might be a little difficult to connect to the router in your living room, the DuBois Library, from all the way up in your bedroom on Orchard Hill. Obviously, in this situation one router will never suffice, and so a new component is needed.

An Access Point (AP for short) provides essentially the same function as a router, except that multiple APs used in conjunction project a Wi-Fi network further than a single router ever could. All APs are tied back to a central hub, which you can think of as a very large, powerful modem. That hub receives the internet signal via cable from the Internet Service Provider (ISP), passes it out to the APs, and the APs in turn pass it to your device.

On to Roaming

So now that you have your network set up with your central hub in DuBois (your living room) and an AP in your bedroom (Orchard Hill), what happens when you want to move between the two? The network is the same, but how is your computer supposed to know that the AP in Orchard Hill is not the strongest signal when you’re in DuBois? This is where roaming comes in. Based on the ‘aggressiveness’ your Wi-Fi card is set to roam at, your computer will test the connection to determine which AP has the strongest signal from your location, and then connect to it. The network is set up such that it can tell your computer that all the APs are on the same network, allowing your computer to transfer your connection without making you enter your credentials every time you move.
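
For the curious, the decision your Wi-Fi card makes can be sketched in a few lines of Python. This is purely an illustration with made-up names and numbers, not how any real driver is written; the margin it uses is what the next section calls roaming aggressiveness.

    # Illustrative only: a driver-style roaming decision with made-up values.
    # Signal strength (RSSI) is in dBm; closer to 0 means a stronger signal.
    AGGRESSIVENESS_MARGIN = {
        "low": 20,      # only roam when the current AP is far weaker
        "medium": 10,
        "high": 3,      # roam the moment a slightly stronger AP appears
    }

    def pick_access_point(current_ap, visible_aps, aggressiveness="medium"):
        """Stick with the current AP unless another one beats it by the margin."""
        margin = AGGRESSIVENESS_MARGIN[aggressiveness]
        best = max(visible_aps, key=lambda ap: ap["rssi"])
        if best["rssi"] - current_ap["rssi"] >= margin:
            return best       # roam to the stronger AP
        return current_ap     # stay put

    aps = [{"name": "AP-DuBois", "rssi": -48}, {"name": "AP-OrchardHill", "rssi": -80}]
    print(pick_access_point(aps[1], aps, "high")["name"])   # roams to AP-DuBois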

What is Roam Aggressiveness?

The ‘aggressiveness’ with which your computer roams determines how frequently, and how readily, your computer switches APs. If you have it set very high, your computer could be jumping between APs constantly. This can be a problem, as your connection is briefly interrupted each time your computer authenticates to another AP. Having the aggressiveness set very low, or disabling it, can cause your computer to ‘stick’ to one AP, making it difficult to move around and maintain a connection. Low roaming aggression is the more frequent problem people run into on large networks like eduroam at UMass. If you are experiencing issues like this, you may want to change the aggressiveness to suit your liking. Here’s how:

How to Change Roam Aggressiveness on Your Device:

First, navigate to the Control Panel which can be found in your Start menu. Then click on Network and Internet.

From there, click on Network and Sharing Center. 

Then, you want to select Wi-Fi next to Connections. Note: You may not have eduroam listed next to Wi-Fi if you are not connected, or if you are connected to a different network.

Now, select Properties and agree to continue when prompted for Administrator permissions.

Next, select Configure for your wireless card (your card will differ from the one shown in the image above depending on your device).

Finally, navigate to Advanced, and then under Property select Roaming Sensitivity Level. From there you can change the Value based on what issue you are trying to address.

And that’s all there is to it! Now that you know how to navigate to the Roaming settings, you can experiment a little to find what works best for you. Depending on your model of computer, you may have more than just High, Middle, Low values.

Changing roaming aggressiveness can be helpful for stationary devices, like desktops, too. Perhaps someone near you has violated UMass’ wireless airspace policy and set up a hotspot network or a wireless printer. Their setup may interfere with the AP closest to you, which can cause packet loss or latency (ping) spikes. You may not even be able to connect for a brief time. Changing roaming settings can help your computer move to the next best AP while the interference is occurring, resulting in a more continuous experience for you.

RRAM: A Retrospective Analysis of the Future of Memory

Mechanisms of Memory

Since the dawn of digital computation, the machine has only known one language: binary.  This strange concoction of language and math has existed physically in many forms since the beginning.  In its simplest form, binary represents numerical values using only two values, 1 and 0.  This makes mathematical operations very easy to perform with switches.  It also makes it very easy to store information in a very compact manner.
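
As a quick illustration in Python, here is how a handful of on/off switches map onto numbers, and how quickly the number of representable values grows as switches are added:

    # Each added bit (switch) doubles the number of values you can represent.
    for bits in (1, 2, 8, 16, 32):
        print(f"{bits:2d} bits -> {2**bits:,} possible values")

    print(bin(42))   # the decimal number 42 as a row of switches: 0b101010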

Early iterations of data storage employed some very creative thinking and some strange properties of materials.

 

IBM 80-Column Punch Card

One of the older (and simpler) methods of storing computer information was on punch cards.  As the name suggests, punch cards would have sections punched out to indicate different values.  Punch cards allowed for the storage of binary as well as decimal and character values.  However, punch cards had an extremely low capacity, occupied a lot of space, and were subject to rapid degradation.  For these reasons, punch cards became phased out along with black and white TV and drive-in movie theaters.

Macroscopic Image of Ferrite Memory Cores

Digital machines had the potential to view and store data using far less intuitive methods.  The king of digital memory from the 1960s until the mid-to-late 70s was magnetic core memory.  By far one of the prettiest things ever made for a computer, this form of memory was constructed from a lattice of interconnected ferrite beads.  These beads could be magnetized momentarily when a current of electricity passed near them.  Upon demagnetizing, they would induce a current in a nearby wire.  This current could be used to measure the binary value stored in that bead.  Current flowing = 1, no current = 0.

Even more peculiar was the delay-line memory used in the 1960s.  Though occasionally implemented on a large scale, delay-line units were primarily used in smaller computers, as there is no way they were even remotely reliable…  Data was stored in the form of pulsing twists traveling through a long coil of wire.  This meant that data could be corrupted if one of your fellow computer scientists slammed the door to the laboratory or dropped his pocket protector near the computer or something.  It also meant that the data in the coil had to be constantly read and refreshed every time the twists traveled all the way through the coil which, as anyone who has ever played with a spring before knows, does not take a long time.

Delay-Line Memory from the 1960s

This issue of constant refreshing may seem like an issue of days past, but the DDR memory used in modern computers has to do it too.  DDR actually stands for double data rate and refers to the fact that data is transferred on both the rising and falling edges of the clock; underneath, though, each cell stores its bit as a tiny electric charge that leaks away, so every cell has to be periodically read and rewritten just to hold its value.  This refreshing reduces the amount of useful work per clock cycle that a DDR memory unit can do.  Furthermore, on an ECC DIMM only 64 bits of the 72-bit connection are actually used for data (the rest are for Hamming error correction).  So a good chunk of the work DDR memory does goes to overhead rather than computation, and it’s still unreliable enough that ECC systems dedicate a whole 8 bits to error correction; perhaps this explains why most computers now come with three levels of cache memory whose sole purpose is to guess what data the processor will need in the hopes that it will reduce the processor’s need to access the RAM.
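
If you’re curious where those 8 extra bits on a 72-bit ECC DIMM come from, they line up with what a Hamming-style SEC-DED code needs to protect a 64-bit word.  A quick back-of-the-envelope check in Python (just the arithmetic, not an actual memory controller):

    def hamming_check_bits(data_bits):
        """Smallest r with 2**r >= data_bits + r + 1 (single-error correction),
        plus one extra parity bit for double-error detection (SEC-DED)."""
        r = 1
        while 2**r < data_bits + r + 1:
            r += 1
        return r + 1

    print(hamming_check_bits(64))   # 8 -> 64 data bits + 8 check bits = 72-bit DIMM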

DDR Memory Chip on a Modern RAM Stick

Even SRAM (the faster and more stable kind of memory used in cache) is not perfect, and it is extremely expensive.  A MB of data on a RAM stick will run you about one cent, while a MB of cache can be as costly as $10.  What if there were a better way of making memory that was more similar to those ferrite cores I mentioned earlier?  What if this new form of memory could also be written and read with speeds orders of magnitude greater than DDR RAM or SRAM cache?  What if this new memory also shared characteristics with human memory and neurons?

 

Enter: Memristors and Resistive Memory

As silicon-based transistor technology looks to be slowing down, there is something new on the horizon: resistive RAM.  The idea is simple: there are materials out there whose electrical properties can be changed by having a voltage applied to them.  When the voltage is taken away, these materials are changed and that change can be measured.  Here’s the important part: when an equal but opposite voltage is applied, the change is reversed and that reversal can also be measured.  Sounds like something we learned about earlier…

The change that takes place in these magic materials is in their resistivity.  After the voltage is applied, the extent to which these materials resist a current of electricity changes.  This change can be measured, and therefore binary data can be stored.

A Microscopic Image of a Series of Memristors

Also at play in the coming resistive memory revolution is speed.  Every transistor ever made is subject to something called propagation delay: the amount of time required for a signal to traverse the transistor.  As transistors get smaller and smaller, this time is reduced.  However, transistors cannot get very much smaller because of quantum uncertainty in position: a switch is no use if the thing you are trying to switch on and off can just teleport past the switch.  This is the kind of behavior common among very small transistors.

Because the memristor does not use any kind of transistor, we could see near-speed-of-light propagation delays.  This means resistive RAM could be faster than DDR RAM, faster than cache, and someday maybe even faster than the registers inside the CPU.

There is one more interesting aspect here.  Memristors also have a tendency to “remember” data long after it has been erased and overwritten.  Now, modern memory also does this but, because the resistance of the memristor is changing, large arrays of memristors could develop sections with lower resistance due to frequent accessing and overwriting.  This behavior is very similar to the human brain; memory that’s accessed a lot tends to be easy to… well… remember.

Resistive RAM looks to be, at the very least, a part of the far-reaching future of computing.  One day we might have computers which can not only recall information with near-zero latency, but possibly even know the information we’re looking for before we request it.

What is S.M.A.R.T?

Have you ever thought your computer might be dying but you don’t know what’s wrong? Symptoms that people might be familiar with include slowing down, increased startup time, programs freezing, constant disk usage, and audible clicking. While these symptoms happen to a lot of people, they don’t necessarily mean the hard drive is circling the drain. With a practically unlimited number of other things that could make the computer slow down and become unusable, how are you supposed to find out exactly what the problem is? Fortunately, the most common part to fail in a computer, the hard drive (or data drive), has a built-in testing technology that even users can use to diagnose their machines without handing over big bucks to a computer repair store or having to buy an entirely new computer if their current one is out of warranty.

Enter SMART (Self-Monitoring, Analysis and Reporting Technology). SMART is a monitoring suite that checks computer drives for a list of parameters that would indicate drive failure. SMART collects and stores data about the drive including errors, failures, times to spin up, reallocated sectors, and read/write abilities. While many of these attributes may be confusing in definition and even more confusing in their recorded numerical values, SMART software can predict a drive failure and even notify the user that it has detected a failing drive. The user can then look at the results to verify, or if unsure, bring the computer to a repair store for verification and a drive replacement.

So how does one get access to SMART? Many computers include built-in diagnostic suites that can be accessed via a boot option when the computer first turns on. Other manufacturers require that you download an application within your operating system that can run a diagnostic test. These diagnostic suites will usually check the SMART status, and if the drive is in fact failing, the diagnostic suite will report that a drive is failing or has failed. However, most of these manufacturer diagnostics will simply say passed or failed; if you want access to the specific SMART data, you will have to use a Windows program such as CrystalDiskInfo, a Linux program such as GSmartControl, or SMART Utility for Mac OS.
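
If you’re comfortable with a command line, the same data those programs display can also be pulled with the open-source smartmontools package. Here’s a minimal sketch in Python that simply shells out to smartctl; it assumes smartmontools is installed and that your drive really is /dev/sda, which will vary from machine to machine (administrator rights are usually required).

    import subprocess

    # Assumes the smartmontools package is installed and the drive is /dev/sda;
    # adjust the device path for your machine.
    for flags in (["-H"], ["-A"]):          # -H: overall health, -A: attribute table
        result = subprocess.run(["smartctl", *flags, "/dev/sda"],
                                capture_output=True, text=True)
        print(result.stdout)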

These SMART monitoring programs are intelligent enough to detect when a drive is failing and give you ample time to back up your data. Remember: computer parts can always be replaced; lost data is lost forever. However, it should be noted that SMART doesn’t always detect when a drive fails. If a drive suffers a catastrophic failure, like a physical drop or water damage while powered on, SMART cannot predict it, and the manufacturer is not at fault. Therefore, while SMART is a good tool for assessing whether a drive is healthy, it is strongest when used in tandem with a good, reliable backup system, not as standalone protection against data loss.

Transit by Wire – Automating New York’s Aging Subways

When I left New York in January, the city was in high spirits about its extensive subway system.  After almost 50 years of construction, and almost 100 years of planning, the shiny, new Second Avenue subway line had finally been completed, bringing direct subway access to one of the few remaining underserved areas in Manhattan.  The city rallied around the achievement.  I myself stood with fellow elated riders as the first Q train pulled out of the 96th Street station for the first time, Governor Andrew Cuomo’s voice crackling over the train’s PA system assuring riders that he was not driving the train.

In a rather ironic twist of fate, the brand-new line was plagued, on its first ever trip, with an issue that has been affecting the entire subway system since its inception: the ever-present subway delay.

A small group of transit workers gathered in the tunnel in front of the stalled train to investigate a stubborn signal.  The signal was seeing its first ever train, yet its red light seemed as though it had been petrified by 100 years of 24-hour operation, just like the rest of them.

Track workers examine malfunctioning signal on Second Avenue Line

When I returned to New York to participate in a summer internship at an engineering firm near Wall Street, the subway seemed to be falling apart.  Having lived in the city for almost 20 years and having dealt with the frequent subway delays on my daily commute to high school, I had no reason to believe my commute to work would be any better… or any worse.  However, I started to see things that I had never seen: stations at rush hour with no arriving trains queued on the station’s countdown clock, trains so packed in every car that not a single person was able to board, and new conductors whose sole purpose was to signal to the train engineers when it was safe to close the train doors since platforms had become too consistently crowded to reliably see down.

At first, I was convinced I was imagining all of this.  I had been living in the wide-open and sparsely populated suburbs of Massachusetts and maybe I had simply forgotten the hustle and bustle of the city.  After all, the daily ridership on the New York subway is roughly double the entire population of Massachusetts.  However, I soon learned that the New York Times had been cataloging the recent and rapid decline of the city’s subway.  In February, the Times reported a massive jump in the number of train delays per month, from 28,000 per month in 2012 up to 70,000 at the time of publication.

What on earth had happened?  Some New Yorkers have been quick to blame Mayor Bill de Blasio.  However, the Metropolitan Transportation Authority, the entity which owns and operates the city subway, is controlled by the state and thus falls under the jurisdiction of Governor Andrew Cuomo.  Then again, it’s not really Mr. Cuomo’s fault either.  In fact, it’s no one person’s fault at all!  The subway has been dealt a dangerous cocktail of severe overcrowding and rapidly aging infrastructure.

 

Thinking Gears that Run the Trains

Anyone with an interest in early computer technology is undoubtedly familiar with the mechanical computer.  Before Claude Shannon showed how electronic circuits could process information in binary, all we had to process information were large arrays of gears, springs, and some primitive analog circuits which were finely tuned to complete very specific tasks.  Some smaller mechanical computers could be found aboard fighter jets to help pilots compute projectile trajectories.  If you saw The Imitation Game last year, you may recall the large machine Alan Turing built to decode encrypted radio transmissions during the Second World War.

Interlocking machine similar to that used in the NYC subway

New York’s subway had one of these big, mechanical monsters after the turn of the century; in fact, New York still has it.  Its name is the interlocking machine, and its job is simple: make sure two subway trains never end up in the same place at the same time.  Yes, this big, bombastic hunk of metal is all that stands between the train dispatchers and utter chaos.  Its worn metal handles are connected directly to signals, track switches, and little levers designed to trip the emergency brakes of trains that roll past red lights.

The logic followed by the interlocking machine is about as complex as engineers could make it in 1904:

  • Sections of track are divided into blocks, each with a signal and emergency brake-trip at their entrance.
  • When a train enters a block, a mechanical switch is triggered and the interlocking machine switches the signal at the entrance of the block to red and activates the brake-trip.
  • After the train leaves the block, the interlocking machine switches the track signal back to green and deactivates the brake-trip.

Essentially a very large finite-state machine, this interlocking machine was revolutionary back at the turn of the century.  At the turn of the century, however, some things were also acting in the machine’s favor; for instance, there were only three and a half million people living in New York at the time, they were all only five feet tall, and the machine was brand new.
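
For the programmers in the room, the block logic above boils down to something you could sketch in a few lines of Python.  This is only a toy model of the idea, with made-up names; it is not how the MTA’s machines are actually wired:

    # Toy model of fixed-block signaling: one train per block, red signal and
    # brake-trip raised while a block is occupied, cleared when the train leaves.
    class Block:
        def __init__(self, name):
            self.name = name
            self.occupied = False

        def signal(self):
            return "RED" if self.occupied else "GREEN"

        def brake_trip_raised(self):
            return self.occupied

    def train_enters(block):
        if block.occupied:
            raise RuntimeError(f"collision risk in block {block.name}!")
        block.occupied = True

    def train_leaves(block):
        block.occupied = False

    b = Block("34th St")
    print(b.signal())                          # GREEN
    train_enters(b)
    print(b.signal(), b.brake_trip_raised())   # RED True
    train_leaves(b)
    print(b.signal())                          # GREEN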

As time moved on, the machine aged, and so too did the society around it.  After the Second World War, we replaced the bumbling network of railroads with an even more extensive network of interstate highways.  The train signal block, occupied by only one train at a time, was replaced by a simpler mechanism: the speed limit.

However, the MTA and the New York subways have lagged behind.  The speed and frequency of train service remains limited by how many train blocks were physically built into the interlocking machines (yes, in full disclosure, there is more than one interlocking machine, but they all share the same principles of operation).  This has made it extraordinarily difficult for the MTA to improve train service; all the MTA can do is maintain the aging infrastructure.  The closest thing the MTA has to a system-wide software update is a lot of WD-40.

 

Full-Steam Ahead

There is an exception to the constant swath of delays… two, actually.  In the 1990s, and then again recently, the MTA did yank the old signals and interlocking machines from two subway lines and replace them with a fully automated fleet of trains, controlled remotely by a digital computer.  In an odd twist of fate, the subway evolved straight from its Nineteenth Century roots to Elon Musk’s age of self-driving vehicles.

The two lines selected were easy targets: both serve large swaths of suburb in Brooklyn and Queens, and both are two-track lines, meaning they have no express service.  This made the switch to automated trains easy to carry out.  And the switch was effective!  Of all the lines in New York, the two automated lines have seen the least reduction in on-time train service.  The big switch also had some additional benefits, like accurate countdown clocks in stations, a smoother train ride (especially when stopping and taking off), and the ability for train engineers to play Angry Birds during their shifts (yes, I have seen this).

The first to receive the update was the city’s then-obscure L line.  The L is one of only two trains to traverse the width of Manhattan Island and is the transportation backbone for many popular neighborhoods in Brooklyn.  In recent years, these neighborhoods have seen a spike in population due, in part, to frequent and reliable train service.

L train at its terminal station in Canarsie, Brooklyn

The contrast between the automated lines and the gear-box-controlled lines is astounding.  A patron of the subway can stand on a train platform waiting for an A or C train for half an hour… or they could stand on another platform and see two L trains at once on the same stretch of track.

The C line runs the oldest trains in the system, most of them over 50 years old.

The city also elected to upgrade the 7 line, the only other line in the city to traverse the width of Manhattan and one of only two main lines to run through the center of Queens.  Work on the 7 is set to finish soon and the results look to be promising.

Unfortunately for the rest of the city’s system, the switch to automatic train control for those two lines was not cheap and it was not quick.  In 2005, it was estimated that a system-wide transition to computer-controlled trains would not be completed until 2045.  Some other cities, most notably London, made the switch to automated trains years ago.  It is tough to say why New York has lagged behind, but it most likely has to do with the immense ridership of the New York system.

New York is the largest American city by population and by land area.  This makes other forms of transportation far less viable when traveling through the city.  After the public opinion of highways in the city was ruined in the 1960s, following the destruction of large swaths of the South Bronx, many of the city’s neighborhoods have been left nearly inaccessible by car.  Although New York is a very walkable city, its massive size makes commuting by foot from the suburbs to Manhattan impractical as well.  Thus the subways must run every day and for every hour of the day.  If the city wants to shut down a line to do repairs, it often can’t.  Oftentimes, lines are only closed for repairs on weekends and nights for a few hours.

 

Worth the Wait?

Even though it may take years for the subway to upgrade its signals, the city has no other option.  As discussed earlier, the interlocking machine can only support so many trains on a given length of track.  On the automated lines, transponders are placed every 500 feet, supporting many more trains on the same length of track.  Trains can also be stopped instantly instead of having to travel to the next red-signaled block.  With the number of derailments and stalled trains climbing, this unique ability of the remote-controlled trains is invaluable.  Additionally, automated trains running on four-track lines with express service could re-route instantly to adjacent tracks in order to completely bypass stalled trains.  Optimization algorithms could be implemented to keep a constant and dynamic flow of trains.  Trains could be controlled more precisely during acceleration and braking to conserve power and prolong the life of the train.

For the average New Yorker, these changes would mean shorter wait times, less frequent train delays, and a smoother and more pleasant ride.  In the long term, the MTA would most likely save millions of dollars in repair costs without the clunky interlocking machine.  New Yorkers would also save entire lifetimes worth of time on their commutes.  The cost may be high, but unless the antiquated interlocking machines are put to rest, New York will be paying for it every day.

Water Damage: How to prevent it, and what to do if it happens

Getting your tech wet is one of the most common things that people worry about when it comes to their devices. Rightfully so; water damage is often excluded from manufacturer warranties, can permanently ruin technology under the right circumstances, and is one of the easiest things to do to a device without realizing it.

What if I told you that water, in general, is one of the easiest and least-likely things to ruin your device, if reacted to properly?

Don’t get me wrong; water damage is no laughing matter. It’s the second most common reason that tech ends up kicking the bucket, the most common being drops (but not for the reason you might think). While water can quite easily ruin a device within minutes, most, if not all of its harm can be prevented if one follows the proper steps when a device does end up getting wet.

My goal with this article is to highlight why water damage isn’t as bad as it sounds, and most importantly, how to react properly when your shiny new device ends up the victim of either a spill… or an unfortunate swan dive into a toilet.

_________________

Water, in its purest form, is pretty awful at conducting electricity. However, because most of the water that we encounter on a daily basis is chock-full of dissolved ions, it’s conductive enough to cause serious damage to technology if not addressed properly.

If left alone, the conductive ions in the water will bridge together several points on your device, potentially allowing harmful bursts of electricity to be sent to places that would result in the death of your device.

While that does sound bad, here’s one thing about water damage that you need to understand: you can effectively submerge a turned-off device in water, and as long as you fully dry the whole thing before turning it on again, there’s almost no chance that the water will cause any serious harm.


You need to react fast, but react right. The worst thing you can do to your device once it gets wet is try to turn it on to ‘see if it still works’. The moment that a significant amount of water gets on your device, your first instinct should be to fully power off the device, and once it’s off, disconnect the battery if it has a removable one.

As long as the device is off, it’s very unlikely that the water will be able to do anything significant, even less so if you unplug the battery. The amount of time you have to turn off your device before the water does any real damage is, honestly, down to complete luck. It depends on where the water seeps in, how conductive it is, and what ends up shorting out if a short circuit does occur. Remember, short circuits are not innately harmful; it’s just a matter of what ends up getting shocked.

Once your device is off, your best chance for success is to be as thorough as you possibly can when drying it. Dry any visible water off the device, and try to let it sit out in front of a fan or something similar for at least 24 hours (though please don’t put it near a heater).

Rice is also great at drying your devices, especially smaller ones. Simply submerge the device in (unseasoned!) rice, and leave it again for at least 24 hours before attempting to power it on. Since rice is so great at absorbing liquids, it helps to pull out as much water as possible.


If the device in question is a laptop or desktop computer, bringing it down to us at the IT User Services Help Center in Lederle A109 is an important option to consider. We can take the computer back into the repair center and take it apart, making sure that everything is as dry as possible so we can see if it’s still functional. If the water did end up killing something in the device, we can also hopefully replace whatever component ended up getting fried.

Overall, there are three main points to be taken from this article:

Number one, spills are not death sentences for technology. As long as you follow the right procedures, making sure to immediately power off the device and not attempt to turn it back on until it’s thoroughly dried, it’s highly likely that a spill won’t result in any damage at all.

Number two is that, when it comes to water damage, speed is your best friend. The single biggest thing to keep in mind is that, the faster you get the device turned off and the battery disconnected, the faster it will be safe from short circuiting itself.

Lastly, and a step that many of us forget about when it comes to stuff like this: take your time. A powered-off device that was submerged in water has a really good chance of being usable again, but that chance goes out the window if you try to turn it on too early. I’d suggest that smartphones and tablets, at the very least, should get a thorough air drying followed by at least 24 hours in rice. For laptops and desktops, however, your best bet is to either open it up yourself, or bring it down to the Help Center so we can open it up and make sure it’s thoroughly dry. You have all the time in the world to dry it off, so don’t ruin your shot at fixing it by testing it too early.

I hope this article has helped you understand why not to be afraid of spills, and what to do if one happens. By following the procedures I outlined above, and with a little bit of luck, it’s very likely that any waterlogged device you end up with could survive its unfortunate dip.

Good luck!

Tips for Gaming Better on a Budget Laptop

Whether you came to college with an old laptop, or want to buy a new one without breaking the bank, making our basic computers faster is something we’ve all thought about at some point. This article will show you some software tips and tricks to improve your gaming experience without losing your shirt, and at the end I’ll mention some budget hardware changes you can make to your laptop. First off, we’re going to talk about in-game settings.

 

In-Game Settings:

All games have built in settings to alter the individual user experience from controls to graphics to audio. We’ll be talking about graphics settings in this section, primarily the hardware intensive ones that don’t compromise the look of the game as much as others. This can also depend on the game and your individual GPU, so it can be helpful to research specific settings from other users in similar positions.

V-Sync:

V-Sync, or Vertical Synchronization, allows a game to synchronize its framerate with your monitor’s refresh rate. Enabling this setting will increase the smoothness of the game. However, for lower-end computers, you may be happy to just run the game at a stable FPS that is less than your monitor’s refresh rate. (Note – most monitors have a 60Hz, or 60 FPS, refresh rate.) For that reason, you may want to disable it to allow for more stable low-FPS performance.

Anti-Aliasing:

Anti-Aliasing, or AA for short, is a rendering option which reduces the jaggedness of lines in-game. Unfortunately, the additional smoothness heavily impacts hardware usage, and disabling this while keeping other things like texture quality or draw distance higher can make big performance improvements without hurting a game’s appearance too much. Additionally, there are many different kinds of AA options that games might have settings for. MSAA (Multisampling AA), and the even more intensive TXAA (Temporal AA), are both better smoothing processes that have an even bigger impact on performance. Therefore, turning these off on lower-end machines is almost always a must. FXAA (Fast Approximate AA) uses the least processing power, and can therefore be a nice setting to leave on if your computer can handle it.

Anisotropic Filtering (AF):

Despite what the name might suggest, this setting doesn’t add blur; it sharpens textures that are viewed at steep angles or from far away, like roads and floors stretching into the distance, instead of letting them smear together. That extra sharpening requires additional texture sampling, which puts some strain on your system. Turning it down or off can yield improvements in performance on lower-end hardware, at the cost of distant surfaces looking a bit muddier.

Other Settings:

While the aforementioned are the heaviest hitters in terms of performance, changing some other settings can help increase stability and performance too (beyond just simple texture quality and draw distance tweaks). Shadows and reflections are often unnoticed compared to other effects, so while you may not need to turn them off, turning them down can definitely make an impact. Motion blur should be turned off completely, as it can make quick movements result in heavy lag spikes.

Individual Tweaks:

The guide above is a good starting point for graphics settings; because there are so many different computer models, there is an equally large number of combinations of settings. From this point, you can start to increase settings slowly to find the sweet spot between performance and quality.

Software:

Before we talk about some more advanced tips, it’s good practice to close applications that you are not using to increase free CPU, Memory, and Disk space. This alone will help immensely in allowing games to run better on your system.

Task Manager Basics:

Assuming you’ve tried to game on a slower computer, you’ll know how annoying it is when the game is running fine and suddenly everything slows down to slideshow speed and you fall off a cliff. Chances are that this kind of lag spike is caused by other “tasks” running in the background, preventing the game you are running from using the power it needs to keep going. Or perhaps your computer has been on for a while, so when you start the game, it runs slower than its maximum speed. Even though you hit the “X” button on a window, what’s called the “process tree” may not have been completely terminated. (Think of this like cutting down a weed but leaving the roots.) This can result in more resources being taken up by idle programs that you aren’t using right now.

It’s at this point that Task Manager becomes your best friend. To open Task Manager, simply press CTRL + SHIFT + ESC, or press CTRL + ALT + DEL and select Task Manager from the menu. When it first appears, you’ll notice that only the programs you have open will appear; click the “More Details” button at the bottom of the window to expand Task Manager. Now you’ll see a series of tabs, the first one being “Processes” – which gives you an excellent overview of everything your CPU, Memory, Disk, and Network are crunching on. Clicking on any of these will bring the process using the highest amount of each resource to the top of the column. Now you can see what’s really using your computer’s processing power.

It is important to realize that many of these processes are part of your operating system, and therefore cannot be terminated without causing system instability. However, things like Google Chrome and other applications can be closed by right-clicking and hitting “End Task”. If you’re ever unsure of whether you can safely end a process, a quick Google of the process in question will most likely point you in the right direction.
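
If you’d rather see the same information from a script, the cross-platform psutil library (a third-party Python package, not something built into Windows) can list which processes are eating the most memory, much like sorting the Processes tab:

    import psutil   # third-party package: pip install psutil

    # List the five processes using the most memory, similar to sorting the
    # Memory column in Task Manager's Processes tab.
    procs = []
    for p in psutil.process_iter(["pid", "name", "memory_info"]):
        mem = p.info["memory_info"]
        if mem is None:                      # some system processes deny access
            continue
        procs.append((mem.rss, p.info["pid"], p.info["name"] or ""))

    for rss, pid, name in sorted(procs, reverse=True)[:5]:
        print(f"{name:30s}  pid={pid:<6d}  {rss / 1024**2:8.1f} MB")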

Startup Processes:

Here is where you can really make a difference to your computer’s overall performance, not just for gaming. From Task Manager, if you select the “Startup” tab, you will see a list of all programs and services that can start when your computer is turned on. Task Manager will give an impact rating of how much each task slows down your computer’s boot time. The gaming app Steam, for example, can noticeably slow down a computer on startup. A good rule of thumb is to allow virus protection to start with Windows; everything else is up to individual preference. Shutting down these processes on startup can prevent unnecessary tasks from ever being opened, and allow for more hardware resource availability for gaming.

Power Usage:

You probably know that unlike desktops, laptops contain a battery. What you may not know is that you can alter your battery’s behavior to increase performance, as long as you don’t mind it draining a little faster. On the taskbar, which is by default located at the bottom of your screen, you will notice a collection of small icons next to the date and time on the right, one of which looks like a battery. Left-clicking it will bring up the menu shown below; however, right-clicking it will bring up a menu with a “Power Options” entry on it.


Clicking this will bring up a settings window which allows you to change and customize your power plan for your needs. By default it is set to “Balanced”, but changing to “High Performance” can increase your computer’s gaming potential significantly. Be warned that battery duration will decrease on the High Performance setting, although it is possible to change the battery’s behavior separately for when your computer is using the battery or plugged in.

Hardware:

Unlike desktops, for laptops there are not many upgrade paths. However one option exists for almost every computer that can have a massive effect on performance if you’re willing to spend a little extra.

Hard Disk (HDD) to Solid State (SSD) Drive Upgrade:

Chances are that if you have a budget computer, it probably came with a traditional spinning hard drive. For manufacturers, this makes sense, as they are cheaper than solid states and work perfectly well for light use. Games can be very demanding on laptop HDDs, asking them to recall and store data very quickly, and sometimes causing them to fall behind. Additionally, laptops have motion sensors built into them which restrict read/write capabilities when the computer is in motion to prevent damage to the spinning disk inside the HDD. An upgrade to a SSD not only eliminates this restriction, but also has a much faster read/write time due to the lack of any moving parts. Although SSDs can get quite expensive depending on the size you want, companies such as Crucial or Kingston offer a comparatively cheap alternative to Samsung or Intel while still giving you the core benefits of a SSD. Although there are a plethora of tutorials online demonstrating how to install a new drive into your laptop, make sure you’re comfortable with all the dangers before attempting, or simply take your laptop into a repair store to have them do it for you. It’s worth mentioning that when you install a new drive, you will need to reinstall Windows, and all your applications from your old drive.

Memory Upgrade (RAM):

Some laptops have an extra memory slot, or just ship with a lower capacity than what they are capable of holding. Most budget laptops will ship with 4GB of memory, which is often not enough to support both the system, and a game.

Upgrading or increasing memory can give your computer more headroom to process and store data without lagging up your entire system. Unlike with SSD upgrades, memory is very specific and it is very easy to buy a new stick that fits in your computer, but does not function with its other components. It is therefore critical to do your research before buying any more memory for your computer; that includes finding out your model’s maximum capacity, speed, and generation. The online technology store, Newegg, has a service here that can help you find compatible memory types for your machine.

Disclaimer: 

While these tips and tricks can help your computer to run games faster, there is a limit to what hardware is capable of. Budget laptops are great for the price point, and these user tricks will help squeeze out all their potential, but some games will simply not run on your machine. Make sure to check a game’s minimum and recommended specs before purchasing/downloading. If your computer falls short of minimum requirements, it might be time to find a different game or upgrade your setup.

Quantum Computers: How Google & NASA are pushing Artificial Intelligence to its limit


“If you think you understand quantum physics, you don’t understand quantum physics.” Richard Feynman said this in reference to the fact that we simply do not yet fully understand the mechanics of the quantum world. NASA, Google, and D-Wave are trying to figure this out as well, revolutionizing our understanding of physics and computing with one of the first commercial quantum computers, which they claim runs 100 million times faster than traditional computers on certain problems.

Quantum Computers: How they work

To understand how quantum computers work, you must first recognize how traditional computers work. For several decades, the base component of a computer processor has been the transistor. A transistor either allows or blocks the flow of electrons (aka electricity) with a gate. The transistor can therefore hold one of two possible values: on or off, flowing or not flowing. The value of a transistor is binary, and is used to represent digital information in binary digits, or bits for short. Bits are very basic, but paired together they can produce exponentially more possible values as they are added. Therefore, more transistors means faster data processing. To fit more transistors on a silicon chip we must keep shrinking their size. Transistors nowadays have gotten so small that they measure only 14nm. This is 8x smaller than an HIV virus and 500x smaller than a red blood cell.

As transistors approach the size of only a few atoms, electrons may simply pass through a blocked gate, a phenomenon called quantum tunneling. This is because physics in the quantum realm works differently from what we are used to, and computers start making less and less sense at this scale. We are starting to see a physical barrier to the efficiency of our technological progress, but scientists are now using these unusual quantum properties to their advantage to develop quantum computers.

Introducing the Qubit!

While traditional computers use bits as their smallest unit of information, quantum computers use qubits. Like bits, qubits can represent the values 0 or 1. The 0 or 1 can be represented physically by, for example, the polarization of a photon or the spin of a particle in a magnetic field; what separates qubits from bits is that they can also be in any proportion of both states at once, a property called superposition. You can test the value of a photon by passing it through a filter, and it will collapse to be either vertically or horizontally polarized (0 or 1). Unobserved, the qubit is in superposition with probabilities for either state – but the instant you measure it, it collapses to one of the definite states, which is a game-changer for computing.


When normal bits are lined up they can represent one of many possible values. For example, 4 bits can represent one of 16 (2^4) possible values depending on their orientation. 4 qubits on the other hand can represent all of these 16 combinations at once, with each added qubit growing the number of possible outcomes exponentially!
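
One way to make that difference concrete: on a regular computer, writing down the state of n qubits takes 2^n numbers (amplitudes), one for every possible outcome, whereas n ordinary bits only ever hold one of those outcomes. A tiny numpy sketch of a 4-qubit equal superposition, for illustration only:

    import numpy as np

    n = 4
    classical_bits = 0b1011                      # 4 bits hold exactly one of 16 values
    state = np.full(2**n, 1 / np.sqrt(2**n))     # 4 qubits: 16 amplitudes, all at once

    print(f"{classical_bits:04b}")               # the single value the bits hold: 1011
    print(len(state), np.sum(np.abs(state)**2))  # 16 amplitudes, probabilities sum to 1.0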

Qubits can also exhibit another property we call entanglement: a close connection that has qubits react to a change in the other’s state instantaneously, regardless of the distance between them. This means that when you measure the value of one qubit, you can deduce the value of another without even having to look at it!

Traditional vs Quantum: Calculations Compared

Performing logic on traditional computers is pretty simple. Computers perform logic using logic gates, which take a simple set of inputs and produce a single output (based on AND, OR, XOR, and NAND). For example, two bits of 0 (false) and 1 (true) passed through an AND gate give 0, since both bits aren’t true. The same 0 and 1 passed through an OR gate give 1, since only one of the two needs to be true for the outcome to be true. Quantum gates work on a much more complex level. They take an input of superpositions (qubits, each with probabilities of 0 or 1), rotate these probabilities, and produce another superposition as an output; measuring the outcome collapses the superpositions into an actual sequence of 0s and 1s for one final, definite answer. What this means is that you can get the entire lot of calculations possible with a given setup all done at the same time!
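
As a rough sketch of that difference, here is a classical AND gate next to a simulated one-qubit quantum gate (the Hadamard gate). The quantum gate is just a matrix that rotates amplitudes, and measurement turns those amplitudes into probabilities. This is a simulation with numpy on an ordinary computer, not real quantum hardware:

    import numpy as np

    # Classical logic: definite inputs, one definite output.
    print(1 and 0)        # AND gate -> 0

    # Quantum logic (simulated): a gate is a matrix acting on amplitudes.
    ket0 = np.array([1.0, 0.0])                    # qubit starting in state |0>
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
    state = H @ ket0                               # equal superposition of 0 and 1

    probabilities = np.abs(state)**2
    print(probabilities)   # [0.5 0.5] -> measuring gives 0 or 1 with equal chance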


When you measure the result of a computation on superposed qubits, you will probably get the answer you want, but not certainly. You may need to double-check the outcome and run the computation again. Even so, exploiting the properties of superposition and entanglement can be exponentially more efficient than anything possible on a traditional computer.

What Quantum Computers mean for our future

Quantum computers will most likely not replace our home computers, but for certain tasks they are far superior. In applications such as searching corporate databases, a computer may need to check every entry in a table. A quantum computer can do this task in roughly the square root of that time, and for tables with billions of entries this can save a tremendous amount of time and resources. The most famous use of quantum computers is in IT security. Tasks such as online banking and browsing your email are kept secure by encryption, where a public key is published for everyone to encode messages only you can decode. The problem is that public keys can, in principle, be used to calculate one’s secret private key, but doing the math on a normal computer would literally take years of trial and error. A quantum computer could do this in a breeze, with an exponential decrease in calculation time! Simulating the quantum world is also intense on resources; regular computers lack the resources to model bigger structures such as molecules. So why not simulate quantum physics with actual quantum physics? Quantum simulations, for instance, could lead to insights on proteins that revolutionize medicine as we know it.


What’s going on now in Quantum Computing? How NASA & Google are using AI to reveal nature’s biggest secrets.

We’re unsure whether quantum computers will only ever be a specialized tool or a big revolution for humanity. We do not know the limits of this technology, and there is only one way to find out. One of the first commercial quantum computers, developed by D-Wave, will be housed at Google and NASA’s research center in California. They operate the chip at an incredible temperature, nearly 200 times colder than interstellar space. They are currently focused on using it to solve optimization problems: finding the best outcome given a set of data, for example the best flight path to visit a set of places you’d like to see. Google and NASA are also using artificial intelligence on this computer to further our understanding of the natural world. Since it operates on quantum-level mechanics beyond our everyday knowledge, we can ask it questions that we may never otherwise be able to figure out. Questions such as “are we alone?” and “where did we come from?” can be explored. We have evolved into creatures that are able to ask about the nature of physical reality, and being able to probe the unknown makes us even more awesome as a species. We have the power to do it and we must do it, because that is what it means to be human.

A Basic Guide to Digital Audio Recording

The Digital Domain


Since the dawn of time, humans have been attempting to record music.  For the vast majority of human history, this has been really, really difficult.  Early cracks at getting music out of the hands of the musician involved mechanically triggered pianos whose instructions for what to play were imprinted onto long scrolls of paper.  These player pianos were difficult to manufacture and not really viable for casual music listening.  There was also the all-important phonograph, which recorded sound itself mechanically onto the surface of a wax cylinder.

If it sounds like the aforementioned techniques were difficult to use and manipulate, it’s because they were!  Hardly anyone owned a phonograph since they were expensive, recordings were hard to come by, and they really didn’t sound all that great.  Without microphones or any kind of amplification, bits of dust and debris which ended up on these phonograph records could completely obscure the original recording behind a wall of noise.

Humanity had a short stint with recording sound as electromagnetic impulses on magnetic tape.  This proved to be one of the best ways to reproduce sound (and do some other cool and important things too).  Tape was easy to manufacture, came in all different shapes and sizes, and offered a whole universe of flexibility for how sound could be recorded onto it.  Since tape recorded an electrical signal, carefully crafted microphones could be used to capture sounds with impeccable detail and loudspeakers could be used to play back the recorded sound at considerable volumes.  Also at play were some techniques engineers developed to reduce the amount of noise recorded onto tape, allowing the music to be front and center atop a thin floor of noise humming away in the background.  Finally, tape offered the ability to record multiple different sounds side-by-side and play them back at the same time.  These side-by-side sounds came to be known as ‘tracks’ and allowed for stereophonic sound reproduction.

Tape was not without its problems though.  Cheap tape would distort and sound poor.  Additionally, tape would deteriorate over time and fall apart, leaving many original recordings completely unlistenable.  Shining bright on the horizon in the late 1970s was digital recording.  This new format allowed for low-noise, low-cost, and long-lasting recordings.  The first pop music record to be recorded digitally was Ry Cooder’s Bop till You Drop in 1979.  Digital had a crisp and clean sound that was rivaled only by the best of tape recording.  Digital also allowed for near-zero degradation of sound quality once something was recorded.

Fast-forward to today.  After 38 years of Moore’s law, digital recording has become cheap and simple.  Small audio recorders are available at low cost with hours and hours of storage for recording.  Also available are more hefty audio interfaces which offer studio-quality sound recording and reproduction to any home recording enthusiast.

 

Basic Components: What you Need

Depending on what you are trying to record, your needs may vary from the standard recording setup.  For most users interested in laying down some tracks, you will need the following.

Audio Interface (and Preamplifier): this component is arguably the most important as it connects everything together.  The audio interface contains both analog-to-digital converters and a digital-to-analog converter; these allow it to turn sound into the language of your computer for recording, and turn the language of your computer back into sound for playback.  These magical little boxes come in many shapes and sizes; I will discuss these in a later section, just be patient.

Digital Audio Workstation (DAW) Software: this software will allow your computer to communicate with the audio interface.  Depending on what operating system you have running on your computer, there may be hundreds of DAW software packages available.  DAWs vary greatly in complexity, usability, and special features; all will allow you the basic feature of recording digital audio from an audio interface.

Microphone: perhaps the most obvious element of a recording setup, the microphone is one of the most exciting choices you can make when setting up a recording rig.  Microphones, like interfaces and DAWs, come in all shapes and sizes.  Depending on what sound you are looking for, some microphones may be more useful than others.  We will delve into this momentarily.

Monitors (and Amplifier): once you have set everything up, you will need a way to hear what you are recording.  Monitors allow you to do this.  In theory, you can use any speaker or headphone as a monitor.  However, some speakers and headphones offer more faithful reproduction of sound without excessive bass and can be better for hearing the detail in your sound.

 

Audio Interface: the Art of Conversion

Two channel USB audio interface.


The audio interface can be one of the most intimidating elements of recording.  The interface contains the circuitry to amplify the signal from a microphone or instrument, convert that signal into digital information, and then convert that information back to an analog sound signal for listening on headphones or monitors.

Interfaces come in many shapes and sizes but all do similar work.  These days, most interfaces offer multiple channels of recording at one time and can record in uncompressed CD-audio quality or better.

Once you step into the realm of digital audio recording, you may be surprised to find a lack of mp3 files.  Turns out, mp3 is a very special kind of digital audio format and cannot be recorded to directly; mp3 can only be created from existing audio files in non-compressed formats.

You may be asking yourself: what does it mean for audio to be compressed?  As an electrical engineer, it may be hard for me to explain this in a way that humans can understand, but I will try my best.  Audio takes up a lot of space.  Your average iPhone or Android device may have only 32 GB of space, yet most people can keep thousands of songs on their device.  This is done using compression.  Compression is the computer’s way of listening to a piece of music and removing all the bits and pieces that most people won’t notice.  Soft and infrequent noises, like the sound of a guitarist’s fingers scraping a string, are removed, while louder sounds, like the sound of the guitar itself, are left in.  This is done using the Fourier Transform and a bunch of complicated mathematical algorithms that I don’t expect anyone reading this to care about.

When audio is uncompressed, a few things are true: it takes up a lot of space, it is easy to manipulate with digital effects, and it often sounds very, very good.  Examples of uncompressed audio formats are: .wav on Windows, .aif and .aiff on Macintosh, and .flac for all the free people of the Internet.  Uncompressed audio comes in many different forms but all have two numbers which describe their sound quality: ‘word length’ or ‘bit depth’ and ‘sample rate.’

The information for digital audio is contained in a bunch of numbers which indicate the loudness or volume of the sound at a specific time.  The sample rate tells you how many times per second the loudness value is captured.  This number needs to be at least two times higher than the highest frequency you want to capture; otherwise the computer will perceive high frequencies as being lower than they actually are.  This is because of the Shannon-Nyquist sampling theorem which I, again, don’t expect most of you to want to read about.  Most audio is captured at 44.1 kHz, making the highest frequency it can capture 22.05 kHz, which is comfortably above the limits of human hearing.

The word length, or bit depth, tells you how many bits are used to represent each loudness value.  The number of different values for loudness can be up to 2^word length.  CDs represent audio with a word length of 16 bits, allowing for 65536 different values of loudness.  Most audio interfaces are capable of recording audio with a 24-bit word length, allowing for exquisite detail.  There are some newer systems which allow for recording with a 32-bit word length, but these are, for the most part, not available at low cost to consumers.
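
The arithmetic behind those numbers is easy to check yourself.  A quick Python back-of-the-envelope for CD-quality audio (16-bit, 44.1 kHz, stereo):

    bit_depth   = 16
    sample_rate = 44_100      # samples per second
    channels    = 2

    levels = 2**bit_depth                     # 65536 distinct loudness values
    dynamic_range_db = 6.02 * bit_depth       # roughly 96 dB for 16-bit audio

    bytes_per_minute = sample_rate * (bit_depth // 8) * channels * 60
    print(levels, f"{dynamic_range_db:.0f} dB", f"{bytes_per_minute / 1e6:.1f} MB per minute")
    # 65536, 96 dB, about 10.6 MB per minute of uncompressed stereo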

I would like to add a quick word about USB.  There is a stigma, in the business, against USB audio interfaces.  Many interfaces employ connectors with higher bandwidth, like FireWire and Thunderbolt, and charge a premium for it.  It may seem logical: faster connection, better quality audio.  Hear this now: no audio interface will ever be sold with a connector that is too slow for the quality of audio it can record.  Which is to say, USB can handle 24-bit audio with a 96 kHz sample rate, no problem.  If you notice latency in your system, it comes from the digital-to-analog and analog-to-digital converters as well as the speed of your computer; latency in your recording setup has nothing to do with what connector your interface uses.  It may seem like I am beating a dead horse here, but many people believe this and it’s completely false.
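
To put some numbers behind that claim, here is the data rate of high-resolution stereo audio next to USB 2.0’s nominal bandwidth.  This is a rough comparison that ignores protocol overhead:

    bits_per_sample = 24
    sample_rate     = 96_000
    channels        = 2

    audio_mbps = bits_per_sample * sample_rate * channels / 1e6
    usb2_mbps  = 480                     # USB 2.0 nominal signaling rate

    print(f"{audio_mbps:.1f} Mbit/s of audio vs {usb2_mbps} Mbit/s of USB 2.0 bandwidth")
    # about 4.6 Mbit/s -- a tiny fraction of what the connector can carry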

One last thing before we move on to the DAW: I mentioned earlier that frequencies above half the sample rate will be perceived, by your computer, as lower frequencies.  These lower frequencies can show up in your recording and cause distortion.  This phenomenon has a name: aliasing.  Aliasing doesn’t just happen with audible frequencies; it can happen with ultrasonic sound too.  For this reason, it is often advantageous to record at higher sample rates so that these higher frequencies don’t fold back into the audible range.  Most audio interfaces allow for recording 24-bit audio at a 96 kHz sample rate.  Unless you’re worried about taking up too much space, this format sounds excellent and offers the most flexibility and sonic detail.
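If you want to see aliasing in actual numbers, here is a minimal sketch (illustrative Python; it models an ideal sampler, not any particular interface) of where an out-of-band tone lands once it folds back below the Nyquist frequency:

```
def aliased_frequency(tone_hz, sample_rate_hz):
    """Frequency (Hz) that a real tone appears at after sampling."""
    nyquist = sample_rate_hz / 2
    folded = tone_hz % sample_rate_hz          # wrap into one sampling period
    return folded if folded <= nyquist else sample_rate_hz - folded

# A 30 kHz tone recorded at 44.1 kHz folds down into the audible range...
print(aliased_frequency(30_000, 44_100))       # 14100
# ...but at 96 kHz it stays comfortably above what we can hear.
print(aliased_frequency(30_000, 96_000))       # 30000
```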

 

Digital Audio Workstation: all Out on the Table

Apple’s pro DAW software: Logic Pro X

The digital audio workstation, or DAW for short, is perhaps the most flexible element of your home-studio.  There are many many many DAW software packages out there, ranging in price and features.  For those of you looking to just get into audio recording, Audacity is a great DAW to start with.  This software is free and simple.  It offers many built-in effects and can handle the full recording capability of any audio interface which is to say, if you record something well on this simple and free software, it will sound mighty good.

Here’s the catch with many free or lower-level DAWs like Audacity or Apple’s GarageBand: they do not allow for non-destructive editing of your audio.  This is a fancy way of saying that once you make a change to your recorded audio, you might not be able to un-make it.  Higher-end DAWs like Logic Pro and Pro Tools will allow you to make all the changes you want without permanently altering your audio.  This lets you play around a lot more with your sound after it’s recorded.  More expensive DAWs also tend to come with a better-sounding set of built-in effects.  This is most noticeable with more subtle effects like reverb.

There are so many DAWs out there that it is hard to pick out a best one.  Personally, I like Logic Pro, but that’s just preference; many of the effects I use are compatible with different DAWs so I suppose I’m mostly just used to the user-interface.  My recommendation is to shop around until something catches your eye.

 

The Microphone: the Perfect Listener

Studio condenser and ribbon microphones.

The microphone, for many people, is the most fun part of recording!  They come in many shapes and sizes and color your sound more than any other component in your setup.  Two different microphones can occupy polar opposites in the sonic spectrum.

There are two common types of microphones out there: condenser and dynamic microphones.  I can get carried away with physics sometimes so I will try not to write too much about this particular topic.

Condenser microphones are a more recent invention and offer the best sound quality of any microphone.  They employ a charged parallel-plate capacitor to measure vibrations in the air.  This is a fancy way of saying that the element in the microphone which ‘hears’ the sound is extremely light and can move freely even when motivated by extremely quiet sounds.

Because of the nature of their design, condenser microphones require a small amplifier circuit built-into the microphone.  Most new condenser microphones use a transistor-based circuit in their internal amplifier but older condenser mics employed internal vacuum-tube amplifiers; these tube microphones are among some of the clearest and most detailed sounding microphones ever made.

Dynamic microphones, like condenser microphones, also come in two varieties, both emerging from different eras.  The ribbon microphone is the earlier of the two and observes sound with a thin metal ribbon suspended in a magnetic field.  These ribbon microphones are fragile but offer a warm yet detailed quality-of-sound.

The more common vibrating-coil dynamic microphone is the most durable and is used most often for live performance.  The prevalence of the vibrating-coil microphone means that ‘vibrating-coil’ is often dropped from the name (sometimes ‘dynamic’ is dropped as well); when you use the term dynamic mic, most people will assume you are referring to the vibrating-coil microphone.

With the wonders of globalization, all of these microphone types can be purchased at similar costs.  Though there is usually a small premium for condenser microphones over dynamic mics, prices can remain comfortably around $100-150 for studio-quality recording mics.  This means you can use many brushes to paint your sonic picture.  Oftentimes, dynamic microphones are used for louder instruments like snare and bass drums, guitar amplifiers, and louder vocalists.  Condenser microphones are more often used for detailed sounds like stringed instruments, cymbals, and breathier vocals.

Monitors: can You Hear It?

Studio monitors at Electrical Audio Studios, Chicago

When recording, it is important to be able to hear the sound that your system is hearing.  Most people don’t think about it, but there are many kinds of monitors out there, from the screens on our phones and computers which let us see what the computer is doing, to the viewfinder on a camera which lets us see what the camera sees.  Sound monitors are just as important.

Good monitors will reproduce sound as neutrally as possible and will only distort at very very high volumes.  These two characteristics are important for monitoring as you record, and hearing things carefully as you mix.  Mix?

Once you have recorded your sound, you may want to change it in your DAW.  Unfortunately, the computer can’t always guess what you want your effects to sound like, so you’ll need to make changes to settings and listen.  This could be as simple as changing the volume of one recorded track or it could be as complicated as correcting an offset in phase of two recorded tracks.  The art of changing the sound of your recorded tracks is called mixing.

If you are using speakers as monitors, make sure they don’t have exaggerated bass, like most consumer speakers do.  Mixing should be done without the extra bass; otherwise, someone playing back your track on ‘normal’ speakers will be underwhelmed by a thinner sound.  Sonically neutral speakers make it very easy to hear what your finished product will sound like on any system.

It’s a bit harder to do this with headphones as their proximity to your ears makes the bass more intense.  I personally like mixing on headphones because the closeness to my ear allows me to hear detail better.  If you are to mix with headphones, your headphones must have open-back speakers in them.  This means that there is no plastic shell around the back of the headphone.  With no set volume of air behind the speaker, open-back headphones can effortlessly reproduce detail, even at lower volumes.

Monitors aren’t just necessary for mixing; they also help you hear what you’re recording as you record it.  Remember when I was talking about the number of different loudness values you can have for 16-bit and 24-bit audio?  Well, when you make a sound louder than the loudest value you can record, you get digital distortion.  Digital distortion does not sound like Jimi Hendrix, it does not sound like Metallica; it sounds abrasive and harsh.  Digital distortion, unless you are creating some post-modern masterpiece, should be avoided at all costs.  Monitors, as well as the volume meters in your DAW, allow you to avoid this.  A good rule of thumb is: if it sounds like it’s distorting, it’s distorting.  Sometimes you won’t hear the distortion in your monitors; this is where the little loudness bars in your DAW software come in.  Those bad boys should never hit the top.
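As a rough illustration (a Python sketch, not any DAW’s actual code), digital clipping is simply what happens when a sample tries to exceed the largest value the word length can hold:

```
MAX_16BIT = 2**15 - 1    # 32767, the loudest value 16-bit audio can hold
MIN_16BIT = -2**15       # -32768

def clip(sample):
    """Hard-limit a sample to the 16-bit range; anything louder gets flattened."""
    return max(MIN_16BIT, min(MAX_16BIT, sample))

# A peak that "wants" to be 40000 gets squashed to 32767, turning a smooth
# waveform into a harsh, flat-topped one; that flattening is digital distortion.
print(clip(40_000))    # 32767
print(clip(-50_000))   # -32768
```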

 

A Quick Word about Formats before we Finish

These days, most music ends up as an mp3.  Convenience is important, so mp3 does have its place.  Most higher-end DAWs will allow you to export mp3 files directly.  My advice to any of you aspiring sound engineers out there is to just play around with formats.  Still, a basic outline of some common formats may be useful…

24-bit, 96 kHz: This is the best format most systems can record to.  Because of the large file sizes, audio in this format rarely leaves the DAW.  Audio of this quality is best for editing, mixing, and converting to analog formats like tape or vinyl.

16-bit, 44.1 kHz: This is the format used for CDs.  It keeps roughly a third of the raw data you can record on most systems, but it is optimized for playback by CD players and other similar devices.  Its file size also allows about 80 minutes of audio to fit on a typical CD.  Herein lies the balance between excellent sound quality and file size.

mp3, 256 kb/s: Looks a bit different, right?  The quality of an mp3 is measured in kb/s.  The higher this number, the less compressed the file is and the more space it will occupy.  The iTunes Store sells music at 256 kb/s (in the similar lossy AAC format), while streaming services often use something closer to 128 kb/s to keep bandwidth down.  You can go as high as 320 kb/s with mp3.  Either way, mp3 compression is always lossy, so you will never get an mp3 to sound quite as good as an uncompressed audio file.
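To put rough numbers on those size differences, here is a back-of-the-envelope sketch (illustrative Python; stereo audio and decimal megabytes are assumed, and a four-minute song is used as the example):

```
def uncompressed_mb(sample_rate, bit_depth, seconds, channels=2):
    """Approximate size of uncompressed PCM audio, in decimal megabytes."""
    bits = sample_rate * bit_depth * channels * seconds
    return bits / 8 / 1_000_000

def mp3_mb(kbps, seconds):
    """Approximate size of a constant-bitrate mp3, in decimal megabytes."""
    return kbps * 1000 * seconds / 8 / 1_000_000

four_minutes = 4 * 60
print(round(uncompressed_mb(96_000, 24, four_minutes), 1))   # 138.2  (24-bit / 96 kHz)
print(round(uncompressed_mb(44_100, 16, four_minutes), 1))   # 42.3   (16-bit / 44.1 kHz)
print(round(mp3_mb(256, four_minutes), 1))                   # 7.7    (mp3 at 256 kb/s)
```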

 

In Conclusion

Recording audio is one of the most fun hobbies one can adopt.  Like all new things, recording can be difficult when you first start out, but it becomes more and more fulfilling over time.  One can create entire orchestras at home now, a feat which would have been nearly impossible 20 years ago.  The world has many amazing sounds, and it is up to people messing around with microphones in bedrooms and closets to create more.

Hard Drives: How Do They Work?

What’s a HDD?

A Hard Disk Drive (HDD for short) is a type of storage commonly used as the primary storage system in both laptop and desktop computers. It functions like any other type of digital storage device by writing bits of data and then recalling them later. It is worth mentioning that an HDD is “non-volatile,” which simply means that it retains data without a source of power. This feature, coupled with their large storage capacity and relatively low cost, is the reason HDDs are used so frequently in home computers. While HDDs have come a long way from when they were first invented, the basic way they operate has stayed the same.

How does a HDD physically store info?

Inside the casing there are a series of disk-like objects referred to as “platters”.

The CPU and motherboard use software to tell the “read/write head” where to move over the platter, where it then applies a magnetic charge to a “sector” on the platter. Each sector is an isolated part of the disk containing thousands of subdivisions, each capable of accepting a magnetic charge. Newer HDDs have a sector size of 4096 bytes, or 32,768 bits; each bit’s magnetic charge translates to a binary 1 or 0 of data. Repeat this process and eventually you have a string of bits which, when read back, can give the CPU instructions, whether that means updating your operating system or opening your saved document in Microsoft Word.
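For a sense of scale, here is a trivial sketch (illustrative Python; the 1 TB figure assumes the decimal counting drive makers use) of how those sector numbers relate:

```
SECTOR_BYTES = 4096                   # "Advanced Format" sector size on newer drives
BITS_PER_SECTOR = SECTOR_BYTES * 8    # each bit is one tiny magnetic charge
print(BITS_PER_SECTOR)                # 32768

# How many sectors a 1 TB drive holds (1 TB counted as 10^12 bytes)
print(f"{1_000_000_000_000 // SECTOR_BYTES:,} sectors")   # 244,140,625 sectors
```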

As HDDs have developed, one key factor that has changed is the orientation of the magnetized regions on the platter. Hard drives were first designed for “longitudinal recording,” meaning each magnetized region lies flat, parallel to the platter’s surface. Since then, manufacturers have moved to a different method called “perpendicular recording,” where the magnetized bits are stood on end. This change was made because hard drive manufacturers were hitting a limit on how small they could make each region due to the “superparamagnetic effect.” Essentially, the superparamagnetic effect means that magnetic regions smaller than a certain size will flip their charge randomly based on temperature. This phenomenon would result in inaccurate data storage, especially given the heat that an operating hard drive emits.

One downside to Perpendicular Recording is increased sensitivity to magnetic fields and read error, creating a necessity for more accurate Read/Write arms.

How software affects how info is stored on disk:

Now that we’ve discussed the physical operation of a Hard Drive, we can look at the differences in how operating systems such as Windows, MacOS, or Linux utilize the drive. However, beforehand, it’s important we mention a common data storage issue that occurs to some degree in all of the operating systems mentioned above.

Disk Fragmentation

Disk fragmentation occurs after a period of data being stored and updated on a disk. For example, unless an update is stored directly after its base program, there’s a good chance that something else has already been stored on the disk, so the update has to be placed in a different sector farther away from the core program files. Because of the physical time it takes the read/write arm to move around, fragmentation can eventually slow down your system significantly, as the arm needs to reference more and more separate parts of your disk. Most operating systems come with a built-in program designed to “defragment” the disk, which simply rearranges the data so that all the files for one program are in one place. The process takes longer the more fragmented the disk has become. A toy model of the idea is sketched below; after that, we can discuss different storage protocols and how they affect fragmentation.
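Here is that toy model (purely illustrative Python; real file systems allocate space far more cleverly than this) of how an update ends up scattered and what a defragmenter then does:

```
# Toy model of a disk: each slot is one block, letters are files, None is free space.
disk = ["A", "A", "B", "B", "C", "C", None, None, None, None]

def append_blocks(disk, name, count):
    """Grow a file by writing into whatever free blocks exist, wherever they sit."""
    written = 0
    for i, block in enumerate(disk):
        if block is None and written < count:
            disk[i] = name
            written += 1

# Program A gets an update, but B and C were stored right after it, so the
# new A blocks land far away from the originals: that's fragmentation.
append_blocks(disk, "A", 2)
print(disk)   # ['A', 'A', 'B', 'B', 'C', 'C', 'A', 'A', None, None]

def defragment(disk):
    """Rearrange blocks so each file's blocks sit together, like a defrag tool does."""
    used = sorted(block for block in disk if block is not None)
    return used + [None] * (len(disk) - len(used))

print(defragment(disk))   # ['A', 'A', 'A', 'A', 'B', 'B', 'C', 'C', None, None]
```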

Windows:

Windows traces its lineage back to MS-DOS (the Microsoft Disk Operating System), but modern versions use a file management system called NTFS, or New Technology File System, which has been the company’s standard since 1993. When given a write instruction, an NT file system will place the information as close as possible to the beginning of the disk/platter. While this methodology is functional, it only leaves a small buffer zone between different files, eventually causing fragmentation to occur. Due to the small size of this buffer zone, Windows tends to be the most susceptible to fragmentation.

Mac OSX:

OS X and Linux are both Unix-based operating systems; however, their file systems are different. Mac uses the HFS+ (Hierarchical File System Plus) protocol, which replaced the old HFS method. HFS+ differs in that it can handle a larger amount of data at a given time, being 32-bit rather than 16-bit. Mac OS X doesn’t need a dedicated defragmentation tool the way Windows does; OS X avoids the issue by not immediately reusing space on the HDD that has recently been freed up (by deleting a file, for example) and instead searching the disk for larger free regions to store new data. Doing so leaves older files more nearby space for their updates. HFS+ also has a built-in feature called HFC, or Hot File adaptive Clustering, which relocates frequently accessed data to special sectors on the disk called the “Hot Zone” in order to speed up performance. This process, however, can only take place if the drive is less than 90% full; otherwise, issues in reallocation occur. These processes coupled together make fragmentation a non-issue for Mac users.

Linux:

Linux is an open-source operating system, which means that there are many different versions of it, called distributions, built for different applications. The most common distributions, such as Ubuntu, use the ext4 file system. Linux arguably has the best solution to fragmentation, as it spreads files out all over the disk, giving each plenty of room to grow without interfering with the others. In the event that a file needs more space, the operating system will automatically try to move the files around it to give it more room. Especially given the capacity of most modern hard drives, this methodology is not wasteful, and it results in essentially no fragmentation on Linux until the disk is above roughly 85% capacity.

What’s an SSD? How is it Different to a HDD?

In recent years, a new technology has become available on the consumer market which replaces HDDs and the problems they come with. Solid State Drives (SSDs) are another kind of non-volatile memory; they simply store the presence or absence of an electrical charge in tiny flash-memory cells. As a result, SSDs are much faster than HDDs: there are no moving parts, and therefore no time spent moving a read/write arm around. Having no moving parts also increases reliability immensely. Solid state drives do have a few downsides, however. Unlike with hard drives, it is difficult to tell when a solid state drive is failing. Hard drives will slow down over time or, in extreme cases, make an audible clicking that signifies the arm is hitting the platter (in which case your data is most likely gone), while solid state drives will simply fail without any noticeable warning. Therefore, we must rely on software such as “Samsung Magician,” which ships with Samsung’s solid state drives. The tool works by writing a piece of data to the drive, reading it back, and checking how fast it can do this. If the write speed falls below a certain threshold, the software warns the user that their solid state drive is beginning to fail.
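The same idea, time a small test write and compare it against a healthy baseline, can be sketched in a few lines (illustrative Python only; this is not how Samsung Magician is actually implemented, and the threshold and file name below are invented for the example):

```
import os
import time

THRESHOLD_MB_S = 100   # invented baseline; a real tool would calibrate this per drive model

def write_speed_mb_s(path="ssd_test.bin", size_mb=64):
    """Time a sequential write of size_mb megabytes and return the speed in MB/s."""
    chunk = os.urandom(1024 * 1024)            # 1 MB of random data
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())                   # make sure the data really hits the drive
    elapsed = time.perf_counter() - start
    os.remove(path)
    return size_mb / elapsed

speed = write_speed_mb_s()
if speed < THRESHOLD_MB_S:
    print(f"Warning: write speed {speed:.0f} MB/s is below the healthy baseline")
else:
    print(f"Write speed looks fine: {speed:.0f} MB/s")
```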

Do Solid States Fragment Too?

While the process of data piling up and files for one program ending up in different places still happens, it doesn’t hurt performance on a solid state drive, since there is no read/write arm that has to move back and forth between different sectors. Fragmentation does not decrease performance the way it does with hard drives, but it does affect the life of the drive: defragmenting adds extra write cycles, and extra write cycles wear out the flash cells, so defragmentation is avoided for the most part given its small benefit. That being said, a file system on a solid state drive can still reach a point where defragmentation is necessary. It would be reasonable for a hard drive to be defragmented automatically every day or week, while a solid state drive might require only a few defragmentations, if any, throughout its lifetime.

Wearable Technology

2016 has given us a lot of exciting new technologies to experiment with and be excited for. As time goes by technology is becoming more and more integrated into our every day lives and it does not seem like we will be stopping anytime soon. Here are some highlights from the past year and some amazing things we can expect to get our hands on in the years to come.

Contact Lenses

That’s right, we’re adding electronic capabilities to the little circles in your eyes. We’ve seen Google Glass, but this goes to a whole other level. Developers are already working on making lenses that can measure your blood sugar, improve your vision and even display images directly on your eye! Imagine watching a movie that only you can see, because it’s inside your face!

Kokoon

Kokoon started out as a Kickstarter that raised over 2 million dollars to fund its sleep-sensing headphones. It is the first of its kind, able to help you fall asleep and detect when you have fallen asleep so it can adjust your audio in real time. It’s an insomniac’s dream! You can find more information on the Kokoon here: http://kokoon.io/

Nuzzle

Nuzzle is a pet collar with built-in GPS tracking to keep your pet safe in case it gets lost. But it does more than that. Using the collar’s companion app, you can monitor your dog’s activity and view wellness statistics. Check it out: http://hellonuzzle.com/

Hearables

Your ears are the perfect place to measure all sorts of important stuff about your body such as your temperature and heart rate. Many companies are working on earbuds that can sit in your ear and keep statistics on these things in real time. This type of technology could save lives, as it could possibly alert you about a heart attack before your heart even knows it.

Tattoos

Thought it couldn’t get crazier than electronic contacts? Think again. Companies like Chaotic Moon and New Deal Design are working on temporary tattoos that can use the electric currents on the surface of your skin to power themselves and do all kinds of weird things, including opening doors. Whether or not these will be as painful as normal tattoos is still a mystery, but we hope not!

VR

Virtual Reality headsets have been around for a while now, but they represent the ultimate form of wearable technology. These headsets are not mainstream yet and are definitely not perfected, but we can expect to get much better access to them within the next couple of years.

Other impressive types of wearable tech have been greatly improved on this year such as smart watches and athletic clothing. We’re even seeing research done on Smart Houses, which can be controlled completely with your Smart Phone, and holographic image displays that don’t require a screen. The future of wearable technology is more exciting than ever, so get your hands on whatever you can and dress to impress!

A Fundamental Problem I See with the Nintendo Switch

Nintendo’s shiny new console will launch on March 3rd…or wait, no…Nintendo’s shiny new handheld will launch on March 3rd…Wait…hold on a second…what exactly do you call it?

The Nintendo Switch is something new and fresh that is really just an iteration on something we’ve already seen before.

In 2012, the Wii U, widely regarded as a commercial flop, operated on the concept that you could play video games at home with two screens rather than one. The controller was a glorified tablet that you couldn’t use as a portable system. At most, if your grandparents wanted to use the television to watch Deal or No Deal, you could take the tablet into the other room and stream the gameplay to its display.

Two months later, Nvidia took this concept further with the Nvidia Shield Portable. The system was essentially a bulky Xbox 360 controller with a screen you could stream your games to from your gaming PC. The system also allowed you to download light games from the Google Play store, so while it wasn’t meant to be treated as a handheld, it could be used as one if you really wanted to.

Then, a full year after the release of the Wii U, Sony came out with the PlayStation 4. If you owned a PlayStation Vita from 2011, you could stream your games from your console to your Vita. Not only would this work locally, but you could also do it remotely over the internet. So what you had was a handheld that could also play your PS4 library from anywhere with a strong internet connection. This became a largely unused feature as Sony gave up trying to compete with the 3DS. As of right now, Sony is trying to bring this streaming ability to other devices, such as phones and tablets.

And now we have the Nintendo Switch. Rather than make a system that can stream to a handheld, Nintendo decided to just create a system that can be both. Being both a handheld and a console might seem like a new direction when in reality I’d like to think it’s more akin to moving in two directions at once. The Wii U was a dedicated console with an optional function to allow family to take the TV from you, the Nvidia Shield Portable was an accessory that allowed you to play your PC around the house, and the PlayStation Vita was a handheld that had the ability to connect to a console to let you play games anywhere you want. None of these devices were both a console and a handheld at once, and by trying to be both, I think Nintendo might be setting themselves up for problems down the road.

Remember the Wii? In 2006, the Wii was that hot new item that every family needed to have. I still remember playing Wii bowling with my sisters and parents every day for a solid month after we got it for Christmas. It was a family entertainment system, and while you could buy some single player games for it, the only time I ever see the Wii getting used anymore is with the latest Just Dance at my Aunt’s house during family get-togethers. Nobody really played single player games on it, and while that might have a lot to do with the lack of stellar “hardcore” titles, I think it has more to do with Nintendo’s mindset at the time. Nintendo is a family friendly company, and gearing their system towards inclusive party games makes sense.

Nintendo also has their line of 3DS portable systems. The 3DS isn’t a family system; everyone is meant to have their own individual devices. It’s very personal in this sense; rather than having everyone gather around a single 3DS to play party games on, everyone brings their own. Are you starting to see what I’m getting at here?

 

Nintendo is trying to appeal to both the whole family and create a portable experience for a single member of the family. I remember unboxing the Wii for Christmas with my sisters. The Wii wasn’t a gift from my parents to me; it was a gift for the whole family. I also remember getting my 3DS for Christmas, and that gift had my name on it and my name alone. Now, imagine playing Monster Hunter on your 3DS when suddenly your sisters ask you to hand it over so they can play Just Dance. Imagine having a long, loud fight with your brother over who gets to bring the 3DS to school today because you both have friends you want to play with at lunch. Just substitute 3DS with Nintendo Switch, and you’ll understand why I think the Switch has some trouble on the horizon.

You might argue that if you’re a college student who doesn’t have family around to steal the Switch away, this shouldn’t be a problem. While that might be true, remember that Nintendo’s target demographic is, and always has been, the family. Unless they suddenly decide to target the hardcore demographic, which it doesn’t look like they’re planning on doing, Nintendo’s shiny new console/handheld will probably tear the family apart more than it brings them together. When you’re moving in two directions at once, you’re bound to split in half.

 

Organic Light-Emitting Diode Displays

The screen you’re reading this on is most likely a Twisted Nematic, or TN for short, panel. TN screens are the most ubiquitous and the oldest type still in common use today. TN panels tend to be cheap to produce but have terrible viewing angles, with colors quickly becoming distorted when viewed off-axis. These panels do, however, generally have low power draw and the ability to produce high frame rates, which makes them a popular choice for laptops and gaming screens respectively.

If you’re viewing this on a higher-quality screen, or on a computer or phone where you’ve spent more than the average price tag, you probably have an In-Plane Switching (IPS) display. These panels offer a wider range of accurate and vibrant colors, and maintain them more consistently at an angle, making them a good choice for viewing photos or sharing images and videos with friends all watching on one screen.

However, both of these screen technologies share the same inherent disadvantages. Both function similarly, relying on a backlight shining through the panel to produce the image. The backlight takes up valuable space, adds weight, and can make displaying certain ranges of colors less efficient.

In come Organic Light-Emitting Diode displays, or OLED for short. Working without a backlight, OLED displays can light up each pixel in the array individually, creating richer colors and a more vibrant image. For example, to display the color black, the pixel in question simply does not turn on at all, creating a much deeper black (instead of a backlit gray). Not only can OLED displays be smaller, they can also be more power efficient when showing darker colors and blacks, since those pixels don’t have to be on at all. Additionally, OLED displays are thinner, have better viewing angles, and have a better response time than any LCD panel.

OLED panels aren’t quite where we want them yet, though, as manufacturers are still working out problems. OLED panels are very expensive because only a handful of manufacturers produce them. Once more manufacturers see a future in OLED panels, manufacturing prices will go down as companies invest in the materials and machinery needed to produce them. The other issue is battery life, this time in a negative sense. When displaying images that are mostly black, OLED panels are incredibly power efficient. But with screens that are all white, which require the most power to produce, OLED panels can use up to twice as much power as a comparable LCD screen. Finally, OLED panels have significant problems with longevity: issues such as ghosting, burn-in, and inconsistency in displaying a given brightness all appear as the panels age.

Overall, OLED panels look like the future of displays. They have several advantages over current LCD panels such as TN or IPS displays, but as a relatively new technology, there are many bugs that still must be worked out. Laptops such as the ThinkPad X1 Yoga, HP Spectre x360, and Dell Alienware 15 all offer OLED options; there are also a few TVs available with such panels, and the Apple Watch and the Touch Bar on the new MacBook Pro feature OLED components as well. So as OLED panels become more ubiquitous, you may want to think about spending the extra cash to include one in your newest gadget and enjoy its advantages.

Bluetooth Headphones: Are you ready to go wireless?

The time has finally come, and Apple has removed the 3.5mm jack from its newest line of iPhones entirely. While this will lead to a new generation of Lightning-connector headphones, it will also considerably increase the popularity of Bluetooth headphones. Like the electric car and alternative forms of energy, Bluetooth headphones are something that everyone’s going to have to accept eventually, but that’s not such a bad thing. Over the past few years Bluetooth headphones have gotten cheaper, better sounding, and all around more feasible for the average consumer. With the advent of Bluetooth 4.2, the capacity is there for high-fidelity audio streaming. Think about it: as college students we spend a lot of our time walking around (especially on our 1,463-acre campus). Nothing is more annoying than having your headphone cable caught on clothing, creating cable noise, or getting disconnected altogether. There are many different form factors of Bluetooth headphones to fit any lifestyle and price point. Here are a few choices for a variety of users.

Are you an athlete? Consider the Jaybird Bluebuds X

These around-the-neck IEMs provide incredible sound quality and have supports to keep them in your ears whether you’re biking, running, or working out. Workout getting too intense and you’re worried about your headphones? Don’t sweat it! The Bluebuds are totally waterproof, with a lifetime warranty if anything does happen.

Looking for portable Bluetooth on a budget? The Photive BTH3 is for you

Well reviewed online, these $45 headphones provide a comfortable fit and a surprisingly good sound signature. It’s tough to find good wired headphones at that price, yet the BTH3s sound great with the added bonus of wireless connectivity and hands-free calling. When you’re not using them, they fold flat and fit into an included hard case so they can go into your bag safely.

High-performance import at a middle-of-the-road price.
Full disclosure: these are my headphones of choice. At double the price of the previous option, and around a quarter of the price of the Beats Studio Wireless, we find these over-ear Bluetooth headphones from the makers of the famous ATH-M50. With a light build, comfortable ear cups, and amazing sound quality, these headphones take the cake for price-to-performance in the ~$100 range.


Have more money than you know what to do with? Have I got an option for you.

What you see here are the V-MODA Crossfade Wireless headphones, and they come in at a wallet-squeezing $300 MSRP. With beautiful industrial design and military-grade materials, they’re an easy choice over the more popular Apple wireless headphone offerings. Like other headphones in the V-MODA line, these are bass-oriented, but the overall sound signature is great for on-the-go listening.

Today’s Virtual Reality Headsets

The world of Virtual Reality has had a dramatic increase in popularity in recent years. The technology that people have been waiting for has finally arrived and it comes in the form of a head-mounted display (HMD). There are many brands of HMD which range in their ability to achieve total immersion. The low-end forms of VR use a smartphone and a pair of lenses, like Google’s Cardboard:

The Google Cardboard costs $15 and is about the cheapest form of VR you can find, assuming you already own a compatible smartphone.

The cheapest versions of VR all use this same lens-enclosure method of delivering VR. Users are limited to the apps they can find in their phone’s app store, which are buggy at best. Still, if you’re unsure whether or not you want to buy a more immersive HMD, this is a great way to get an idea of what you’ll be buying. The real immersion begins when the display and the technology inside are specifically designed for VR gaming.

The best VR experience while still keeping your wallet happy is from Samsung Gear VR, but it requires that you already own a recent Samsung Galaxy smartphone:

Samsung Gear VR

At $60, the Samsung Gear VR has more intricate technology than the Google Cardboard, allowing for a better experience. You could also add the Gear 360, which allows for “walk around the room” immersion, for $350, but if you find that price point reasonable you may be better off in high-end territory. The Gear VR has its own app store with games designed for use with it.

If you don’t have a Galaxy smartphone but you do have a PlayStation, you may be interested in what Sony has been working on. Their VR HMD is the PlayStation VR. At $400, the PSVR connects to your PlayStation for use with VR-enabled games. The PSVR is meant to be used with the PlayStation Move controllers, which will add another $100 to your total. A Sony executive has said that plans to make the PSVR compatible with PCs may be in their future.

The high-end forms of VR include the Oculus Rift and HTC Vive:

HTC Vive

These HMDs are designed with PC games in mind. They provide an experience far superior to the cheap options but will run at a high price of $599 for the Rift and $799 for the Vive. The Vive includes two hand controllers which allow the user to have virtual hands for interacting with VR objects. Oculus is working on a similar device, the Oculus Touch, which is available for pre-order as of October 2016.

Oculus Rift

Many companies are investing in virtual reality and creating their own devices to compete with the front-runners. It is expected that the VR market will expand much further, especially once the price of the high-end HMDs comes down. Virtual Reality is in a state of great potential; the applications of these headsets go well beyond gaming. The military is interested in them for training purposes. Educators can use them to teach students. Doctors can use them to treat psychological conditions. I have no doubt that Virtual Reality will eventually become part of our everyday lives.

Comparing Samsung and Apple Cameras

This year, Samsung and Apple both released a new generation of devices. If you don’t have a particular operating system preference and photography is your thing, then this article is for you.

The Samsung Galaxy S7 and S7 edge both have the same cameras with the following specifications: Dual Pixel Auto Focus 12 MP rear camera, F1.7 aperture, UHD 4K (3840 x 2160) video recording at 30fps, and a flash on the rear camera.

Dual Pixel Auto Focus was introduced on smartphones for the first time with these Samsung Galaxy devices. Every pixel in the camera’s sensor is used for both phase-detection autofocus and sensing light, whereas in previous smartphone cameras only a small fraction of the pixels were used for phase detection and autofocus.

Aperture is the opening of the lens, and it is measured in F-stops. A smaller F-stop number means a larger opening in the lens, and a larger F-stop number means a smaller opening. With an aperture of F1.7, the 7th-generation Galaxy devices have the largest aperture on any smartphone. This lets the camera take in more light, resulting in better low-light photos.

The rear camera on the Samsung devices records in 4K resolution, which is the resolution that newer consumer TVs display.

This information about Samsung devices and any further specifications can be found on their website at http://www.samsung.com/us/

Unlike the seventh-generation Galaxy devices, which share the same camera, the iPhone 7 and 7 Plus have slightly different camera features, though they also share many similarities.

The iPhone 7 camera boasts the following features; for ease of comparison, the ones most directly comparable to the Samsung Galaxy devices are listed first.

12 MP rear camera with F1.8 aperture
Digital zoom up to 5x
Optical image stabilization
Six‑element lens
Panorama (up to 63 megapixels)
Sapphire crystal lens cover
Backside illumination sensor
Hybrid IR filter
Autofocus with Focus Pixels
Tap to focus with Focus Pixels
Live Photos with stabilization
Wide color capture for photos and Live Photos
Improved local tone mapping
Body and face detection
Exposure control
Noise reduction
Auto HDR for photos
Auto image stabilization
Burst mode
Timer mode
Photo geotagging
Video Recording
4K video recording at 30 fps
1080p HD video recording at 30 fps or 60 fps
720p HD video recording at 30 fps
Optical image stabilization for video
Quad-LED True Tone flash
Slo‑mo video support for 1080p at 120 fps and 720p at 240 fps
Time‑lapse video with stabilization
Cinematic video stabilization (1080p and 720p)
Continuous autofocus video
Body and face detection
Noise reduction
Take 8-megapixel still photos while recording 4K video
Playback zoom
Video geotagging

In addition to these features, the iPhone 7 Plus also features a telephoto lens with an F2.8 aperture. 2x optical zoom and digital zoom up to 10x are also available.

The iPhone’s F1.8 lens is a slightly smaller aperture than the 7th-generation Samsung devices’ F1.7, but it is a very small difference. The additional telephoto lens and optical zoom on the iPhone 7 Plus make it capable of taking better pictures at a distance.
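If you want to quantify “very small”: the light an aperture gathers scales roughly with the inverse square of the F-stop, so a quick estimate (illustrative Python; real results also depend on sensor and lens design) looks like this:

```
def relative_light(f_stop_a, f_stop_b):
    """Rough light-gathering advantage of aperture A over aperture B (ignores lens and sensor differences)."""
    return (f_stop_b / f_stop_a) ** 2

# The Galaxy S7's F1.7 lens versus the iPhone 7's F1.8 lens:
print(round(relative_light(1.7, 1.8), 3))   # 1.121, i.e. roughly 12% more light
```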

This information about Apple devices and any further specifications can be found on their website at http://www.apple.com

Digital and optical zoom both accomplish the same job; they just do it in different ways. Optical zoom is based on the lens itself: different parts of the lens physically move to zoom and focus, which is why smartphone cameras have limited optical zoom. Digital zoom is entirely computational, very similar to zooming in on an image you might find on Google, and it is handled by the processing unit.

Overall, both manufacturers make very capable cameras. The information is available on their websites and here for you to compare. For me, the decision would ultimately come down to operating system preferences and preference of user interface.

Should you get a Surface or a MacBook Air?

The Surface Book

The MacBook Air

When it comes to portable devices aimed at a college-going audience, not many products can really compare to the sleek and powerful MacBook Air and Surface computers, each fulfilling a similar role in the lineups of Apple and Microsoft respectively.

While both computers are excellent, they’re quite difficult to choose between. Both are offered at similar sub two-thousand-dollar price points, and both are designed with portability and aesthetics as the major goals of the devices. However, there are a number of key differences which can be highlighted that can help to make the decision when purchasing one of these machines.

Interface and Form Factor

The form factors of each device are strikingly different, with some variation depending on the specific model purchased. The MacBook Air comes in both 11 and 13-inch variants, with the 13 inch boasting some spec increases to boot. Surfaces, however, are a little more varied. If you’re looking for the newest devices on the market (which I would personally recommend), you’re essentially deciding between the Surface Pro 4 and the Surface Book.

While the Surface Pro 4 is essentially a tablet computer with an optional attachable keyboard, much like an iPad, the Surface Book is much more of a dedicated laptop-style device. Many people will prefer this style, as the more robust keyboard makes typing a much more pleasant experience, yet the simplicity of the tablet experience might draw some to choose the Surface Pro 4 instead. Each device comes in at a similar size, with the Surface Pro 4 having a slightly smaller 12.3-inch screen compared to the Surface Book’s 13.5-inch.

Either way, both Surface devices present one striking difference in terms of the interface: a touch screen. A touch screen is a valuable tool that increases ease of use and productivity for many people, especially in environments where a stable desk is unavailable. Furthermore, each device comes with a stylus for the touch screen, useful for things such as drawing diagrams and signing documents conveniently.

The difference between the Surface and the MacBook Air essentially boils down to what you’re looking for. If you want the more traditional laptop experience, sacrificing the utility of a touch screen in exchange for a slightly more portable device, the MacBook Air may be what you’re interested in. However, if a tablet-style hybrid device is more your style (with the Surface Pro 4 leaning much more toward the tablet side than the Surface Book), Surface devices may be worth looking into. Either way, you’re getting an excellent portable workstation to fit whatever needs you may have.

Specs

When it comes to internal hardware, the Apple and Microsoft options are surprisingly similar. Both the MacBook Air and the Surface can be configured with a variety of processors: the MacBook allows either an i5 or a much beefier i7, while the Surface Pro 4 also offers less powerful Core m3 and i3 processors. The Surface Book, however, is locked to the previously mentioned i5 and i7, just like the MacBook.

For general use, an i5 is really all the average person needs. However, if you plan on doing any sort of gaming on these machines (which is not recommended, due to the lack of a dedicated graphics card in any of the machines, the only exception being the much higher-end Surface Books), an i7 could be worth the extra money.

Basically, the m3 and i3 are basic processors capable of doing most anything the average user would need, perhaps lagging behind a bit when it comes to multitasking. The i5 is a much more capable chip for that, and if you really need the extra juice, the i7 will certainly get the job done.

Memory and storage are another important aspect of these devices. The MacBook Air can be configured to have up to 512gb of extremely speedy flash-based storage, as well as up to 8gb of internal memory. Unless you’re someone who has literally thousands of photos on their computer, this should definitely be enough for the average user in terms of storage. Furthermore, 8gb of memory should definitely be enough, and will only ever begin to slow you down in the most demanding of multitasking scenarios, such as rendering video for an editing project.

Both surface devices have very similar configurations, with the Surface Pro 4 ranging from 4gb of memory to 16gb, while the Surface Book is locked at either 8gb or 16gb. Internal storage is pretty much the same story; the Surface Pro 4 can handle up to 256gb of storage (half that of the MacBook), while the Surface Book can take an impressive 1tb of the same flash based storage as the MacBook.

What this boils down to is that, depending on how much you need, the Surface Book could be your best option for mass storage. If 8gb of memory just isn’t enough for you, and you have over 500gb of files that you need stored, the high configurations of the Surface Book may just be your only option, as the MacBook Air only has a few options.

However, for most people, I would say that each device is about equivalent in terms of storage and memory. I wouldn’t let this bother you too much when picking your device, as external drives are always a way to expand storage, and more than 8gb of memory really isn’t necessary for most users.

Price

To conclude, there’s one more category of discussion that needs to be touched upon: price.

Both the Surface and the MacBook Air are devices which you can get for under 2000 dollars, with the Surface Pro 4 and MacBook Air both being available (at minimum conditions) for just under 1000.

MacBook Airs range from about 900 dollars for a minimum configuration 11-inch model, all the way up to 1200 dollars for a 13-inch model armed with 8gb of memory, 512gb of storage and a powerful i7 processor.

Surfaces, however, range quite a bit. You can get yourself a minimum configuration Surface Pro 4 for about 900 dollars, just like the MacBook, with the only difference being that the Surface Pro 4 is configurable to up to an 1800-dollar machine.

If you’re interested in a Surface Book, expect to pay about 1200 dollars for the cheapest configuration, with its options ranging up to a shocking 3000 dollars for the model with a 1tb solid state drive built into the machine.

Whichever device you get, they all fulfill the same basic role: a sleek, powerful, portable device with productivity in mind. If I were buying, I’d either go for the 1200-dollar MacBook Air configured with an i7 processor and 8gb of memory, or the 1200-dollar Surface Book. While this Surface Book configuration does require you to settle for an i5 instead of an i7, the addition of a touch screen and stylus definitely wins back the lost value.

Disproving Einstein: the Phenomenon of Quantum Entanglement and Implications of Quantum Computing

Albert Einstein famously disparaged quantum entanglement as “spooky action at a distance,” because the idea that two particles separated by light-years could become “entangled” and instantaneously affect one another ran counter to classical physics and intuitive reasoning. All fundamental particles have a property called spin, an intrinsic angular momentum with an orientation in space. When spin is measured, either the measurement direction is aligned with the spin of the particle (classified as spin up) or the measurement is opposite the spin of the particle (classified as spin down). If the particle’s spin is vertical but we measure it horizontally, the result is a 50/50 chance of being measured spin up or spin down. Likewise, different angles produce different probabilities of obtaining spin up or spin down. The total angular momentum of the universe must stay constant, so entangled particles must have opposite spins when measured in the same direction. Einstein’s theory of relativity was centered on the idea that nothing can move faster than the speed of light, yet somehow these particles appeared to be communicating instantaneously to ensure opposite spins. He surmised that all particles were created with a definite spin regardless of the direction they were measured in, but this theory proved to be wrong. Quantum entanglement is not science fiction; it is a real phenomenon which will fundamentally shape the future of teleportation and computing.
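For the curious, the “different angles produce different probabilities” part has a simple closed form for a spin-1/2 particle such as an electron: the chance of measuring spin up at an angle θ away from the prepared spin axis is cos²(θ/2). A tiny illustrative sketch in Python:

```
import math

def p_spin_up(angle_degrees):
    """Probability of measuring 'spin up' at an angle away from the prepared spin axis."""
    theta = math.radians(angle_degrees)
    return math.cos(theta / 2) ** 2

print(p_spin_up(0))      # 1.0   (measurement aligned with the spin)
print(p_spin_up(90))     # ~0.5  (the 50/50 case described above)
print(p_spin_up(180))    # ~0    (measurement opposite the spin)
```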

An Intro to Mechanical Keyboards

What is a “mechanical” keyboard and what is different about it that sets it apart from the $10 keyboard that you’ve been using? How are different mechanical keyboards different? Should you buy one? Great questions, with somewhat tricky answers.

What makes a keyboard “mechanical”?

Most keyboards you encounter nowadays are rubber-dome or membrane keyboards. The membrane is underneath each key, so when you press the key down, the membrane depresses and makes contact with another membrane on the base of the keyboard. When these membranes contact, the keyboard gets a signal that a key has been pressed and sends that information to the computer.

Now, the difference between that and a mechanical keyboard is that instead of a membrane being depressed, each key on a mechanical keyboard depresses a physical switch, and when that switch is pressed, a signal gets sent to the computer.

The main difference between these types of keyboards, as you can tell, is the physical switch being depressed vs. the membranes contacting each other that tells the computer when a key has been pressed.

For the most part, nearly all rubber-dome keyboards feel the same and give little tactile feedback; that is, you don’t know exactly how hard you have to press a key for it to register on your computer. With mechanical keyboards, there are different mechanical key switches that all feel different and give different levels of tactile feedback. When you feel the tactile feedback on a mechanical keyboard, you know you’ve registered a keypress on the computer.

Cherry MX mechanical switches:

Nearly all mechanical keyboards use switches made by Cherry, and they are typically denoted by the color of the switch. The most common switches are Blue, Green, Brown, Clear, Black, and Red. Switches have different levels of force, measured in grams (g), needed to depress the key, as well as different levels of tactile feedback that they give. Some switches give strong tactile and audible feedback for keypresses, while others give almost none unless the key is pressed all the way in.

Cherry MX Blue (Tactile Click)

If you’re an old-school computer user, MX Blue switches may remind you of the clicky keyboards of the 1980s. The Blue switch has both strong tactile feedback and a loud “click” when you activate the key, making it quite a popular choice for typists; however, the loud clickiness makes it somewhat of a nuisance in shared workspaces. It has an actuation force of 50g, making it a somewhat stiff switch.

Cherry MX Green (Tactile Click)

Green switches are very similar to Blue switches, but have a much higher actuation force, sitting at 70g. This makes them much stiffer than blue switches. Greens still have the loud click and tactile feedback similar to blues.

 

Cherry MX Brown (Tactile Bump)

The MX Brown switches have softer tactile feedback than MX Blue switches, and no loud click. With the tactile feedback and no loud click, they are often considered a middle ground between the Blue switches and the Black switches, and provide an option suited to both typing and gaming. Brown switches have an actuation force of 45g, making them one of the lighter switches.

Cherry MX Clear (Tactile Bump)

MX Clear switches are similar to Brown switches, with a stronger actuation force (65g) and a slightly stronger tactile bump. Again, these are good middle-ground switches for both gaming and typing, and are a good choice if you like a stiffer key.

 

 

Cherry MX Black (Linear)

A big difference between the tactile switches mentioned above and linear switches such as the Black and Red switches is that with linear switches, there is no tactile feedback until the key is pressed all the way down (called “bottoming out”). For all of the other switches so far, you have tactile feedback telling you when your keypress registers on the computer. With Black and Red switches, however, a keypress can register without any tactile feedback.
Black switches have a high actuation force of 60g, making stray keypresses less likely. Black switches are commonly used by gamers who need accurate keypresses.

Cherry MX Red (Linear)

MX Red switches are very similar to Black switches, but with a lower actuation force, sitting at 45g. These switches are smooth all the way down, with no tactile bump or click other than when they bottom out. They are commonly used by gamers who need fast, rapid keypresses.

 

Should you switch to a mechanical keyboard?

Mechanical keyboards are quality products that last longer than normal membrane or rubber-dome keyboards, and the build quality is reflected in the price. Many keyboards will run you upwards of $100, but for most people, that price is well justified. So, should you get one? The answer to that question really depends on your personal preference and personal experience. Reading about all these different switches really means nothing until you try typing on a mechanical keyboard. There is a huge difference between looking at moving pictures about what the switches do and actually feeling what it’s like to type or game on one. The bottom line is, go somewhere you can try out different keyboards with different switches, and see which one you like. Everybody’s preferences are different when it comes to typing, and certain keyboards may fit yours better than others.

DDR4 Memory vs. DDR3

It’s new! It’s fast! It’s 4!

DDR4 memory has been on the market for some time now and looks to be a permanent successor to DDR3L and older DDR3 memory.  However, other than being one more DDR than DDR3, what is the difference between the old and the new?  There are a few key differences, some good and some bad.  This article will give a broad overview of the costs and benefits of DDR4 memory.

 

Increased Clock Speed

Every component in your computer has to have a clock; otherwise the seemingly endless sequences of ones and zeros would become jumbled and basic logic functions would be impossible.  Memory, though it does not perform any logical work on the data living in its vast arrays of cells, still uses a clock to govern the rate at which that data is read, written, and refreshed.  With faster clock speeds, DDR4 can be written to, and read from, far faster than DDR3 or DDR3L.  This is a big advantage for those using blazing-fast processors that are otherwise held back by the speed at which they can read and write to memory.  However, users looking to purchase a laptop with DDR4 memory may not experience any noticeable speed increase over DDR3.

 

Lower Power Consumption 

With new versions of computer components, manufacturers are often able to boast about improved power efficiency, and with DDR4 this is the case!  Older DDR3 memory sticks require about 1.5 volts to run; new DDR4 memory sticks run off about 1.2 volts.  That may not seem like a big drop, but whatever power the memory draws beyond what it strictly needs ends up as heat.  Anyone who has spent a few hours playing video games on a laptop knows just what excess power consumption feels like when it’s going into one’s legs.  A hot computer doesn’t just cause mild discomfort; transistors are impeded in their ability to switch electric current on and off when they get hot.  That means less ability to perform the mathematical functions at the base of all computing and, therefore, a slower computer!  Less power consumption means less excess heat, a cooler machine, and a faster computing experience.
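To put a hedged number on that voltage drop: dynamic power in CMOS circuits scales roughly with the square of the supply voltage, so the move from 1.5 V to 1.2 V trims that slice of the power budget by about a third (a back-of-the-envelope Python sketch; real savings also depend on clock speed, workload, and leakage):

```
def relative_dynamic_power(v_new, v_old):
    """Rough ratio of dynamic power at two supply voltages (P scales with V squared)."""
    return (v_new / v_old) ** 2

ratio = relative_dynamic_power(1.2, 1.5)
print(f"DDR4 at 1.2 V draws roughly {ratio:.0%} of the dynamic power of DDR3 at 1.5 V")
print(f"That is about {1 - ratio:.0%} less, before counting leakage or clock differences")
```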

 

Higher Cost

A popular reading of Moore’s Law has two components: computing power doubles roughly every 18 months, and the cost of existing computer components halves in the same amount of time.  Sometimes we reap the benefit of the halved cost; sometimes we don’t.  At the moment, purchasing DDR4 memory for a new computer is a costly endeavor.  DDR4 can work more than twice as fast as DDR3, but there is a considerable price premium.  This should be taken into consideration when choosing whether or not to make the leap to DDR4: is the improved speed and efficiency worth the price?  That question lies well beyond the scope of this humble article.

FURTHER READING – Corsair’s performance comparison using Intel Skylake CPU

 

NO Backwards-Compatibility

With modern computers, we enjoy an unprecedented level of flexibility.  Computers now are more modular than ever.  However, just because different components fit together without being modified does not mean they will work together.  With only a few exceptions, DDR4 requires brand new, top-of-the-line components to work.  This means that if you want to purchase or build a computer using the fast, new memory, you need a fast, new CPU and a fast, new motherboard.  For those of you out there who have no interest in building a computer, you will be paying up front for a laptop or desktop fitted with the latest version of everything.  This further increases the cost of purchasing a machine fitted with DDR4 memory.

 

So What’s the Deal?

With all new things, there are costs and benefits.  With DDR4, yes, you will experience faster read and write speeds and overall faster computing, but it will come at a cost.  For people who use their computers for browsing the internet and word processing, there will be little noticeable difference.  However, for avid users of applications such as Photoshop and Final Cut Pro, DDR4 can yield a substantial speed increase.

Ultimately, it is up to the user whether or not they want to take the leap into the new realm of faster read and write speeds.  Yes, you will get to have a blazing-fast computer that you can brag to your friends about, but it will come at a cost.  You also run the risk of spending more money without gaining much speed if you mostly use memory-light programs.  However, if you are like this humble IT guy, and spend much of your time editing video and photos and want a computer that is not going to start hissing when you open Photoshop, then DDR4 is the memory for you!

Your computer won’t boot…now what?

You finally sat down to start that paper you’ve been putting off, hit the power button on your laptop and nothing but a folder with a question mark shows up. Or maybe you just got back from the library and just want a relaxing afternoon online. However, when you wake up your computer, all you see is a black screen and text reading “Boot device not found.”

When your computer won't boot, there are a few diagnostic tests you can run to determine what is causing the problem, and they vary depending on what kind of computer you have. For all manufacturers, the first step is determining whether or not the computer turns on at all. With laptops, check whether any lights come on. If nothing happens, make sure the battery is seated correctly and plug the laptop into its power adapter (be sure to use a known-good wall outlet). If none of these steps help, the most likely cause is a failure of the main logic board.

If the computer doesn't turn on at all, there is usually some kind of power failure. It could be as simple as a dead battery, which can be solved by charging the laptop with a known-good power adapter; on the other hand, it could also be caused by a motherboard that has failed. If the computer does power on but still won't boot, the cause is more likely a hardware or software failure further along the chain.

The other common hardware point of failure is the hard drive. In this case, Windows and Mac machines give two different errors. Macs will boot to a folder with a question mark. Windows machines can show a number of different screens depending on the manufacturer and how old the machine is, but it will usually be a message along the lines of "Boot device not found."

The last point of failure is the operating system. If the operating system has been corrupted, it can cause any number of errors to be shown on startup; on Windows machines this usually results in a blue screen of death. To fix this, the hard drive usually needs to be wiped and Windows reinstalled (after making sure your files are backed up). Macs, on the other hand, have a few recovery options, the most useful being disk first aid. Holding down Command-R while the machine is booting will bring up the recovery boot options.
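The flow above boils down to a short decision tree. As a purely illustrative sketch, it might look like the Python below; the function and its yes/no inputs are hypothetical observations you would make by eye, not output from any real diagnostic tool.

```python
def diagnose(powers_on, boot_error, os_loads):
    """Illustrative decision tree for the troubleshooting steps described above."""
    if not powers_on:
        return "Check battery, adapter, and outlet; suspect a power or logic board failure."
    if boot_error in ("question mark folder", "Boot device not found"):
        return "Suspect the hard drive; back up what you can, then test or replace it."
    if not os_loads:
        return "Suspect a corrupted OS; try the recovery tools or reinstall after backing up."
    return "The machine boots; look elsewhere for the problem."

print(diagnose(powers_on=True, boot_error="Boot device not found", os_loads=False))
```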

Regardless of what happens when you try to turn on your computer though, there is always a solution to fix any problems that might happen. Determining where the point of failure is can be the difficult part. Once you know that, it’s much easier to make a decision about fixing the computer.

TN or IPS Monitors? What’s the Difference?

Whether you just want to project your laptop screen onto a bigger monitor, or you’re buying a new monitor for your desktop, the search for a monitor, like any other component, is riddled with tech jargon that is often difficult to understand. This article is designed to give buyers a quick guide about the differences between TN and IPS, the two main monitor types of today’s world.

A Little Background on Monitors

Back in the not-so-distant past, CRT, or Cathode Ray Tube, was the standard monitor type. CRTs received image information as an analog signal over the cable. The cathode, or electron gun, sits in the tapered rear of the monitor and fires electrons corresponding to the signal received from the cable. Closer to the screen is a set of anodes that direct the electrons to the RGB phosphor layer of the actual screen, again driven by part of the signal from the cable. While these monitors were state of the art once upon a time, they don't really have much of a place in today's world now that LCD screens have become the standard for monitors.

LCDs, or Liquid Crystal Displays, don't suffer from the same drawbacks as CRTs. For one, they use far less power. CRTs also tend to be harsher to stare at, and they lack the fine-grained brightness and picture controls that modern monitors offer. Additionally, LCDs are much sharper than CRTs, allowing a more accurate image to be displayed. Modern LCD monitors use a two-layer system: an LED backlight and the LCD panel itself. The backlight makes the image far brighter than the otherwise fairly dark LCD layer could manage, while the LCD layer handles color production and the actual recreation of the image. LCD monitors now use digital connections such as HDMI and DisplayPort, and can therefore transmit data faster.

Now that we know a little about monitor history, let’s move on to the difference between TN panels and IPS panels.

TN Panels

TN, or Twisted Nematic, panels use a 'nematic' type of liquid crystal that twists to pass light through in response to the signal transmitted. The main advantage of TN panels is speed. TN panels are fast enough to take advantage of "active 3D shutter" technology, which in essence allows them to display up to twice as much information as other types of panels. Additionally, the response time of TN panels is much quicker than that of most IPS panels, though it is possible to find faster IPS panels. The response time of a typical TN panel is roughly 2ms (milliseconds), and some go as low as 1ms. Another benefit of TN panels is that they are generally cheaper than their IPS equivalents. The fast response time and low price make these monitors popular in the gaming community, since gamers see less delay between the graphics card rendering an image and the panel displaying it, as well as in the general consumer market. Additionally, TN panels allow for higher refresh rates, going as high as 144Hz, though once again it is possible to get IPS monitors with similar specs for more money.
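To see why those millisecond figures matter, compare them with how long a single frame stays on screen at a given refresh rate. The quick arithmetic below is just an illustration (using the 144Hz figure above and a common 60Hz baseline), not a benchmark of any particular panel:

```python
# Time each frame spends on screen at a given refresh rate: 1000 / Hz milliseconds.
for hz in (60, 144):
    frame_ms = 1000 / hz
    print(f"{hz:>3} Hz -> {frame_ms:.1f} ms per frame "
          f"(a 1-2 ms response time fits comfortably inside that budget)")
```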

The major downside of TN panels is that they lack 100% accurate color reproduction. If you’re browsing Facebook, it’s not very important. However, if you’re doing color sensitive work perhaps for a movie or a photo edit, then TN panels may not be the right monitor for you.

IPS Panels

The main differences between IPS (In-Plane Switching) and TN panels, as touched on above, are price and color reproduction. IPS monitors are generally preferred by those in professional imaging and rendering work, as they portray the colors of an image more accurately. The downside is that they are more expensive, though affordable IPS monitors do exist; prices range from around $150 all the way up to thousands of dollars.

IPS panels align their liquid crystals parallel to the glass rather than perpendicular to it, which, in addition to allowing for better color reproduction, gives them excellent viewing angles; TN panels can often discolor when viewed from a relatively extreme angle. In essence, IPS panels were designed to address the flaws of TN panels, and are therefore preferred by many, from the average consumer to the professional editor.

Don’t let the benefits of IPS panels ruin your opinion of TN panels, though, for TN panels are still fantastic for certain situations. If you’re just sitting in one place in front of your computer, and absolutely perfect color reproduction isn’t really important to you, then TN is the way to go, especially if you’re trying to save a little on your monitor purchase.

Conclusion 

To summarize, TN panels have a better response time, as well as a cheaper price tag, while IPS panels have better viewing angles and color reproduction for a little extra cash. Whatever your choice of type, there are a plethora of excellent monitors for sale across the internet, in an immense variety of sizes and resolutions.

Don’t Be A Victim Of Data Loss

If you own a computer, chances are you have a lot of important data stored on there. It may seem safe and sound, but tragedy could be waiting to strike. Data loss from a failed hard drive is an all too common but preventable problem that could happen to anyone. So, how do you prevent it?

Most computer storage is on a hard disk drive, which consists of a series of spinning disks, or platters, on which data is stored, and a moving arm, the read-write head, which reads and writes that data. The platter motor spins the platters at 5400 rpm or more (sometimes up to 15,000 rpm), and the head motor moves the read-write head over the platters. The hard drive is one of the only moving parts left in the modern computer, and as such is one of the most vulnerable to damage. Always avoid dropping or shaking your computer, especially while it is on; doing so can cause the parts inside the hard drive to collide (literally your computer crashing).
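Those rotation speeds translate directly into how long the drive waits, on average, for the right part of a platter to swing under the head: half a revolution. The back-of-the-envelope calculation below is an illustration only, ignoring seek time and caching:

```python
# Average rotational latency is half a revolution: 0.5 / (rpm / 60) seconds.
for rpm in (5400, 15000):
    latency_ms = 0.5 / (rpm / 60) * 1000
    print(f"{rpm:>5} rpm -> ~{latency_ms:.2f} ms average wait per access")
```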


Unfortunately, sometimes hard drives fail through no fault of the owner. One way a hard drive can fail is if the files on it become corrupt, which can be caused by an interrupted operating system update or by malware. When this happens, your computer may continually try to reboot, or display errors when starting up. Whatever the case, most data can usually be recovered by doing what is called an archive reinstall, a process that repairs or overwrites damaged system files. Any member of the Five College community experiencing this problem can bring their computer to our repair center for an archive installation. Just stop in to the Help Center and we can help decide whether that is necessary.

Another issue that can be more serious is mechanical failure. What this means is that the hard drive is not spinning or the read-write head is unable to move properly. When this happens it can be very difficult to recover any data because there is a risk of causing physical damage to the platters where the data is stored. This problem is often accompanied by strange noises coming from your computer in addition to failure to boot. Generally, this requires a professional data recovery service to retrieve files, and can be expensive.

The best way to prevent data loss from a failed hard drive is to keep backups. Although you can't always prevent a failure, that doesn't mean you have to lose your data. An external hard drive is a great way to keep dated copies of files so you can restore any file to a specific version. Important files can also be kept on a CD or flash drive; these are not suitable for all your files since they have limited space, but they are less prone to failure.
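If you want to automate those dated copies, a small script can handle it. The sketch below is a minimal Python example; the source and destination paths are hypothetical and need to be adjusted for your own machine, and it makes no attempt at incremental or scheduled backups.

```python
import shutil
from datetime import date
from pathlib import Path

# Hypothetical locations: adjust both paths for your own system.
source = Path.home() / "Documents"
backup_root = Path("/Volumes/ExternalDrive/backups")   # e.g. "E:/backups" on Windows

# Copy everything into a folder stamped with today's date,
# so older backups stay available as separate versions.
destination = backup_root / f"documents-{date.today().isoformat()}"
shutil.copytree(source, destination)
print(f"Backed up {source} to {destination}")
```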

One of the best ways to back up data is to use a cloud storage service such as Google Drive or Dropbox. Since the files are stored by the service, you don’t have to worry about losing the flash drive or mechanical failure. All you need to access your files is an internet connection. And, all UMass students, faculty, and staff get access to unlimited storage on both Google Drive and Box. Both of these services can be used not just to store your files, but also access and share them anywhere.

Virtual Reality: The Next Generation of Gaming

Virtual reality has long been a dream of gamers everywhere. The next level of immersion into a fictional world will bring players themselves into the game, instead of simply showing it on a screen. The idea of being ‘plugged in’ to a different reality has been used in fictional films like The Matrix and TV shows like Fringe, but that’s all these realities have been – fiction.

Until now.

For the past few years, virtual reality projects have been popping up and growing in complexity and immersion. There are a few different ideas about how it should be done; here we will take a look at some of the most well-known virtual reality projects.

Oculus Rift


One of the first major virtual reality projects, the Oculus Rift is arguably the most recognizable name in the industry so far. Originally announced in August 2012, the Oculus Rift started as a Kickstarter campaign that raised $2.4 million. In 2014, Facebook bought the Oculus VR company for $2 billion. Oculus Rift devices have been seen at numerous gaming and technology expos, such as PAX, E3, and SXSW, as development kit platforms for many indie games. The Oculus Rift Development Kit has gone through two iterations and has been used for development for the past three years.

The Oculus Rift boasts a 1080×1200 resolution per eye, a 90Hz refresh rate, and a 100-degree field of view. The consumer edition of the device is approaching its release in Q1 2016.
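Those numbers add up quickly for the graphics card: two eyes, each at 1080×1200, refreshed 90 times per second. The quick calculation below (which ignores the extra oversampling real VR rendering does) gives a sense of the raw load:

```python
# Raw pixels per second for the Rift's two 1080x1200 displays at 90 Hz.
width, height, eyes, hz = 1080, 1200, 2, 90
pixels_per_second = width * height * eyes * hz
print(f"About {pixels_per_second / 1e6:.0f} million pixels to fill every second")
```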

Initially, it was little more than a virtual reality development kit exclusive to developers and game studios. The company had been distributing Development Kits since its Kickstarter campaign. Today, the Oculus Rift is preparing for its consumer launch, and some preorders have already been shipped.

The Oculus Rift Consumer Edition, available Q1 2016.

The Oculus Rift is generally considered the most premium of current VR projects. The manufacturing process for the Rift involves hundreds of custom parts and tracking sensors. The project has been praised for being one of the most sleek and seamless VR devices, and is also notable in its progress in one of the biggest challenges in the VR industry today: VR interaction.

We are a long way away from virtual reality experiences that let the user naturally move through or touch things in the environment. Many other projects leave the user stationary and only able to look around; some, including the Oculus Rift, allow users to move using a gamepad. Oculus, however, has also made progress of its own in VR interaction. The Oculus Touch is a pair of ergonomic controllers featuring buttons, joysticks, and triggers that also track hand movement. The Oculus Touch complements the Oculus Rift and is currently available to developers.

The Oculus Touch controllers communicate wirelessly with the Oculus Rift, offering a more immersive and less tethered VR interaction experience.

The Oculus Rift needs to be driven by a very powerful computer, since it is so graphically intensive. Their website recommends a machine with the following (a quick way to check a couple of these numbers on your own machine is sketched after the list):

  • CPU: Intel i5-4590 equivalent or greater
  • GPU: GTX 970 / AMD 290 equivalent or greater
  • RAM: 8GB+
  • OS: Windows 7 or newer
  • 2x USB 3.0 ports
  • 1x HDMI 1.3 video output
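As mentioned above, here is a rough self-check for the OS and RAM lines. The sketch uses Python with the third-party psutil library (not something Oculus provides; install it with pip if you want to run it), and the CPU and GPU models still have to be checked by hand.

```python
import platform
import psutil   # third-party: pip install psutil

# Operating system (Windows 7 or newer is required).
print("OS :", platform.system(), platform.release())

# Total installed RAM in GB (8GB or more is recommended).
total_gb = psutil.virtual_memory().total / 1024**3
print(f"RAM: {total_gb:.1f} GB")
```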

Dell, Alienware, and ASUS have already announced lines of Oculus-ready high performance PC towers, starting at around $950-$1000.

The Oculus Rift Consumer Edition is scheduled to hit the market in Q1 2016. It will cost $350, and include removable headphones (allowing the user to use their own headphones), an Xbox One for Windows controller, the Oculus Touch controller, and an LED camera stand used to track head movement.

Samsung Gear VR

Originally announced in September 2014, the Samsung Gear VR was developed by Samsung in collaboration with Oculus. The device itself is not a complete virtual reality experience; the most recent revision needs a Samsung Galaxy S6, S6 Edge, or Note 5 to be plugged into it by Micro USB to act as the display and processor. The headset itself contains only the field of view lenses and an accelerometer (the phone’s built-in accelerometer is not very powerful and does not provide adequately accurate tracking capability to provide a premium VR experience).

Samsung's most recent revision of the Gear VR, made for use with the S6 device line.

The Samsung Gear VR is currently one of the most popular consumer-grade virtual reality headsets because of its low price; the headset itself only costs $100. The phone, of course, is separate, but many Gear VR users already use an S6 device as their personal smartphone.

The Gear VR features a small trackpad and button on the right side of the headset, allowing for limited VR interaction capability.

Will from Tested gives the Samsung Gear VR a test run – but forgets to insert the display.

However, you do get what you pay for. The immersion is only as good as the phone powering the display, which typically refreshes at 60Hz or less, and there are no built-in headphones; you have to plug your own into the phone and deal with the headphone wire. Graphics are usually pre-rendered and not as detailed as those of tethered VR devices that rely on a PC tower for active rendering.

Google Cardboard

Google Cardboard is the cheapest of the consumer-level options for virtual reality. It is essentially a build-it-yourself Gear VR. Like the Gear VR, it is powered entirely by the smartphone, but unlike the Gear VR, it relies on the phone's built-in accelerometer, and there is no head strap, so you have to hold the device up to your eyes while using it. The headset itself is, as the name implies, nothing but a folded cardboard enclosure with a pair of convex lenses inside.

A Google Cardboard headset using a Nexus phone. The phone is folded into the front of the headset and held in place with velcro.

Google Cardboard is easy to make at home, and its website gives instructions on how to find the parts necessary and put them together. There are many manufacturer variations on Google Cardboard that are built in different ways and available for purchase and assembly.

A diagram of the basic parts needed to assemble Google Cardboard.

The headset fits any phone up to 6″ and Cardboard apps are available for iOS, Android, and Windows Phone.

HTC Vive

The HTC Vive, announced in March 2015, is a virtual reality headset being developed in partnership between HTC and Valve. The device is part of Valve’s larger effort to expand the Steam platform into more areas – including other projects such as the Steam Controller, Steam Link, Steam Machines, and SteamOS, all part of the Steam Universe.

The many dots on the front of the headset are laser position sensors – the device is meant to operate in a 15'x15' space.

The headset is tethered to a PC by cable, but it is still meant to be moved around in. The device contains more than 70 sensors, including a MEMS gyroscope, an accelerometer, and laser position sensors. The headset comes with two Lighthouse base stations that sweep the room with lasers, which the headset's sensors use to work out its position. The headset's front cameras also track static and moving objects in front of the user, allowing the device to warn the user before they hit an obstacle, like a wall.

Valve has released SteamVR APIs to everyone under the label OpenVR, allowing developers to create virtual reality environments with or without the use of Steam.

The Vive Developer Edition is currently available for free to certain developers, and it comes with SteamVR Controllers, a pair of one-handed controllers similar to the Oculus Touch but based on the concave trackpads of the Steam Controller. No word yet on a Consumer Edition.

Microsoft HoloLens

Microsoft’s HoloLens platform is a little different from the other virtual reality headsets we’ve seen; it’s more like Google Glass than the Oculus Rift. Instead of showing you a completely different world, the HoloLens captures the setting around you and superimposes ‘holograms,’ in a sort of ‘mixed reality.’ You still see what’s in front of you, but you can see and interact with non-real figures as if it’s all right in front of you.

https://www.youtube.com/watch?v=aThCr0PsyuA

Users can interact with the holograms through eye movements, voice commands, and hand gestures. The device uses an array of video cameras and microphones, an inertial measurement unit (IMU), an accelerometer, a gyroscope, and a magnetometer. A ‘light engine’ sits atop the lenses and projects light into a diffractive element that then reflects into the user’s eyes, creating the illusion of holograms.

Microsoft bought Mojang in September 2014 for $2.5 billion. Minecraft for HoloLens is one of the most notable uses for the headset currently in development.

The most impressive part of the HoloLens is its integration. The device needs no wires and no external processing power; it is completely untethered, allowing the user to move freely through their environment. The headset houses the battery and all of the processing hardware, including a holographic processing unit (HPU) that takes in information from the environmental sensors and creates the holograms. The holographic display is presented with an optical projection system.

The Microsoft HoloLens is completely untethered and houses all of the processing power inside of the headset.

The Development Edition will begin shipping in Q1 2016 and will cost $3000. There is no word yet of a consumer edition.

Replacing Your Hard Drive

What Is Your Hard Drive?

One of the key pieces of hardware inside your computer is the hard drive. You may have also heard it called the hard disk or, sometimes (incorrectly), the memory.

If you imagine your computer as a human body, your hard drive could be described as the long-term memory of the body. It is where data gets permanently stored for later use.

There are two types of hard drives that you will see frequently: standard hard disk drives (HDD) and solid-state drives (SSD). The traditional hard drive is much more common and uses a magnetic read-write head on a moving arm to store data on a series of spinning disks. Solid-state drives use a series of interconnected flash memory chips to store data. We will get into why you would choose one over the other later in this article.

Why Would You Replace It?

Your (standard) hard drive is one of the few moving parts inside your computer (the others usually being your cooling fans and your CD drive). Because of this, standard hard drives are often one of the first parts to fail in a computer, and tend to do so after 3-5 years.

Oftentimes computer issues such as slowness or failure to boot are caused by an older hard drive beginning to fail.

What Replacement Hard Drive Should I Buy?

The hard drive that you should buy to replace your failing one depends on how you use your computer. There are several factors to take into account.

The first important factor is size. Ask yourself: how much space do you use on your hard drive? Though HDDs range from just a few gigabytes to several terabytes (1 TB = 1024 GB), the most common sizes currently are 500GB and 1TB. If you use your computer to simply browse the Internet and do basic schoolwork, 500GB will be more than enough for you. However, if you use your computer to store large amounts of media files, or if you play many video games, you should buy a 1TB hard drive.
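If you are not sure how much space you actually use, you can check in a few seconds. The snippet below is a small Python example using only the standard library; the path points at the main drive and may need to be changed on your system.

```python
import shutil

# Check usage of the main drive ("/" on macOS/Linux; use "C:\\" on Windows).
usage = shutil.disk_usage("/")
used_gb = usage.used / 1024**3
total_gb = usage.total / 1024**3
print(f"Using {used_gb:.0f} GB of {total_gb:.0f} GB ({usage.used / usage.total:.0%} full)")
```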

The second question that you should ask yourself is whether you want an SSD or an HDD. SSDs offer many distinct advantages over a traditional hard drive. They are significantly faster, and tend to be more durable than traditional drives, as they have no moving parts. Replacing a traditional hard drive with an SSD is one of the simplest ways to speed up your computer. Unfortunately SSDs are much more expensive than HDDs for the same amount of space. A 500GB HDD often costs close to the price of a 128GB SSD.

One way to overcome the cost issue presented by SSDs is to install more than one hard drive. In many cases, it makes sense to install a small SSD, which would host your operating system and your most frequently used programs, and to use an HDD for most of your data and media.

How Do I Replace It?

On Windows PCs, especially desktops, hard drives are often relatively simple to replace. Guides can be found on YouTube, as well as at https://www.ifixit.com/Device/PC.

Alternatively, you can come to the IT User Services Help Center, located in the LGRC low rise. We can take a hard drive you bring us (or we can sell you one of our own), and we can replace it for a small service fee (usually about $50).