Categories
Hardware iOS Mac OSX

What’s New With AirPods 2?

Apple’s AirPods have quickly become the best-selling wireless headphones and are now the second-best-selling Apple product. The small white buds have become ubiquitous across the U.S. and are many people’s go-to wireless earbud option. This week, Apple refreshed the AirPods with a newer model that adds several features. The new second-generation AirPods look identical to the first generation on the outside, but much has changed on the inside. Powered by Apple’s new H1 chip (as opposed to the W1 chip inside the first generation), the new AirPods pair with your iPhone more quickly than ever and can switch between devices in a much shorter time frame (a common complaint with the first-generation AirPods). Additionally, the new AirPods offer lower latency, which means audio will be more in sync with videos and games. Battery life has also seen an improvement, with talk time now up to three hours on a single charge.

Perhaps the biggest feature of these new AirPods has nothing to do with the earbuds themselves. The case the new AirPods ship with is now wireless-charging enabled, which means AirPods can be charged using any Qi-enabled wireless charging pad. Additionally, the new AirPods with Wireless Charging Case will be compatible with Apple’s upcoming AirPower mat, which will charge an iPhone, Apple Watch, and AirPods, all wirelessly. For those of you with first-generation AirPods, don’t fret! Apple is sharing the wireless charging features with all AirPods owners. The Wireless Charging Case is cross-compatible with both generations of AirPods and is available for separate purchase at a lower price than a full new set. This means that if you already own a pair of AirPods, you can purchase the Wireless Charging Case on its own and use it with your first-generation AirPods.

With the continued success of AirPods and the ongoing removal of analog headphone ports from mobile devices, the wireless headphone market will continue to evolve rapidly for the foreseeable future. It will be interesting to see what features Apple adds to future AirPods to entice customers to keep buying them, and how its competitors in the space improve their products to compete.

Categories
Hardware Software Web

Cryptocurrency – Why decentralization is a big deal.

(Image: Bitcoin and Ethereum, via TheMerkle)

Cryptocurrencies have gained a seemingly permanent foothold in the world of technology and banking; more and more people are investing in or making transactions with Bitcoin and similar online coins. The potential impact these decentralized coins could have on our society is enormous for laypeople and tech enthusiasts alike.

Why is decentralization a big deal?

Throughout history, from the Roman Empire to the modern-day United States, money has been backed, printed, and controlled by a governing body of the state. Artificial inflation rates, adjustable interest rates, and rapid economic collapses and booms are all side effects of a governing body with an agenda controlling money and its supply.

Bitcoin, for example, is one of many online cryptocurrencies with no official governing entity. This is completely uncharted territory: not only is the currency not being manipulated artificially, but it is not tied to any governing body or to the regulations and laws that come with one. The price is determined solely by the open market – supply and demand.

No other currency has ever been free of a governing body and state the way cryptocurrencies are today. The biggest effect of this will be on the banking industry. Banks rely on governments to control interest rates, and they rely on there being a demand for money, specifically a demand for money to be spent and saved. Banks are also intertwined with our identities: it is assumed that everyone has a checking account with a large bank, along with the forfeiture of privacy and personal information that comes with opening one. The opportunity to choose whether or not to be part of a bank, and further to be your own bank and hold your own cryptocurrencies in your own locked vault, is a privilege none of our ancestors were ever granted.

The implications of masses of people deciding to be their own bank are catastrophic for banking entities. Purchases and transactions would become more secure and more private. People could no longer be tracked by where they swiped their credit card, since Bitcoin addresses are not tied to real-world identities the way bank accounts are. The demand for banks would drop, changing the very foundations of how our government works – if enough people choose to take this route.

What’s the catch?

There is currently a heated discussion about the usability of cryptocurrency in today’s world. This topic is under heavy scrutiny, because it will ultimately determine whether cryptocurrencies can become a major player in today’s economy.

The cons of cryptocurrency currently lie in its usability for small and/or quick transactions. For Bitcoin to be used, it must be accepted by both the buyer and the seller. That means business owners must have a certain threshold of tech-savviness to even entertain the thought of accepting bitcoin as a payment.

Bitcoin transaction visualization

In conjunction with needing support on both ends, the fees for transacting are determined by how quickly the transaction needs to “go through” the network – see this article on how bitcoin transactions work on the tech side – and by how much data the transaction takes up on the blockchain. For example, a $100 payment that needs to reach the other person within 20 minutes will likely be significantly more expensive than a $100 payment that can arrive within a 24-hour window. This spells trouble for small transactions, like those at your local coffee shop. If a coffee shop wants to accept bitcoin, it has two options: take the gamble and allow a longer period of time for transactions to process – running the risk of someone never actually sending a transaction and skimming a free coffee – or require a quick 20-minute confirmation and pass higher fees on to the buyer, which in turn could mean a drop in sales via bitcoin.
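
To make that concrete, here is a rough Python sketch of how a wallet might estimate a fee. The fee rates, transaction size, and exchange rate below are illustrative assumptions, not live network numbers; real wallets pull current fee estimates from a node or an API.

```python
# Rough sketch: Bitcoin fees scale with transaction size (in virtual bytes)
# and with how quickly you want the transaction confirmed.
# All numbers here are illustrative assumptions, not live network data.

TX_SIZE_VBYTES = 250          # a typical one-input, two-output transaction (assumed)

# Assumed fee rates in satoshis per virtual byte for different confirmation targets
FEE_RATES = {
    "fast (~20 min)": 40,     # aiming for the next block or two
    "normal (~1 hour)": 15,
    "slow (~24 hours)": 3,
}

SATOSHIS_PER_BTC = 100_000_000
BTC_PRICE_USD = 8_000         # assumed exchange rate for the example

for target, sat_per_vbyte in FEE_RATES.items():
    fee_sats = sat_per_vbyte * TX_SIZE_VBYTES
    fee_usd = fee_sats / SATOSHIS_PER_BTC * BTC_PRICE_USD
    print(f"{target}: {fee_sats} sats (~${fee_usd:.2f})")
```

Note that the fee depends on urgency and on data size, not on the dollar amount being sent, which is exactly why a $3 coffee is hit hardest.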

This last point is crucial to understanding and predicting the future of cryptocurrencies. If fees come down and transactions complete faster, Bitcoin will almost inevitably take a permanent place in our society, and perhaps become the most used currency, changing the game and freeing money from regulation, agendas, and politics.

Categories
Apps Hardware Security

Data Backups

Broken laptops happen to anyone and everyone, and they generally choose the least convenient time to break down. Whether it’s right at the beginning of an online test, just as you finish a long and important paper, or right when you’ve finished all your work and really just want to watch Netflix, your laptop seems to know exactly when you least want it to break. However, while a ruined Netflix session might be unfortunate, there’s not much worse than losing all of your files.

Nowadays computers are used to store everything from irreplaceable home movies to 100-page thesis papers, and backing up data is more important than ever. If your computer crashes, there’s no guarantee that your data will still be there if it turns on again. The best way to save yourself some heartbreak and frustration is to keep a regular backup of your data, or even two (or three if it’s something as important as your thesis!). For someone who barely uses their laptop, backing up once a month might be plenty. However, anyone who regularly uses their laptop to write or edit documents (which is the case for most students) should be backing up their machine at least once a week, if not more frequently.

So how and where can you back up your data? There are a few popular options, namely an external drive or the cloud.

External

For external drives, 1 TB is a standard size, although you might want a bigger one if you have a very large number of files to back up (or a million photos and videos). Some popular brands are Seagate, Western Digital, and Toshiba, and their 1 TB drives run about $50. Also be sure to get one with USB 3.0, as that will speed up the data transfer.
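
If you’d rather not drag and drop folders by hand every week, a tiny script can do it for you. Here’s a minimal Python sketch that mirrors a folder onto an external drive; the paths are placeholders you’d change for your own machine, and a dedicated backup tool (Time Machine, File History, and so on) is still the more robust option.

```python
import shutil
from datetime import date
from pathlib import Path

# Placeholder paths -- change these to match your own machine.
SOURCE = Path.home() / "Documents"
DRIVE = Path("/Volumes/MyBackupDrive")      # e.g. Path("E:/") on Windows

def backup():
    # Each backup goes into a dated folder so older copies are kept.
    destination = DRIVE / f"backup-{date.today().isoformat()}"
    if not DRIVE.exists():
        raise SystemExit("External drive not found -- is it plugged in?")
    if destination.exists():
        raise SystemExit("Today's backup already exists.")
    shutil.copytree(SOURCE, destination)
    print(f"Copied {SOURCE} to {destination}")

if __name__ == "__main__":
    backup()
```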


Cloud

UMass provides unlimited secure online storage through Box. With Box you can securely store and share your files online, so they’re accessible from multiple devices and you won’t lose them if your laptop suddenly decides to stop working. To read more about Box or get started backing up your files, go to https://www.umass.edu/it/box.


Categories
Hardware

The Future Of Wireless Charging

The idea of powering devices wirelessly has been around for well over a century, ever since Nikola Tesla built a tower that could reportedly light up lamps about 2 km away using electromagnetic induction. Wireless charging devices can be traced back to electric toothbrushes, which used a relatively primitive form of inductive charging decades before Nokia announced integrated inductive charging in its breakthrough Lumia 920 in 2012. That launch marked the birth of the Qi standard, which at the time was still contending for the much-coveted spot of universal/international standard. Now it seems like wireless charging is right around the corner, and with Apple and Google launching Qi-compatible phones, the message is clear and simple: ‘Wireless is the future, and the future is here.’ Or is it?

Qi (from the Mandarin word for ‘vital energy’ or ‘inner strength’) is a near-field energy transfer technology that works on the principle of electromagnetic induction. Simply put, the base station (a charging mat, pad, or dock) has a transmitting coil which, when connected to an active power source, induces a current in the receiver coil in the phone, which in turn charges the battery. In its early stages, Qi used ‘guided positioning,’ which required the device to be placed in a specific alignment on the base station. With some rapid development over time, this has effectively been replaced by ‘free positioning,’ which is standard in almost all recent Qi charging devices. There is a catch, though: the back of the device must be made of a material the electromagnetic field can pass through. Glass is currently the most viable option, and most Qi-compatible smartphones have glass backs. That has its own implications, the obvious one being significantly reduced durability.

Come to think of it, the fact that the device has to be within an inch or so of the base station in order to charge sounds counterproductive. Besides, if the base needs to be connected to a power source, that’s still one cable. So… what’s the point? Currently, the mobility part is more of a grey area, since the technology is still in a transitional phase. The majority of Qi-compatible smartphones still come with a traditional adapter by default, and the wireless dock needs to be purchased separately. There are several other issues with near-field charging that need to be addressed, such as:

  •  Longer charging times
  •  Reduced efficiency (roughly 60–70%; see the rough calculation after this list)
  •  Higher manufacturing costs
  •  Higher energy consumption, which could increase the cost of the electricity used
  •  Residual electromagnetic waves, a potential health risk
  •  Devices heat up faster than with traditional adapters, wasting energy as heat
  •  A higher probability of software updates causing bugs
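
To put the efficiency numbers in perspective, here is a back-of-the-envelope Python sketch comparing the energy drawn from the wall when charging a phone over a cable versus over a Qi pad. The battery size and efficiency figures are rough assumptions for illustration only.

```python
# Back-of-the-envelope comparison of wired vs. inductive charging losses.
# All figures are rough assumptions for illustration.

BATTERY_WH = 11.5          # ~3,000 mAh battery at 3.85 V nominal
WIRED_EFFICIENCY = 0.85    # assumed wall-to-battery efficiency over a cable
QI_EFFICIENCY = 0.65       # assumed wall-to-battery efficiency over a Qi pad

def energy_from_wall(battery_wh: float, efficiency: float) -> float:
    """Energy drawn from the outlet to fill the battery once."""
    return battery_wh / efficiency

wired = energy_from_wall(BATTERY_WH, WIRED_EFFICIENCY)
qi = energy_from_wall(BATTERY_WH, QI_EFFICIENCY)

print(f"Wired charge:    {wired:.1f} Wh from the wall")
print(f"Wireless charge: {qi:.1f} Wh from the wall")
print(f"Extra energy lost as heat per charge: {qi - wired:.1f} Wh")
```

Per charge the difference is small, but multiplied across millions of phones charging every day it adds up, which is where the concern about increased electricity production comes from.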

Over the past decade, people have come up with interesting workarounds, including charging phone cases and even a battery-less phone powered by ambient radio waves and Wi-Fi signals. The most promising option, though, is the startup Pi, which hopes to fix the range issue by allowing devices to pair with a charging pad anywhere within about a foot in any direction. The concept is still in its experimental stages, and it is going to be a while before mid-to-long-range wireless charging becomes a pervasive standard for smartphones and other IoT devices. Assuming further progress is made down that road, wireless charging hotspots could be a possibility in the not-too-distant future.

Despite all its shortcomings, the Qi standard has had considerable success in the market, and it looks like it is here to stay for the next few years. A green light from both Apple and Google has given it the boost it needed toward profitability, and wireless pads are gradually finding their way into cafes, libraries, restaurants, and airports. Furniture retailers such as IKEA have even started manufacturing desks and tables with inductive charging pads built in. However, switching completely to, and relying solely on, inductive wireless charging would not be practical right now unless the major concerns surrounding it are addressed. Going fully wireless would mean remodeling the very foundations of how we conventionally transmit electricity. In short, the current Qi standard is not the endgame; it is better seen as a stepping stone toward mid-to-long-range charging hotspots.

Categories
Hardware

How Tesla is Revolutionizing Solar Energy

Unless you live under a rock, chances are you’ve come across Tesla technology in your daily life. From their very successful car line, consisting of some of the sleekest, fastest, most efficient electric cars on the market, to sister company SpaceX’s ventures in reusable rocketry, Elon Musk’s companies are making a name for themselves in revolutionizing technology for the next era. But one of Tesla’s ventures has largely flown under the radar, despite its huge advantages. That’s right, I’m talking about Tesla Solar.

Formerly a separate company under the name SolarCity, the business was purchased by Tesla in 2016 and became a premier piece of Tesla Energy. The mission of Tesla Energy is to put the power of solar energy in the hands of the consumer, whether on residential or commercial premises. This is a more cost- and space-effective approach to solar energy, one that doesn’t transform massive amounts of open space or forest into giant solar farms. It not only gives people control over their energy production and costs, but can also benefit consumers through grid buyback, where the utility pays you for the excess energy you feed back.

But enough about the company; let’s talk about the technology behind it. Tesla Energy’s solar panels are thin and sleek, allowing them to fit seamlessly onto almost any roofing style or shape. None of the mounting hardware is visible, so the panels blend into the roof almost as if they were never there. Tesla Energy takes it a step further with the Solar Roof, a complete roofing unit that packs solar panel technology into interlocking shingles, allowing your entire roof to capture energy from the sun and power your home. If you thought the slim, sleek design of the panels was impressive, the Solar Roof takes it to another level. And while most people would worry about these energy shingles getting damaged, Tesla claims they are much stronger than most other roof tile alternatives.

 

You might be thinking, “Capturing all of this energy is cool and all, but how does it get stored?” Well, Tesla has that covered. Its battery product, Powerwall, connects to your solar input and hooks into the electrical system of your home or business. Not only does this let you use the power you produce, it also keeps the lights on even during a grid outage, leaving your life uninterrupted and those Christmas lights in your front yard up and running. According to Tesla, Powerwall can keep you supplied with power for seven or more days during an outage. If you happen to own a Walmart or another large retail store, the same capabilities scale up to Tesla Energy’s commercial units, where micro-grids and Powerwall banks can be built for your commercial needs.
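
As a rough sanity check on that kind of claim, here is a back-of-the-envelope Python sketch. The capacity, household load, and solar production figures are assumptions for illustration, not specifications I have verified; the point is that multi-day backup depends heavily on the battery being recharged by solar each day and on how much you cut your usage.

```python
# Back-of-the-envelope estimate of how long a home battery lasts in an outage.
# All numbers are illustrative assumptions.

BATTERY_KWH = 13.5        # assumed usable capacity of one home battery
DAILY_LOAD_KWH = 10.0     # assumed reduced household usage during an outage
DAILY_SOLAR_KWH = 8.0     # assumed average daily solar production

def days_of_backup(battery_kwh, daily_load_kwh, daily_solar_kwh, max_days=30):
    """Simulate day-by-day drain on the battery until it runs empty."""
    charge = battery_kwh
    for day in range(1, max_days + 1):
        charge += daily_solar_kwh - daily_load_kwh   # net change each day
        charge = min(charge, battery_kwh)            # can't store more than capacity
        if charge <= 0:
            return day
    return max_days

print(days_of_backup(BATTERY_KWH, DAILY_LOAD_KWH, DAILY_SOLAR_KWH), "days (or more)")
```

With these assumed numbers the battery lasts about a week, and with more sun or less usage it can keep going indefinitely.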

The future of residential and off-the-grid living is here. Through Tesla Energy, people can independently and reliably power their homes and businesses completely grid-free. While the costs are high right now, increased competition in the sector should grow the market and lower prices as Tesla pushes this technology out to more consumers. Even though Tesla is widely known as the electric car company, it is making strides in the renewable energy sector that stem from its work revolutionizing the electric car battery. The future is bright for renewable energy, and the future is even brighter for Tesla.

For more information on Tesla Energy, visit their website at https://www.tesla.com/energy

All images used in this blog were obtained from tesla.com, all rights reserved.

Categories
Hardware Library Mac OSX Software Windows

A Reflection on Winning The Vive

By Parker Louison 

The Views and Opinions Expressed in This Article Are Those of Parker Louison and Do Not Necessarily Reflect the Official Policy or Position of UMass Amherst IT 

A Note of Intention

I want to start off this article by explaining that I’m not making this in an effort to gloat or brag, and I certainly hope it doesn’t come across that way. I put all of the creative energy I had left this semester into the project I’m about to dissect and discuss, so sadly I won’t be publishing a video this semester (as I’ve done for the past two semesters). One of the reasons I’m making this is because a lot of the reaction towards what I made included people asking how I made it and how long it took me, and trust me, we’ll go in depth on that.

My First Taste

My first experience with high-grade virtual reality was a few weeks before the start of my sophomore year at UMass when my friend Kyle drove down to visit me, bringing along his HTC Vive after finding out that the only experience I’d had with VR was a cheap $20 adapter for my phone. There’s a consensus online that virtual reality as a concept is better pitched through firsthand experience rather than by word of mouth or marketing. The whole appeal of VR relies on subjective perception and organic optical illusions, so I can understand why a lot of people think the whole “you feel like you’re in the game” spiel sounds like nothing but a load of shallow marketing. Remember when Batman: Arkham Asylum came out and nearly every review of it mentioned that it made you feel like Batman? Yeah, well now there’s actually a Batman Arkham VR game, and I don’t doubt it probably does make you actually feel like you’re Batman. The experience I had with VR that night hit me hard, and I came to understand why so many people online were making it out to be such a big deal. Despite my skeptical mindset going in, I found that it’s just as immersive as many have made it out to be. 

This wasn’t Microsoft’s Kinect, where the action of taking away the remote actually limited player expression. This was a genuinely deep and fascinating technological breakthrough that opens the door for design innovations while also requiring programmers to master a whole new creative craft. The rulebook for what does and doesn’t work in VR is still being written, and despite the technology still being in its early stages, I wanted in. I wanted in so badly that I decided to try and save up my earnings over the next semester in an effort to buy one. That went about as well as you’d expect; not just because I was working within a college student’s budget, but also because I’m awful with my money. My Art-Major friend Jillian would tell you it’s because I’m a Taurus, but I think it has more to do with me being a giant man-child who impulse-purchases stupid stuff because the process of waiting for something to arrive via Amazon feels like something meaningful in my life. It’s no wonder I got addicted to Animal Crossing over Spring Break… 

The Task

Anyway, I was sitting in my Comp-Lit discussion class when I got the email about the Digital Media Lab’s new Ready Player One contest, with the first place winner taking home an HTC Vive Headset. I’m not usually one for contests, and I couldn’t picture myself actually winning the thing, but something about the challenge piqued my interest. The task involved creating a pitch video, less than one minute in length, in which I’d have to describe how I would implement Virtual Reality on campus in a meaningful way. 

With Virtual Reality, there are a lot of possible implementations relating to different departments. In the Journalism department, we’ve talked at length in some of my classes about the potential applications of VR, but all of those applications were either for the benefit of journalists covering stories or the public consuming them. The task seemed to indicate that the idea I needed to pitch had to be centered more on benefiting the average college student, rather than benefiting a specific major (at least, that’s how I interpreted it). 

One of my original ideas was a virtual stress-relief dog, but then I realized that people with anxiety would likely only get even more stressed out with having to put on some weird giant headset… and real-life dogs can give hecking good nuzzles that can’t really be simulated. You can’t substitute soft fur with hard plastic. 

I came to college as a journalism major, and a day rarely goes by when I don’t have some doubts about my choice. In high school I decided on journalism because I won this debate at a CT Youth Forum thing and loved writing and multimedia, so I figured it seemed like a safe bet. Still, it was a safe bet that was never pitched to me. I had no idea what being a journalist would actually be like; my whole image of what being a reporter entailed came from movies and television. I thought about it for a while, about how stupid and hormonal I was and still am, and realized that I’m kind of stuck. If I hypothetically wanted to switch to chemistry or computer science, I’d be starting from scratch with even more debt to bear. Two whole years of progress would be flushed down the toilet, and I’d have nothing to show for it. College is a place for discovery, where your comfortable environment is flipped on its head and you’re forced to take care of yourself and make your own friends. It’s a place where you work four years for a piece of paper to make your resume look nicer when you put it on an employer’s desk, and you’re expected to have the whole rest of your life figured out when you’re a hormonal teenager who spent his savings on a skateboard he never learned how to ride.

And so I decided that, in this neo-cyberpunk dystopia we’re steadily developing into, it would make sense for simulations to come before rigorous training. Why not create simulated experiences where people could test the waters for free? Put themselves in the shoes of whatever career path they want to explore to see if the shoes fit right, you know?

I mentioned “cyberpunk” there earlier because I have this weird obsession with cyberpunk stuff at the moment and I really wanted to give my pitch video some sort of tongue-in-cheek retrograde 80s hacker aesthetic to mask my cynicism as campy fun, but that had to be cut once I realized I had to make this thing under a minute long.

Gathering My Party and Gear

Anyway, I wrote up a rough script and rented out one of the booths in the Digital Media Lab. With some help from Becky Wandel (the News Editor at WMUA) I was able to cut down my audio to just barely under the limit. With the audio complete, it came time to add visual flair. I originally wanted to do a stop-motion animated thing with flash-cards akin to the intros I’ve made for my Techbytes videos, but I’m slow at drawing and realized that it’d take too much time and effort, which is hilarious because the idea I settled on was arguably even more time-consuming and draining.

I’m the proud owner of a Nikon D80, a hand-me-down DSLR from my mom, which I bring with me everywhere I go, mostly because I like taking pictures, but also because I think it makes me seem more interesting. A while back I got a speck of dust on the sensor, which requires special equipment to clean (basically a glorified turkey baster). I went on a journey to the Best Buy at the Holyoke Mall with two friends to buy said cleaning equipment while documenting the entire thing using my camera. Later, I made a geeky stop-motion video out of all those photos, which I thought ended up looking great, so I figured doing something similar for the pitch video would be kind of cool. I messaged a bunch of my friends, and in a single day I managed to shoot the first 60% of the photos I needed. I then rented out the Vive in the DML and did some photoshoots there. 

At one point while I was photographing my friend Jillian playing theBlu, she half-jokingly mentioned that the simulation made her want to study Marine Biology. That kind of validated my idea and pushed me to make sure I made this video perfect. The opposite effect happened when talking to my friend Rachael, who said she was going to pitch something for disability services, to which I immediately thought “damn, she might win with that.”

I then knew what I had to do. It was too late to change my idea or start over, so I instead decided that my best shot at winning was to make my video so stylistically pleasing and attention-grabbing that it couldn’t be ignored. If I wasn’t going to have the best idea, then gosh darn it (I can’t cuss because this is an article for my job) I was going to have the prettiest graphics I could muster.   

The Boss Fight 

I decided to use a combination of iMovie and Photoshop, programs I’m already familiar with, because teaching myself how to use more efficient software would ironically be less efficient given the short time frame I had to get this thing out the door. Using a drawing tablet I borrowed from my friend Julia, I set out to create the most complicated and ambitious video project I’ve ever attempted to make. 

A few things to understand about me: when it comes to passion projects, I’m a bit of a perfectionist and extremely harsh on myself. I can’t even watch my Freshman Year IT video because I accidentally made it sound like a $100 investment in some less than amazing open back headphones was a reasonable decision on my part, and my other IT video makes me cringe because I thought, at the time, it’d be funny to zoom in on the weird hand motions I make while I talk every five seconds.

So in this case, I didn’t hold back and frequently deleted whole sections of my video just because I didn’t like how a single brush stroke animated (with the exception of the way my name is lopsided in the credits, which will haunt me for the rest of my life). For two weeks, I rigorously animated each individual frame in Photoshop, exported it, and imported it into iMovie. 

(Above) A visual representation of all the files it took to create the video

(Above) Frame by frame, I lined up my slides in iMovie

The most demanding section was, without a doubt, the one involving my friend Matthew, which I spent one of the two weeks entirely focused on. For that section, I needed frames shorter than 0.04 seconds, which is impossible because 0.04 seconds is the shortest frame iMovie’s streamlined interface allows. So I ended up creating a whole new project file, slowing my audio to half speed, editing the frames of that section against the slowed-down audio, exporting it, then dropping it into the original project file and doubling its speed just to get it to animate smoothly.

 (Above) Some sections required me to find loopholes in the software to get them to animate faster than iMovie would allow

(Above) Some of the scrap paper I scribbled notes on while editing the video together

Each individual border was drawn multiple times with slight variations and all the on-screen text (with the exception of the works cited) was handwritten by me multiple times over so that I could alternate between the frames of animation to make sure everything was constantly moving. 

(Above) Borders were individually drawn and cycled through in order to maintain visual momentum

This was one of my major design philosophies during the development of this project: I didn’t want there to be a single moment in the 59 seconds where nothing was moving. I wanted my video to grab the viewer’s attention, and I feared that losing momentum in the visual movement would cause me to lose the viewer’s interest. The song LACool by DJ Grumble came on my Spotify radio coincidentally right when I was listening over the audio for the section I was editing, and I thought it fit so well I bought it from iTunes on the spot and edited it in.

I finished my video on Monday, March 26th, turned it in to the Digital Media Lab, stumbled back to my dorm, and went to bed at 6:00 PM by accident.

The Video

(Above) The final video submission 

The winner wouldn’t be announced until Wednesday, so for two days I nervously waited until 6:00 PM on March 28th, when I sat on my bed in my dorm room refreshing the Digital Media Lab website every 7 seconds like a stalker on an ex’s Facebook page waiting for the winner to finally be posted. At 6:29 PM I got a call from an unrecognized number from Tallahassee, Florida, and almost didn’t answer because I thought it was a sales call. Turns out it was Steve Acquah, the coordinator of the Digital Media Lab, who informed me that my video won. Soon after, the Digital Media Lab Website was also updated with the announcement.

(Above) A screenshot taken of the announcement on the Digital Media Lab Website 

Thank You

Along with the raw joy and excitement came a sort of surreal disbelief. Looking back on those stressful weeks of work, it all felt like it flew by faster than I could’ve realized once I got that phone call. I’m so grateful for not only the reward but the experience. Making that video was a stressful nightmare, but it also forced me to push myself to my creative limits and challenge myself in so many ways. On a night where I would’ve probably just gone home and watched Netflix by myself, I sprinted around campus to meet up with and take photos of my friends. This project got me to get all my friends together and rent out the Vive in the DML, basically forcing me to play video games and have fun with the people I love. While the process of editing it all together drove me crazy, the journey is definitely going to be a highlight of my time at UMass. 

I’m grateful to all of my friends who modeled for me, loaned me equipment, got dinner with me while I was stressing out over editing, played Super Hot VR with me, gave me advice on my audio, pushed me to not give up, and were there to celebrate with me when I won. I’m also immensely grateful to the staff and managers of the DML for providing me with this opportunity, as well as for their compliments and praise for the work I did. This was an experience that means a lot to me and it’s one I won’t soon forget. Thank you.

Epilogue

I picked up my prize the other day at the DML (see photo above the title of this article)! Unfortunately, I have a lot of work going on, so it’s going to be locked up in a safe place until that’s done. Still, it’s not like I could use it right now if I wanted to. My gaming PC hasn’t been touched in ages (since I don’t bring it with me to college) so I’m going to need to upgrade the GPU before I can actually set up the Vive with it. It’s a good thing there isn’t a spike in demand for high-end GPUs at the moment for cryptocurrency mining, right?

(Above) A visual representation of what Bitcoin has done to the GPU market (and my life)

…Oh.

Regardless of when I can actually use the prize I won, this experience was one I’m grateful to have had. The video I made is one I’m extremely proud of, and the journey I went on to create it is one I’ll think about for years to come.

Categories
Hardware Software

A [Mathematical] Analysis of Sample Rates and Audio Quality

 

Digital audio again? Ah yes… only in this article, I will set out to examine a simple yet complicated question: how does the sampling rate of digital audio affect its quality? If you have no clue what the sampling rate is, stay tuned and I will explain. If you know what sampling rate is and want to know more about it, also stay tuned; this article will go over more than just the basics. If you own a recording studio and insist on recording every second of audio at the highest possible sampling rate to get the best quality, read on, and I hope to inform you of the mathematical benefits of doing so…

What is the Sampling Rate?

In order for your computer to be able to process, store, and play back audio, the audio must be in a discrete-time form. What does this mean? It means that, rather than the audio being stored as a continuous sound-wave (as we hear it), the sound-wave is broken up into a sequence of individual sample points. This way, the discrete-time audio can be represented as a list of numerical values in the computer’s memory. This is all well and good, but some work needs to be done to turn a continuous-time (CT) sound-wave into a discrete-time (DT) audio file; that work is called sampling.

 

Sampling is the process of observing and recording the value of a complex signal at uniform intervals of time. Figure 1(a) shows ‘analog’ sampling, where the recorded value is not modified by the sampling process, and figure 1(b) shows digital sampling, where the recorded value is quantized so it can be represented with a binary word.

During sampling, the amplitude (loudness) of the CT wave is measured and recorded at regular intervals to create the list of values that makes up the DT audio file. The inverse of this sampling interval is known as the sample rate and is measured in hertz (Hz). By far the most common sample rate for digital audio is 44,100 Hz; this means the CT sound-wave is sampled 44,100 times every second.

This is a staggering number of data points! On an audio CD, each sample is represented by two bytes per channel; with two stereo channels, one second of audio takes up over 170 KB of space! Why is all this necessary, you may ask…
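
The arithmetic behind that figure is straightforward; here is a quick Python check using the standard CD parameters (44,100 Hz, 16-bit samples, two channels):

```python
# Storage required for one second of CD-quality audio.
SAMPLE_RATE_HZ = 44_100       # samples per second
BYTES_PER_SAMPLE = 2          # 16-bit samples
CHANNELS = 2                  # stereo

bytes_per_second = SAMPLE_RATE_HZ * BYTES_PER_SAMPLE * CHANNELS
print(bytes_per_second)                       # 176400 bytes
print(bytes_per_second / 1024, "KB")          # ~172 KB per second
print(bytes_per_second * 180 / 1e6, "MB for a three-minute song")
```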

The Nyquist-Shannon Sampling Theorem

Some of you more interested readers may have already heard of the Nyquist-Shannon Sampling Theorem (some of you may also know it simply as the Nyquist Theorem). The Nyquist-Shannon Theorem asserts that any CT signal can be sampled, turned into a DT file, and then converted back into a CT signal with no loss of information, so long as one condition is met: the CT signal must be band-limited below the Nyquist frequency (half the sample rate). Let’s unpack this…

Firstly, what does it mean for a signal to be band-limited? Every complex sound-wave is made up of a whole myriad of different frequencies. To illustrate this point, below is the frequency spectrum (the graph of all the frequencies in a signal) of All Star by Smash Mouth:

Smash Mouth is band-limited! How do we know? Because the plot of frequencies ends. This is what it means for a signal to be band-limited: it does not contain any frequencies beyond a certain point. Human hearing is band-limited too; most humans cannot hear any frequencies above 20,000 Hz!

So, I suppose then we can take this to mean that, if the Nyquist frequency is just right, any audible sound can be represented in digital form with no loss in information? By this theorem, yes! Now, you may ask, what does the Nyquist frequency have to be for this to happen?

For the Shannon-Nyquist Sampling Theorem to hold, the sample rate must be greater than twice the highest frequency being sampled; equivalently, the Nyquist frequency (half the sample rate) must be greater than the highest frequency in the signal. For sound, the highest audible frequency is 20 kHz, and thus the minimum sample rate required to capture it with no loss of information is… 40 kHz. What was that sample rate I mentioned earlier? You know, the one so common that basically all digital audio uses it? It was 44.1 kHz. Huzzah! Basically all digital audio is a perfect representation of the original sound it represents! Well…

Aliasing: the Nyquist Theorem’s Complicated Side-Effect

Just because we cannot hear sound above 20 kHz does not mean it does not exist; there are plenty of sound-waves at frequencies higher than humans can hear.

So what happens to these higher sound-waves when they are sampled? Do they just not get recorded? Unfortunately no…

A visual illustration of how under-sampling a frequency results in some unusual side-effects. This unique kind of error is known as ‘aliasing’

So if these higher frequencies do get recorded, but frequencies above the Nyquist frequency cannot be sampled correctly, then what happens to them? They are falsely interpreted as lower frequencies and superimposed over the correctly sampled frequencies. The distance between the high frequency and the Nyquist frequency governs what lower frequency these high-frequency signals will be interpreted as. To illustrate this point, here is an extreme example…

Say we are trying to sample a signal that contains two frequencies: 1 Hz and 3 Hz. Due to poor planning, the Nyquist frequency is selected to be 2 Hz (meaning we are sampling at a rate of 4 Hz). Further complicating things, the 3 Hz cosine-wave is offset by 180° (meaning the waveform is essentially multiplied by -1). So we have the following two waveforms….

1 Hz cosine waveform
3 Hz cosine waveform with 180° phase offset

When the two waves are superimposed to create one complicated waveform, it looks like this…

Superimposed waveform constructed from the 1 Hz and 3 Hz waves

Pretty, right? Well unfortunately, if we try to sample this complicated waveform at 4 Hz, do you know what we get? Nothing! Zero! Zilch! Why is this? Because when the 3 Hz cosine wave is sampled and reconstructed, it is falsely interpreted as a 1 Hz wave! Its frequency is reflected about the Nyquist frequency of 2 Hz. Since the original 1 Hz wave is below the Nyquist frequency, it is interpreted with the correct frequency. So we have two 1 Hz waves but one of them starts at 1 and the other at -1; when they are added together, they create zero!

Another way we can see this phenomenon is by looking at the graph. Since we are sampling at 4 Hz, we are observing and recording four evenly-spaced points between zero and one, one and two, two and three, and so on. Take a look at the above graph and try to find four evenly-spaced points between zero and one (but not including one). You will find that every single one of these points corresponds to a value of zero! Wow!
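
If you’d like to verify this without squinting at a graph, here’s a short numpy sketch that samples the same two waveforms from the example above (1 Hz, and 3 Hz with a 180° phase offset) at 4 Hz and shows that every sample comes out to zero:

```python
import numpy as np

fs = 4                      # sample rate in Hz (Nyquist frequency = 2 Hz)
n = np.arange(8)            # eight sample indices: two seconds of "audio"
t = n / fs                  # sample times in seconds

wave_1hz = np.cos(2 * np.pi * 1 * t)           # below the Nyquist frequency
wave_3hz = np.cos(2 * np.pi * 3 * t + np.pi)   # above it, with 180-degree offset

samples = wave_1hz + wave_3hz
print(np.round(samples, 10))   # [0. 0. 0. 0. 0. 0. 0. 0.]
```

The 3 Hz component aliases down to 1 Hz with inverted phase and cancels the real 1 Hz wave exactly, just as described above.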

So aliasing can be a big issue! However, designers of digital audio recording and processing systems are aware of this and actually provision special filters (called anti-aliasing filters) to get rid of these unwanted effects.

So is That It?

Nope! These filters are good, but they’re not perfect. Analog filters cannot just chop off all frequencies above a certain point; they have to, more or less, gradually attenuate them. So designers have a choice: either leave some high frequencies in and risk distortion from aliasing, or roll off audible frequencies before they’re even recorded.

And then there’s noise… Noise is everywhere, all the time, and it never goes away. Modern electronics are rather good at reducing the amount of noise in a signal, but they are far from perfect. Furthermore, noise tends to be concentrated at higher frequencies; exactly the frequencies that end up getting aliased…

What effect would this have on the recorded signal? Well, if we assume that random noise is present at all frequencies (above and below the Nyquist frequency), then our original signal would be buried under a layer of aliased noise folded down from every frequency above the Nyquist frequency. Fortunately for digitally recorded music, the noise does drop off at very high frequencies due to transmission-line effects (a much more complicated topic).

What can be Learned from All of This?

The end result of this analysis on sample rate is that the sample rate alone does not tell the whole story about what’s being recorded. Although 44.1 kHz (the standard sample rate for CDs and MP3 files) may be able to record frequencies up to 22 kHz, in practice a signal being sampled at 44.1 kHz will have distortion in the higher frequencies due to high frequency noise beyond the Nyquist frequency.

So then, what can be said about recording at higher sample rates? Some new analog-to-digital converters for music recording sample at 192 kHz. Most, if not all, of the audio recording I do is done at a sample rate of 96 kHz. The benefit of recording at higher sample rates is that high-frequency noise can be recorded without it causing aliasing and distortion in the audible range. With 96 kHz, you get a full 28 kHz of bandwidth beyond the audible range where noise can exist without causing problems. Since signals with frequencies up to around 9.8 MHz can exist in a 10-foot cable before transmission-line effects kick in, this is extremely important!

And with that, a final correlation can be predicted: the greater the sample rate, the less noise will end up aliased into the audible spectrum. To those of you out there who have insisted that higher sample rates sound better, maybe now you’ll have some heavy-duty math to back up your claims!

Categories
Hardware

Future Proofing: Spending less and getting more

 

Future proofing, at least when it comes to technology, is a philosophy that revolves around buying the optimal piece of tech at the optimal time. The overall goal of future proofing is to save you money in the long run by purchasing devices that take a long time to become obsolete.

But, you might ask, what exactly is the philosophy? Sure, it’s easy to say that it’s best to buy tech that will last you a long time, but how do you actually determine that?

There are four basic factors to consider when trying to plan out a future proof purchase.

  1. Does what you’re buying meet your current needs, as well as needs you might have in the foreseeable future?
  2. Can what you’re buying be feasibly upgraded down the line?
  3. Is what you’re buying about to be replaced by a newer, better product?
  4. What is your budget?

I’m going to walk you through each of these 4 ideas, and by the end you should have a pretty good grasp on how to make smart, informed decisions when future-proofing your tech purchases!

Does what you’re buying meet your current needs, as well as needs you might have in the foreseeable future?

 

This is the most important factor when trying to make a future-proof purchase. The first half is obvious: nobody is going to buy anything that doesn’t do everything they need it to do. It’s really the second half which is the most important aspect.

Let’s say you’re buying a laptop. Also, let’s assume that your goal is to spend the minimum amount of money possible to get the maximum benefit. You don’t want something cheap that you’ll get frustrated with in a few months, but you’re also not about to spend a downpayment on a Tesla just so you can have a useful laptop.

Let’s say you find two laptops. They’re mostly identical, except for one simple factor: RAM. Laptop A has 4 GB of RAM, while Laptop B has 8 GB. Let’s also say that Laptop A is 250 dollars, while Laptop B is 300. At a difference of 50 dollars, the question that comes to mind is whether that extra 4 GB of RAM is really worth it.

What RAM actually does is act as short-term storage for your computer, and it is most important in determining how many different things your computer can keep track of at once. Every program you run uses up a certain amount of RAM, with things such as tabs in Google Chrome famously taking up quite a bit. So, essentially, for 50 dollars you’re asking yourself whether or not you care about being able to keep a few more things open.
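
If you want a data point for that decision, you can check how much memory your current workload actually uses. Here’s a small Python sketch using the psutil library (installed with `pip install psutil`); the 75% threshold at the end is just an illustrative rule of thumb, not an official guideline.

```python
import psutil

# Overall memory picture: how much RAM do you have, and how much is in use?
mem = psutil.virtual_memory()
print(f"Total RAM: {mem.total / 1e9:.1f} GB")
print(f"In use:    {mem.percent}%")

# The five processes using the most memory right now.
procs = sorted(
    psutil.process_iter(["name", "memory_info"]),
    key=lambda p: p.info["memory_info"].rss if p.info["memory_info"] else 0,
    reverse=True,
)
for p in procs[:5]:
    rss_mb = p.info["memory_info"].rss / 1e6
    print(f"{p.info['name']:<30} {rss_mb:>8.0f} MB")

# Illustrative rule of thumb: if you're regularly above ~75% usage on 4 GB,
# the 8 GB model is probably worth the extra $50.
if mem.percent > 75:
    print("You're pushing your current RAM -- consider the upgrade.")
```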

Having worked retail at a major tech store, I can tell you from experience that a little over half of the people asked this question would opt for the cheaper option. Why? Because they don’t think more RAM is worth spending extra money on at the register. However, lots of people change their mind once you present them with a different way of thinking about it.

Don’t think of Laptop A as being 250 and Laptop B as being 300. Instead, focus only on the difference in price, and whether or not you think you’d be willing to pay that fee as an upgrade.

You see, in half a year, when that initial feeling of spending a few hundred dollars is gone, it’s quite likely that you’ll be willing to drop an extra 50 dollars so you can keep a few more tabs open. While right now it seems like all you’re doing is making an expensive purchase even more expensive, what you’re really doing is making sure that Future_You doesn’t regret not dropping the cash when they had an opportunity.

Don’t just make sure the computer you’re buying fits your current needs. Make sure to look at an upgraded model of that computer and ask yourself: six months down the line, will you be willing to spend the extra 50 dollars for the upgrade? If the answer is yes, then I’d definitely recommend considering it. Don’t just think about how much money you’re spending right now; think about how the difference in cost will feel when you wish you’d made the upgrade.

For assistance in this decision, check the requirements of the applications and organizations you rely on. Minimum requirements are just that, a minimum, and should not be used as a guide for purchasing a new machine. Suggested requirements, such as the ones offered on UMass IT’s website, offer a much more robust basis from which to future-proof your machine.

Can what you’re buying be meaningfully upgraded down the line?

This is another important factor, though it is not applicable to all devices. Most smartphones, for example, don’t even have the option to upgrade their available storage, let alone core hardware like the RAM or CPU.

However, if you’re building your own PC or making a laptop/desktop purchase, upgradeability is a serious thing to consider. The purpose of making sure a computer is upgradeable is to ensure that you can add additional functionality to the device while having to replace the fewest possible components.

Custom PCs are the best example of this. When building a PC, one of the most important components, and one that is often overlooked, is the power supply. You want a power supply with a high enough wattage to run all your components, but you don’t want to overspend on something with way more juice than you need, since you could have funneled that extra cash into a more meaningful part.

Let’s say you bought a power supply with just enough juice to keep your computer running. That’s fine right now, but you’ll run into problems once you try to make an upgrade. Say your computer is using Graphics Card A, and you want to upgrade to Graphics Card B. While Graphics Card A works perfectly fine in your computer, Graphics Card B requires more power to run. And because you chose a lower-wattage power supply, you’re going to need to replace it to actually upgrade to the new card.

In summary, what you planned to be a simple GPU swap now requires not only paying the higher price for Graphics Card B, but buying a more expensive power supply as well. And sure, you can technically sell your old power supply, but you would have saved much more money (and effort) in the long run by buying a stronger power supply to start. By buying the absolute minimum needed to make your computer work, you didn’t leave yourself enough headroom to upgrade.
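
A quick way to avoid that trap is to add up your components’ power draw and leave headroom before picking a wattage. Here’s a small Python sketch of that logic; the wattage figures are rough assumptions for an example build, not measured numbers.

```python
# Rough PSU sizing: sum estimated component draw, then add headroom
# for upgrades and for running the supply at an efficient load.
# Wattages below are assumptions for an example build.

components_watts = {
    "CPU": 95,
    "Graphics Card A": 120,
    "Motherboard + RAM": 50,
    "Drives and fans": 35,
}

HEADROOM = 1.5   # a 50% margin leaves room for a hungrier GPU later

total_draw = sum(components_watts.values())
recommended = total_draw * HEADROOM

print(f"Estimated draw:  {total_draw} W")
print(f"Recommended PSU: {recommended:.0f} W or higher")

# Check whether a planned upgrade still fits within a given supply.
def upgrade_fits(psu_watts, current_draw, old_part, new_part_watts):
    new_draw = current_draw - components_watts[old_part] + new_part_watts
    return new_draw * 1.2 <= psu_watts   # keep at least a 20% margin

print(upgrade_fits(450, total_draw, "Graphics Card A", 180))  # swap in Card B?
```

With the headroom built in up front, the hypothetical Card B swap fits without touching the power supply, which is exactly the situation you want to be in.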

This is an important concept when it comes to computers. Can your RAM be upgraded by the user? How about the CPU? Do you need to replace the whole motherboard just to allow for more RAM slots? Does your CPU socket allow for processors more advanced than the one you’re currently using, so you can buy cheap upgrades once newer models come out?

All of these ideas are important when designing a future-proof purchase. By ensuring that your device is as upgradeable as possible, you’re increasing its lifespan by allowing hardware advancements in the future to positively increase your device’s longevity.

Is what you’re buying about to be replaced by a newer, better product?

This is one of the most frustrating, and often one of the hardest-to-determine aspects of future proofing.

We all hate the feeling of buying the newest iPhone just a month before they reveal the next generation. Even if you’re not the type of person who cares about having the newest stuff, it’s to your benefit to make sure you aren’t making purchases too close to the release of the ‘next gen’ of that product. Oftentimes, since older generations get discounted when a replacement is released, you’d even save money on the exact same thing just by waiting for the newer product to come out.

I made a mistake like this once, and it’s probably the main reason I’m including this section in the article. I needed a laptop for my freshman year at UMass, so I invested in a Lenovo Y700. It was a fine laptop — a little big but still fine — with one glaring issue: the graphics card.

I had bought my Y700 with the laptop version of a GTX 960 inside, Nvidia’s last-gen hardware at the time. The reason this was a poor decision was very simple: the GTX 1060 had already been released. That is, the desktop version had been released.

My impatient self, eager for a new laptop for college, refused to wait for the laptop version of the GTX 1060, so I paid full price for a laptop with tech I knew would be out of date in a few months. And, lo and behold, that was one of the main reasons I ended up selling my Y700 in favor of a GTX 1060-bearing laptop the following summer.

Release dates for things like phones, computer hardware, and laptops can often be tracked on a yearly release cycle. Did Apple reveal the current iPhone in November of last year? Maybe don’t pay full price for one this coming October, just in case they make a similar reveal around the same time.

Patience is a virtue, especially when it comes to future proofing.

What is your budget?

 

This one is pretty obvious, which is why I put it last. However, I’m including it in the article because of the nuanced nature of pricing when buying electronics.

Technically, I could throw a three-grand budget at a Best Buy employee’s face and ask them to grab me the best laptop they’ve got. It’ll almost definitely fulfill my needs, will probably not be obsolete for quite a while, and might even come with some nice upgradeability that you may not get with a cheaper laptop.

However, what if I’m overshooting? Sure, spending three grand on a laptop gets me a top-of-the-line graphics card, but am I really going to use its full capacity? While the device you buy might be powerful enough to do everything you want, a purchase made by following the future-proofing philosophy outlined above will also do those things, and possibly save you quite a bit of money.

That’s not to say I don’t advocate spending a lot of money on computer hardware. I’m a PC enthusiast, so to say that you shouldn’t buy more than you need would be hypocritical. However, if your goal is to buy a device that will fulfill your needs, allow upgrades, and do whatever you need it to do for the foreseeable future, throwing money at the problem isn’t really the most elegant way of solving it.

Buy smart, but don’t necessarily buy expensive. Unless that’s your thing, of course. And with that said…

 

…throwing money at a computer does come with some perks.

Categories
Hardware

DJI Drones – Which One Is Right for You?

As the consumer drone market becomes increasingly competitive, DJI has emerged as an industry leader in drones and related technologies, on the consumer end as well as in the professional and industrial markets. Today we’re taking a look at DJI’s three newest drones.

https://www4.djicdn.com/assets/images/products/spark/s3/detail1-ac31681ce5417ef8495c58b99d7687ae.png?from=cdnMap

First up is the DJI Spark, DJI’s cheapest consumer drone available at the time of writing. The drone is a very small package, using Wi-Fi and the DJI GO smartphone app for control. It features a 12-megapixel camera capable of 1080p video at 30 fps, and a removable battery with a 16-minute flight time. Starting at $399, this drone is best for amateurs just getting into the drone market and learning in the backyard. User-friendly and ultra-portable, the Spark is limited in advanced functionality and is prone to distance and connectivity problems, but it is an essential travel item for the casual drone user looking to take some photos from the sky without the advanced photography and flying skills required by some of DJI’s other offerings.

https://petapixel.com/assets/uploads/2018/01/djimavicairfeat-800x420.jpg

DJI’s most recent offering is the DJI Mavic Air, its intermediate option for drone enthusiasts. The drone is a compact, foldable package, using Wi-Fi and the DJI GO smartphone app in conjunction with a controller. It features a 12-megapixel camera capable of 4K video at 30 fps, and a removable battery with a 21-minute flight time. Starting at $799, this drone is a step up from DJI’s lower-priced offerings, bundling features that cater to both the amateur drone photographer and the hobbyist/enthusiast flyer, such as advanced collision-avoidance sensors, a panorama mode, and internal storage. While heavier and bigger than its smaller sibling the DJI Spark, the Mavic Air’s foldability makes for an unbelievably portable package with user-friendly features and one of the best camera sensors to ship in DJI’s consumer drone lineup. Also hampered by Wi-Fi limitations, the DJI Mavic Air is an excellent travel drone for more serious photographers and videographers, as long as you don’t venture out too far.

https://product4.djicdn.com/uploads/photos/114/medium_4058afad-4331-40ab-9a4e-30b49c72447b.jpg

One of DJI’s most ambitious and most popular consumer drones is the DJI Mavic Pro, a well-rounded, no-compromise consumer drone with advanced photography and flying abilities. The drone is a compact, foldable package like the DJI Mavic Air, but it pairs the DJI GO smartphone app with a controller that uses OcuSync transmission technology to provide a clear, long-range, live video feed that is usually free of interference. It features a 12-megapixel camera capable of 4K video at 30 fps, and a removable battery with a 30-minute flight time. Starting at $999, this drone is not cheap, but it is an essential tool for the photographer or drone enthusiast who wants the best flying and photo-capture features in DJI’s most portable high-end offering.

My DJI Mavic Pro Sample Footage:

Sample 1: https://www.youtube.com/watch?v=2kI1hoIO4x4
Sample 2: https://www.youtube.com/watch?v=ZQgX5J9WOII
Sample 3: https://www.youtube.com/watch?v=z1mDUZWwwxI
Sample 4: https://www.youtube.com/watch?v=hWiHPu-ld78
Sample 5: https://www.youtube.com/watch?v=TOcKi1xRNoE

Disclaimer: Operation of a drone, regardless of recreational or commercial intent, is subject to rules and regulations outlined by the Federal Aviation Administration (FAA). All drone operators should operate aircraft in compliance with local, state, and federal laws. Compliant and suggested practices include operating aircraft with the presence of a spotter, maintaining line of sight on your aircraft, registering your aircraft with the FAA, sharing airspace with other recreational and commercial aircraft, knowing your aircraft and its impact when operating around people & animals, and not flying your aircraft in FAA restricted zones. For more information, please visit the FAA website on Unmanned Aerial Systems as it pertains to you: https://www.faa.gov/uas/faqs/

Categories
Hardware

Portability and the Effects on Device Internals

With the current trend of ever-shrinking tech devices, we have seen an explosion in the abundance of portable electronics. Fifteen years ago Apple launched the iPod, a device so foreign to people that Steve Jobs had to explain that you could legally transfer your CD collection to your computer and then onto your iPod. Now it is expected that the little (or big) phone in your pocket works as well as any desktop computer, runs fully developed applications, and lasts a full day on one charge. Many different advances made this possible, such as shrinking fabrication nodes, increased battery capacity, and much better display options. But I think one change in design philosophy in particular has driven the current trend in tech.

Due to portability requirements, phones have become a microcosm of the tech industry, specifically in the trend of increasing complexity at the cost of repairability. When the first iPhone came out, there was no option to change the battery or storage configuration, options that were both available on competitors’ devices. And yet people flocked in droves to Apple’s simpler, less-customizable devices, so much so that now Google produces its own phone, the Pixel, which has a non-removable battery and lacks a microSD slot. Logic dictates that there must be an outside pressure forcing a competitor to drop a substantial differentiator from other products on the market; I would argue that factor is thinness.

The size of an SD card slot seems pretty inconsequential on a device the size of a desktop computer but when it takes up 1% of the total space of a device, there are arguments for much better uses of the space. A better cooling system, larger internal battery, or just space for a larger PCB are all uses for the extra space that may make the device better than it could have been with the SD card slot. When you look at the logic boards for the iPhone, this point is illustrated; there is just no space for any extra components.

Driven by space-saving concerns, complexity increases as smaller and smaller traces are used on the PCB and components have to shrink, shuffle, or be removed. Proof of this is in the design of larger machines such as the MacBook, a 12-inch laptop with a logic board smaller than its touchpad, which features a mobile CPU and no removable storage.

Demand for ultra-portability has led to devices that are so small they are almost impossible to repair or upgrade. However, this trend cannot continue indefinitely. Moore’s law has taken a couple of hits in the past few years as Intel struggles to keep pace with it, and PCB manufacturing can only shrink so far before it becomes impossible to fit all the components on the board. As size becomes less of a differentiator and reaches its physical limits, tech companies will have to look to new innovations to stay relevant, such as increasing battery life or designing new functions for their devices.

Categories
Hardware

A Quick Look at Home Theatre PCs

Are you one of those people that loves watching movies or listening to music while at home? Do you wish you could access that media anywhere in your home without lugging your laptop around your house and messing with cables? If you answered yes to these questions, then a Home Theater PC, or HTPC, may be for you.

An HTPC is a small computer that you can permanently hook up to a TV or home theater system that allows you to store, manage, and use your media whether it is stored locally or streamed from a service like Netflix, Amazon, or Spotify. Although several retailers sell pre-built HTPCs that are optimized for performance at low power, many people use a Raspberry Pi computer because they are small, quiet and relatively inexpensive. These are key features because you don’t want a loud PC with large fans interrupting your media experience, and a large computer won’t fit comfortably in a living room bookshelf or entertainment center.

The HTPC hooks up to your TV via an HDMI cable, which transmits both video and audio for watching movies. If you have a home theater system, your HTPC can connect to it to enable surround sound for movies or to stream music throughout your home. Your HTPC will also need a network connection to access streaming services. Although WiFi is convenient, a wired Ethernet connection is ideal because it can support higher speeds and more consistent bandwidth, which is better for HD media.

The back of a typical AV Receiver.

 

Once you have a basic HTPC set up, you can upgrade your setup with a better TV, speakers, or even a projector for that true movie theater experience. If you want to be able to access your media in several rooms at once, you can set up multiple HTPCs with Network Attached Storage, or NAS. This is a central storage location that connects directly to your router so that all the computers on your home network can access it at once. This is a more efficient option than storing all of your media on each computer separately. A NAS can even be set up with internet access so you can stream your media from anywhere.

Categories
Hardware

Review: Grado SR80es

 

Join Parker Louison as he attempts to review a pair of Grado SR80es! You’ll learn the difference between open-back and closed-back as Parker messes up his wording so badly that he ends up sounding like he’s trying to make a $100 pair of headphones with low build quality sound affordable to the average college student!

Categories
Hardware Hotfix Operating System Windows

Setting Roam Aggression on Windows Computers

What is Wireless Roaming?

Access Points

To understand what roaming is, you first have to know what device makes the software function necessary.

If you are only used to household internet setups, the idea of roaming might be a little strange to think about. In your house you have your router, which you connect to, and that’s all you need to do. You may have the option of choosing between the 2.4GHz and 5GHz bands, but that’s about as complicated as it gets.

Now imagine that your house is very large, say the size of the UMass Amherst campus. From your router in your living room (the DuBois Library), it might be a little difficult to connect when you’re all the way up in your bedroom on Orchard Hill. Obviously, in this situation one router will never suffice, and so a new component is needed.

An Access Point (AP for short) provides essentially the same function as a router, except that multiple APs used in conjunction project a Wi-Fi network further than a single router ever could. All APs are tied back to a central hub, which you can think of as a very large, powerful modem, which provides the internet signal via cable from the Internet Service Provider (ISP) out to the APs, and then in turn to your device.

On to Roaming

So now that you have your network set up with your central hub in DuBois (your living room) and an AP in your bedroom (Orchard Hill), what happens if you want to go between the two? The network is the same, but how is your computer supposed to know that the AP in Orchard Hill no longer has the strongest signal when you’re in DuBois? This is where roaming comes in. Based on the ‘aggressiveness’ your Wi-Fi card is set to roam at, your computer will test the connection to determine which AP has the strongest signal from your location, and then connect to it. The network is set up so that it can tell your computer that all the APs belong to the same network, allowing your connection to transfer without making you enter your credentials every time you move.

What is Roam Aggressiveness?

The ‘aggressiveness’ with which your computer roams determines how frequently and how readily your computer will switch APs. If it is set very high, your computer could be jumping between APs frequently. This can be a problem, as it can cause your connection to be interrupted repeatedly while your computer authenticates to another AP. Having the aggressiveness set very low, or disabling it, can cause your computer to ‘stick’ to one AP, making it difficult to move around and maintain a connection. Low roaming aggressiveness is the more frequent problem people run into on large networks like eduroam at UMass. If you are experiencing issues like this, you may want to change the aggressiveness to suit your needs. Here’s how:

How to Change Roam Aggressiveness on Your Device:

First, navigate to the Control Panel which can be found in your Start menu. Then click on Network and Internet.

From there, click on Network and Sharing Center. 

Then, select Wi-Fi next to Connections. Note: eduroam may not be listed next to Wi-Fi if you are not connected or are connected to a different network.

Now, select Properties and agree to continue when prompted for Administrator permissions.

Next, select Configure for your wireless card (the exact card will differ from device to device).

Finally, navigate to Advanced, and then under Property select Roaming Sensitivity Level. From there you can change the Value based on what issue you are trying to address.

And that’s all there is to it! Now that you know how to navigate to the Roaming settings, you can experiment a little to find what works best for you. Depending on your model of computer, you may have more than just High, Middle, Low values.

Changing roaming aggressiveness can be helpful for stationary devices, like desktops, too. Perhaps someone near you has violated UMass’ wireless airspace policy and set up a hotspot network or a wireless printer. Their setup may interfere with the AP closest to you, which can cause packet loss, latency (ping) spikes, or even a brief inability to connect at all. Changing your roaming settings can help your computer move to the next-best AP while the interference is occurring, resulting in a more continuous experience for you.
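If you want to see the effect of your changes, the short Python sketch below (an assumption on my part: you’re on Windows and comfortable running a script) polls the built-in netsh tool and prints the BSSID, which identifies the specific AP you are connected to, along with the signal strength. Watching those two values while you walk around, or while interference comes and goes, shows exactly when your card decides to roam.

# A minimal sketch (Windows only): poll the current AP and signal strength
# using the built-in "netsh wlan show interfaces" command.
import re
import subprocess
import time

def current_connection():
    """Return (bssid, signal_percent) for the active Wi-Fi interface, or None."""
    output = subprocess.run(
        ["netsh", "wlan", "show", "interfaces"],
        capture_output=True, text=True
    ).stdout
    bssid = re.search(r"^\s*BSSID\s*:\s*(.+)$", output, re.MULTILINE)
    signal = re.search(r"^\s*Signal\s*:\s*(\d+)%", output, re.MULTILINE)
    if bssid and signal:
        return bssid.group(1).strip(), int(signal.group(1))
    return None

if __name__ == "__main__":
    for _ in range(30):                 # watch for about a minute
        info = current_connection()
        print(info if info else "Not connected")
        time.sleep(2)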

Categories
Hardware

RRAM: A Retrospective Analysis of the Future of Memory

Mechanisms of Memory

Since the dawn of digital computation, the machine has only known one language: binary.  This strange concoction of language and math has existed physically in many forms since the beginning.  In its simplest form, binary represents numerical values using only two values, 1 and 0.  This makes mathematical operations very easy to perform with switches. It also makes it very easy to store information in a very compact manner.
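As a quick illustration (ordinary Python here, nothing specific to any particular storage hardware), the number 42 written out as the on/off switch states a machine would actually store looks like this:

# The decimal number 42 stored as eight on/off switch states.
value = 42
bits = format(value, "08b")
print(bits)            # '00101010'
print(int(bits, 2))    # back to 42: each 1 contributes a power of two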

Early iterations of data storage employed some very creative thinking and some strange properties of materials.

 

IBM 80-Column Punch Card

One of the older (and simpler) methods of storing computer information was the punch card.  As the name suggests, punch cards would have sections punched out to indicate different values.  Punch cards allowed for the storage of binary as well as decimal and character values.  However, they had an extremely low capacity, occupied a lot of space, and were subject to rapid degradation.  For these reasons, punch cards were phased out along with black-and-white TV and drive-in movie theaters.

Macroscopic Image of Ferrite Memory Cores

Digital machines had the potential to view and store data using far less intuitive methods.  The king of digital memory from the 1960s until the mid-to-late ’70s was magnetic core memory.  By far one of the prettiest things ever made for the computer, this form of memory was constructed from a lattice of interconnected ferrite beads.  These beads could be magnetized momentarily when a current of electricity passed near them.  Upon demagnetizing, they would induce a current in a nearby wire.  This current could be used to measure the binary value stored in that bead: current flowing = 1, no current = 0.

Even more peculiar was the delay-line memory used in the 1960s.  Though occasionally implemented on a large scale, delay-line units were primarily used in smaller computers, as they were never even remotely reliable.  Data was stored in the form of pulsing twists traveling through a long coil of wire.  This meant that data could be corrupted if one of your fellow computer scientists slammed the door to the laboratory or dropped his pocket protector near the computer.  It also meant that the data in the coil had to be constantly read and refreshed every time the twists traveled all the way through the coil, which, as anyone who has ever played with a spring knows, does not take long.

Delay-Line Memory from the 1960s

This issue of constant refreshing may seem like a problem of days past, but the DRAM used in modern computers has to do it too: each memory cell’s charge leaks away and must be periodically read and rewritten, which cuts into the useful work the memory can do every cycle.  (The DDR in DDR memory actually stands for double data rate, referring to data being transferred on both the rising and falling edges of the clock.)  Furthermore, on ECC memory modules, only 64 bits of the 72-bit DIMM connection carry data; the remaining 8 bits are reserved for Hamming error correction.  So a meaningful slice of the memory system’s pins and effort goes to housekeeping rather than computation, and RAM is still slow enough that most computers now come with three levels of cache memory whose sole purpose is to guess what data the processor will need, in the hope of reducing the processor’s trips to RAM.

DDR Memory Chip on a Modern RAM Stick
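To make the error-correction idea concrete, here is a toy Hamming(7,4) encoder and decoder in Python.  It is a deliberately miniature cousin of the wider code used on ECC DIMMs (a sketch for illustration only, not the exact code a memory controller runs): four data bits get three parity bits, and any single flipped bit can be located and repaired.

# Toy Hamming(7,4): 4 data bits -> 7-bit codeword that can correct any single bit flip.
def encode(d):                       # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def decode(c):                       # c = 7-bit codeword, possibly with one flipped bit
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the bad bit, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1         # repair the single-bit error
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
sent = encode(word)
sent[4] ^= 1                         # simulate a bit flipped in storage
print(decode(sent) == word)          # True: the error was found and corrected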

Even SRAM (the faster and more stable kind of memory used in cache) is not perfect, and it is extremely expensive.  A MB of data on a RAM stick will run you about one cent, while a MB of cache can be as costly as $10.  What if there were a better way of making memory that was more similar to those ferrite cores I mentioned earlier?  What if this new form of memory could also be written and read with speeds orders of magnitude greater than DDR RAM or SRAM cache?  What if this new memory also shared characteristics with human memory and neurons?

 

Enter: Memristors and Resistive Memory

As silicon-based transistor technology looks to be slowing down, there is something new on the horizon: resistive RAM.  The idea is simple: there are materials out there whose electrical properties can be changed by having a voltage applied to them.  When the voltage is taken away, these materials are changed and that change can be measured.  Here’s the important part: when an equal but opposite voltage is applied, the change is reversed and that reversal can also be measured.  Sounds like something we learned about earlier…

The change that takes place in these magic materials is in their resistivity.  After the voltage is applied, the extent to which these materials resist a current of electricity changes.  This change can be measured, and therefore binary data can be stored.

A Microscopic Image of a Series of Memristors
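Here is that read/write cycle as a toy model in Python (a sketch of the concept only, not a physical simulation of any real memristor): writing a bit pushes the cell’s resistance toward a low or high state, and reading just compares the measured resistance against a threshold.

# A toy resistive memory cell: the stored bit lives in the cell's resistance.
LOW_OHMS, HIGH_OHMS, THRESHOLD = 1_000, 100_000, 50_000

class ResistiveCell:
    def __init__(self):
        self.resistance = HIGH_OHMS          # start in the high-resistance ("0") state

    def write(self, bit):
        # A voltage pulse of one polarity lowers resistance; the opposite pulse raises it.
        self.resistance = LOW_OHMS if bit else HIGH_OHMS

    def read(self):
        # Measure the resistance and interpret it as a stored binary value.
        return 1 if self.resistance < THRESHOLD else 0

cell = ResistiveCell()
cell.write(1)
print(cell.read())   # 1, and the state persists with no power or refresh cycles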

Also at play in the coming resistive memory revolution is speed.  Every transistor ever made is subject to something called propagation delay: the amount of time required for a signal to traverse the transistor.  As transistors get smaller and smaller, this time is reduced.  However, transistors cannot get very much smaller because of quantum uncertainty in position: a switch is no use if the thing you are trying to switch on and off can just teleport past the switch.  This is the kind of behavior common among very small transistors.

Because the memristor does not use any kind of transistor, we could see near-speed-of-light propagation delays.  This means resistive RAM could be faster than DDR RAM, faster than cache, and someday maybe even faster than the registers inside the CPU.

There is one more interesting aspect here.  Memristors also have a tendency to “remember” data long after it has been erased and overwritten.  Modern memory does this too, but because the resistance of the memristor changes with use, large arrays of memristors could develop sections with lower resistance due to frequent accessing and overwriting.  This behavior is very similar to the human brain; memory that’s accessed a lot tends to be easy to… well… remember.

Resistive RAM looks to be, at the very least, a part of the far-reaching future of computing.  One day we might have computers which can not only recall information with near-zero latency, but possibly even know the information we’re looking for before we request it.

Categories
Hardware Linux Mac OSX Operating System Windows

What is S.M.A.R.T?

Have you ever thought your computer might be dying but not known why? Symptoms people might be familiar with include slowing down, increased startup time, programs freezing, constant disk usage, and audible clicking. While these symptoms happen to a lot of people, they don’t necessarily mean the hard drive is circling the drain. With a practically unlimited number of other things that could make the computer slow down and become unusable, how are you supposed to find out exactly what the problem is? Fortunately, the most common part to fail in a computer, the hard drive (or data drive), has a built-in testing technology that users themselves can use to diagnose their machines without handing over big bucks to a computer repair store or having to buy an entirely new computer if theirs is out of warranty.

Enter SMART (Self-Monitoring, Analysis and Reporting Technology). SMART is a monitoring suite that checks computer drives for a list of parameters that would indicate drive failure. SMART collects and stores data about the drive including errors, failures, times to spin up, reallocated sectors, and read/write abilities. While many of these attributes may be confusing in definition and even more confusing in their recorded numerical values, SMART software can predict a drive failure and even notify the user that it has detected a failing drive. The user can then look at the results to verify or, if unsure, bring the computer to a repair store for verification and a drive replacement.

So how does one get access to SMART? Many computers include built-in diagnostic suites that can be accessed via a boot option when the computer first turns on. Other manufacturers require that you download an application within your operating system that can run a diagnostic test. These diagnostic suites will usually check the SMART status, and if the drive is in fact failing, the suite will report that the drive is failing or has failed. However, most of these manufacturer diagnostics will simply say passed or failed; if you want access to the specific SMART data, you will have to use a Windows program such as CrystalDiskInfo, a Linux program such as GSmartControl, or SMART Utility for Mac OS.

These SMART monitoring programs are intelligent enough to detect when a drive is failing and give you ample time to back up your data. Remember, computer parts can always be replaced; lost data is lost forever. It should be noted, however, that SMART doesn’t always detect when a drive fails. If a drive suffers a catastrophic failure, like a physical drop or water damage while powered on, SMART cannot predict it. Therefore, while SMART is a useful tool for assessing whether a drive is healthy, it is strongest when used in tandem with a good, reliable backup system and not as standalone protection against data loss.
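If you are comfortable on the command line, the cross-platform smartmontools package exposes the same data through its smartctl utility, and the Python sketch below simply wraps it (assumptions on my part: smartmontools is installed, your drive lives at /dev/sda, and you have administrator or root privileges; adjust all of that for your system).

# A minimal sketch: ask smartctl (from smartmontools) for a drive's health and attributes.
import subprocess

DRIVE = "/dev/sda"   # adjust for your system

# Overall verdict, e.g. "SMART overall-health self-assessment test result: PASSED"
health = subprocess.run(["smartctl", "-H", DRIVE], capture_output=True, text=True)
print(health.stdout)

# Full attribute table: reallocated sectors, spin-up time, and friends
attributes = subprocess.run(["smartctl", "-A", DRIVE], capture_output=True, text=True)
print(attributes.stdout)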

Categories
Hardware

Transit by Wire – Automating New York’s Aging Subways

When I left New York in January, the city was in high spirits about its extensive subway system.  After almost 50 years of construction, and almost 100 years of planning, the shiny, new Second Avenue subway line had finally been completed, bringing direct subway access to one of the few remaining underserved areas in Manhattan.  The city rallied around the achievement.  I myself stood with fellow elated riders as the first Q train pulled out of the 96th Street station for the first time, Governor Andrew Cuomo’s voice crackling over the train’s PA system, assuring riders that he was not driving the train.

In a rather ironic twist of fate, the brand-new line was plagued, on its first ever trip, by an issue that has been affecting the entire subway system since its inception: the ever-present subway delay.

A small group of transit workers gathered in the tunnel in front of the stalled train to investigate a stubborn signal.  The signal was seeing its first ever train, yet its red light seemed as though it had been petrified by 100 years of 24-hour operation, just like the rest of them.

Track workers examine malfunctioning signal on Second Avenue Line

When I returned to New York to participate in a summer internship at an engineering firm near Wall Street, the subway seemed to be falling apart.  Having lived in the city for almost 20 years and having dealt with the frequent subway delays on my daily commute to high school, I had no reason to believe my commute to work would be any better… or any worse.  However, I started to see things that I had never seen: stations at rush hour with no arriving trains queued on the station’s countdown clock, trains so packed in every car that not a single person was able to board, and new conductors whose sole purpose was to signal to the train engineers when it was safe to close the train doors since platforms had become too consistently crowded to reliably see down.

At first, I was convinced I was imagining all of this.  I had been living in the wide-open and sparsely populated suburbs of Massachusetts and maybe I had simply forgotten the hustle and bustle of the city.  After all, the daily ridership on the New York subway is roughly double the entire population of Massachusetts.  However, I soon learned that the New York Times had been cataloging the recent and rapid decline of the city’s subway.  In February, the Times reported a massive jump in the number of train delays per month, from 28,000 per month in 2012 up to 70,000 at the time of publication.

What on earth had happened?  Some New Yorkers have been quick to blame Mayor Bill de Blasio.  However, the Metropolitan Transportation Authority, the entity which owns and operates the city subway, is controlled by the state and thus falls under the jurisdiction of Governor Andrew Cuomo.  Then again, it’s not really Mr. Cuomo’s fault either.  In fact, it’s no one person’s fault at all!  The subway has been dealt a dangerous cocktail of severe overcrowding and rapidly aging infrastructure.

 

Thinking Gears that Run the Trains

Anyone with an interest in early computer technology is undoubtedly familiar with the mechanical computer.  Before Claude Shannon showed how electrical circuits could process information in binary, all we had to process information were large arrays of gears, springs, and some primitive analog circuits which were finely tuned to complete very specific tasks.  Some smaller mechanical computers could be found aboard fighter jets to help pilots compute projectile trajectories.  If you saw The Imitation Game, you may recall the large machine Alan Turing built to decode encrypted radio transmissions during the Second World War.

Interlocking machine similar to that used in the NYC subway

New York’s subway had one of these big, mechanical monsters after the turn of the century; in fact, New York still has it.  Its name is the interlocking machine, and its job is simple: make sure two subway trains never end up in the same place at the same time.  Yes, this big, bombastic hunk of metal is all that stands between the train dispatchers and utter chaos.  Its worn metal handles are connected directly to signals, track switches, and little levers designed to trip the emergency brakes of trains that roll past red lights.

The logic followed by the interlocking machine is about as complex as engineers could make it in 1904:

  • Sections of track are divided into blocks, each with a signal and an emergency brake-trip at its entrance.
  • When a train enters a block, a mechanical switch is triggered and the interlocking machine turns the signal at the entrance of the block red and activates the brake-trip.
  • After the train leaves the block, the interlocking machine switches the track signal back to green and deactivates the brake-trip.

Essentially a very large finite-state machine, the interlocking machine was revolutionary back at the turn of the century.  At the time, though, some things were also acting in the machine’s favor; for instance, there were only three and a half million people living in New York, they were all only five feet tall, and the machine was brand new.
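For the programmers in the room, that block logic boils down to a few lines of code.  The Python sketch below is a deliberately simplified model of fixed-block signaling, not the MTA’s actual control logic: each block is either occupied or clear, its signal and brake-trip follow directly from that, and a train may only advance into a clear block.

# A simplified model of fixed-block signaling: one train per block, nothing more.
class Block:
    def __init__(self, name):
        self.name = name
        self.occupied = False

    @property
    def signal(self):
        return "RED" if self.occupied else "GREEN"

    @property
    def brake_trip_active(self):
        return self.occupied

def advance(train_position, blocks):
    """Move a train one block forward, refusing to enter an occupied block."""
    nxt = train_position + 1
    if nxt < len(blocks) and blocks[nxt].occupied:
        return train_position                   # hold at the red signal
    blocks[train_position].occupied = False     # release the old block...
    if nxt < len(blocks):
        blocks[nxt].occupied = True             # ...and claim the next one
    return nxt

track = [Block(f"B{i}") for i in range(4)]
track[0].occupied = True                        # our train sits in block 0
track[2].occupied = True                        # another train sits in block 2
pos = advance(0, track)                         # moves into block 1
pos = advance(pos, track)                       # blocked: block 2 is occupied
print(pos, [b.signal for b in track])           # 1 ['GREEN', 'RED', 'RED', 'GREEN']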

As time moved on, the machine aged, and so too did the society around it.  After the Second World War, we replaced the bumbling network of railroads with an even more extensive network of interstate highways.  The train signal block, occupied by only one train at a time, was replaced by a simpler mechanism: the speed limit.

However, the MTA and the New York subways have lagged behind.  The speed and frequency of train service remains limited by how many train blocks were physically built into the interlocking machines (yes, in full disclosure, there is more than one interlocking machine, but they all share the same principles of operation).  This has made it extraordinarily difficult for the MTA to improve train service; all the MTA can do is maintain the aging infrastructure.  The closest thing the MTA has to a system-wide software update is a lot of WD-40.

 

Full-Steam Ahead

There is an exception to the constant swath of delays… two, actually.  In the 1990s, and then again recently, the MTA did yank the old signals and interlocking machines from two subway lines and replace them with a fully automated fleet of trains, controlled remotely by a digital computer.  In an odd twist of fate, the subway evolved straight from its nineteenth-century roots to Elon Musk’s age of self-driving vehicles.

The two lines selected were easy targets: both serve large swaths of suburb in Brooklyn and Queens, and both are two-track lines, meaning they have no express service.  This made the switch to automated trains relatively easy, and the switch was effective!  Of all the lines in New York, the two automated lines have seen the smallest reduction in on-time train service.  The big switch also had some additional benefits, like accurate countdown clocks in stations, a smoother train ride (especially when stopping and taking off), and the ability for train engineers to play Angry Birds during their shifts (yes, I have seen this).

The first to receive the update was the city’s then-obscure L line.  The L is one of only two trains to traverse the width of Manhattan Island and is the transportation backbone for many popular neighborhoods in Brooklyn.  In recent years, these neighborhoods have seen a spike in population due, in part, to frequent and reliable train service.

L train at its terminal station in Canarsie, Brooklyn

The contrast between the automated lines and the gear-box-controlled lines is astounding.  A patron of the subway can stand on a train platform waiting for an A or C train for half an hour… or they could stand on another platform and see two L trains at once on the same stretch of track.

The C line runs the oldest trains in the system, most of them over 50 years old.

The city also elected to upgrade the 7 line, the only other line in the city to traverse the width of Manhattan and one of only two main lines to run through the center of Queens.  Work on the 7 is set to finish soon, and the results look promising.

Unfortunately for the rest of the city’s system, the switch to automatic train control on those two lines was neither cheap nor quick.  In 2005, it was estimated that a system-wide transition to computer-controlled trains would not be completed until 2045.  Some other cities, most notably London, made the switch to automated trains years ago.  It is tough to say why New York has lagged behind, but it most likely has to do with the immense ridership of the New York system.

New York is the largest American city by population and by land area.  This makes other forms of transportation far less viable when traveling through the city.  After the public opinion of highways in the city was ruined in the 1960s following the destruction of large swaths of the South Bronx, many of the city’s neighborhoods have been left nearly inaccessible by car.  Although New York is a very walkable city, its massive size makes commuting on foot from the suburbs to Manhattan impractical as well.  Thus the subways must run every day and for every hour of the day.  If the city wants to shut down a line to do repairs, it often can’t.  Oftentimes, lines are only closed for repairs on weekends and nights for a few hours.

 

Worth the Wait?

Even though it may take years for the subway to upgrade its signals, the city has no other option.  As discussed earlier, the interlocking machine can only support so many trains on a given length of track.  On the automated lines, transponders are placed every 500 feet, supporting many more trains on the same length of track.  Trains can also be stopped instantly instead of having to travel to the next red-signaled block.  With the number of derailments and stalled trains climbing, this unique ability of the remote-controlled trains is invaluable.  Additionally, automated trains running on four-track lines with express service could re-route instantly to adjacent tracks in order to completely bypass stalled trains.  Optimization algorithms could be implemented to keep a constant and dynamic flow of trains.  Trains could be controlled more precisely during acceleration and braking to conserve power and prolong the life of the train.

For the average New Yorker, these changes would mean shorter wait times, less frequent train delays, and a smoother and more pleasant ride.  In the long term, the MTA would most likely save millions of dollars in repair costs without the clunky interlocking machine.  New Yorkers would also save entire lifetimes worth of time on their commutes.  The cost may be high, but unless the antiquated interlocking machines are put to rest, New York will be paying for it every day.

Categories
Hardware

Water Damage: How to prevent it, and what to do if it happens

Getting your tech wet is one of the most common things people worry about when it comes to their devices. Rightfully so; water damage is often excluded from manufacturer warranties, can permanently ruin technology under the right circumstances, and is one of the easiest things to do to a device without realizing it.

What if I told you that water, in general, is one of the easiest and least-likely things to ruin your device, if reacted to properly?

Don’t get me wrong; water damage is no laughing matter. It’s the second most common reason that tech ends up kicking the bucket, the most common being drops (but not for the reason you might think). While water can quite easily ruin a device within minutes, most, if not all of its harm can be prevented if one follows the proper steps when a device does end up getting wet.

My goal with this article is to highlight why water damage isn’t as bad as it sounds, and most importantly, how to react properly when your shiny new device ends up the victim of either a spill… or an unfortunate swan dive into a toilet.

_________________

Water, in its purest form, is pretty awful at conducting electricity. However, because most of the water that we encounter on a daily basis is chock-full of dissolved ions, it’s conductive enough to cause serious damage to technology if not addressed properly.

If left alone, the conductive ions in the water will bridge together several points on your device, potentially allowing for harmful bursts of electricity to be sent places which would result in the death of your device.

While that does sound bad, here’s one thing about water damage that you need to understand: you can effectively submerge a turned-off device in water, and as long as you fully dry the whole thing before turning it on again, there’s almost no chance that the water will cause any serious harm.


You need to react fast, but right. The worst thing you can do to your device once it gets wet is try to turn it on or ‘see if it still works’. The very moment that a significant amount of water gets on your device, your first instinct should be to fully power off the device, and once it’s off, disconnect the battery if it features a removable one.

As long as the device is off, it’s very unlikely that the water will be able to do anything significant, even less so if you unplug the battery. The amount of time you have to turn off your device before the water does any real damage is, honestly, complete luck. It depends on where the water seeps in, how conductive it was, and how the electricity short circuited itself if a short did occur. Remember, short circuits are not innately harmful, it’s just a matter of what ends up getting shocked.

Once your device is off, your best chance for success is to be as thorough as you possibly can when drying it. Dry any visible water off the device, and try to let it sit out in front of a fan or something similar for at least 24 hours (though please don’t put it near a heater).

Rice is also great at drying your devices, especially smaller ones. Simply submerge the device in (unseasoned!) rice, and leave it again for at least 24 hours before attempting to power it on. Since rice is so great at absorbing liquids, it helps to pull out as much water as possible.


If the device in question is a laptop or desktop computer, bringing it down to us at the IT User Services Help Center in Lederle A109 is an important option to consider. We can take the computer back into the repair center and take it apart, making sure that everything is as dry as possible so we can see if it’s still functional. If the water did end up killing something in the device, we can also hopefully replace whatever component ended up getting fried.

Overall, there are three main points to be taken from this article:

Number one, spills are not death sentences for technology. As long as you follow the right procedures, making sure to immediately power off the device and not attempt to turn it back on until it’s thoroughly dried, it’s highly likely that a spill won’t result in any damage at all.

Number two is that, when it comes to water damage, speed is your best friend. The single biggest thing to keep in mind is that, the faster you get the device turned off and the battery disconnected, the faster it will be safe from short circuiting itself.

Lastly, and a step that many of us forget about when it comes to stuff like this: take your time. A powered-off device that was submerged in water has a really good chance of being usable again, but that chance goes out the window if you try to turn it on too early. I’d suggest that smartphones and tablets, at the very least, get a thorough air drying followed by at least 24 hours in rice. For laptops and desktops, however, your best bet is to either open the machine up yourself or bring it down to the Help Center so we can open it up and make sure it’s thoroughly dry. You have all the time in the world to dry it off, so don’t ruin your shot at fixing it by testing it too early.

I hope this article has helped you understand why not to be afraid of spills, and what to do if one happens. By following the procedures I outlined above, and with a little bit of luck, it’s very likely that any waterlogged device you end up with could survive its unfortunate dip.

Good luck!

Categories
Hardware Microsoft Software Windows

Tips for Gaming Better on a Budget Laptop

Whether you came to college with an old laptop, or want to buy a new one without breaking the bank, making our basic computers faster is something we’ve all thought about at some point. This article will show you some software tips and tricks to improve your gaming experience without losing your shirt, and at the end I’ll mention some budget hardware changes you can make to your laptop. First off, we’re going to talk about in-game settings.

 

In-Game Settings:

All games have built in settings to alter the individual user experience from controls to graphics to audio. We’ll be talking about graphics settings in this section, primarily the hardware intensive ones that don’t compromise the look of the game as much as others. This can also depend on the game and your individual GPU, so it can be helpful to research specific settings from other users in similar positions.

V-Sync:

V-Sync, or Vertical Synchronization, allows a game to synchronize the framerate with that of your monitor. Enabling this setting will increase the smoothness of the game. However, for lower end computers, you may be happy to just run the game at a stable FPS that is less than your monitor’s refresh rate. (Note – most monitors have a 60Hz or 60 FPS refresh rate). For that reason, you may want to disable it to allow for more stable low FPS performance.

Anti-Aliasing:

Anti-Aliasing, or AA for short, is a rendering option which reduces the jaggedness of lines in-game. Unfortunately, the additional smoothness heavily impacts hardware usage, and disabling it while keeping other things like texture quality or draw distance higher can yield big performance improvements without hurting a game’s appearance too much. Additionally, there are many different kinds of AA that games might have settings for. MSAA (Multisample AA) and the even more intensive TXAA (Temporal AA) are better smoothing processes that have an even bigger impact on performance, so turning these off on lower-end machines is almost always a must. FXAA (Fast Approximate AA) uses the least processing power and can therefore be a nice setting to leave on if your computer can handle it.

Anisotropic Filtering (AF):

This setting sharpens textures that are viewed at an angle, such as the ground stretching away from your character, which would otherwise look blurry in the distance. The extra filtering requires additional calculations, so turning it down or off can yield a small performance improvement, at the cost of distant surfaces looking a bit muddier.

Other Settings:

While the aforementioned are the heaviest hitters in terms of performance, changing some other settings can help increase stability and performance too (beyond just simple texture quality and draw distance tweaks). Shadows and reflections are often unnoticed compared to other effects, so while you may not need to turn them off, turning them down can definitely make an impact. Motion blur should be turned off completely, as it can make quick movements result in heavy lag spikes.

Individual Tweaks:

The guide above is a good starting point for graphics settings; because there are so many different hardware models, there is an equally large number of combinations of settings. From this point, you can start to increase settings slowly to find the sweet spot between performance and quality.

Software:

Before we talk about some more advanced tips, it’s good practice to close applications that you are not using to increase free CPU, Memory, and Disk space. This alone will help immensely in allowing games to run better on your system.

Task Manager Basics:

Assuming you’ve tried to game on a slower computer, you’ll know how annoying it is when the game is running fine and suddenly everything slows down to slideshow speed and you fall off a cliff. Chances are that this kind of lag spike is caused by other "tasks" running in the background, preventing the game you are running from using the power it needs to keep going. Or perhaps your computer has been on for a while, so when you start the game, it runs slower than its maximum speed. Even though you hit the "X" button on a window, what’s called the "process tree" may not have been completely terminated. (Think of this like cutting down a weed but leaving the roots.) This can result in resources being taken up by idle programs that you aren’t using right now.

It’s at this point that Task Manager becomes your best friend. To open Task Manager, simply press CTRL + SHIFT + ESC, or press CTRL + ALT + DEL and select Task Manager from the menu. When it first appears, you’ll notice that only the programs you have open are listed; click the "More Details" button at the bottom of the window to expand Task Manager. Now you’ll see a series of tabs, the first one being "Processes," which gives you an excellent overview of everything your CPU, Memory, Disk, and Network are crunching on. Clicking on any of these columns will bring the processes using the highest amount of that resource to the top.

Now you can see what’s really using your computer’s processing power. It is important to realize that many of these processes are part of your operating system and therefore cannot be terminated without causing system instability. However, things like Google Chrome and other applications can be closed by right-clicking and hitting "End Task". If you’re ever unsure of whether you can safely end a process, a quick Google of the process in question will most likely point you in the right direction.
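If you prefer to script this kind of check, the short Python sketch below does roughly what sorting the Memory column in Task Manager does: it lists the ten processes using the most memory. It relies on the third-party psutil package (an assumption on my part; install it with pip install psutil).

# List the ten processes using the most memory, using the third-party psutil package.
import psutil

procs = []
for p in psutil.process_iter(attrs=["pid", "name", "memory_info"]):
    mem = p.info.get("memory_info")
    if mem is None:
        continue                                 # some system processes hide their details
    procs.append((mem.rss, p.info.get("name") or "?", p.info["pid"]))

for rss, name, pid in sorted(procs, key=lambda t: t[0], reverse=True)[:10]:
    print(f"{rss / 1024 / 1024:8.1f} MB  {name}  (PID {pid})")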

Startup Processes:

Here is where you can really make a difference to your computer’s overall performance, not just for gaming. From Task Manager, if you select the "Startup" tab, you will see a list of all programs and services that can start when your computer is turned on. Task Manager gives each entry an impact rating of how much it slows down your computer’s boot time. The gaming app Steam, for example, can noticeably slow down a computer on startup. A good rule of thumb is to allow virus protection to start with Windows; everything else is up to individual preference. Disabling these processes on startup prevents unnecessary tasks from ever being opened and leaves more hardware resources available for gaming.

Power Usage:

You probably know that unlike desktops, laptops contain a battery. What you may not know is that you can alter your battery’s behavior to increase performance, as long as you don’t mind it draining a little faster. On the taskbar, which is by default located at the bottom of your screen, you will notice a collection of small icons next to the date and time on the right, one of which looks like a battery. Left-clicking it brings up the battery menu; right-clicking it, however, brings up a menu with a "Power Options" entry.


Clicking this will bring up a settings window which allows you to change and customize your power plan for your needs. By default it is set to “Balanced”, but changing to “High Performance” can increase your computer’s gaming potential significantly. Be warned that battery duration will decrease on the High Performance setting, although it is possible to change the battery’s behavior separately for when your computer is using the battery or plugged in.
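If you find yourself flipping between plans often, you can also script the change instead of digging through Control Panel. The sketch below is Windows-only and shells out to the built-in powercfg tool; the SCHEME_MIN alias corresponds to the stock High Performance plan and SCHEME_BALANCED to Balanced, though the plans present vary by machine, so run powercfg /list first to see what you actually have.

# Switch Windows power plans from Python by calling the built-in powercfg tool.
import subprocess

def set_power_plan(alias):
    # Stock aliases: SCHEME_MIN = High performance, SCHEME_BALANCED = Balanced,
    # SCHEME_MAX = Power saver. "powercfg /list" shows the plans on your machine.
    subprocess.run(["powercfg", "/setactive", alias], check=True)
    active = subprocess.run(["powercfg", "/getactivescheme"],
                            capture_output=True, text=True).stdout
    print(active.strip())

set_power_plan("SCHEME_MIN")      # switch back with set_power_plan("SCHEME_BALANCED")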

Hardware:

Unlike desktops, for laptops there are not many upgrade paths. However one option exists for almost every computer that can have a massive effect on performance if you’re willing to spend a little extra.

Hard Disk (HDD) to Solid State (SSD) Drive Upgrade:

Chances are that if you have a budget computer, it probably came with a traditional spinning hard drive. For manufacturers this makes sense, as HDDs are cheaper than solid state drives and work perfectly well for light use. Games, however, can be very demanding on laptop HDDs, asking them to recall and store data very quickly and sometimes causing them to fall behind. Additionally, laptops have motion sensors built in which restrict read/write operations when the computer is in motion, to prevent damage to the spinning disk inside the HDD. An upgrade to an SSD not only eliminates this restriction, but also offers much faster read/write times due to the lack of any moving parts. Although SSDs can get quite expensive depending on the size you want, companies such as Crucial or Kingston offer comparatively cheap alternatives to Samsung or Intel while still giving you the core benefits of an SSD. Although there are a plethora of tutorials online demonstrating how to install a new drive into your laptop, make sure you’re comfortable with all the risks before attempting it, or simply take your laptop to a repair store and have them do it for you. It’s worth mentioning that when you install a new drive, you will need to reinstall Windows and all your applications from your old drive.

Memory Upgrade (RAM):

Some laptops have an extra memory slot, or just ship with a lower capacity than what they are capable of holding. Most budget laptops will ship with 4GB of memory, which is often not enough to support both the system, and a game.

Upgrading or increasing memory can give your computer more headroom to process and store data without bogging down your entire system. Unlike with SSD upgrades, memory is very specific, and it is very easy to buy a new stick that fits in your computer but does not function with its other components. It is therefore critical to do your research before buying any more memory for your computer; that includes finding out your model’s maximum capacity, speed, and generation. The online technology store Newegg has a service here that can help you find compatible memory types for your machine.
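Before you buy anything, it also helps to know what you already have and how much of it you are actually using. The quick check below again uses the third-party psutil package (an assumption: pip install psutil); compare the total against your model’s documented maximum.

# Report installed and in-use memory using the third-party psutil package.
import psutil

mem = psutil.virtual_memory()
print(f"Total:     {mem.total / 1024**3:5.1f} GB")
print(f"In use:    {mem.used / 1024**3:5.1f} GB")
print(f"Available: {mem.available / 1024**3:5.1f} GB ({100 - mem.percent:.0f}% free)")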

Disclaimer: 

While these tips and tricks can help your computer to run games faster, there is a limit to what hardware is capable of. Budget laptops are great for the price point, and these user tricks will help squeeze out all their potential, but some games will simply not run on your machine. Make sure to check a game’s minimum and recommended specs before purchasing/downloading. If your computer falls short of minimum requirements, it might be time to find a different game or upgrade your setup.

Categories
Hardware

Quantum Computers: How Google & NASA are pushing Artificial Intelligence to its limit


“If you think you understand quantum physics, you don’t understand quantum physics.” Richard Feynman’s remark speaks to the fact that we simply do not yet fully understand the mechanics of the quantum world. NASA, Google, and D-Wave are trying to figure this out as well, aiming to revolutionize our understanding of physics and computing with one of the first commercial quantum computers, which they claim runs 100 million times faster than traditional computers on certain problems.

Quantum Computers: How they work

To understand how quantum computers work, you must first recognize how traditional computers work. For several decades, the base component of a computer processor has been the transistor. A transistor either allows or blocks the flow of electrons (aka electricity) with a gate, so it can be in one of two possible states: on or off, flowing or not flowing. The value of a transistor is binary, and binary digits, or bits for short, are used to represent digital information. Bits are very basic, but paired together they can represent exponentially more possible values as they are added. Therefore, more transistors means faster data processing. To fit more transistors on a silicon chip we must keep shrinking their size. Transistors nowadays have gotten so small that they measure only 14 nm across. That is about 8x smaller than an HIV virus and 500x smaller than a red blood cell.

As transistors approach the size of only a few atoms, electrons can simply pass through a blocked gate, a phenomenon called quantum tunneling. In the quantum realm, physics works differently from what we are used to, and computers start making less and less sense at this scale. We are starting to see a physical barrier to the efficiency of our technology, but scientists are now using these unusual quantum properties to their advantage to develop quantum computers.

Introducing the Qubit!

Where traditional computers use bits as their smallest unit of information, quantum computers use qubits. Like bits, qubits can represent the values 0 or 1. One way to build a qubit is with a photon, where its polarization represents the value; what separates qubits from bits is that they can also be in any proportion of both states at once, a property called superposition. You can test the value of a photon by passing it through a filter, and it will collapse to be either vertically or horizontally polarized (0 or 1). Unobserved, the qubit is in a superposition with probabilities for either state, but the instant you measure it, it collapses to one of the definite states. This is a game-changer for computing.


When normal bits are lined up they can represent one of many possible values. For example, 4 bits can represent one of 16 (2^4) possible values depending on their orientation. 4 qubits on the other hand can represent all of these 16 combinations at once, with each added qubit growing the number of possible outcomes exponentially!
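To make that concrete, here is a tiny state-vector simulation in Python with NumPy. It is a toy model running on an ordinary classical machine (which is precisely the kind of brute-force bookkeeping a real quantum computer avoids): putting 4 qubits through Hadamard gates produces a state with 16 equal amplitudes, one for every possible 4-bit value at once, and measuring collapses it to a single outcome.

# A toy state-vector simulation: 4 qubits in uniform superposition hold an
# amplitude for all 16 possible 4-bit values at the same time.
import numpy as np

H = np.array([[1,  1],
              [1, -1]]) / np.sqrt(2)      # Hadamard gate: |0> -> (|0> + |1>)/sqrt(2)

state = np.zeros(2 ** 4)
state[0] = 1.0                            # start in |0000>

H4 = H
for _ in range(3):                        # build the 4-qubit gate H kron H kron H kron H
    H4 = np.kron(H4, H)
state = H4 @ state

print(state)                              # 16 amplitudes, each 0.25
print(np.sum(state ** 2))                 # the 16 probabilities sum to 1
outcome = np.random.choice(16, p=state ** 2)   # measurement collapses to one value
print(format(outcome, "04b"))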

Qubits also exhibit another property we call entanglement: a close connection that makes qubits react to a change in each other’s state instantaneously, regardless of the distance between them. This means that when you measure the value of one qubit, you can deduce the value of another without even having to look at it!

Traditional vs Quantum: Calculations Compared

Performing logic on traditional computers is pretty simple. Computers perform logic with logic gates, which take a simple set of inputs and produce a single output (based on AND, OR, XOR, and NAND). For example, two bits being 0 (false) and 1 (true) passed through an AND gate give 0, since both bits aren’t true. 0 and 1 passed through an OR gate give 1, since only one of the two needs to be true for the outcome to be true. Quantum gates work on a much more complex level: they take an input of superpositions (qubits, each with probabilities of 0 or 1), rotate these probabilities, and produce another superposition as an output. Measuring the outcome collapses the superposition into an actual sequence of 0s and 1s for one final, definite answer. What this means is that an entire set of calculations for a given setup can, in a sense, be carried out at the same time!


When you measure the result of a qubit computation, you will probably get the answer you want, but because the outcome is probabilistic, you may need to verify it and run the computation again. Even with that double-checking, exploiting the properties of superposition and entanglement can be exponentially more efficient than anything possible on a traditional computer.

What Quantum Computers mean for our future

Quantum computers will most likely not replace our home computers, but for certain tasks they are far superior. In applications such as searching corporate databases, a classical computer may need to check every entry in a table; a quantum computer can do this task in roughly the square root of that time, and for tables with billions of entries this can save a tremendous amount of time and resources. The most famous use of quantum computers is in IT security. Tasks such as online banking and browsing your email are kept secure by encryption, where a public key is published for everyone to encode messages that only you can decode. The problem is that public keys can, in principle, be used to calculate one’s secret private key, but doing the math on a normal computer would literally take years of trial and error. A quantum computer could do this in a breeze, with an exponential decrease in calculation time! Simulating the quantum world is also intensely demanding: regular computers lack the resources to model bigger structures such as molecules. So why not simulate quantum physics with actual quantum physics? Quantum simulations could, for instance, lead to insights on proteins that revolutionize medicine as we know it.


What’s going on now in Quantum Computing? How NASA & Google are using AI to reveal nature’s biggest secrets.

We’re unsure whether quantum computers will remain a specialized tool or become a big revolution for humanity. We do not know the limits of this technology, and there is only one way to find out. One of the first commercial quantum computers, developed by D-Wave, is housed at Google and NASA’s research center in California. They operate the chip at an incredible temperature, around 200 times colder than interstellar space. They are currently focused on using it to solve optimization problems: finding the best outcome given a set of data, for example the best flight path to visit a set of places you’d like to see. Google and NASA are also using artificial intelligence on this computer to further our understanding of the natural world. Since it operates on quantum-level mechanics beyond our full knowledge, we can ask it questions that we may never otherwise be able to figure out. Questions such as "are we alone?" and "where did we come from?" can be explored. We have evolved into creatures that are able to ask about the nature of physical reality, and being able to probe the unknown is even more awesome as a species. We have the power to do it and we must do it, because that is what it means to be human.

Categories
Hardware Software

A Basic Guide to Digital Audio Recording

The Digital Domain


Since the dawn of time, humans have been attempting to record music.  For the vast majority of human history, this has been really really difficult.  Early cracks at getting music out of the hands of the musician involved mechanically triggered pianos whose instructions for what to play were imprinted onto long scrolls of paper.  These player pianos were difficult to manufacture (this was prior to the industrial revolution) and not really viable for casual music listening.  There was also the all-important phonograph, which recorded sound itself mechanically onto the surface of a wax cylinder.

If it sounds like the aforementioned techniques were difficult to use and manipulate, it’s because they were!  Hardly anyone owned a phonograph since they were expensive, recordings were hard to come by, and they really didn’t sound all that great.  Without microphones or any kind of amplification, bits of dust and debris which ended up on these phonograph records could completely obscure the original recording behind a wall of noise.

Humanity had a short stint with recording sound as electromagnetic impulses on magnetic tape.  This proved to be one of the best ways to reproduce sound (and do some other cool and important things too).  Tape was easy to manufacture, came in all different shapes and sizes, and offered a whole universe of flexibility for how sound could be recorded onto it.  Since tape recorded an electrical signal, carefully crafted microphones could be used to capture sounds with impeccable detail and loudspeakers could be used to play back the recorded sound at considerable volumes.  Also at play were some techniques engineers developed to reduce the amount of noise recorded onto tape, allowing the music to be front and center atop a thin floor of noise humming away in the background.  Finally, tape offered the ability to record multiple different sounds side-by-side and play them back at the same time.  These side-by-side sounds came to be known as ‘tracks’ and allowed for stereophonic sound reproduction.

Tape was not without its problems though.  Cheap tape would distort and sound poor.  Additionally, tape would deteriorate over time and fall apart, leaving many original recordings completely unlistenable.  Shining bright on the horizon in the late 1970s was digital recording.  This new format allowed for low-noise, low-cost, and long-lasting recordings.  The first pop music record to be recorded digitally was Ry Cooder’s Bop Till You Drop in 1979.  Digital had a crisp and clean sound that was rivaled only by the best of tape recording.  Digital also allowed for near-zero degradation of sound quality once something was recorded.

Fast-forward to today.  After 38 years of Moore’s law, digital recording has become cheap and simple.  Small audio recorders are available at low cost with hours and hours of storage for recording.  Also available are more hefty audio interfaces which offer studio-quality sound recording and reproduction to any home recording enthusiast.

 

Basic Components: What you Need

Depending on what you are trying to record, your needs may vary from the standard recording setup.  For most users interested in laying down some tracks, you will need the following.

Audio Interface (and Preamplifier): this component is arguably the most important, as it connects everything together.  The audio interface contains both analog-to-digital converters and a digital-to-analog converter; these allow it to turn sound into the language of your computer for recording, and to turn the language of your computer back into sound for playback.  These magical little boxes come in many shapes and sizes; I will discuss them in a later section, just be patient.

Digital Audio Workstation (DAW) Software: this software allows your computer to communicate with the audio interface. Depending on what operating system you have running on your computer, there may be hundreds of DAW software packages available. DAWs vary greatly in complexity, usability, and special features, but all of them offer the basic ability to record digital audio from an audio interface.

Microphone: perhaps the most obvious element of a recording setup, the microphone is one of the most exciting choices you can make when setting up a recording rig. Microphones, like interfaces and DAWs, come in all shapes and sizes. Depending on what sound you are looking for, some microphones may be more useful than others. We will delve into this momentarily.

Monitors (and Amplifier): once you have set everything up, you will need a way to hear what you are recording.  Monitors allow you to do this.  In theory, you can use any speaker or headphone as a monitor.  However, some speakers and headphones offer more faithful reproduction of sound without excessive bass and can be better for hearing the detail in your sound.

 

Audio Interface: the Art of Conversion

Two channel USB audio interface.

The audio interface can be one of the most intimidating elements of recording.  The interface contains the circuitry to amplify the signal from a microphone or instrument, convert that signal into digital information, and then convert that information back to an analog sound signal for listening on headphones or monitors.
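To demystify the conversion a bit, here is a tiny numerical sketch of the idea, assuming Python with NumPy. Real converters work on voltages inside dedicated hardware; this just shows the round trip from continuous values to integers and back that the interface performs tens of thousands of times per second.

```python
# Conceptual stand-in for A/D and D/A conversion: continuous samples in, integers out, and back
import numpy as np

def analog_to_digital(samples, bits=16):
    """Quantize samples in the range [-1.0, 1.0] to signed integers (the A/D step)."""
    max_int = 2 ** (bits - 1) - 1
    return np.round(np.clip(samples, -1.0, 1.0) * max_int).astype(np.int32)

def digital_to_analog(codes, bits=16):
    """Map the integers back onto the continuous range for playback (the D/A step)."""
    max_int = 2 ** (bits - 1) - 1
    return codes / max_int

t = np.arange(44100) / 44100                      # one second of time stamps
analog_in = 0.5 * np.sin(2 * np.pi * 440 * t)     # a 440 Hz tone at half volume
digital = analog_to_digital(analog_in)            # what gets stored on your computer
analog_out = digital_to_analog(digital)           # what gets sent back to your monitors

print(f"worst-case round-trip error: {np.max(np.abs(analog_out - analog_in)):.6f}")
```

The printed round-trip error is tiny, about 0.000015, which is exactly the kind of quantization error you would expect from 16-bit resolution.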

Interfaces come in many shapes and sizes but all do similar work.  These days, most interfaces offer multiple channels of recording at one time and can record in uncompressed CD-audio quality or better.

Once you step into the realm of digital audio recording, you may be surprised to find a lack of mp3 files. It turns out mp3 is a very special kind of digital audio format that you cannot record to directly; in practice, mp3 files are created by compressing audio that was first recorded in an uncompressed format.

You may be asking yourself, what does it mean for audio to be compressed? As an electrical engineer, it may be hard for me to explain this in a way that humans can understand, but I will try my best. Audio takes up a lot of space. Your average iPhone or Android device may only have 32 GB of space, yet most people can keep thousands of songs on their device. This is done using compression. Compression is the computer's way of listening to a piece of music and removing all the bits and pieces that most people won't notice. Soft and infrequent noises, like the sound of a guitarist's fingers scraping a string, are removed, while louder sounds, like the sound of the guitar itself, are left in. This is done using the Fourier Transform and a bunch of complicated mathematical algorithms that I don't expect anyone reading this to care about.
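To make that idea a little more concrete, here is a deliberately oversimplified sketch. This is not how mp3 actually works (real encoders use psychoacoustic models and far cleverer math), but it shows the core move: transform the audio into frequency components, throw away anything far quieter than the loudest parts, and keep the rest. Python with NumPy is assumed, and the signal is synthetic.

```python
# Toy illustration of lossy compression: keep only the loudest frequency components
import numpy as np

sample_rate = 44100
t = np.arange(sample_rate) / sample_rate            # one second of audio
loud_tone = 1.0 * np.sin(2 * np.pi * 440 * t)       # the guitar, so to speak
quiet_detail = 0.01 * np.random.randn(sample_rate)  # faint noise most listeners won't miss
signal = loud_tone + quiet_detail

spectrum = np.fft.rfft(signal)                      # look at the frequency content
threshold = 0.01 * np.max(np.abs(spectrum))         # anything below 1% of the loudest peak...
keep = np.abs(spectrum) >= threshold
spectrum[~keep] = 0                                 # ...gets thrown away entirely

reconstructed = np.fft.irfft(spectrum, n=sample_rate)
print(f"kept {keep.sum()} of {keep.size} frequency components "
      f"({100 * keep.sum() / keep.size:.2f}%)")
```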

When audio is uncompressed, a few things are true: it takes up a lot of space, it is easy to manipulate with digital effects, and it often sounds very, very good. Examples of these formats are .wav on Windows, .aif and .aiff on Macintosh, and .flac for all the free people of the Internet (strictly speaking, FLAC is losslessly compressed rather than uncompressed, but no sound information is thrown away). Uncompressed audio comes in many different forms, but all have two numbers which describe their sound quality: 'word length' (or 'bit depth') and 'sample rate.'

The information for digital audio is contained in a long list of numbers which indicate the loudness of the sound at a specific moment in time. The sample rate tells you how many times per second that loudness value is captured. This number needs to be at least twice the highest frequency you want to capture; otherwise the computer will perceive high frequencies as being lower than they actually are. This is because of the Shannon-Nyquist sampling theorem, which I, again, don't expect most of you to want to read about. Most audio is captured at 44.1 kHz, making the highest frequency it can capture 22.05 kHz, which is comfortably above the limits of human hearing.

The word length tells you how many bits are used to represent the loudness of each sample. The number of distinct loudness values is 2^(word length). CDs represent audio with a word length of 16 bits, allowing for 65,536 different values; most audio interfaces are capable of recording with a 24-bit word length, allowing for over 16 million values and therefore exquisite detail. There are some newer systems which allow for recording with a 32-bit word length, but these are, for the most part, not available at low cost to consumers.
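As a quick sanity check on those numbers, here is the arithmetic behind them; the one-minute stereo example at the end is my own, just for a sense of scale.

```python
# How many loudness values does each word length allow, and how big do recordings get?
for bits in (16, 24):
    print(f"{bits}-bit audio: {2 ** bits:,} possible loudness values")

# Uncompressed size = sample rate x word length x channels x seconds
sample_rate = 44100        # CD-quality sample rate, in samples per second
bits_per_sample = 16
channels = 2               # stereo
seconds = 60               # one minute, purely for illustration

size_mb = sample_rate * bits_per_sample * channels * seconds / 8 / 1e6
print(f"One minute of 16-bit/44.1 kHz stereo: {size_mb:.1f} MB")
```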

I would like to add a quick word about USB. There is a stigma in the business against USB audio interfaces. Many interfaces employ connectors with higher bandwidth, like FireWire and Thunderbolt, and charge a premium for it. It may seem logical: faster connection, better-quality audio. Hear this now: no audio interface will ever be sold with a connector that is too slow for the quality of audio it can record. Which is to say, USB can handle 24-bit audio at a 96 kHz sample rate, no problem. If you notice latency in your system, it comes from the digital-to-analog and analog-to-digital converters as well as the speed of your computer; latency in your recording setup has nothing to do with what connector your interface uses. It may seem like I am beating a dead horse here, but many people believe this and it's completely false.
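A little back-of-the-envelope math shows how much headroom even plain USB 2.0 (nominally 480 Mbit/s) has over a generously sized recording session; the eight-channel figure below is an illustrative assumption, not a requirement.

```python
# How much data does a generous multitrack session actually push over the cable?
bits_per_sample = 24
sample_rate = 96_000       # samples per second
channels = 8               # an illustratively large interface

audio_mbps = bits_per_sample * sample_rate * channels / 1e6
usb2_mbps = 480            # USB 2.0's nominal signaling rate, in Mbit/s

print(f"8 channels of 24-bit/96 kHz audio: {audio_mbps:.1f} Mbit/s")
print(f"USB 2.0 nominal bandwidth:         {usb2_mbps} Mbit/s")
print(f"Headroom: roughly {usb2_mbps / audio_mbps:.0f}x what the audio needs")
```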

One last thing before we move on to the DAW. I mentioned earlier that frequencies above half the recording sample rate will be perceived, by your computer, as lower frequencies. These lower frequencies can show up in your recording and can cause distortion. This phenomenon has a name: aliasing. Aliasing doesn't just happen with audible frequencies; it can happen with ultrasonic sound too. For this reason, it is often advantageous to record at higher sample rates to avoid having these higher frequencies fold down into the audible range. Most audio interfaces allow for recording 24-bit audio at a 96 kHz sample rate. Unless you're worried about taking up too much space, this format sounds excellent and offers the most flexibility and sonic detail.
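Here is a small demonstration of aliasing, again assuming Python with NumPy: a 30 kHz tone, well above anything you can hear, sampled at 44.1 kHz shows up as a phantom tone at about 14.1 kHz, squarely inside the audible range.

```python
# Demonstrate aliasing: an ultrasonic tone folds back into the audible range
import numpy as np

sample_rate = 44100
tone_freq = 30000                        # 30 kHz, well above the ~20 kHz limit of hearing
nyquist = sample_rate / 2                # 22.05 kHz

n = np.arange(sample_rate)               # one second's worth of sample indices
samples = np.sin(2 * np.pi * tone_freq * n / sample_rate)

spectrum = np.abs(np.fft.rfft(samples))  # where did the energy actually land?
freqs = np.fft.rfftfreq(sample_rate, d=1 / sample_rate)
peak = freqs[np.argmax(spectrum)]

print(f"Nyquist frequency:      {nyquist / 1000:.2f} kHz")
print(f"30 kHz tone appears at: {peak / 1000:.2f} kHz")   # 44.1 - 30 = 14.1 kHz
```

Recording at 96 kHz pushes the Nyquist frequency up to 48 kHz, which is why ultrasonic content is far less likely to fold back into your recording.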

 

Digital Audio Workstation: all Out on the Table

Apple's pro DAW software: Logic Pro X

The digital audio workstation, or DAW for short, is perhaps the most flexible element of your home studio. There are many, many DAW software packages out there, ranging in price and features. For those of you looking to just get into audio recording, Audacity is a great DAW to start with. This software is free and simple. It offers many built-in effects and can handle the full recording capability of any audio interface, which is to say: if you record something well on this simple, free software, it will sound mighty good.

Here's the catch with many free or lower-level DAWs like Audacity or Apple's GarageBand: they do not allow for non-destructive editing of your audio. This is a fancy way of saying that once you make a change to your recorded audio, you might not be able to un-make it. Higher-end DAWs like Logic Pro and Pro Tools will let you make all the changes you want without permanently altering your audio. This allows you to play around a lot more with your sound after it's recorded. More expensive DAWs also tend to come with a better-sounding set of built-in effects. This is most noticeable with more subtle effects like reverb.

There are so many DAWs out there that it is hard to pick out a best one.  Personally, I like Logic Pro, but that’s just preference; many of the effects I use are compatible with different DAWs so I suppose I’m mostly just used to the user-interface.  My recommendation is to shop around until something catches your eye.

 

The Microphone: the Perfect Listener

Studio condenser and ribbon microphones.

The microphone, for many people, is the most fun part of recording!  They come in many shapes and sizes and color your sound more than any other component in your setup.  Two different microphones can occupy polar opposites in the sonic spectrum.

There are two common types of microphones out there: condenser and dynamic microphones.  I can get carried away with physics sometimes so I will try not to write too much about this particular topic.

Condenser microphones are a more recent invention and offer the best sound quality of any microphone. They employ a charged parallel-plate capacitor to measure vibrations in the air. This is a fancy way of saying that the element in the microphone which 'hears' the sound is extremely light and can move freely even when driven by extremely quiet sounds.

Because of the nature of their design, condenser microphones require a small amplifier circuit built-into the microphone.  Most new condenser microphones use a transistor-based circuit in their internal amplifier but older condenser mics employed internal vacuum-tube amplifiers; these tube microphones are among some of the clearest and most detailed sounding microphones ever made.

Dynamic microphones, like condenser microphones, also come in two varieties, both emerging from different eras.  The ribbon microphone is the earlier of the two and observes sound with a thin metal ribbon suspended in a magnetic field.  These ribbon microphones are fragile but offer a warm yet detailed quality-of-sound.

The more common vibrating-coil dynamic microphone is the most durable and is used most often for live performance. The prevalence of the vibrating-coil microphone means that 'vibrating-coil' is often dropped from the name (sometimes 'dynamic' is dropped as well); when you use the term dynamic mic, most people will assume you are referring to the vibrating-coil microphone.

With the wonders of globalization, all types of microphones can be purchased at similar costs. Though there is usually a small premium for condenser microphones over dynamic mics, costs can remain comfortably around $100-150 for studio-quality recording mics. This means you can use many brushes to paint your sonic picture. Oftentimes, dynamic microphones are used for louder instruments like snare and bass drums, guitar amplifiers, and louder vocalists. Condenser microphones are more often used for detailed sounds like stringed instruments, cymbals, and breathier vocals.

Monitors: can You Hear It?

Studio monitors at Electrical Audio Studios, Chicago

When recording, it is important to be able to hear the sound that your system is hearing. Most people don't think about it, but there are many kinds of monitors out there: from the screens on our phones and computers, which let us see what the computer is doing, to the viewfinder on a camera, which lets us see what the camera sees. Sound monitors are just as important.

Good monitors will reproduce sound as neutrally as possible and will only distort at very very high volumes.  These two characteristics are important for monitoring as you record, and hearing things carefully as you mix.  Mix?

Once you have recorded your sound, you may want to change it in your DAW.  Unfortunately, the computer can’t always guess what you want your effects to sound like, so you’ll need to make changes to settings and listen.  This could be as simple as changing the volume of one recorded track or it could be as complicated as correcting an offset in phase of two recorded tracks.  The art of changing the sound of your recorded tracks is called mixing.
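Here is the simplest possible picture of what a DAW is doing when you change the volume of one track relative to another; the 'tracks' below are synthetic sine waves standing in for real recordings, and the gain values are arbitrary choices of mine.

```python
# A toy two-track mix: scale each track's volume, then sum them into one signal
import numpy as np

sample_rate = 44100
t = np.arange(sample_rate) / sample_rate

# Synthetic stand-ins for recorded tracks (real ones would come from your interface)
guitar = 0.8 * np.sin(2 * np.pi * 196 * t)   # roughly a G note
vocal = 0.6 * np.sin(2 * np.pi * 440 * t)    # roughly an A note

guitar_gain = 0.5    # turn the guitar down in the mix
vocal_gain = 1.0     # leave the vocal where it is

mix = guitar_gain * guitar + vocal_gain * vocal

peak = np.max(np.abs(mix))
print(f"mix peak level: {peak:.2f} (1.0 is the loudest the format can hold)")
```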

If you are using speakers as monitors, make sure they don't have ridiculously loud bass, like most consumer speakers do. Mixing should be done without the extra bass; otherwise, someone playing back your track on 'normal' speakers will be underwhelmed by a thinner sound. Sonically neutral speakers make it very easy to hear what your finished product will sound like on any system.

It’s a bit harder to do this with headphones as their proximity to your ears makes the bass more intense.  I personally like mixing on headphones because the closeness to my ear allows me to hear detail better.  If you are to mix with headphones, your headphones must have open-back speakers in them.  This means that there is no plastic shell around the back of the headphone.  With no set volume of air behind the speaker, open-back headphones can effortlessly reproduce detail, even at lower volumes.


Monitors aren't just necessary for mixing; they also help you hear what you're recording as you record it. Remember when I was talking about the number of different loudnesses you can have for 16-bit and 24-bit audio? Well, when you make a sound louder than the loudest volume you can record, you get digital distortion. Digital distortion does not sound like Jimi Hendrix, it does not sound like Metallica; it sounds abrasive and harsh. Digital distortion, unless you are creating some post-modern masterpiece, should be avoided at all costs. Monitors, as well as the volume meters in your DAW, allow you to avoid this. A good rule of thumb: if it sounds like it's distorting, it's distorting. Sometimes you won't hear the distortion in your monitors; this is where the little loudness bars in your DAW software come in, and those bad boys should never hit the top.
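For the curious, 'hitting the top' is easy to express in code: 16-bit audio simply cannot represent anything beyond ±32,767, so samples parked at that ceiling have been clipped. This is a rough sketch of what a level meter is checking, with warning thresholds of my own choosing.

```python
# Rudimentary clipping check for a buffer of 16-bit samples
import numpy as np

INT16_MAX = 32767   # the loudest value 16-bit audio can represent

def check_levels(samples):
    """Warn if an int16 buffer is clipping or getting close to the ceiling."""
    magnitudes = np.abs(samples.astype(np.int32))   # widen first so abs() can't overflow
    peak = magnitudes.max()
    clipped = np.count_nonzero(magnitudes >= INT16_MAX)
    print(f"peak level: {peak / INT16_MAX:.1%} of full scale")
    if clipped:
        print(f"WARNING: {clipped} samples hit the ceiling; this take is distorted")
    elif peak > 0.95 * INT16_MAX:
        print("Getting hot; consider turning the input gain down")

# A take recorded too loud: the waveform gets flattened against the 16-bit ceiling
t = np.arange(44100) / 44100
too_loud = np.clip(1.2 * np.sin(2 * np.pi * 440 * t) * INT16_MAX, -INT16_MAX, INT16_MAX)
check_levels(too_loud.astype(np.int16))
```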

 

A Quick Word about Formats before we Finish

These days, most music ends up as an mp3. Convenience is important, so mp3 does have its place. Most higher-end DAWs will allow you to export mp3 files directly. My advice to any budding sound engineers out there is to just play around with formats. However, a basic outline of some common formats may be useful…

24-bit, 96 kHz: This is the best format most systems can record to. Because of its large file sizes, audio in this format rarely leaves the DAW. Audio of this quality is best for editing, mixing, and converting to analog formats like tape or vinyl.

16-bit, 44.1 kHz: This is the format used for CDs. It carries only a fraction of the data most systems can record (roughly a third of the bit rate of 24-bit/96 kHz), but it is optimized for playback by CD players and other similar devices. Its file size also allows about 80 minutes of audio to fit on a typical CD. Herein lies the balance between excellent sound quality and file size.

mp3, 256 kb/s: Looks a bit different, right? The quality of mp3 is measured in kb/s. The higher this number, the less compressed the file is and the more space it will occupy. The iTunes Store sells music at 256 kb/s (technically in AAC, a similar lossy format), while streaming services often default to something closer to 128-160 kb/s to keep streams light. You can go as high as 320 kb/s with mp3. Either way, mp3 compression is always lossy, so you will never get an mp3 to sound quite as good as an uncompressed audio file.
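To put rough numbers on these trade-offs, here is a quick size comparison for a hypothetical four-minute stereo song; the song length is my own example, and the arithmetic follows directly from the figures above.

```python
# Rough size comparison of the formats above for a hypothetical 4-minute stereo song
seconds = 4 * 60
channels = 2

def pcm_megabytes(sample_rate, bits):
    """Uncompressed (PCM) size: rate x word length x channels x time, in megabytes."""
    return sample_rate * bits * channels * seconds / 8 / 1e6

def mp3_megabytes(kilobits_per_second):
    """mp3 size follows directly from its bitrate (the kb/s figure covers both channels)."""
    return kilobits_per_second * 1000 * seconds / 8 / 1e6

print(f"24-bit / 96 kHz master : {pcm_megabytes(96000, 24):6.1f} MB")
print(f"16-bit / 44.1 kHz (CD) : {pcm_megabytes(44100, 16):6.1f} MB")
print(f"mp3 at 256 kb/s        : {mp3_megabytes(256):6.1f} MB")
print(f"mp3 at 128 kb/s        : {mp3_megabytes(128):6.1f} MB")
```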

 

In Conclusion

Recording audio is one of the most fun hobbies one can adopt. Like all new things, recording can be difficult when you first start out, but it becomes more and more fulfilling over time. One can create their own orchestras at home now, a feat which would have been near impossible 20 years ago. The world has many amazing sounds, and it is up to people messing around with microphones in bedrooms and closets to create more.

Categories
Hardware Operating System

Hard Drives: How Do They Work?

What’s a HDD?

A Hard Disk Drive (HDD for short) is a type of storage commonly used as the primary storage system in both laptop and desktop computers. It functions like any other type of digital storage device: by writing bits of data and then recalling them later. It's worth mentioning that an HDD is what's referred to as "non-volatile," which simply means that it can retain data without a source of power. This feature, coupled with their large storage capacity and relatively low cost, is the reason HDDs are used so frequently in home computers. While HDDs have come a long way from when they were first invented, the basic way they operate has stayed the same.

How does a HDD physically store info?

Inside the casing there are a series of disk-like objects referred to as “platters”.

The CPU and motherboard use software to tell the "Read/Write Head" where to move over the platter; the head then uses an electrical current to set the magnetic charge of a "sector" on the platter. Each sector is an isolated part of the disk containing thousands of subdivisions, each capable of accepting a magnetic charge. Newer HDDs have a sector size of 4096 bytes, or 32,768 bits; each bit's magnetic charge translates to a binary 1 or 0 of data. Repeat this stage and eventually you have a string of bits which, when read back, can give the CPU instructions, whether it be updating your operating system or opening your saved document in Microsoft Word.
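For a sense of scale, here is the arithmetic connecting those tiny magnetic charges to the capacities printed on the box; the 1 TB drive is just an example of mine.

```python
# From magnetic charges to gigabytes: a quick scale check
bytes_per_sector = 4096
bits_per_sector = bytes_per_sector * 8
print(f"bits per sector: {bits_per_sector:,}")            # 32,768

drive_bytes = 1_000_000_000_000                           # a 1 TB drive, as marketed
sectors = drive_bytes // bytes_per_sector
print(f"sectors on a 1 TB drive: {sectors:,}")            # ~244 million sectors
```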

As HDDs have been developed, one key factor that has changed is the orientation of the magnetic regions on the platter. Hard drives were first designed for "Longitudinal Recording," meaning each bit's magnetization lies flat, parallel to the platter's surface, and have since moved to a different method called "Perpendicular Recording," where the bits are stood on end, perpendicular to the surface. This change was made because hard drive manufacturers were hitting a limit on how small they could make each bit due to the "Superparamagnetic Effect." Essentially, the superparamagnetic effect means that magnetic regions smaller than a certain size will flip their charge randomly based on temperature. This phenomenon would result in inaccurate data storage, especially given the heat that an operating hard drive emits.

One downside to Perpendicular Recording is increased sensitivity to magnetic fields and read error, creating a necessity for more accurate Read/Write arms.

How software affects how info is stored on disk:

Now that we’ve discussed the physical operation of a Hard Drive, we can look at the differences in how operating systems such as Windows, MacOS, or Linux utilize the drive. However, beforehand, it’s important we mention a common data storage issue that occurs to some degree in all of the operating systems mentioned above.

Disk Fragmentation

Disk fragmentation occurs after a period of data being stored and updated on a disk. For example, unless an update is stored directly after its base program, there's a good chance that something else has already been stored on the disk; the update will therefore have to be placed in a different sector, farther away from the core program files. Due to the physical time it takes the read/write arm to move around, fragmentation can eventually slow down your system significantly, as the arm needs to reference more and more separate parts of your disk. Most operating systems come with a built-in program designed to "defragment" the disk, which simply rearranges the data so that all the files for one program are in one place. The process takes longer the more fragmented the disk has become. Now we can discuss different storage protocols and how they affect fragmentation.
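First, though, here is a toy model of why fragmentation hurts in the first place. It is not how any real file system tracks data, but it captures the cost of the arm physically traveling between a file's scattered pieces; the block addresses below are made up.

```python
# Toy model of fragmentation: reading cost = how far the read/write head travels
def head_travel(block_addresses):
    """Sum of the distances the head moves between a file's consecutive blocks."""
    return sum(abs(b - a) for a, b in zip(block_addresses, block_addresses[1:]))

contiguous_file = [100, 101, 102, 103, 104]       # defragmented: blocks side by side
fragmented_file = [100, 7300, 255, 9012, 4100]    # the same data scattered across the platter

print("contiguous read cost:", head_travel(contiguous_file))   # 4
print("fragmented read cost:", head_travel(fragmented_file))   # 27914: far more seeking
```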

Windows:

Windows grew out of MS-DOS (Microsoft Disk Operating System) and uses a file management system called NTFS, or New Technology File System, which has been the standard for the company since 1993. When given a write instruction, an NT file system will place the information as close as possible to the beginning of the disk/platter. While this methodology is functional, it leaves only a small buffer zone between different files, eventually causing fragmentation to occur. Due to the small size of this buffer zone, Windows tends to be the most susceptible to fragmentation.

Mac OSX:

OSX and Linux are both Unix-based operating systems, but their file systems are different; Mac uses the HFS+ (Hierarchical File System Plus) protocol, which replaced the old HFS method. HFS+ differs in that it can handle a larger amount of data at a given time, being 32-bit rather than 16-bit. Mac OSX doesn't need a dedicated defragmentation tool the way Windows does; OSX avoids the issue by not reusing space on the HDD that has recently been freed up (by deleting a file, for example) and instead searching the disk for larger free regions to store new data. Doing so leaves older files more nearby room for updates. HFS+ also has a built-in feature called HFC, or Hot File adaptive Clustering, which relocates frequently accessed data to special sectors on the disk called a "Hot Zone" in order to speed up performance. This process, however, can only take place if the drive is less than 90% full; otherwise issues in reallocation occur. These processes coupled together make fragmentation a non-issue for Mac users.

Linux:

Linux is an open-source operating system, which means that there are many different versions of it, called distributions, built for different applications. The most common distributions, such as Ubuntu, use the ext4 file system. Linux has the best solution to fragmentation, as it spreads files out all over the disk, giving them all plenty of room to grow without interfering with each other. In the event that a file needs more space, the operating system will automatically try to move the files around it to give it more room. Especially given the capacity of most modern hard drives, this methodology is not wasteful, and it results in essentially no fragmentation on Linux until the disk is above roughly 85% capacity.

What's an SSD? How is it Different from an HDD?

In recent years, a new technology has become available on the consumer market that replaces HDDs and the problems that come with them. Solid State Drives (SSDs) are another kind of non-volatile memory; they simply store the presence or absence of an electrical charge in microscopic memory cells. As a result, SSDs are much faster than HDDs: there are no moving parts, and therefore no time spent moving a read/write arm around. Having no moving parts also increases reliability immensely. Solid state drives do have a few downsides, however. Unlike with hard drives, it is difficult to tell when a solid state drive is failing. Hard drives will slow down over time or, in extreme cases, make an audible clicking that signifies the arm is hitting the platter (in which case your data is most likely gone), while solid state drives will simply fail without any noticeable warning. Therefore, we must rely on software such as "Samsung Magician," which ships with Samsung's solid state drives. The tool works by writing a piece of data to the drive, reading it back, and checking how fast it is able to do this. If the speed of that operation falls below a certain threshold, the software will warn the user that their solid state drive is beginning to fail.
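The general write-and-time idea described above can be sketched in a few lines of Python. To be clear, this is a simplified illustration of the approach, not how Samsung Magician is actually implemented, and the "healthy" threshold is an arbitrary number I picked for the example.

```python
# Simplified sketch of a write-speed health check (illustrative only)
import os
import time

TEST_SIZE = 64 * 1024 * 1024    # write 64 MB of throwaway data
MIN_MB_PER_S = 100              # arbitrary "healthy" threshold for this sketch

def write_speed(path):
    """Time a test write (flushed all the way to the drive) and return MB/s."""
    data = os.urandom(TEST_SIZE)
    start = time.perf_counter()
    with open(path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())    # make sure the bytes actually reach the drive
    elapsed = time.perf_counter() - start
    os.remove(path)
    return TEST_SIZE / elapsed / 1e6

speed = write_speed("drive_health_test.tmp")
print(f"write speed: {speed:.0f} MB/s")
if speed < MIN_MB_PER_S:
    print("Write speed below threshold; the drive may be degrading")
```

A real tool would also verify the data it reads back, as the description above suggests, but the measure-and-compare principle is the same.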

Do Solid States Fragment Too?

While data still piles up and files for one program still end up in different places, this doesn't matter much with solid states, as there is no delay caused by a read/write arm moving back and forth between different sectors. Fragmentation does not decrease performance the way it does with hard drives, but it does affect the life of the drive. Solid states with scattered data can have a reduced lifespan. At the same time, the way solid states work means the extra write cycles caused by defragmenting also decrease the overall lifespan of the drive, so defragmentation is avoided for the most part given its small benefit. That being said, a file system on a solid state can still reach a point where defragmentation is necessary. It would be logical for a hard drive to be defragmented automatically every day or week, while a solid state might require only a few defragmentations, if any, throughout its lifetime.

Categories
Hardware Software

Wearable Technology

2016 has given us a lot of exciting new technologies to experiment with and be excited for. As time goes by, technology is becoming more and more integrated into our everyday lives, and it does not seem like we will be stopping anytime soon. Here are some highlights from the past year and some amazing things we can expect to get our hands on in the years to come.

Contact Lenses

That’s right, we’re adding electronic capabilities to the little circles in your eyes. We’ve seen Google Glass, but this goes to a whole other level. Developers are already working on making lenses that can measure your blood sugar, improve your vision and even display images directly on your eye! Imagine watching a movie that only you can see, because it’s inside your face!

Kokoon

Kokoon started out as a Kickstarter that raised over 2 million dollars to fund its sleep-sensing headphones. It is the first of its kind, able to help you fall asleep and to monitor when you have drifted off so it can adjust your audio in real time. It's the insomniac's dream! You can find more information on the Kokoon here: http://kokoon.io/

Nuzzle

Nuzzle is a pet collar with built-in GPS tracking to keep your pet safe in case it gets lost. But it does more than that. Using the collar's companion app, you can monitor your dog's activity and view wellness statistics. Check it out: http://hellonuzzle.com/

Hearables

Your ears are the perfect place to measure all sorts of important stuff about your body such as your temperature and heart rate. Many companies are working on earbuds that can sit in your ear and keep statistics on these things in real time. This type of technology could save lives, as it could possibly alert you about a heart attack before your heart even knows it.

Tattoos

Thought it couldn't get crazier than electronic contacts? Think again. Companies like Chaotic Moon and New Deal Design are working on temporary tattoos that can use the electric currents on the surface of your skin to power themselves and do all kinds of weird things, including opening doors. Whether or not these will be as painful as normal tattoos is still a mystery, but we hope not!

VR

Virtual Reality headsets have been around for a while now, but they represent the ultimate form of wearable technology. These headsets are not mainstream yet and are definitely not perfected, but we can expect to be getting access to them within the next couple of years.

Other impressive types of wearable tech have been greatly improved on this year such as smart watches and athletic clothing. We’re even seeing research done on Smart Houses, which can be controlled completely with your Smart Phone, and holographic image displays that don’t require a screen. The future of wearable technology is more exciting than ever, so get your hands on whatever you can and dress to impress!

Categories
Hardware

A Fundamental Problem I See with the Nintendo Switch

Nintendo’s shiny new console will launch on March 3rd…or wait, no…Nintendo’s shiny new handheld will launch on March 3rd…Wait…hold on a second…what exactly do you call it?

The Nintendo Switch is something new and fresh that is really just an iteration on something we’ve already seen before.

In 2012, The Wii U, widely regarded as a commercial flop, operated on the concept that you could play video games at home with two screens rather than one. The controller was a glorified tablet that you couldn’t use as a portable system. At most, if your grandparents wanted to use the television to watch Deal or No Deal, you could take the tablet into the other room and stream the gameplay to its display.

Two months later, Nvidia took this concept further with the Nvidia Shield Portable. The system was essentially a bulky Xbox 360 controller with a screen you could stream your games to from your gaming PC. The system also allowed you to download light games from the Google Play store, so while it wasn’t meant to be treated as a handheld, it could be used as one if you really wanted to.

Then, a full year after the release of the Wii U, Sony came out with the PlayStation 4. Now, if you owned a PlayStation Vita from 2011, you could stream your games from your console to your Vita. Not only would this work locally, but you could also do it over Wi-Fi. So, what you had was a handheld that could also play your PS4 library from anywhere that had a strong internet connection. This became an ultimately unused feature as Sony gave up trying to compete with the 3DS. As of right now, Sony is trying to implement this ability to stream over Wi-Fi to other devices, such as phones and tablets.


And now we have the Nintendo Switch. Rather than make a system that can stream to a handheld, Nintendo decided to just create a system that can be both. Being both a handheld and a console might seem like a new direction when in reality I’d like to think it’s more akin to moving in two directions at once. The Wii U was a dedicated console with an optional function to allow family to take the TV from you, the Nvidia Shield Portable was an accessory that allowed you to play your PC around the house, and the PlayStation Vita was a handheld that had the ability to connect to a console to let you play games anywhere you want. None of these devices were both a console and a handheld at once, and by trying to be both, I think Nintendo might be setting themselves up for problems down the road.


Remember the Wii? In 2006, the Wii was that hot new item that every family needed to have. I still remember playing Wii bowling with my sisters and parents every day for a solid month after we got it for Christmas. It was a family entertainment system, and while you could buy some single player games for it, the only time I ever see the Wii getting used anymore is with the latest Just Dance at my Aunt’s house during family get-togethers. Nobody really played single player games on it, and while that might have a lot to do with the lack of stellar “hardcore” titles, I think it has more to do with Nintendo’s mindset at the time. Nintendo is a family friendly company, and gearing their system towards inclusive party games makes sense.


Nintendo also has their line of 3DS portable systems. The 3DS isn’t a family system; everyone is meant to have their own individual devices. It’s very personal in this sense; rather than having everyone gather around a single 3DS to play party games on, everyone brings their own. Are you starting to see what I’m getting at here?

 

Nintendo is trying to appeal to both the whole family and create a portable experience for a single member of the family. I remember unboxing the Wii for Christmas with my sisters. The Wii wasn’t a gift from my parents to me; it was a gift for the whole family. I also remember getting my 3DS for Christmas, and that gift had my name on it and my name alone. Now, imagine playing Monster Hunter on your 3DS when suddenly your sisters ask you to hand it over so they can play Just Dance. Imagine having a long, loud fight with your brother over who gets to bring the 3DS to school today because you both have friends you want to play with at lunch. Just substitute 3DS with Nintendo Switch, and you’ll understand why I think the Switch has some trouble on the horizon.

You might argue that if you're a college student who doesn't have your family around to steal the Switch away, this shouldn't be a problem. While that might be true, remember that Nintendo's target demographic is and has always been the family. Unless they suddenly decide to target the hardcore demographic, which it doesn't look like they're planning on doing, Nintendo's shiny new console/handheld will probably tear the family apart more than it will bring them together. When you're moving in two directions at once, you're bound to split in half.

 

Categories
Hardware

Organic Light-Emitting Diode Displays

The screen you're reading this on is most likely a Twisted Nematic, or TN for short, screen. TN screens are the oldest and most ubiquitous type still in use today. TN panels tend to be cheap to produce but have terrible viewing angles, with colors quickly becoming distorted when viewed off-center. However, these panels generally have low power draw and can drive high refresh rates, which makes them a popular choice for laptops and gaming screens respectively.

If you’re viewing this on a higher quality screen, or a computer or phone where you’ve spent more than the average price tag, you probably have an In-Plane Switching display, or IPS. These panels offer a wider range of accurate and vibrant colors, and offer them more consistently at angles, making them a good choice for viewing photos, or sharing images or videos with friends all watching on one screen.

However, both of these screen technologies share the same inherent disadvantages. Both function similarly, using a backlight behind the panel to illuminate the image. This takes up valuable space, adds weight, and can be less efficient for displaying certain ranges of colors.

In come Organic Light-Emitting Diode displays, or OLED for short. Working without a backlight, OLED displays can light up each pixel on the array individually, creating richer colors and a more vibrant image. For example, to display the color black, the pixel in question simply doesn't turn on at all, producing a much deeper black (instead of a backlit gray). Not only can OLED displays be smaller, but they can also be more power efficient when showing darker colors and blacks, since those pixels don't have to be on at all. Additionally, OLED displays are thinner, have better viewing angles, and have a better response time than LCD panels.

OLED panels aren't quite where we want them yet, though, as manufacturers are still working out problems. OLED panels are very expensive because only a handful of manufacturers produce them; once more companies see a future in OLED and invest in the materials and machinery needed to produce such panels, manufacturing prices will come down. The other issue is battery life. When displaying images that are mostly black, OLED panels are incredibly power efficient, but with all-white screens, which require the most power to produce, OLED panels can use up to twice as much power as a comparable LCD screen. Finally, OLED panels have significant problems with longevity, as issues such as ghosting, burn-in, and inconsistent brightness all appear as the panels age.

Overall, OLED panels are the future of displays. They have several advantages over modern LCD panels such as TN or IPS displays, but as a relatively new technology, there are many bugs that still must be worked out. Many laptops such as the ThinkPad X1 Yoga, HP Spectre x360, and Dell Alienware 15 offer them as options; there are also a few TVs available with such panels, and the Apple Watch and the Touch Bar on the new MacBook Pro feature OLED components as well. So as OLED panels become more ubiquitous, you may want to think about spending the extra cash to include one in your newest gadget and enjoy its advantages.

Categories
Hardware Operating System

Bluetooth Headphones: Are you ready to go wireless?

The time has finally come, and Apple has removed the 3.5mm jack from its newest line of iPhones entirely. While this will lead to a new generation of Lightning-connector headphones, it will also considerably increase the popularity of Bluetooth headphones. Like the electric car and alternative forms of energy, Bluetooth headphones are something that everyone's going to have to accept eventually, but that's not such a bad thing. Over the past few years Bluetooth headphones have gotten cheaper, better sounding, and all around more feasible for the average consumer. With the advent of Bluetooth 4.2, the capacity is there for high-fidelity audio streaming. Think about it: as college students we spend a lot of our time walking around (especially on our 1,463-acre campus). Nothing is more annoying than having your headphone cable caught on clothing, creating cable noise, or getting yanked out altogether. There are many different form factors of Bluetooth headphones to fit any lifestyle and price point. Here are a few choices for a variety of users.

Are you an athlete? Consider the Jaybird Bluebuds X

These around-the-neck IEMs provide incredible sound quality and have supports to keep them in your ears whether you're biking, running, or working out. Workout getting too intense and you're worried about your headphones? Don't sweat it! The Bluebuds are totally water-proof, with a lifetime warranty if anything does happen.


Looking for portable Bluetooth on a budget? The Photive BTH3 is for you

Well reviewed online, these $45 headphones provide a comfortable fit and a surprisingly good sound signature. It's tough to find good wired headphones at that price, yet the BTH3s sound great with the added bonus of wireless connectivity and hands-free calling. When you're not using them, they can fold flat and fit into an included hard case to be put into your bag safely.


High-performance import at a middle-of-the-road price.
Full disclosure: these are my headphones of choice. At double the price of the previous option and around a quarter of the price of the Beats Studio Wireless, we find these over-ear Bluetooth headphones from the makers of the famous ATH-M50. With a light build, comfortable ear cups, and amazing sound quality, these headphones take the cake for price-performance in the ~$100 range.


Have more money than you know what to do with? Have I got an option for you.

What you see here are the V-MODA Crossfade Wireless headphones, and they come in at a wallet-squeezing $300 MSRP. With their beautiful industrial design and military-grade materials, they're an easy choice over the more popular Apple wireless headphone offerings. Like other headphones in the V-MODA line, these are bass-oriented, but the overall sound signature is great for on-the-go listening.