What is S.M.A.R.T?

Have you ever thought your computer might be dying, but you don’t know why? Familiar symptoms include slowdowns, increased startup time, programs freezing, constant disk usage, and audible clicking. While these symptoms happen to a lot of people, they don’t necessarily mean the hard drive is circling the drain. With a practically unlimited number of other things that could make the computer slow down and become unusable, how are you supposed to find out exactly what the problem is? Fortunately, the most common part to fail in a computer, the hard drive (or data drive), has a built-in testing technology that even everyday users can use to diagnose their machines without handing over big bucks to a computer repair store or having to buy an entirely new computer if theirs is out of warranty.

Enter SMART (Self-Monitoring, Analysis and Reporting Technology). SMART is a monitoring suite that checks computer drives for a list of parameters that would indicate drive failure. SMART collects and stores data about the drive, including errors, failures, times to spin up, reallocated sectors, and read/write abilities. While many of these attributes may be confusing in definition and even more confusing in their recorded numerical values, SMART software can predict a drive failure and even notify the user that it has detected a failing drive. The user can then look at the results to verify or, if unsure, bring the computer to a repair store for verification and a drive replacement.

So how does one get access to SMART? Many computers include built-in diagnostic suites that can be accessed via a boot option when the computer first turns on. Other manufacturers require that you download an application within your operating system that can run a diagnostic test. These diagnostic suites will usually check the SMART status, and if the drive is in fact failing, report that the drive is failing or has failed. However, most of these manufacturer diagnostics will only say passed or failed; if you want access to the specific SMART data, you will have to use a Windows program such as CrystalDiskInfo, a Linux program such as GSmartControl, or SMART Utility for Mac OS.
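Interpreting the raw numbers these tools report is simpler than it looks. As a rough illustration, here is a minimal Python sketch that flags a few commonly cited warning attributes from a simplified smartctl-style table. The attribute names and the "nonzero raw value" rule are common rules of thumb, not an official failure standard, and the sample table is made up.

```python
# Raw values above zero for these attributes are common warning signs
# of a failing drive (rule of thumb, not an official standard).
WARNING_ATTRIBUTES = {
    "Reallocated_Sector_Ct",
    "Current_Pending_Sector",
    "Offline_Uncorrectable",
}

def parse_smart_table(text):
    """Parse 'NAME RAW_VALUE' pairs from a simplified attribute table."""
    attributes = {}
    for line in text.strip().splitlines():
        name, raw = line.split()
        attributes[name] = int(raw)
    return attributes

def drive_warnings(attributes):
    """Return the warning attributes that have nonzero raw values."""
    return {name: raw for name, raw in attributes.items()
            if name in WARNING_ATTRIBUTES and raw > 0}

# A made-up report: 12 reallocated sectors is a red flag.
sample = """
Reallocated_Sector_Ct 12
Current_Pending_Sector 0
Power_On_Hours 18342
"""

print(drive_warnings(parse_smart_table(sample)))  # {'Reallocated_Sector_Ct': 12}
```

Real tools like CrystalDiskInfo apply much richer logic (vendor thresholds, worst-case values, trends over time), but the core idea is the same: compare each attribute against a rule and warn the user.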

These SMART monitoring programs are intelligent enough to detect when a drive is failing, giving you ample time to back up your data. Remember: computer parts can always be replaced; lost data is lost forever. It should be noted, however, that SMART doesn’t always detect when a drive fails. If a drive suffers a catastrophic failure, like a physical drop or water damage while powered on, SMART cannot predict it, and the manufacturer is not at fault. Therefore, while SMART is a good tool for assessing whether a drive is healthy, it is strongest when used in tandem with a good, reliable backup system, not as standalone protection against data loss.

Multiple Desktops in Windows 10

The concept of using multiple desktops isn’t new. Apple incorporated this feature back in 2007 starting with OS X 10.5 Leopard in the form of Spaces, allowing users to have up to 16 desktops at once. Since then, PC users have wondered if/when Microsoft would follow suit. Now, almost a decade later, they finally have.

Having more than one desktop allows you to separate your open windows into different groups and only focus on one group at a time. This makes it much easier to juggle working on multiple projects at once, giving each one a dedicated desktop. It’s also useful for keeping any distractions out of sight as you try to get your work done, while letting you easily shift into break mode at any time.

If you own a Windows computer and didn’t know about multiple desktops, you’re not alone! Microsoft didn’t include the feature natively until Windows 10, and even then they did it quietly with virtually no advertising for it at all. Here’s a quick guide on how to get started.

To access the desktops interface, simply hold the Windows Key and then press Tab. This will bring you to a page which lists the windows you currently have open. It will look something like this:

Here, you can see that I’ve got a few different tasks open. I’m trying to work on my art in MS Paint, but I keep getting distracted by YouTube videos and Moodle assignments. To make things a little easier, I can create a second desktop and divide these tasks up to focus on one at a time.

To create a new desktop, click the New desktop button in the bottom right corner of this screen. You will see the list of open desktops shown at the bottom:

Now you can see I have a clean slate on Desktop 2 to do whatever I want. You can select which desktop to enter by clicking on it. Once you are in a desktop, you can open up new pages there and it will only be open in that desktop. You can also move pages that are already open from one desktop to another. Let’s move my MS Paint window over to Desktop 2.

On the desktops interface, hovering over a desktop will bring up the list of open windows on that desktop. So, since I want to move a page from Desktop 1 to Desktop 2, I hover over Desktop 1 so I can see the MS Paint window. To move pages around, simply click and drag them to the desired desktop.

I dragged my MS Paint window over from Desktop 1 to Desktop 2. Now, when I open up Desktop 2, the only page I see is my beautiful artwork.

Finally, I can work on my art in peace without distractions! And if I decide I need a break and want to watch some YouTube videos, all I have to do is press Windows+Tab and select Desktop 1 where YouTube is already open.

If you’re still looking for a reason to upgrade to Windows 10, this could be the one. The feature really is super useful once you get the hang of it and figure out how to best use it for your needs. My only complaint is that we don’t have the ability to rename desktops, but this is minor and I’m sure it will be added in a future update.

 

An Introduction to Discord: the Latest and Greatest in VoIP for Gamers

PC gaming continues to grow annually as one of the primary platforms for gamers to enjoy their favorite titles. E-sports (think MLB/NFL/NBA/NHL-level skills, commentary, and viewership, but for video games) also continue to grow, creating a generation of hyper-competitive gamers all vying to rise above the rest. Throughout the history of PC gaming, players have used a variety of voice communication programs to communicate with their teammates. Skype, Mumble, Ventrilo, and Teamspeak are just a few of the clients that are still used today, but in late 2015, a new challenger appeared: Discord!

You heard them. It’s time to ditch Skype and Teamspeak!

Discord was created to serve as a VoIP platform that can host many users at a time for voice, text, image, and file sharing. It’s the perfect solution for users looking for a voice chat program that is easy to use, resource-light, and capable of just about anything.

Here’s what Discord looks like once you’re logged in. In the center of the screen, users can use Discord like they would any typical messenger program to send files, links, texts, images, and videos. Slightly to the left, you can connect to channels to communicate with others over chat.

Traveling even further to our left is a list of Discord servers you can join. These are specific groups of channels that you usually have to be invited to, and they are usually filled with members of various online communities. It’s a great way to chat with people who share similar interests! Many subreddits and YouTube communities have dedicated Discord servers.

Discord’s popularity is exploding, with over 45 million users as of May 2017. Its ability to provide these services in an easy-to-use (and free!) platform, where others have failed in the past, makes it a strong contender for the best VoIP program to date. It even boasts fairly robust security features, such as requiring you to confirm a login via email every time you try to log in to Discord from a new IP address.

To get started, head on over to https://discord.gg to sign up. Discord is also available as a client application on desktop machines, as well as for mobile devices like iOS and Android.

 

My Top 5 Google Chrome Extensions

Google Chrome extensions are like apps for your phone, except they’re for your browser. Extensions add functionality for specific things. In this article I will go over the top five extensions that I find myself using the most.

Imagus.
Many websites, such as Reddit and Twitter, make it very hard to see pictures without clicking on them; this is where Imagus comes in. Imagus is an extension that makes it easier to see pictures that are too small or may be cropped due to the layout of the website. When you move your cursor over an image, Imagus opens it up to full size next to the cursor, which makes it much easier to see. Imagus also lets you keep the image open without keeping your cursor on it by simply hitting Enter. To make it disappear, hit Enter again. Check it out here: https://goo.gl/dm1Q4d.

Magic Actions for YouTube.
Magic Actions adds a lot of much-needed features to an already great site, YouTube. Magic Actions adds the ability to fullscreen a video within a tab, something that I constantly find myself doing. It also allows YouTube to be switched to dark mode, as well as letting users take quick screenshots of YouTube videos. Check it out here: https://goo.gl/jPHA7f.

Grammarly.
Writing can be hard, especially when many websites don’t have a built-in grammar and spell checker. This is where Grammarly comes in. Grammarly brings a spell checker to every text box on the internet. Grammarly can also catch less obvious errors, such as a missing comma or a misplaced modifier. Check it out here: https://goo.gl/kUSVvZ.

Tab for a Cause.
Almost everyone wants to help those in need, but it can often be financially difficult to give money to charity. Tab for a Cause makes it easy to help out. Simply enable the extension, and Tab for a Cause will become the screen that appears every time a new tab opens. On the new screen there is a small ad which is used to generate ad revenue for charity. Every time you open a new tab, ad money is generated. If you are like me and constantly open tabs, you will raise a lot of money for charity by simply browsing the web. Check it out here: https://goo.gl/sSqhWQ.

goo.gl URL Shortener.
Almost every day I copy and paste a URL, whether to send to someone, put in a document, or save for later. The problem with standard URLs is that they are often long and not very pretty to look at. goo.gl URL Shortener makes it easy to use Google’s URL shortening service with one click of the icon at the top of Google Chrome. A shortened URL looks like https://goo.gl/B8J7I5 and can be made for any web page. In fact, I’ve been using it for every link so far. So check it out here: https://goo.gl/DUrXQ.

Welcome Class of 2021!

We at IT User Services would like to extend a warm welcome to all new and returning students!

As you learn and re-learn your way around campus your first month back, many of you will become acquainted with the technology and resources available to UMass students.

We at IT are here to enable your success by making technology the last thing on your mind while you make a home here at UMass, and begin or resume your studies. If you need us (or rather, when), we will be there to answer your questions, remove your malware, and fix your computer. The Help Center, the campus mothership for tech support, is located in room A109 of the Lederle Graduate Research Center (the cream-colored low-rise located across the street from the Northeast Residential Area). The Help Center is open from 8:30AM to 4:45PM Monday through Friday. We have extended service hours at the Technical Support desk in the Learning Commons. Our consultants are available for assistance there as late as midnight, depending on Library hours.


Transit by Wire – Automating New York’s Aging Subways

When I left New York in January, the city was in high spirits about its extensive subway system.  After almost 50 years of construction, and almost 100 years of planning, the shiny, new Second Avenue subway line had finally been completed, bringing direct subway access to one of the few remaining underserved areas in Manhattan.  The city rallied around the achievement.  I myself stood with fellow elated riders as the first Q train pulled out of the 96th Street station for the first time, Governor Andrew Cuomo’s voice crackling over the train’s PA system assuring riders that he was not driving the train.

In a rather ironic twist of fate, the brand-new line was plagued, on its first ever trip, by an issue that has been affecting the entire subway system since its inception: the ever-present subway delay.

A small group of transit workers gathered in the tunnel in front of the stalled train to investigate a stubborn signal.  The signal was seeing its first ever train, yet its red light seemed as though it had been petrified by 100 years of 24-hour operation, just like the rest of them.

Track workers examine malfunctioning signal on Second Avenue Line

When I returned to New York to participate in a summer internship at an engineering firm near Wall Street, the subway seemed to be falling apart.  Having lived in the city for almost 20 years and having dealt with the frequent subway delays on my daily commute to high school, I had no reason to believe my commute to work would be any better… or any worse.  However, I started to see things that I had never seen: stations at rush hour with no arriving trains queued on the station’s countdown clock, trains so packed in every car that not a single person was able to board, and new conductors whose sole purpose was to signal to the train engineers when it was safe to close the train doors since platforms had become too consistently crowded to reliably see down.

At first, I was convinced I was imagining all of this.  I had been living in the wide-open and sparsely populated suburbs of Massachusetts and maybe I had simply forgotten the hustle and bustle of the city.  After all, the daily ridership on the New York subway is roughly double the entire population of Massachusetts.  However, I soon learned that the New York Times had been cataloging the recent and rapid decline of the city’s subway.  In February, the Times reported a massive jump in the number of train delays per month, from 28,000 per month in 2012 up to 70,000 at the time of publication.

What on earth had happened?  Some New Yorkers have been quick to blame Mayor Bill de Blasio.  However, the Metropolitan Transportation Authority, the entity which owns and operates the city subway, is controlled by the state and thus falls under the jurisdiction of Governor Andrew Cuomo.  Then again, it’s not really Mr. Cuomo’s fault either.  In fact, it’s no one person’s fault at all!  The subway has been dealt a dangerous cocktail of severe overcrowding and rapidly aging infrastructure.

 

Thinking Gears that Run the Trains

Anyone with an interest in early computer technology is undoubtedly familiar with the mechanical computer.  Before Claude Shannon showed how electronic circuits could process information in binary, all we had to process information were large arrays of gears, springs, and some primitive analog circuits finely tuned to complete very specific tasks.  Some smaller mechanical computers could be found aboard fighter jets to help pilots compute projectile trajectories.  If you saw The Imitation Game, you may recall the large machine Alan Turing built to decode encrypted radio transmissions during the Second World War.

Interlocking machine similar to that used in the NYC subway

New York’s subway got one of these big, mechanical monsters just after the turn of the century; in fact, New York still has it.  Its name is the interlocking machine, and its job is simple: make sure two subway trains never end up in the same place at the same time.  Yes, this big, bombastic hunk of metal is all that stands between the train dispatchers and utter chaos.  Its worn metal handles are connected directly to signals, track switches, and little levers designed to trip the emergency brakes of trains that roll past red lights.

The logic followed by the interlocking machine is about as complex as engineers could make it in 1904:

  • Sections of track are divided into blocks, each with a signal and emergency brake-trip at their entrance.
  • When a train enters a block, a mechanical switch is triggered and the interlocking machine switches the signal at the entrance of the block to red and activates the brake-trip.
  • After the train leaves the block, the interlocking machine switches the track signal back to green and deactivates the brake-trip.

Essentially a very large finite-state machine, the interlocking machine was revolutionary back at the turn of the century.  At the turn of the century, however, some things were also working in the machine’s favor; for instance, there were only three and a half million people living in New York at the time, they were all only five feet tall, and the machine was brand new.
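To make the finite-state-machine comparison concrete, the block logic can be sketched in a few lines of code.  The Python below is a toy model of the behavior described in the bullets above; the block names and the simple two-state signal are my own simplifications, not the machine’s actual design.

```python
class Block:
    """One section of track, with a signal and brake-trip at its entrance."""
    def __init__(self, name):
        self.name = name
        self.signal = "green"    # green: a train may enter this block
        self.brake_trip = False  # trips the brakes of a train passing a red

class Interlocking:
    """Toy interlocking machine: one signal state per block of track."""
    def __init__(self, block_names):
        self.blocks = {name: Block(name) for name in block_names}

    def train_enters(self, name):
        block = self.blocks[name]
        block.signal = "red"     # no second train may enter this block
        block.brake_trip = True

    def train_leaves(self, name):
        block = self.blocks[name]
        block.signal = "green"
        block.brake_trip = False

line = Interlocking(["A", "B", "C"])
line.train_enters("B")
print(line.blocks["B"].signal)  # red
line.train_leaves("B")
print(line.blocks["B"].signal)  # green
```

Notice how coarse the granularity is: the machine knows only which whole blocks are occupied, never where a train is within one, which is exactly why the number of physical blocks caps how many trains a line can carry.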

As time moved on, the machine aged, and so too did the society around it.  After the Second World War, we replaced the bumbling network of railroads with an even more extensive network of interstate highways.  The train signal block, occupied by only one train at a time, was replaced by a simpler mechanism: the speed limit.

However, the MTA and the New York subways have lagged behind.  The speed and frequency of train service remains limited by how many train blocks were physically built into the interlocking machines (yes, in full disclosure, there is more than one interlocking machine, but they all share the same principles of operation).  This has made it extraordinarily difficult for the MTA to improve train service; all the MTA can do is maintain the aging infrastructure.  The closest thing the MTA has to a system-wide software update is a lot of WD-40.

 

Full-Steam Ahead

There is an exception to the constant swath of delays… two, actually.  In the 1990s and then again recently, the MTA did yank the old signals and interlocking machines from two subway lines and replace them with a fully automated fleet of trains, controlled remotely by a digital computer.  In an odd twist of fate, the subway evolved straight from its nineteenth-century roots to Elon Musk’s age of self-driving vehicles.

The two lines selected were easy targets: both serve large swaths of suburb in Brooklyn and Queens, and both are two-track lines, meaning they have no express service.  This made the switch to automated trains easy and very effective for moving large numbers of New Yorkers.  Of all the lines in New York, the two automated lines have seen the least reduction in on-time train service.  The big switch also had some additional benefits, like accurate countdown clocks in stations, a smoother train ride (especially when stopping and taking off), and the ability for train engineers to play Angry Birds during their shifts (yes, I have seen this).

The first to receive the update was the city’s then-obscure L line.  The L is one of only two trains to traverse the width of Manhattan Island and is the transportation backbone for many popular neighborhoods in Brooklyn.  In recent years, these neighborhoods have seen a spike in population due, in part, to frequent and reliable train service.

L train at its terminal station in Canarsie, Brooklyn

The contrast between the automated lines and the gear-box-controlled lines is astounding.  A patron of the subway can stand on a train platform waiting for an A or C train for half an hour… or they could stand on another platform and see two L trains at once on the same stretch of track.

The C line runs the oldest trains in the system, most of them over 50 years old.

The city also elected to upgrade the 7 line, the only other line in the city to traverse the width of Manhattan and one of only two main lines to run through the center of Queens.  Work on the 7 is set to finish soon and the results look to be promising.

Unfortunately for the rest of the city’s system, the switch to automatic train control for those two lines was not cheap, and it was not quick.  In 2005, it was estimated that a system-wide transition to computer-controlled trains would not be completed until 2045.  Some other cities, most notably London, made the switch to automated trains years ago.  It is tough to say why New York has lagged behind, but it most likely has to do with the immense ridership of the New York system.

New York is the largest American city by population and by land area.  This makes other forms of transportation far less viable when traveling through the city.  After the public opinion of highways in the city was ruined in the 1960s following the destruction of large swaths of the South Bronx, many of the city’s neighborhoods have been left nearly inaccessible via car.  Although New York is a very walkable city, its massive size makes commuting by foot from the suburbs to Manhattan impractical as well.  Thus the subways must run every day and for every hour of the day.  If the city wants to shut down a line to do repairs, it often can’t.  Oftentimes, lines are only closed for repairs on weekends and nights for a few hours.

 

Worth the Wait?

Even though it may take years for the subway to upgrade its signals, the city has no other option.  As discussed earlier, the interlocking machine can only support so many trains on a given length of track.  On the automated lines, transponders are placed every 500 feet, supporting many more trains on the same length of track.  Trains can also be stopped instantly instead of having to travel to the next red-signaled block.  With the number of derailments and stalled trains climbing, this unique ability of the remote-controlled trains is invaluable.  Additionally, automated trains running on four-track lines with express service could re-route instantly to adjacent tracks in order to completely bypass stalled trains.  Optimization algorithms could be implemented to maintain a constant and dynamic flow of trains.  Trains could be controlled more precisely during acceleration and braking to conserve power and prolong the life of the train.

For the average New Yorker, these changes would mean shorter wait times, less frequent train delays, and a smoother and more pleasant ride.  In the long term, the MTA would most likely save millions of dollars in repair costs without the clunky interlocking machine.  New Yorkers would also save entire lifetimes worth of time on their commutes.  The cost may be high, but unless the antiquated interlocking machines are put to rest, New York will be paying for it every day.

Cross Platform Learning- Opinion

Last semester, my Moodle looked a little barren. Only two of my classes actually had Moodle pages. This would be okay if only two of my classes had websites, but all of them did. In fact, most of the classes I took had multiple websites that I was expected to check, memorize, and be a part of throughout the semester. This is the story of how I kept up with:

  1. courses.umass.edu
  2. people.umass.edu
  3. moodle.umass.edu
  4. owl.oit.umass.edu
  5. piazza.com
  6. Flat World Learn On
  7. SimNet
  8. TopHat
  9. Investopedia
  10. Class Capture

 

The Beginning

At the beginning of the semester it was impossible to make a calendar. My syllabi (which weren’t given out in class) were difficult to find. Because I didn’t have a syllabus from which I could get the link to the teacher’s page, I had to remember the individual links to each professor’s class. This was a total waste of my time. I couldn’t just give up, either, because the syllabus is where the class textbook was listed. I felt trapped by the learning curve of new URLs that were being slung at me. I had moments where I questioned my ability to use computers. Was I so bad that I couldn’t handle a few new websites? Had technology already left me in the past?


The Semester

One of the classes I am taking is about integrating technology into various parts of your life. The class is an introductory business class with a tech focus. This class is the biggest culprit of too many websites. For homework we need website A, for class we use website B, for lab we use website C, the tests are based on the information from website D, and everything is poorly managed by website E.

Another class is completely a pen-on-paper note-taking class. In the middle of lecture, my professor will reference something on the website and then quickly go back to dictating notes. Reflecting on it, this teacher had a method of using online resources that I enjoyed. Everything I needed to learn for the tests was given to me in class, and if I didn’t understand a concept, there was in-depth help on the website.

One class has updates on Moodle that just direct me toward the online OWL course. This wasn’t terrible. I am okay with classes that give me a Moodle dashboard so I have one place to start my search for homework and textbooks. The OWL course also had the textbook. This was really nice: one-stop shopping for one class.

My last class (I know, I am a slacker that only took 4 classes this semester) never used the online resource which meant I never got practice using it. This was a problem when I needed to use it.


The End

I got over the learning curve of the 10 websites for 4 classes I was taking. But next semester I will just have to go through the same thing. I wish that professors at UMass all had a Moodle page that would at least have the syllabus and a link to their preferred website. But they don’t do that.

Automation with IFTTT


“If This, Then That”, or IFTTT, is a powerful and easy-to-use automation tool that can make your life easier. IFTTT is an easy way to automate tasks that would otherwise be repetitive or inconvenient. It operates on the fundamental idea of if statements from programming. Users can create “applets”, which are simply scripts that trigger when an event occurs. These applets can be as simple as “If I take a picture on my phone, upload it to Facebook”, or can be much more complex. IFTTT is integrated with over 300 different channels, including major services such as Facebook, Twitter, and Dropbox, which makes automating your digital life incredibly easy.

Getting Started with IFTTT and Your First Applet

Getting started with IFTTT is very easy. Simply head over to the IFTTT website and sign up. After signing up, you’ll be ready to start automating by creating your first applet. In this article, we will build a simple example applet that sends a text message of today’s weather report every morning.

In order to create an applet, click on “My Applets” at the top of the page, and select “New Applet”.

Now you need to select a service by clicking the “this” keyword. In our example, we want to send a text message of the weather every morning, so the service will be a weather service like Weather Underground. Hundreds of services are connected through IFTTT, so the possibilities are almost limitless. You can create applets that are based on something happening on Facebook, or even on your Android/iOS device.

Next, you need to select a trigger. Again, our sample applet will simply send a text message of the weather report to your phone in the morning, so the trigger is “Today’s weather report”. Triggers often have additional fields that need to be filled out; in this particular one, it’s the time of the report.

Next, an action service must be selected. This is the “that” part of IFTTT. Our example applet is going to send a text message, so the action service is going to fall under the SMS category.

Like triggers, there are hundreds of action services that can be used in your applets. In this particular action, you can customize the text message using variables called “ingredients”.

Ingredients are simply variables provided by the trigger service. In this example, since we chose Weather Underground as the trigger service, then we are able to customize our text message using weather related variables provided by Weather Underground such as temperature or condition.

After creating an action, you simply need to review your applet. In this case, we’ve just created an applet that will send a text message about the weather every day. If you’re satisfied with what it does, you can hit finish and IFTTT will trigger your applet whenever the trigger event occurs. Even from this simple applet, it is easy to see that the possibilities of automation are limitless!
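For readers who think in code, the applet we just built boils down to a trigger feeding ingredients into an action. The Python sketch below mimics that pattern; the trigger, ingredients, and action here are stand-ins I made up for illustration, not IFTTT’s actual API.

```python
def weather_trigger():
    """Stand-in trigger service: today's weather report as 'ingredients'."""
    return {"TodaysCondition": "Rain", "HighTempFahrenheit": 52}

def sms_action(message):
    """Stand-in action service: pretend to send a text message."""
    return f"SMS sent: {message}"

def run_applet(trigger, action, template):
    """If the trigger fires, fill the template with its ingredients and act."""
    ingredients = trigger()
    return action(template.format(**ingredients))

result = run_applet(
    weather_trigger,
    sms_action,
    "Today: {TodaysCondition}, high of {HighTempFahrenheit}F",
)
print(result)  # SMS sent: Today: Rain, high of 52F
```

The `{TodaysCondition}`-style placeholders play the role of IFTTT’s ingredients: the trigger service decides which variables exist, and the action’s template decides where they go.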

Water Damage: How to prevent it, and what to do if it happens

Getting your tech wet is one of the most common things people worry about when it comes to their devices. Rightfully so: water damage is often excluded from manufacturer warranties, can permanently ruin technology under the right circumstances, and is one of the easiest things to do to a device without realizing it.

What if I told you that water, in general, is one of the easiest and least-likely things to ruin your device, if reacted to properly?

Don’t get me wrong; water damage is no laughing matter. It’s the second most common way that tech ends up kicking the bucket, the most common being drops (but not for the reason you might think). While water can quite easily ruin a device within minutes, most, if not all, of its harm can be prevented if one follows the proper steps when a device does end up getting wet.

My goal with this article is to highlight why water damage isn’t as bad as it sounds and, most importantly, how to react properly when your shiny new device ends up the victim of either a spill… or an unfortunate swan dive into a toilet.

_________________

Water, in its purest form, is pretty awful at conducting electricity. However, because most of the water we encounter on a daily basis is chock-full of dissolved ions, it’s conductive enough to cause serious damage to technology if not addressed properly.

If left alone, the conductive ions in the water will bridge several points on your device, potentially allowing harmful bursts of electricity to be sent places they shouldn’t go, resulting in the death of your device.

While that does sound bad, here’s one thing about water damage that you need to understand: you can effectively submerge a turned-off device in water, and as long as you fully dry the whole thing before turning it on again, there’s almost no chance that the water will cause any serious harm.


You need to react fast, but right. The worst thing you can do to your device once it gets wet is try to turn it on or ‘see if it still works’. The very moment that a significant amount of water gets on your device, your first instinct should be to fully power off the device, and once it’s off, disconnect the battery if it features a removable one.

As long as the device is off, it’s very unlikely that the water will be able to do anything significant, even less so if you unplug the battery. The amount of time you have to turn off your device before the water does any real damage comes down, honestly, to complete luck. It depends on where the water seeps in, how conductive it is, and what gets shorted if a short circuit does occur. Remember, short circuits are not innately harmful; it’s just a matter of what ends up getting shocked.

Once your device is off, your best chance for success is to be as thorough as you possibly can when drying it. Dry any visible water off the device, and try to let it sit out in front of a fan or something similar for at least 24 hours (though please don’t put it near a heater).

Rice is also great at drying your devices, especially smaller ones. Simply submerge the device in (unseasoned!) rice, and leave it again for at least 24 hours before attempting to power it on. Since rice is so great at absorbing liquids, it helps to pull out as much water as possible.


If the device in question is a laptop or desktop computer, bringing it down to us at the IT User Services Help Center in Lederle A109 is an important option to consider. We can take the computer back into the repair center and take it apart, making sure that everything is as dry as possible so we can see if it’s still functional. If the water did end up killing something in the device, we can also hopefully replace whatever component ended up getting fried.

Overall, there are three main points to be taken from this article:

Number one, spills are not death sentences for technology. As long as you follow the right procedures, making sure to immediately power off the device and not attempt to turn it back on until it’s thoroughly dried, it’s highly likely that a spill won’t result in any damage at all.

Number two is that, when it comes to water damage, speed is your best friend. The single biggest thing to keep in mind is that the faster you get the device turned off and the battery disconnected, the sooner it will be safe from short-circuiting.

Lastly, and a step that many of us forget about when it comes to stuff like this: take your time. A powered-off device that was submerged in water has a really good chance of being usable again, but that chance goes out the window if you try to turn it on too early. I’d suggest that smartphones and tablets, at the very least, get a thorough air drying followed by at least 24 hours in rice. For laptops and desktops, however, your best bet is to either open it up yourself, or bring it down to the Help Center so we can open it up and make sure it’s thoroughly dry. You have all the time in the world to dry it off, so don’t ruin your shot at fixing it by testing it too early.

I hope this article has helped you understand why you shouldn’t be afraid of spills, and what to do if one happens. By following the procedures I outlined above, and with a little bit of luck, it’s very likely that any waterlogged device you end up with will survive its unfortunate dip.

Good luck!

Tips for Gaming Better on a Budget Laptop

Whether you came to college with an old laptop, or want to buy a new one without breaking the bank, making our basic computers faster is something we’ve all thought about at some point. This article will show you some software tips and tricks to improve your gaming experience without losing your shirt, and at the end I’ll mention some budget hardware changes you can make to your laptop. First off, we’re going to talk about in-game settings.

 

In-Game Settings:

All games have built in settings to alter the individual user experience from controls to graphics to audio. We’ll be talking about graphics settings in this section, primarily the hardware intensive ones that don’t compromise the look of the game as much as others. This can also depend on the game and your individual GPU, so it can be helpful to research specific settings from other users in similar positions.

V-Sync:

V-Sync, or Vertical Synchronization, allows a game to synchronize the framerate with that of your monitor. Enabling this setting will increase the smoothness of the game. However, for lower end computers, you may be happy to just run the game at a stable FPS that is less than your monitor’s refresh rate. (Note – most monitors have a 60Hz or 60 FPS refresh rate). For that reason, you may want to disable it to allow for more stable low FPS performance.
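To see why a framerate below your monitor’s refresh rate matters, it helps to look at the frame-time math. A quick sketch (the 60 Hz and 40 FPS numbers are purely illustrative):

```python
# Frame-time arithmetic behind the V-Sync trade-off. The 60 Hz monitor
# and 40 FPS GPU below are illustrative numbers, not measurements.
def frame_time_ms(fps: float) -> float:
    """Milliseconds available to render one frame at a given framerate."""
    return 1000.0 / fps

print(round(frame_time_ms(60), 1))  # 16.7 -> what a 60 Hz monitor expects
print(round(frame_time_ms(40), 1))  # 25.0 -> a GPU stuck at 40 FPS misses
                                    #         that window, so V-Sync stalls
```

A GPU that can’t finish a frame inside the monitor’s window is exactly the case where disabling V-Sync gives smoother, more stable output.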

Anti-Aliasing:

Anti-Aliasing, or AA for short, is a rendering option which reduces the jaggedness of lines in-game. Unfortunately, the additional smoothness heavily impacts hardware usage, and disabling it while keeping other settings like texture quality or draw distance higher can yield big performance improvements without hurting a game’s appearance too much. Additionally, there are many different kinds of AA that games might offer. MSAA (Multisample AA) and the even more intensive TXAA (Temporal AA) are both higher-quality smoothing processes that have an even bigger impact on performance, so turning them off on lower-end machines is almost always a must. FXAA (Fast Approximate AA) uses the least processing power, and can therefore be a nice setting to leave on if your computer can handle it.

Anisotropic Filtering (AF):

This setting sharpens textures that are viewed at an angle, such as the ground stretching away from your character, which would otherwise blur into a smear. That extra clarity comes at a cost: the GPU has to take additional texture samples for every surface it applies to. Turning this off or down can yield improvements in performance on low-end hardware, at the price of slightly blurrier distant surfaces.

Other Settings:

While the aforementioned are the heaviest hitters in terms of performance, changing some other settings can help increase stability and performance too (beyond just simple texture quality and draw distance tweaks). Shadows and reflections are often unnoticed compared to other effects, so while you may not need to turn them off, turning them down can definitely make an impact. Motion blur should be turned off completely, as it can make quick movements result in heavy lag spikes.

Individual Tweaks:

The guide above is a good starting point for graphics settings; because there are so many different hardware models, there is an equally large number of possible setting combinations. From this point, you can start to increase settings slowly to find the sweet spot between performance and quality.

Software:

Before we talk about some more advanced tips, it’s good practice to close applications that you are not using to increase free CPU, Memory, and Disk space. This alone will help immensely in allowing games to run better on your system.

Task Manager Basics:

Assuming you’ve tried to game on a slower computer, you’ll know how annoying it is when the game is running fine and suddenly everything slows down to slideshow speed and you fall off a cliff. Chances are that this kind of lag spike is caused by other “tasks” running in the background, preventing the game from using the power it needs to keep going. Or perhaps your computer has been on for a while, so when you start the game, it runs slower than its maximum speed. Even though you hit the “X” button on a window, what’s called the “process tree” may not have been completely terminated. (Think of this like cutting down a weed but leaving the roots.) This can result in resources being taken up by idle programs that you aren’t using right now. It’s at this point that Task Manager becomes your best friend.

To open Task Manager, simply press CTRL + SHIFT + ESC, or press CTRL + ALT + DEL and select Task Manager from the menu. When it first appears, you’ll notice that only the programs you have open are listed; click the “More Details” button at the bottom of the window to expand Task Manager. Now you’ll see a series of tabs, the first one being “Processes” – which gives you an excellent overview of everything your CPU, Memory, Disk, and Network are crunching on. Clicking any of these column headers will sort the processes using the most of that resource to the top of the column, so you can see what’s really using your computer’s processing power.

It is important to realize that many of these processes are part of your operating system, and therefore cannot be terminated without causing system instability. However, things like Google Chrome and other applications can be closed by right-clicking them and hitting “End Task”. If you’re ever unsure whether you can safely end a process, a quick Google search of the process in question will most likely point you in the right direction.

Startup Processes:

Here is where you can really make a difference to your computer’s overall performance, not just for gaming. From Task Manager, if you select the “Startup” tab, you will see a list of all programs and services that can start when your computer is turned on. Task Manager gives an impact rating of how much each task slows down your computer’s boot time. The gaming app Steam, for example, can noticeably slow down a computer on startup. A good rule of thumb is to allow virus protection to start with Windows, while everything else is up to individual preference. Disabling these processes on startup prevents unnecessary tasks from ever being opened, and leaves more hardware resources available for gaming.

Power Usage:

You probably know that, unlike desktops, laptops contain a battery. What you may not know is that you can alter your battery’s behavior to increase performance, as long as you don’t mind it draining a little faster. On the taskbar, which is by default located at the bottom of your screen, you will notice a collection of small icons next to the date and time on the right, one of which looks like a battery. Left-clicking it brings up a quick battery menu, while right-clicking brings up a menu with a “Power Options” entry.

Clicking this will bring up a settings window which allows you to change and customize your power plan for your needs. By default it is set to “Balanced”, but changing to “High Performance” can increase your computer’s gaming potential significantly. Be warned that battery duration will decrease on the High Performance setting, although it is possible to change the battery’s behavior separately for when your computer is using the battery or plugged in.

Hardware:

Unlike desktops, laptops offer few upgrade paths. However, one option exists for almost every computer that can have a massive effect on performance if you’re willing to spend a little extra.

Hard Disk (HDD) to Solid State (SSD) Drive Upgrade:

Chances are that if you have a budget computer, it probably came with a traditional spinning hard drive. For manufacturers, this makes sense, as they are cheaper than solid states and work perfectly well for light use. Games, however, demand that laptop HDDs recall and store data very quickly, sometimes causing them to fall behind. Additionally, laptops have motion sensors built into them which restrict read/write capabilities when the computer is in motion, to prevent damage to the spinning disk inside the HDD. An upgrade to an SSD not only eliminates this restriction, but also offers much faster read/write times due to the lack of any moving parts. Although SSDs can get quite expensive depending on the size you want, companies such as Crucial or Kingston offer a comparatively cheap alternative to Samsung or Intel while still giving you the core benefits of an SSD. Although there are a plethora of tutorials online demonstrating how to install a new drive into your laptop, make sure you’re comfortable with all the risks before attempting it, or simply take your laptop into a repair store to have them do it for you. It’s worth mentioning that when you install a new drive, you will need to reinstall Windows and all your applications from your old drive.

Memory Upgrade (RAM):

Some laptops have an extra memory slot, or just ship with a lower capacity than what they are capable of holding. Most budget laptops will ship with 4GB of memory, which is often not enough to support both the system, and a game.

Upgrading or increasing memory can give your computer more headroom to process and store data without lagging up your entire system. Unlike with SSD upgrades, memory is very specific and it is very easy to buy a new stick that fits in your computer, but does not function with its other components. It is therefore critical to do your research before buying any more memory for your computer; that includes finding out your model’s maximum capacity, speed, and generation. The online technology store, Newegg, has a service here that can help you find compatible memory types for your machine.

Disclaimer: 

While these tips and tricks can help your computer to run games faster, there is a limit to what hardware is capable of. Budget laptops are great for the price point, and these user tricks will help squeeze out all their potential, but some games will simply not run on your machine. Make sure to check a game’s minimum and recommended specs before purchasing/downloading. If your computer falls short of minimum requirements, it might be time to find a different game or upgrade your setup.

PCIe Solid State Drives: What They Are and Why You Should Care

Consumer computers are largely moving away from hard disk drives, mostly because solid state drives have gotten so cheap. Upgrading to a solid state drive is one of the best things that you can do for your computer. Unlike with a RAM or CPU upgrade, you will notice a dramatic difference in day-to-day usage when coming from a hard drive. The only real benefit of a traditional hard drive over a solid state drive is capacity per dollar. If you want anything over 1TB, you’re basically going to have to settle for a hard drive.

Solid-State Drive with SATA bus (note the gold SATA connectors)

While SSD prices have come down, SSD technology has also improved dramatically. The latest trend for solid state drives is a move away from SATA to PCIe. Serial ATA, or SATA, is the bus interface that normally connects drives to computers. The latest version, SATA 3, has a bandwidth limit of 750 Megabytes per second. This used to be plenty for hard drives and even early SSDs; however, modern SSDs can easily saturate that bus. This is why many SSDs have started to move to PCIe. Depending on the implementation, PCIe can do up to 32 Gigabytes per second. (That’s nearly 43 times as fast!) This means that SSDs have plenty of room to grow in the future. There are a couple of different technologies and terms related to PCIe SSDs that you may want to make yourself familiar with:
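The “nearly 43 times” figure falls straight out of the two bandwidth numbers above:

```python
# The bandwidth gap between the SATA 3 and PCIe figures cited above.
sata3_mb_per_s = 750     # SATA 3 limit from the text, in megabytes/second
pcie_gb_per_s = 32       # best-case PCIe figure from the text, in gigabytes/s

ratio = (pcie_gb_per_s * 1000) / sata3_mb_per_s
print(round(ratio, 1))   # 42.7 -> "nearly 43 times as fast"
```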

M.2

M.2 is a newer interface for connecting SSDs to motherboards. The connector is much smaller than SATA’s, allowing SSDs to be much smaller and to attach physically to the board instead of connecting via a cable. The confusing thing about M.2 is that it can operate drives over either SATA or PCIe; most newer drives and motherboards only support the PCIe version. M.2 drives come in a few standard lengths, ranging from 16 to 110 millimeters, and there are a few different connector keying styles with varying pin layouts. M.2 connectors also support other PCIe devices, such as wireless cards.

NVMe

NVM Express is a host controller interface that allows the CPU to talk to the SSD. The standard is meant to replace AHCI, which was designed in the era of spinning hard drives and is too slow for managing solid state drives, so NVMe was created specifically for that purpose. It lets CPUs communicate with the drive at much lower latency, and is largely the reason that current PCIe SSDs can reach speeds over 3 Gigabytes per second.

Solid State is soon to become a universal standard as older machines are phased out and consumer expectations rise. Don’t get left in the dust.

How to Fund Your Project or Organization with Online Crowdfunding!

Image: Edison Awards, 2016

Most of us remember being in high school, and having people try to sell us candy bars at outrageous prices in order to fund their mission trips, charity organizations, abroad experiences, and other such things. I always remember being impressed at the commitment of people, and confused as to how they managed to raise enough money selling candy bars! Of course, in many of these cases, parents and family members were providing much of the funding.

In this new era of interconnection through social media, it is easier than ever to raise money from your social circle using the internet. This kind of fundraising is called crowdfunding, and most of us know it best through Kickstarter.

Kickstarter is a crowdfunding platform which allows people to generate funds for various projects. These projects range from the mundane, such as this (for anyone who doesn’t feel like clicking on the link, that is a man trying to raise $15 to make a french toast pancake waffle), to the brilliant (the Pebble smartwatch), to the truly disappointing and scandalous (the Yogventures video game).

One cool aspect of Kickstarter is that only successful campaigns keep the money. For instance, if your project requires $100, you must actually raise $100 in order to be granted the money. If the campaign is not successful, the money is returned to the donors. This makes people more likely to fund projects, as they know that if the project is not fully funded, the creators will not abscond with their cash. In addition, many Kickstarter campaigns include rewards based on how much a person donates. For example, a video game development project might give a cool exclusive skin to people who donate $5, a signed copy to people who donate $30, and a studio tour to people who donate over $1,000.
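The all-or-nothing rule fits in a few lines of Python. The `settle_campaign` helper below is hypothetical, purely to illustrate the rule, and is not part of Kickstarter’s actual platform:

```python
# All-or-nothing funding: the creator receives the pledges only if the
# goal is met; otherwise every donor is refunded.
def settle_campaign(goal: float, pledges: list) -> float:
    """Return what the creator receives: the full total if the goal is
    met, otherwise 0.0 (all pledges go back to the donors)."""
    total = sum(pledges)
    return total if total >= goal else 0.0

print(settle_campaign(100, [40, 35, 30]))  # 105 -> funded, creator is paid
print(settle_campaign(100, [40, 35]))      # 0.0 -> goal missed, refunds
```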

Now you may be asking, “this is cool and all, but how does this apply to me? I have no intention of creating a video game or developing some huge project.”

Crowdfunding does not need to be limited to projects and startups. For instance, if you are a member of a Registered Student Organization here at UMass, you may (OK, you almost definitely do) find yourself thinking that you do not have enough money! Maybe you have a trip to go on, an event you want to host, or equipment you need to buy. Crowdfunding is a great way to raise some funds! The UMass Minute Fund is a website which allows student groups on campus to crowdfund money. For RSOs, the Minute Fund is a better platform to raise money than places such as Kickstarter, because it does not take any cut of the money raised (as Kickstarter and other for-profit companies do). This really works, too! Here is a trip that I went on, funded by the Minute Fund. Here is the HackUMass Minute Fund (which was also fully funded).

In short, when your organization is running out of cash, your social circle might be able to sponsor you. Create these pages, share them on Facebook, Twitter, etc, and watch the money for your organization roll in!

How to merge Windows Mail and Calendar with iCloud

If you are using a Windows PC and an iPhone, you might want to merge your calendar and mail with iCloud instead of registering a new Microsoft account. Apple’s services recently became compatible with Windows 10, and they are very easy to set up and use.

STEP 1:

Click the Start button, or search for Settings in Cortana

STEP 2:

Go to Settings and click Accounts

STEP 3:

Click Add an account

STEP 4:

Select iCloud

STEP 5:

Enter your iCloud email and password. Note: this is not the regular password for your Apple ID. You need to generate an app-specific password on the Apple ID website, which requires two-factor authentication. How do you do that?

  • Go to Appleid.apple.com and sign in with your regular email and password
  • Verify your identity with two-factor authentication
  • Click ‘Generate Passwords’
  • You are all set: use the newly generated password to sign in to the account you added in Windows 10


How to Import your Academic Moodle Calendar into your Personal Google Calendar

How to Export your Moodle Calendar for calendar subscription
1. Navigate to https://moodle.umass.edu/ and log in with NetID and password
2. Click under your name in the upper right hand corner and click on Dashboard
3. Scroll to the bottom of the page and click on Go to calendar… in the bottom right hand corner
4. Switch the drop down menu to specify whether you want a specific class or all of your classes bundled under one calendar (this is important later, at step 6)
5. Click on the Export calendar button in the middle of the page
6. Some settings will show up regarding exporting your UMass Moodle Calendar
a. Under the Export* menu, I would recommend choosing All events if you decided earlier to bundle your classes into one export; if you’re exporting classes individually, I would recommend selecting Events related to courses instead
b. Under the for* menu, I would recommend choosing Custom range, because it guarantees all the events will be added
7. Click on Get calendar URL and *triple click* on the generated Calendar URL (as it may overlap with the Monthly view column)
8. You can now import this calendar into any calendar client that allows for import by URL

Note: This export may have to be updated in the future because it won’t add new events retroactively.


How to Import this Moodle Calendar into Google Calendar
1. Navigate to https://www.google.com/calendar and log in with your credentials
2. On the left hand side, under Other calendars, click the downward-facing caret symbol and click on Add by URL
3. Paste the copied URL; this step may take 20 or so seconds to load the new calendar
a. This step will fail if the generated calendar URL was not copied in its entirety.
4. You can rename this calendar by clicking on the downward-facing caret symbol to the right of it, clicking on Calendar settings, then changing the field Calendar Name:
Happy Google Calendaring!

Quantum Computers: How Google & NASA are pushing Artificial Intelligence to its limit


“If you think you understand quantum physics, you don’t understand quantum physics.” Richard Feynman’s famous remark reflects the fact that we simply do not yet fully understand the mechanics of the quantum world. NASA, Google, and D-Wave are trying to figure this out as well, aiming to revolutionize our understanding of physics and computing with one of the first commercial quantum computers, claimed to run 100 million times faster than traditional computers on certain problems.

Quantum Computers: How they work

To understand how quantum computers work, you must first recognize how traditional computers work. For several decades, the base component of a computer processor has been the transistor. A transistor either allows or blocks the flow of electrons (i.e. electricity) with a gate, so it can hold one of two possible values: on or off, flowing or not flowing. The value of a transistor is binary, and digital information is represented by strings of these binary digits, or bits for short. Bits are very basic, but paired together they can represent exponentially more possible values with each one added. Therefore, more transistors means faster data processing, and to fit more transistors on a silicon chip we must keep shrinking them. Transistors nowadays have gotten so small that they measure only 14 nm across: 8 times smaller than an HIV virus and 500 times smaller than a red blood cell.

As transistors approach the size of only a few atoms, electrons may simply pass through a blocked gate via a phenomenon called quantum tunneling. In the quantum realm, physics works differently from what we are used to, and conventional transistor designs start making less and less sense at this scale. We are starting to see a physical barrier to the efficiency of our technological processes, but scientists are now using these unusual quantum properties to their advantage to develop quantum computers.

Introducing the Qubit!

While traditional computers use bits as their smallest unit of information, quantum computers use qubits. Like bits, qubits can represent the values 0 or 1. In one implementation, the 0 and 1 are encoded in a photon’s polarization; what separates qubits from bits is that they can also be in any proportion of both states at once, a property called superposition. You can test the value of such a photon by passing it through a filter, and it will collapse to be either vertically or horizontally polarized (0 or 1). Unobserved, the qubit is in superposition, with probabilities for either state; the instant you measure it, it collapses to one of the definite states. This property is a game-changer for computing.
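Measurement collapse can be mimicked classically with a random draw. This toy sketch models only the output statistics, not real quantum behavior; the 0.5 probability is an assumed amplitude for an equal superposition:

```python
# Toy model of qubit measurement: before measurement the qubit is described
# only by a probability for each outcome; measuring collapses it to 0 or 1.
import random

def measure(p_one: float) -> int:
    """Collapse the superposition: returns 1 with probability p_one, else 0."""
    return 1 if random.random() < p_one else 0

random.seed(0)  # make the demo reproducible
results = [measure(0.5) for _ in range(10_000)]
# Any single measurement is definite, but the statistics reveal the
# underlying probabilities: the fraction of 1s comes out near 0.5.
print(sum(results) / len(results))
```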


When normal bits are lined up, they can represent one of many possible values. For example, 4 bits can represent one of 16 (2^4) possible values depending on their configuration. 4 qubits, on the other hand, can represent all 16 combinations at once, with each added qubit growing the number of possible outcomes exponentially!
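The counting argument is easy to verify by enumerating every configuration of a 4-bit register. A classical register sits in exactly one of these states at a time; the point above is that 4 qubits carry amplitudes for all 16 simultaneously:

```python
# Enumerate every state of a 4-bit register: 2**4 = 16 possibilities.
from itertools import product

n = 4
states = list(product([0, 1], repeat=n))
print(len(states))            # 16
print(states[0], states[-1])  # (0, 0, 0, 0) (1, 1, 1, 1)
```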

Qubits can also exhibit another property we call entanglement: a close connection that makes entangled qubits react to a change in each other’s state instantaneously, regardless of the distance between them. This means that when you measure the value of one qubit, you can deduce the value of another without even having to look at it!

Traditional vs Quantum: Calculations Compared

Performing logic on traditional computers is pretty simple. Computers perform logic using logic gates, which take a simple set of inputs and produce a single output (based on AND, OR, XOR, and NAND). For example, two bits of 0 (false) and 1 (true) passed through an AND gate produce 0, since both bits aren’t true. The same 0 and 1 passed through an OR gate produce 1, since only one of the two needs to be true for the outcome to be true. Quantum gates work on a much more complex level: they take an input of superpositions (qubits, each with probabilities of 0 or 1), rotate those probabilities, and produce another superposition as an output; measuring the outcome collapses the superpositions into an actual sequence of 0s and 1s for one final, definite answer. What this means is that you can run the entire set of calculations possible with a given setup all at the same time!
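The classical gate examples above can be written as one-line truth functions; a minimal sketch using Python’s bitwise operators on single bits:

```python
# Truth-table checks for the gate examples in the text (0 = false, 1 = true).
AND = lambda a, b: a & b   # true only when both inputs are true
OR  = lambda a, b: a | b   # true when either input is true
XOR = lambda a, b: a ^ b   # true only when the inputs differ

print(AND(0, 1))  # 0 -> matches the text: both bits aren't true
print(OR(0, 1))   # 1 -> one true input is enough
print(XOR(1, 1))  # 0 -> identical inputs give false
```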


When you measure the result of the qubits’ superpositions, it will probably give you the answer you want; but because the process is probabilistic, you need to verify that the outcome is correct, and may have to run the computation again. Even with that overhead, exploiting the properties of superposition and entanglement can be exponentially more efficient than anything possible on a traditional computer.

What Quantum Computers mean for our future

Quantum computers will most likely not replace our home computers, but for certain tasks they are far superior. In applications such as searching corporate databases, a computer may need to check every entry in a table. A quantum computer can do this task in roughly the square root of that time; for tables with billions of entries, this saves a tremendous amount of time and resources. The most famous potential use of quantum computers is in IT security. Tasks such as online banking and browsing your email are kept secure by encryption, where a public key is published so that anyone can encode messages that only you can decode. The problem is that a public key can, in principle, be used to calculate the owner’s secret private key, but doing the math on a normal computer would literally take years of trial and error. A quantum computer could do it in a breeze, with an exponential decrease in calculation time! Simulations of the quantum world are also intense on resources; regular computers struggle with larger structures such as molecules. So why not simulate quantum physics with actual quantum physics? Quantum simulations could, for instance, lead to insights on proteins that revolutionize medicine as we know it.
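A back-of-the-envelope sketch of the database-search claim. The billion-entry table is an assumed example; a Grover-style quantum search scales on the order of √N lookups rather than N:

```python
# Worst-case classical search checks every entry; a Grover-style quantum
# search needs on the order of sqrt(N) lookups.
import math

entries = 1_000_000_000                # assumed billion-row table
classical_lookups = entries            # check every row in the worst case
quantum_lookups = math.isqrt(entries)  # ~sqrt(N) for a Grover-style search

print(classical_lookups)  # 1000000000
print(quantum_lookups)    # 31622 -> tens of thousands instead of a billion
```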


What’s going on now in Quantum Computing? How NASA & Google are using AI to reveal nature’s biggest secrets.

We’re unsure whether quantum computers will remain a specialized tool or become a big revolution for humanity. We do not know the limits of the technology, but there is only one way to find out. One of the first commercial quantum computers, developed by D-Wave, is housed at Google and NASA’s research center in California, where the chip is operated at an incredible temperature nearly 200 times colder than interstellar space. The teams are currently focused on using it to solve optimization problems: finding the best outcome given a set of data, such as the best flight path to visit a set of places you’d like to see. Google and NASA are also using artificial intelligence on this computer to further our understanding of the natural world. Since it operates on quantum-level mechanics beyond our everyday intuition, we can ask it questions that we may never otherwise be able to answer. Questions such as “are we alone?” and “where did we come from?” can be explored. We have evolved into creatures that are able to ask about the nature of physical reality, and being able to probe the unknown makes us even more remarkable as a species. We have the power to do it and we must do it, because that is what it means to be human.

Bonus Bit: Surviving the Steam Summer Sale

Ahh yes, the Steam Summer Sale: the glorious and magical two weeks of wallet-crushing sales and bundles. Whether you are new or a grizzled veteran, there is always something to be found at a price you thought was impossible. But wait, it’s dangerous out there; take a read through this before you head out into the tsunami of sale tags, to make sure you get the most out of your Summer Sale action.

 

Quick Details on the Summer Sale

What: Large discounts on hundreds of video games from the largest PC gaming platform
Who: Anyone who owns a computer
When: June 22nd at 1pm EST until July 5th at 1pm EST
Where: store.steampowered.com

Changes and Updates to the Summer Sale Format

Veterans of past Summer Sales will remember daily deals and flash sales, which are missing from this year’s sale; instead, Steam will curate a list of games already on sale that they think you should take a look at. This unfortunately limits what Valve can do with the sale: instead of community events like in previous years (the monster clicker game, or being split into colored teams), they have decided to release limited summer sale stickers. What are stickers, you ask? Stickers act in a similar way to Trading Cards, but instead of dropping from time spent in game, they drop based on certain activities that Valve wants to encourage (checking Steam each day during the sale, etc.), and if you fill up your sticker book, you may get a special surprise. Trading Cards are also back this year, and seem to be dropping in the same manner as in previous sales, based on how much money you have spent during the sale (currently, each $10 increase gets you a card), with a special badge that can be crafted if you collect all the cards.

 

Tips for Newcomers

Your first Summer Sale is almost always the most memorable: seeing hundreds of games that you want for 60%–95% off embeds a nostalgic feeling that is hard to shake. Many veterans will complain that the sales aren’t what they used to be, but in reality it is more likely that they’ve already picked up the games they want, so the sale seems to lose a bit of its luster for them. To a newbie, though, it is all brand new and very easy to get lost in the fray. To keep you from getting burnt out during the first week of sales, I suggest you check out the r/steam and r/pcmasterrace subreddits (disclaimer: PCMR is a Reddit group by and for PC gaming; there are no political allegiances, and Mac heathens and console peasants are welcomed) and the Summer Sale megathreads, to keep up with the special sales and answer any questions that you have.

Even though it is a bit outdated, I suggest keeping this flow chart in mind, as planning your purchases can help keep you from breaking the bank. Another tidbit: Steam has a refund option. As long as you have owned the game for less than 14 days and have less than 2 hours of playtime, you can refund it. But be careful: Steam refunds whole purchases, not single games, so if you buy 5 games in one order and want to refund 1, you will have to refund the other 4 as well. Once you get down to playing your new games, don’t forget to include other people; Discord, TeamSpeak, and Mumble are great ways of voice chatting with your friends if the Steam VOIP service doesn’t interest you, and they can provide structure if you are playing squad MMOs.
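The refund rule boils down to two thresholds. A quick sketch with a hypothetical helper, not part of any Steam API, just to make the limits concrete:

```python
# Steam's stated refund window: under 14 days of ownership AND under
# 2 hours of playtime. `refund_eligible` is a hypothetical helper.
def refund_eligible(days_owned: float, hours_played: float) -> bool:
    return days_owned < 14 and hours_played < 2

print(refund_eligible(3, 1.5))   # True  -> inside both limits
print(refund_eligible(3, 6.0))   # False -> too much playtime
print(refund_eligible(20, 0.5))  # False -> owned too long
```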

Remember to stay safe out there; it's a big sale, but with a bit of planning and some self-control, you and your wallet should stay intact.

 

Content Providers and Net Neutrality: A Double-Edged Sword

Source: http://www.thetelecomblog.com/2016/06/15/fccs-net-neutrality-upheld-in-appeals-court-decision/


Net neutrality is the principle that data should be treated equally by internet service providers (ISPs) and without favoring or blocking particular products or websites. Those in favor of net neutrality argue that ISPs should not be able to block access to a website run by their competitor or offer “fast lanes” to deliver data more efficiently for a hefty fee. Imagine if Verizon could stop customers from researching about switching to Comcast, or block access to negative press about their business practices. For ISPs, network inequality is a pretty sweet deal. Broadband providers can charge premiums for customers to access existing network-structures, and control the content viewed by subscribers.

Essentially, a lack of network neutrality actively promotes discrimination against competitors and encourages ISPs to deliberately limit high-speed data access. This form of throttling speeds when there are negligible costs of production after initial development is known as “artificial scarcity.” Supply is intentionally restricted which makes the item, internet access, more valuable.

Without net neutrality, internet providers have free rein over deciding which content reaches their subscribers. In 2014, this issue came to a head when Comcast and other broadband suppliers intentionally restricted the data transmission for Netflix services. To appease customers with a paid subscription who could no longer watch the streaming service, Netflix agreed to pay the broadband companies tens of millions of dollars a year. Evidently, a lack of net neutrality creates a conflict of interest between internet service providers and content firms like Google, Facebook, and Netflix. These content providers want consumers to have unfettered access to their services. Tolls for network access create barriers for internet-based services which rely on ad revenue and network traffic.

Despite the threat that a lack of network neutrality poses to content-centric services, many tech companies have been hesitant to vehemently oppose restricting data access. Facebook is investing in creating their own ecosystem. With Facebook as a central hub where you can connect with friends, view businesses, listen to music and play games, the company has little incentive to petition for the free and universal flow of information and Web traffic. From a corporate perspective, every web interaction would ideally be done through Facebook. In a similar vein, Google has been moving closer and closer to becoming an internet provider themselves. Company initiatives like Google Fiber, Project Fi and Project Loon are stepping stones toward Google dominating both the web-traffic and web-access businesses. This creates a double-edged sword where unrestricted internet access both helps and harms content providers. While tech companies do not want restricted access to their own sites, they would love to restrict consumer access to their rivals'. The burden of protecting a free internet and the unrestricted flow of information therefore lies on consumers.

Password Managers and You

Today we’re going to deal with an issue that I’m sure many of us run into on a daily basis: managing passwords. Given that you probably use a bajillion different services, each of which has its own password requirements, and given that UMass makes you change your password once a year, you probably have trouble keeping them all straight. Luckily for you, there are tools you can use to keep track of your passwords!

 

With these tools, you can use one super-strong password to keep all your other passwords safe, easily searchable, and all in one place. They can often be used to automatically fill in login info on the web.
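The "one strong password protects the rest" idea works roughly like this: the master password is fed through a slow key-derivation function to produce the key that encrypts the vault. Here's a minimal sketch using Python's standard library; the function name and iteration count are illustrative assumptions, not how any particular manager actually implements it:

```python
import hashlib
import os

def derive_vault_key(master_password: str, salt: bytes) -> bytes:
    """Turn one strong master password into a 32-byte encryption key.

    PBKDF2 is deliberately slow (many iterations), which makes
    guessing the master password by brute force expensive.
    The derived key would then encrypt the stored password vault.
    """
    return hashlib.pbkdf2_hmac(
        "sha256", master_password.encode(), salt, iterations=200_000
    )

salt = os.urandom(16)  # stored alongside the vault, not secret
key = derive_vault_key("correct horse battery staple", salt)
print(len(key))  # 32 bytes, enough for an AES-256 key
```

The important property: the same master password and salt always yield the same key, so the vault can be decrypted later, while the vault file on disk never contains the master password itself.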

 

There are many password managers out there. You can find reviews of them simply by googling “password manager.” The ones I am going to mention here are the default Chrome password manager and LastPass.

 

The first and easiest one, Google Smart Lock, is so ubiquitous that you’ve probably been using it all along! Any time Google Chrome asks you to “save” a password, it gets stored in Google Smart Lock. If you want to see your passwords, or manually add new ones, simply go to “passwords.google.com” and log in with your (non-UMass) Google account. Voila! You can see all of the passwords that you have saved while using Chrome.


What about if you aren’t a Chrome user? Or maybe you don’t like the idea of Google storing your data… What can you do?

You can use a manager like LastPass. This browser extension/mobile app can also keep your passwords safe and encrypted. You can even set up two-factor authentication (so that you would have to have two devices on you to be able to see your saved passwords). You can find more information here: https://www.lastpass.com/how-it-works, but it works in essentially the same way as Google Smart Lock. You can save passwords, add new passwords, automatically fill out forms, etc.

img-vault-tour-1-jpg

So get one of these managers, and never worry about forgetting your many many passwords again!

The New Face of the FCC

With any incoming president, interest swirls around cabinet nominees and agency appointees. Many set precedent for their departments, perhaps none more so than Ajit Pai, Chairman of the Federal Communications Commission.  An advocate for deregulation and free-market ideals, Pai has a unique opportunity to shape our world into something vastly new and different.

Born in 1973, Pai graduated from Harvard with a BA in Social Studies in 1994 and earned a J.D. from the University of Chicago in 1997.  He then clerked for the US District Court for the Eastern District of Louisiana and afterward worked for the Department of Justice Antitrust Division, where he specialized in mergers and acquisitions.  He went on to serve as an associate general counsel for Verizon, where he dealt with competition matters, regulatory issues, and counseling of business units on broadband initiatives.  From there he served on several subcommittees until 2007, when he was appointed to the FCC's Office of General Counsel, ultimately serving as Deputy General Counsel.  In 2011 he was nominated and unanimously confirmed for the Republican Party position on the FCC and served until 2016.  

Pai’s controversial stances on net neutrality stem from his view that the rules rest on an overly broad reading of the laws governing the FCC's responsibilities, claiming that such regulations may lead the FCC to regulating political speech.  He advocates for the marketplace of ideas, stating to the Washington Examiner, “I think it’s dangerous, frankly, that we don’t see more often people espousing the First Amendment view that we should have a robust marketplace of ideas where everybody should be willing and able to participate.“  While it will take time for his tenure to have an effect on regulations, he will definitely speed up the pace of work; as he put it in a 2012 speech at Carnegie Mellon, “we need to start taking our other statutory and internal deadlines more seriously” and “The FCC should be as nimble as the industry we oversee”.  From corporate mergers to changing how radio spectrum is portioned out, changes will be coming.  In the speech Pai shared his view of a different FCC, where the free market is utilized to bring about change and regulations are used to increase competition.  The next four years will be written by free-market ideals and a furious pace of work, hopefully leading to better choice and coverage for consumers. 

Pai’s presence as FCC Chairman will leave a lasting mark on the history of the commission. Some changes will be steps in the right direction, others may be missteps, but all of them have the possibility of changing how you interact with the rest of the world.  

Today in “Absurd Tech Stories”: Burger King vs Google

“OK, Google: What is the Whopper burger?”

The internet is all over a story today involving burger giant Burger King and tech giant Google, in which Burger King released a new ad that takes advantage of Google Home, the in-home personal assistant created by Google. The device mirrors other in-home assistants like Amazon’s Alexa.

Google Home.

The short commercial, titled “BURGER KING® | Connected Whopper®” (shown below), features a Burger King employee using the phrase “OK, Google” to purposefully trigger in-home devices or mobile phones with Google Voice capability to conduct a Google search for the Whopper. On the surface, this comes across as a pretty clever marketing ploy by BK, taking advantage of current tech trends to make the commercial more relatable.

However, in true internet fashion, those that wanted to have a little fun caught wind of this ad pretty quickly and turned this innocent commercial into something a little more ridiculous.

Asking Google Home the question “OK, Google: What is the Whopper burger?” gives the user a description based on the current Wikipedia article. This rule applies to anything that is searched for in this fashion. Users who wanted to mess around with the first line of the Wikipedia article started to edit it, making it say things like the Whopper’s main ingredient was cyanide, or that the Whopper was “cancer-causing”, which the device would then read out when someone tried to run the voice command.

Within three hours, Google had modified their voice detection to not interact at all with the Burger King commercial. Users could still normally ask the device the same phrase, but it seemed that Google didn’t take too kindly to the small disturbance that this commercial was causing and shut it down as fast as it started.

Stories of internet trolls taking advantage of AI programs are becoming more and more prevalent in recent years. In March of 2016, Twitter users were able to modify TAY.AI, Microsoft’s Twitter chatter bot, to make remarkably inflammatory and inappropriate comments.

 

The commercial can be viewed here:

App Review: Glitché

Fun fact: You can type the “é” character on Mac OS by holding down the “e” key until the following menu pops up:

Screen Shot 2016-11-22 at 3.39.58 PM

From there, simply select the second option with your mouse and you’ll be right as rain. I’m only telling you this because the application I’ll be discussing today is called Glitché, not “Glitche”.

IMG_2882

Glitché is an app that provides users with “a full range of tools and options to turn images into masterpieces of digital art.” That description is from the app’s official website; a website which also proudly displays the following quote:

Screen Shot 2016-11-22 at 4.09.19 PM

Either this quote is outdated or Mr. Knight is putting more emphasis on the word “compared” than I’m giving him credit for. While yes, one could argue that contextually a $0.99 application would comparatively seem like a free download to someone purchasing a nearly $400 post-production suite, I might be more inclined to ask how you define the word “free”.

You see, Glitché is actually $0.99…unless you want the other features. Do you want Hi-Res Exports? That’ll be $2.99. Do you want to be able to edit videos? Another $2.99, please. Do you want camera filters? $2.99 it is!

IMG_2881

So Glitché is actually more like $9.96, but that doesn’t sound as good as $0.99, does it? You might argue that I’m making a big deal out of this, but I’m just trying to put this all in perspective for you. From here on out I want you to understand that the program I’m critiquing charges $10 for the full experience, which is fairly expensive for a phone application.

Another issue I have with this quote and the description given by the website is that Glitché isn’t trying to compete with Adobe Photoshop. Glitché isn’t a replacement for your post-production suite nor is it your one-stop-shop for turning images into masterpieces of digital art; rather, Glitché strives to give you a wide selection of tools to achieve a very specific look. This aesthetic can best be described as a mixture of To Adrian Rodriguez, With Love and a modern take on cyberpunk. Essentially the app warps and distorts a given image to make it look visually corrupted, glitched, or of VHS quality. It’s a bit hard to describe, so here’s a few examples of some of the more interesting filters.

IMG_2884

Unedited photo for reference

IMG_2885

The “GLITCH” filter. Holding down your finger on the screen causes the flickering and tearing to increase. Tapping once stops the flickering.

IMG_2886

The “CHNNLS” filter. Dragging your finger across the screen sends a wave of rainbow colors across it. The color of the distortion can be changed.

IMG_2887

The “SCREEN” filter works like the “CHNNLS” filter, only it distorts the entire image.

IMG_2888

The “GRID” filter turns your image into a 3D abstract object akin to something one might see in an EDM music video.

IMG_2889

The “LCD” filter lets you move the colors with your thumb while the outline of your image remains fixed.

IMG_2890

The “VHS” filter applies VHS scan lines and warps more aggressively if you press your thumb down on the image.

IMG_2891

The “DATAMOSH” filter. The direction of the distortion depends on the green dot you press in the center reticle. The reticle disappears once the image is saved.

IMG_2892

The “EDGES” filter can be adjusted using both the slider below your image and with your thumb.

IMG_2893

The “FISHEYE” filter creates a 3D fisheye overlay you can move around on your image with your thumb.

IMG_2894

The “TAPE” filter works in a similar fashion to the “VHS” filter, only moving your thumb across it creates a more subtle distortion.

Listing off some of the individual filters admittedly isn’t doing the app justice. While you are able to use a singular filter, the app also allows you to combine and overlay multiple filters to achieve different effects. Here’s something I made using a combination of five filters:

IMG_2897

You can also edit video in a similar fashion (after paying the required $2.99).

The interface itself is simplistic and easy to navigate, though the application lacks certain features one might expect. You can’t save and load projects, you can’t favorite filters, and you can’t perform any complex video editing outside of applying a filter. The app has crashed on me a few times in the past, though this is a rare occurrence. The app is regularly updated with new features and filters.

So, $0.99 gets you 33 filters and limits you to Lo-Res exports and GIF exports. $9.96 gets you 33 filters, the ability to export in Hi-Res, the ability to export to GIF, the ability to edit videos, and the ability to record video in the actual application while using said filters.

I keep bringing this back to the cost of the app because that’s really the only place where opinions may vary. The app does what it sets out to do, but the price for the full package leaves a lot to be desired. There are definitely people out there who would gladly pay $10 for this aesthetic, and there are plenty more who would shake their head at it. If any of the filters or images I’ve shown you seem worth $10, then I think you’ll enjoy Glitché. However, if you think this app is a bit too simplistic and overpriced for what it is, I recommend you spend your money elsewhere. It really all boils down to the cost, as the app itself works fine for what it is. In my opinion, the app would be a great deal at $3 or even $5; however, $10 is a bit much to ask for in return for a few nifty filters.

 

Browsing the Web Anonymously with a VPN

You may have heard someone say that they use a VPN to protect themselves on the internet. What is a VPN? What does it do? How can you use it to protect yourself?

VPN stands for virtual private network. A VPN is essentially a simulated connection (hence the ‘virtual’ part) to a private network (one that you can’t normally connect to from outside or over the internet). It allows users to connect to a local private (e.g. corporate) network remotely from, say, their home, or a coffee shop. A VPN allows its users to interact with the local network as if they were normally connected to it. For example, say a developer at a tech startup wanted to work on her project at her local Starbucks instead of commuting into the office, but to protect their intellectual property the startup doesn’t allow anyone to look at their code without being connected to their local onsite network (sometimes referred to as an intranet). However, the developers at the startup aren’t big fans of the cubicle life, and like to roam around and do their work at the library with a book, or at home with their dogs. Fortunately, the startup has a VPN set up so that the developers can log into the intranet and look at their projects remotely. The computer appears as if it actually is physically located in the office and has almost all of the access that it would have if it were literally in the office.

But how does the VPN make sure that only the right people have access to the network? This is where the magic of the VPN is. When you log into your VPN client with your username and password and the server authenticates you, your computer creates a point-to-point encrypted tunnel between you and the VPN server — think of it as a really long tube that runs between your computer and the server in the office that nobody in between can look inside of. That means if you’re sitting at Starbucks and your company uses Comcast as its internet service provider, nobody in your Starbucks can peek into your Wi-Fi signal (this is referred to as a man-in-the-middle attack), and Comcast can’t snoop into what’s in the data that your company is sending to you before it delivers it to you.

Computer Privacy Hood

Just like nobody can see what’s going on here between the computer display and the man’s eyes, nobody over the internet can see what’s going on between the endpoints of a VPN point-to-point encrypted tunnel.

Having a reliable, trustworthy connection to a server over the internet can be a very valuable tool. In a world of big data, hacking, online banking, password leaks, and government surveillance, being able to communicate with anyone securely is very important.

In addition to providing secure connections to remote servers, VPNs provide another incredibly useful ability as a sort of side effect — a VPN can act as a sort of ‘online mask,’ so that you can browse around a website without the website knowing exactly who you are. Generally speaking, your identity to the World Wide Web is your IP address, which can be used to determine your location down to the city/town. When you access a website, you send your IP address to the website’s server (so that the website knows who to send information back to), and your internet service provider (e.g. Comcast) knows that you are communicating with this website (if your connection is unencrypted, Comcast can also see the content of your communications with the website). When you access this website through a VPN server, your request first goes through the encrypted tunnel to the VPN server, and the VPN server then bounces the request along to the website itself (over an unencrypted connection). When the website responds to the VPN server, the server bounces the response back to you over your encrypted tunnel. The website believes that they are just communicating with the VPN server, without any clue that their response is being passed on to anyone else. Comcast may be able to read the communications between the website and the VPN server, but they have no way of knowing that the communication is connected to you.

VPN Server Setup

This diagram shows the path that information travels through between your computer and the internet when you are connected to a VPN server. The encryption between your computer and the VPN server prevents anyone from snooping in on the communications between you and the server.

There are other ways to hide your identity on the internet. You can use a proxy, which appears similar to a VPN on the surface. You can connect to a website through a proxy to hide your IP address from the website, so the proxy also acts as a man-in-the-middle like a VPN does. The difference is that your computer’s connection to the proxy is not encrypted, so from a large enough scope, your communication with the website could be traced back to you. If an internet service provider such as Comcast happened to service both the connection from you to the proxy server AND from the proxy server to the website, they could piece together that it was you who connected to the website over the proxy, and since the communications aren’t encrypted, they could also see exactly what you were communicating about with the website over the proxy. Proxies also don’t mask your IP address for the entire computer — you have to configure each application individually to send all of its internet-based protocols through a proxy server. VPNs are OS-wide, meaning they protect your entire computer no matter what internet-based protocol is being sent out.
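To make the "per-application" point concrete, here's a minimal Python sketch using the standard library's urllib. Only requests sent through this opener are routed via the proxy; nothing else on the machine is affected. The proxy address below is a made-up example, not a real server:

```python
import urllib.request

# Configure a proxy for THIS program only -- unlike a VPN,
# nothing else on the computer is routed through it.
# 203.0.113.10:8080 is a hypothetical example address.
proxy = urllib.request.ProxyHandler({
    "http": "http://203.0.113.10:8080",
    "https": "http://203.0.113.10:8080",
})
opener = urllib.request.build_opener(proxy)

# Requests made through `opener` would go via the proxy;
# any other application's traffic would not.
# opener.open("http://example.com")  # (would need a live proxy)
```

Notice that you'd have to repeat this kind of setup in every application (browser, mail client, etc.), which is exactly the per-application limitation described above.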

Proxy Server Setup

The layout of a connection to a proxy server. Only individual applications can connect to a proxy server, not the entire computer. Communications are also not encrypted and open to being intercepted.

Thanks to the ability to provide anonymity over the internet, some companies have emerged that make a business out of providing access to their VPN servers. Their business model is that, for a fee, you can connect to their VPN servers to use as an ‘online mask’ however you like, and whatever you do won’t be traced back to you. The catch is whether a particular company is trustworthy or not — some VPN service providers log your activity and hand it to authorities or sell it to the highest bidder, essentially nullifying the anonymity that a VPN provides. You should always be skeptical and selective when choosing a VPN service provider; and remember, you get what you pay for. There are many free VPN service providers out there that allow you to use their servers for free up to a certain bandwidth; as a general rule of thumb, whether it be regarding free VPN service providers or free social networks, as long as someone is making a profit, if you’re not paying for the product, YOU are the product!

In conclusion, there are many ways to protect yourself over the internet, and selecting the best tool for your needs is the way to go. If you’re abroad and you want to watch a show on Netflix but it’s not available in the country you’re in, you can use a proxy to connect to a US server and stream it over your proxy connection, since encryption isn’t mandatory for this case. If you’re at Dunkin’ Donuts and you’re working on a top-secret project for your startup and you don’t want any tech-savvy thieves stealing your code over your free Wi-Fi connection, you can use a VPN to encrypt the connection between you and your company server. If you want to check your bank account online, but the bank doesn’t have good online business practices and doesn’t encrypt its web communications by default, you may want to use a VPN when logging into your bank’s website to make sure that nobody successfully phishes for your username and password. And if you’re working on an absolutely, positively, unconditionally classified, top-secret, sensitive, need-to-know-basis document, but you really, really, really want to get a frappuccino, perhaps you should consider getting yourself one of those sweatshirts with the oversized privacy hoods that you can wrap around your computer display, as seen above.

The red iPhone 7, and Why There Should Be More Product Red Products

I recently purchased an iPhone 7 with the Product Red branding. It took a little convincing, but my wallet and I eventually came to an agreement about this. It had been a while since I last upgraded my phone, and the iPhone is the industry standard. And it’s red!

Product Red is an initiative that started 11 years ago, with the goal of engaging companies that sell consumer goods to raise funds to fight AIDS in Africa. Product Red products have a distinctive red branding, and a share of the proceeds goes towards the Global Fund.

When Apple announced that they would ship iPhones with the Product Red casing, the overall sentiment was that the phone looked good. Real good. Almost makes you wanna trade in your Android good. And if you were already an iPhone owner and were looking to switch to a newer phone, it’s hard to look away and consider otherwise.

Apple has a very rich history with the Product Red initiative, having branded various iPods with Product Red beginning in 2006. The new iPhone, however, is the biggest slab of red Apple has released so far, and really, it brings up the question: why aren’t there more Product Red phones elsewhere on the market? The only other phone that was ever shipped with Product Red branding was the Motorola RAZR (remember those things?), a decade ago.

Sure, Product Red has its fair share of criticisms. It is, in the end, a marketing ploy, and Apple smartly released this phone a few months before the announcement and release of the next iPhone to drive sales and push soon-to-be-obsolete hardware out of their supply chains. But try to think of the last major product that pledged to donate a portion of its proceeds to any charity of any kind. Unfortunately, they’re few and far between.

Understand that, in today’s world, where the internet should be considered (and is, in some places) a utility, and where our phones and laptops are the main gateways to the internet, it only makes sense that we should demand more products that give back, even if it’s just a little bit, even if it’s just a marketing ploy. Considering the already questionable ethics of how these devices are produced to begin with, it’s the least that we, as conscientious consumers, can do.

NES Mini

Nintendo recently released the NES Classic, but good luck finding it.

The NES Classic is a small, $60, HDMI-compatible replica of Nintendo’s iconic first console, the NES, which hit the US market in 1985.  The Classic comes with 30 games preinstalled, with the potential for more to be added later.  It includes all of the classics many of us can still remember playing as kids, albeit on our parents’ childhood consoles.  Now you can play Pac-Man, Super Mario Bros., The Legend of Zelda, Kirby’s Adventure, and more, all in a cute little NES with two controllers (which are compatible with the Wii U) that can fit in the palm of your hand!  Or you could, if it weren’t completely sold out.

Nostalgia took its toll and Nintendo proved that their games are timeless.  Some stores sold out within 10 minutes of officially selling them, and the preorder lists for stores like Target, Best Buy, Walmart, GameStop, and even Amazon are long and without a date or shipment size for when they will get their hands on more.

Such a clamor has been made about the new consoles that a site with the sole purpose of tracking mass shipments of them has gotten a nice bump in traffic: http://www.nowinstock.net/videogaming/consoles/nesclassicmini/

Some who are ultra-desperate to get their hands on the gadget have been shelling out as much as 5 times the original cost (sometimes as much as $300-500) on eBay and Craigslist to own the otherwise sold-out NES Classic.

What’s The Deal With External Graphics Docks?

What is an External Graphics Dock?

Not everyone who likes to play video games has the time, money, or know-how to build their own gaming PC. These people will more often than not opt for a gaming laptop instead, which, with its high cost and TDP/wattage-limited graphics solutions, can prove unsatisfactory for high-intensity gaming. If not a gaming laptop, then they do what they can with a thin-and-light notebook with integrated graphics that, while great for portability, cannot run games very well at all. Using an external graphics dock, you can get the best of both worlds! There is minimal assembly required, and you can have your thin-and-light laptop to bring to class or to work; then, when you get home, plug into your external graphics dock and have all the gaming horsepower and display outputs you need.

Sounds Great! How Do These External Graphics Docks Work, Then?

egpu
The most basic eGPU dock

The basic concept of an external graphics dock is this: take a regular desktop graphics card, plug it into a PCIe slot in the dock, get power to the dock and the graphics card, then plug that dock into your laptop. After installing the right drivers and performing two or three restarts, hark! High frame rates at high settings are coming your way. The internal GPU is completely bypassed and data is sent from the laptop to the GPU to an external display, and in some cases back to the laptop to power its own internal display. The graphics card must be purchased separately, and to see a sizable difference in performance over a dedicated laptop GPU you will be looking at around $200 for that card on top of the cost of the dock. Each commercially available dock has its own benefits and drawbacks, but all of them share some basic properties. They can all accept any single- or dual-slot GPU from AMD or Nvidia (cooler size permitting), and have at least two 6+2-pin power connectors to power the graphics card. Along with GPU support, docks usually also add at least four USB ports to connect peripherals, similar to the laptop docks of olde.

So What Are The Performance Numbers Really Like?

In general, the performance loss compared to using that same GPU in a real desktop is 10-15%. This can be due to reduced bandwidth over the connection to the laptop, or to bottlenecking from less powerful laptop CPUs. However, compared to a dedicated laptop GPU, the performance increase from an external one is roughly double. Here are a few benchmarks of recent AAA titles, courtesy of TechSpot. Listed from bottom to top, each graph shows performance of the internal GPU, the Graphics Amplifier with a desktop GPU, and that same GPU in a regular desktop PC.
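As a rough rule of thumb, those two numbers can be combined into a quick estimate. The sketch below assumes the midpoint of the quoted 10-15% overhead and the approximate 2x gain over a dedicated laptop GPU; real results vary by game and CPU, as the benchmarks show:

```python
def egpu_fps(desktop_fps: float, overhead: float = 0.125) -> float:
    """Estimate eGPU frame rate from the same card's desktop frame rate.

    Assumes a 12.5% loss -- the midpoint of the 10-15% range
    quoted above for bandwidth/CPU bottlenecks.
    """
    return desktop_fps * (1 - overhead)

def laptop_gpu_fps(egpu_fps_value: float) -> float:
    """Back out a dedicated laptop GPU's frame rate, assuming the
    eGPU roughly doubles it (the approximation quoted above)."""
    return egpu_fps_value / 2

desktop = 80.0                      # card in a real desktop: 80 fps
external = egpu_fps(desktop)        # same card in a dock: ~70 fps
laptop = laptop_gpu_fps(external)   # dedicated laptop GPU: ~35 fps
print(round(external, 1), round(laptop, 1))
```

So a card that pushes 80 fps in a desktop lands around 70 fps through a dock, still roughly double what a comparable dedicated laptop GPU would manage.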

aga bench 1aga bench 3 aga bench 2

 

Let’s Take A Look At What is Available Today:

Alienware Graphics Amplifier (MSRP $199):

aga
Pros – Relatively inexpensive, High bandwidth interface, Good airflow, PSU is user upgradeable
Cons – Only works for Alienware machines (R2 & up), Uses proprietary cable, Requires shutdown to connect / disconnect

Razer Core (MSRP $499):
razercore
Pros – Universal Thunderbolt 3 interface, Adds ethernet jack, Sturdy aluminum construction, Small size
Cons – High cost, Compatibility list with non-Razer computers is short

MSI GS30 Shadow:
gs30shadow

Pros: User upgradeable PSU, Includes support for internal 3.5″ drive, Has integrated speakers
Cons: Only works for one machine, Huge footprint, Dock cannot be purchased separately

Final Thoughts

After seeing all the facts, does using an eGPU sound like the solution for you? If none of the options available sound perfect right now, don’t fret. As the popularity of eGPUs grows, more companies will inevitably put their hats into the ring and make their own solutions. Prices, form factors, and supported laptops will continue changing and improving as time goes on.