Future Proofing: Spending less and getting more


Future proofing, at least when it comes to technology, is a philosophy that revolves around buying the optimal piece of tech at the optimal time. The overall goal of future proofing is to save you money in the long run by purchasing devices that take a long time to become obsolete.

But, you might ask, what exactly is the philosophy? Sure, it's easy to say that it's best to buy tech that will last you a long time, but how do you actually determine that?

There are four basic factors to consider when trying to plan out a future proof purchase.

  1. Does what you’re buying meet your current needs, as well as needs you might have in the foreseeable future?
  2. Can what you’re buying be feasibly upgraded down the line?
  3. Is what you’re buying about to be replaced by a newer, better product?
  4. What is your budget?

I'm going to walk you through each of these four ideas, and by the end you should have a pretty good grasp on how to make smart, informed decisions when future-proofing your tech purchases!

Does what you’re buying meet your current needs, as well as needs you might have in the foreseeable future?


This is the most important factor when trying to make a future-proof purchase. The first half is obvious: nobody is going to buy anything that doesn’t do everything they need it to do. It’s really the second half which is the most important aspect.

Let's say you're buying a laptop. Also, let's assume that your goal is to spend the minimum amount of money possible to get the maximum benefit. You don't want something cheap that you'll get frustrated with in a few months, but you're also not about to spend a down payment on a Tesla just so you can have a useful laptop.

Let's say you find two laptops. They're mostly identical, except for one simple factor: RAM. Laptop A has 4 GB of RAM, while Laptop B has 8 GB. Laptop A costs 250 dollars, while Laptop B costs 300. At a difference of 50 dollars, the question that comes to mind is whether the extra 4 GB of RAM is really worth it.

What RAM actually does is act as short-term storage for your computer; it largely determines how many different things your computer can remember at once. Every program you run uses up a certain amount of RAM, with things such as tabs in Google Chrome famously taking up quite a bit. So, essentially, for 50 dollars you're asking yourself whether you care about being able to keep a few more things open.

Having worked retail at a major tech store, I can tell you from experience that a little over half of the customers asked this question would opt for the cheaper option. Why? Because they don't think more RAM is worth spending extra money on at the cash register. However, lots of people will change their minds once you present them with a different way of thinking about it.

Don't think of Laptop A as costing 250 dollars and Laptop B as costing 300 dollars. Instead, focus only on the difference in price, and whether you'd be willing to pay that fee as an upgrade.

You see, in half a year, when that initial feeling of spending a few hundred dollars is gone, it’s quite likely that you’ll be willing to drop an extra 50 dollars so you can keep a few more tabs open. While right now it seems like all you’re doing is making an expensive purchase even more expensive, what you’re really doing is making sure that Future_You doesn’t regret not dropping the cash when they had an opportunity.

Don't just make sure the computer you're buying fits your current needs. Make sure to look at an upgraded model of that computer, and ask yourself: six months down the line, will you be more willing to spend the extra 50 dollars for the upgrade? If the answer is yes, then I'd definitely recommend considering it. Don't just think about how much money you're spending right now; think about how the difference in cost will feel when you wish you'd made the upgrade.

For assistance in this decision, check the requirements for applications and organizations you make use of. Minimum requirements are just that, and should not be used as a guide for purchasing a new machine. Suggested requirements, such as the ones offered at UMass IT’s website, offer a much more robust basis from which to future-proof your machine.

Can what you’re buying be meaningfully upgraded down the line?

This is another important factor, though not always applicable to all devices. Most smartphones, for example, don’t even have the option to upgrade their available storage, let alone meaningful hardware like the RAM or CPU.

However, if you’re building your own PC or making a laptop/desktop purchase, upgradeability is a serious thing to consider. The purpose of making sure a computer is upgradeable is to ensure that you can add additional functionality to the device while having to replace the fewest possible components.

Custom PCs are the best example of this. When building a PC, one of the most important components that’s often overlooked is the power supply. You want to buy a power supply with a high enough wattage to run all your components, but you don’t want to overspend on something with way more juice than you need, as you could have funneled that extra cash into a more meaningful part.

Let's say you bought a power supply with just enough juice to keep your computer running. While that's all fine right now, you'll run into problems once you try to make an upgrade. Let's say your computer is using Graphics Card A, and you want to upgrade to Graphics Card B. While Graphics Card A works perfectly fine in your computer, Graphics Card B requires more power to actually run. And, because you chose a lower-wattage power supply, you're going to need to replace it to actually upgrade to the new card.

In summary, what you planned to be a simple GPU swap turned out to require not only paying the higher price for Graphics Card B, but buying a more expensive power supply as well. And, sure, you can technically sell your old power supply, but you would have saved much more money (and effort) in the long run by just buying a stronger power supply to start. By buying the absolute minimum you could to make your computer work, you didn't leave yourself enough headroom to allow the computer to be upgraded.
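To put rough numbers on the headroom idea, here's a minimal Python sketch. The wattage figures are illustrative assumptions, not drawn from any real parts list:

```python
# Illustrative component power draws in watts (made-up numbers,
# not from any real parts list).
components = {
    "CPU": 95,
    "Graphics Card A": 120,
    "motherboard_and_fans": 60,
    "drives": 15,
}

total_draw = sum(components.values())   # 290 W with the current parts

# Leave roughly 40% headroom so a future GPU upgrade doesn't
# force a power supply replacement too.
recommended_wattage = total_draw * 1.4  # roughly 406 W

# A hypothetical Graphics Card B drawing 180 W still fits comfortably:
upgraded_draw = total_draw - components["Graphics Card A"] + 180  # 350 W

print(total_draw, round(recommended_wattage), upgraded_draw)
```

The exact margin is a judgment call, but the principle is the same: size the power supply for the build you might have in two years, not just the build you have today.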

This is an important concept when it comes to computers. Can your RAM be upgraded by the user? How about the CPU? Do you need to replace the whole motherboard just to allow for more RAM slots? Does your CPU socket allow for processors more advanced than the one you’re currently using, so you can buy cheap upgrades once newer models come out?

All of these ideas are important when designing a future-proof purchase. By ensuring that your device is as upgradeable as possible, you're allowing future hardware advancements to extend its lifespan.

Is what you’re buying about to be replaced by a newer, better product?

This is one of the most frustrating, and often one of the hardest-to-determine aspects of future proofing.

We all hate the feeling of buying the newest iPhone just a month before they reveal the next generation. Even if you’re not the type of person that cares about having the newest stuff, it’s to your benefit to make sure you aren’t making purchases too close to the release of the ‘next gen’ of that product. Oftentimes, since older generations become discounted upon the release of a replacement, you’d even save money buying the exact same thing by just waiting for the newer product to be released.

I made a mistake like this once, and it's probably the main reason I'm including this section in the article. I needed a laptop for my freshman year at UMass, so I invested in a Lenovo Y700. It was a fine laptop — a little big, but still fine — with one glaring issue: the graphics card.

I had bought my Y700 with the laptop version of a GTX 960 inside of it, NVIDIA's last-generation hardware. This was a poor decision because, very simply, the GTX 1060 had already been released. That is, the desktop version had been released.

My impatient self, eager for a new laptop for college, refused to wait for the laptop version of the GTX 1060, so I paid full price for a laptop with tech that I knew would be out of date in a few months. And, lo and behold, that was one of the main reasons I ended up selling my Y700 in favor of a GTX 1060-bearing laptop the following summer.

Release dates for things like phones, computer hardware, and laptops can often be tracked on a yearly release cycle. Did Apple reveal the current iPhone in November of last year? Maybe don't pay full price for one this coming October, just in case they make the next reveal at a similar time.

Patience is a virtue, especially when it comes to future proofing.

What is your budget?


This one is pretty obvious, which is why I put it last. However, I’m including it in the article because of the nuanced nature of pricing when buying electronics.

Technically, I could throw a 3-grand budget at a Best Buy employee's face and ask them to grab me the best laptop they've got. It'll almost definitely fulfill my needs, will probably not be obsolete for quite a while, and might even come with some nice upgradeability that I may not get with a cheaper laptop.

However, what if I’m overshooting? Sure, spending 3 grand on a laptop gets me a top-of-the-line graphics card, but am I really going to utilize the full capacity of that graphics card? While the device you buy might be powerful enough to do everything you want it to do, a purchase made by following my previously outlined philosophy on future proofing will also do those things, and possibly save you quite a bit of money.

That's not to say I don't advocate spending a lot of money on computer hardware. I'm a PC enthusiast, so to say that you shouldn't buy more than you need would be hypocritical. However, if your goal is to buy a device that will fulfill your needs, allow upgrades, and be functional in whatever you need it to do for the foreseeable future, throwing money at the problem isn't really the most elegant way of solving it.

Buy smart, but don’t necessarily buy expensive. Unless that’s your thing, of course. And with that said…


…throwing money at a computer does come with some perks.

Arch Linux and Eduroam on a Raspberry Pi, No Ethernet Cable Required

Raspbian may be the most common OS on Raspberry Pi devices, but it is definitely not alone in the market. Arch Linux is one such competitor, offering a minimalist disk image that can be customized and specialized for any task, from the ground up – with the help of Arch Linux’s superb package manager, Pacman.

The official website for Arch Linux ARM contains all the necessary files and detailed instructions for the initial setup. After a reasonably straightforward process, plugging in the Raspberry Pi will greet you with a command-line interface (CLI), akin to old Microsoft DOS.

Luckily for those who enjoy a graphical interface, Arch Linux supports a wide variety of them in its official repository, but for that, we need the internet. Plenty of tutorials detail how to connect to a typical home Wi-Fi network, but Eduroam is a bit more challenging. To save everyone several hours of crawling through wikis and forums, the following will focus on Eduroam.

To begin, we will need root privileges; on Arch Linux ARM we can switch to the root user (default password: root) with the following command:

su

After entering the password, we need to make the file:

nano /etc/wpa_supplicant/eduroam

Quick note: The file doesn’t need to be named eduroam.

Now that we're in the nano text editor, we need to write the configuration for eduroam. Everything except the identity and password fields needs to be copied exactly. For the purpose of this tutorial I'll be John Smith, jsmith@umass.edu, with password Smith12345.

network={
    ssid="eduroam"
    key_mgmt=WPA-EAP
    eap=PEAP
    identity="jsmith@umass.edu"
    password="Smith12345"
    phase2="auth=MSCHAPV2"
}

Quick note: the quotation marks are required; this will not work without them.

Now that that's set, we need to set the file permissions to root only, as it's never good to have passwords sitting around in plain text, unsecured.

chmod og-r /etc/wpa_supplicant/eduroam

Now just to make sure that everything was set properly, we will run

ls -l /etc/wpa_supplicant | cut -d ' ' -f 1,3-4,9

The correct output should be the following

-rw------- root root eduroam

If you named the config file something other than eduroam, it will show up on the output as that name.

Now that that’s all set, we can finally connect to the internet.

wpa_supplicant -i wlan0 -c /etc/wpa_supplicant/eduroam &

Provided everything is set correctly, you will see "wlan0: link becomes ready" in the output. Hit Enter, and there's just one more command.

dhcpcd wlan0

Now, just to check we’re connected, we’ll ping google

ping google.com -c 5

If everything is set, you should see 5 packets transmitted, 5 packets received.

Now that we're connected, it's best to do a full update:

pacman -Syyu

At this point, you are free to do what you'd like with Arch. For the sake of brevity I will leave off here; for extra help, I highly recommend the official Arch Linux Wiki. For a graphical UI, consider setting up XFCE4, as well as a network (Wi-Fi) manager.


Example of a customized XFCE4 desktop by Erik Dubois



Disclaimer: UMass IT does not currently offer technical support for Raspberry Pi.

How to use Audacity to Edit Photos


Photo: qubodup on DeviantArt

Glitch art is an increasingly popular art form that uses digital interference, or glitches, to create striking images. In this tutorial I will be showing you how to use Audacity to edit photos as if they were sound, which can create some cool effects.

Here’s what you need:

  • Adobe Photoshop (I use the CC version so your experience may vary.)
  • Audacity (free from the official Audacity website)
  • A picture

The first step is to open the image in Photoshop. Go to File > Open > Your_file. After opening, we need to save this file in a format that Audacity can understand. We will use the .tiff format, so go to File > Save As, then select .tiff next to "Save as type". See the below photo for an example of how this should look:


Then Photoshop will ask you about the settings for the .tiff file. Leave everything as it is except "Pixel Order": change it to Per Channel. Per Channel splits up where the color data for the photo is stored, allowing us to edit individual parts of the RGB spectrum. See the below photo again:


Once the file is saved as a .tiff file, open up Audacity and click File > Import > Raw Data, then select your .tiff file. Once this is complete, Audacity will ask for some settings to import the raw data. Change "Encoding" to "U-Law" and "Byte order" to "Little-endian", then click Import. See photo of how it should look below:


You now have your image in Audacity as a sound file! Here is where the creativity comes in. To glitch up the image, use the Effect menu in Audacity and play around with different effects. Most image files begin with a header that is needed to open the image, so if you get an error when trying to open the picture, don't worry; just don't start the effect so close to the beginning next time. There should also be some noticeable sections in the waveform — these represent the different RGB colors. So if you only select one color, you can make an effect apply to only that color. Once you finish your effects, it's time to export.
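The same idea can be sketched outside Audacity. This toy Python example (standard library only; the file layout is a real minimal BMP, but the "effect" is just a byte-reversal standing in for something like Audacity's Reverse) builds a tiny image in memory, glitches a run of pixel bytes, and leaves the 54-byte header alone so the file still opens:

```python
import struct

def make_bmp(width, height, rgb):
    """Build a minimal 24-bit uncompressed BMP filled with one color."""
    row = bytes(rgb[::-1]) * width              # BMP stores pixels as BGR
    row += b"\x00" * ((4 - len(row) % 4) % 4)   # each row pads to 4 bytes
    pixels = row * height
    header = struct.pack(
        "<2sIHHIIiiHHIIiiII",
        b"BM", 54 + len(pixels), 0, 0, 54,      # 14-byte file header
        40, width, height, 1, 24, 0,            # 40-byte info header...
        len(pixels), 2835, 2835, 0, 0,
    )
    return header + pixels

img = bytearray(make_bmp(16, 16, (200, 50, 50)))

# "Glitch": reverse a slice of the pixel data, but never touch the
# header bytes (0..53) or the image won't open at all.
start, end = 54 + 40, 54 + 200
img[start:end] = img[start:end][::-1]

glitched = bytes(img)
```

Writing `glitched` to a `.bmp` file gives you an openable, visibly corrupted image, for the same reason the Audacity trick works: the header survives, but the pixel data no longer matches the original.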

To export, go to File > Export. When prompted, set the file type to "Other uncompressed files". See photo of how it should look below:


Then click Options at the bottom right. For "Header" select "RAW (header-less)" and for "Encoding" select "U-Law" again. Then hit "OK" and save your file. Now you should be able to open the RAW file and see how your work came out. See photo of how it should look below:


What Do Cryptocurrency Miners Do?

You’ve probably heard of Bitcoin. Maybe you’ve even heard of other cryptocurrencies, like Ethereum. Maybe you’ve heard that these cryptocurrencies are mined, but maybe you don’t understand how exactly a digital coin could be mined. We’re going to discuss what cryptocurrency miners do and why they do it. We will be discussing the Bitcoin blockchain in particular, but keep in mind that Bitcoin has grown several orders of magnitude greater in the 9-10 years it’s been around. Though other cryptocurrencies change some things up a bit, the same general concepts apply to most blockchain-based cryptocurrencies.

What is Bitcoin?

Bitcoin is the first and the most well-known cryptocurrency. Bitcoin came about in 2009 after someone (or someones, nobody really knows) nicknamed Satoshi Nakamoto released a whitepaper describing a concept for a decentralized peer-to-peer digital currency based on a distributed ledger called a blockchain, and created by cryptographic computing. Okay, those are a lot of fancy words, and if you’ve ever asked someone what Bitcoin is then they’ve probably thrown the same word soup at you without much explanation, so let’s break it down a bit:

Decentralized means that the system works without a main central server, such as a bank. Think of a farmer’s market versus a supermarket; a supermarket is a centralized produce vendor whereas a farmer’s market is a decentralized produce vendor.

Peer-to-peer means that the system works by each user communicating directly with other users. It's like talking to someone face-to-face instead of messaging them through a middleman like Facebook. If you've ever used BitTorrent (to download Linux distributions and public-domain copies of the U.S. Constitution, of course), you've been a peer on a peer-to-peer BitTorrent network.

Blockchain is a hot topic right now, but it’s one of the harder concepts to describe. A blockchain performs the job of a ledger at a bank, keeping track of what transactions occurred. What makes blockchain a big deal is that it’s decentralized, meaning that you don’t have to trust a central authority with the list of transactions. Blockchains were first described in Nakamoto’s Bitcoin whitepaper, but Bitcoin itself is not equivalent to blockchain. Bitcoin uses a blockchain. A blockchain is made up of a chain of blocks. Each block contains a set of transactions, and the hash of the previous block, thus chaining them together.

Hashing is the one-way (irreversible) process of converting any input into a fixed-length string of bits. Hashing is useful in computer science and cryptography because it's really easy to get the hash of something, but it's almost impossible to find out what input originally made a particular hash. Any input will always have the same output, but any little difference will make a completely different hash. For example, SHA-256, the hashing algorithm that Bitcoin uses, always turns "UMass" into the same 64-character hexadecimal string, while "UMasss" produces a completely different one. Each hexadecimal character represents 4 bits, so the hash can also be represented as 256 binary bits.
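You can check this behavior yourself with a few lines of Python, using the standard library's hashlib module:

```python
import hashlib

# The same input always produces the same hash...
h1 = hashlib.sha256(b"UMass").hexdigest()
h2 = hashlib.sha256(b"UMasss").hexdigest()

print(h1)  # 64 hex characters
print(h2)  # one extra "s" gives a completely different string

# ...and each hex character encodes 4 bits, so 64 * 4 = 256 bits total.
bits = bin(int(h1, 16))[2:].zfill(256)
print(len(bits))
```

Run it twice and you'll get identical hashes both times; change even one character of the input and the output changes beyond recognition.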
Those are the general details that you need to know to understand cryptocurrency. Miners are just one kind of participant in cryptocurrency.

Who are miners?

Anybody with a Bitcoin wallet address can participate in the blockchain, but not everybody who participates has to mine. Miners are the ones with the big, beefy computers that run the blockchain network. Miners run a mining program on their computer. The program connects to other miners on the network and constantly requests the current state of the blockchain. The miners all race against each other to make a new block to add to the blockchain. When a miner successfully makes a new block, they broadcast it to the other miners in the network. The winning miner gets a reward of 12.5 BTC for successfully adding to the blockchain, and the miners begin the race again.

Okay, so what are the miners doing?

Miners can’t just add blocks to the blockchain whenever they want. This is where the difficulty of cryptocurrency mining comes from. Miners construct candidate blocks and hash them. They compare that hash against a target.

Now get ready for a little bit of math: remember those 256-bit hashes we talked about? They're a big deal because there are 2^256 possible hashes (that's a LOT!), ranging from all 0's to all 1's. The Bitcoin network has a difficulty value that changes over time to make finding a valid block easier or harder. Every time a miner hashes a candidate block, they look at the binary value of the hash, and in particular, how many 0's the hash starts with. If the number of 0's at the start of the hash is at least the target amount specified by the difficulty, then the block is valid! When a candidate block fails to meet the target, as they usually do, the mining program tries to construct a different block.

Remember that changing the block in any way makes a completely different hash, so a block with a hash one 0 short of the target isn’t any closer to being valid than another block with a hash a hundred 0’s short of the target. The unpredictability of hashes makes mining similar to a lottery. Every candidate block has as good of a chance of having a valid hash as any other block. However, if you have more computer power, you have better odds of finding a valid block. In one 10 minute period, a supercomputer will be able to hash more blocks than a laptop. This is similar to a lottery; any lottery ticket has the same odds of winning as another ticket, but having more tickets increases your odds of winning.
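A toy version of this race is easy to write. The sketch below is simplified (real Bitcoin hashes an 80-byte block header twice and compares against a full 256-bit target, not just a count of leading zeros), but it captures the lottery: keep hashing candidate "blocks" with a new nonce until one starts with enough zero bits:

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int):
    """Try nonces until the SHA-256 hash starts with difficulty_bits zeros."""
    nonce = 0
    while True:
        candidate = block_data + nonce.to_bytes(8, "little")
        digest = hashlib.sha256(candidate).hexdigest()
        # View the hash as 256 binary bits and inspect the leading zeros.
        bits = bin(int(digest, 16))[2:].zfill(256)
        if bits.startswith("0" * difficulty_bits):
            return nonce, digest
        nonce += 1  # this candidate failed; try a different block

# A toy difficulty of 12 leading zero bits takes ~4096 tries on average.
nonce, digest = mine(b"transactions + previous block hash", 12)
print(nonce, digest)
```

Bump `difficulty_bits` up by one and the expected number of tries doubles, which is exactly how the network keeps block-finding slow no matter how much hashing power joins the race.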

Can I become a miner?

You probably won’t be able to productively mine Bitcoin alone. It’s like buying 1 lottery ticket when other people are buying millions. Nowadays, most Bitcoin miners pool their mining power together into mining pools. They mine Bitcoin together to increase the chances that one of them finds the next block, and if one of the miners gets the 12.5 BTC reward, they split their earnings with the rest of the pool pro-rata: based on the computing power (number of lottery tickets) contributed.


The U.S. dollar used to be tied to the supply of gold. A U.S. dollar bill was essentially an I.O.U. from the U.S. Federal Reserve for some amount of gold, and you could exchange paper currency for gold at any time. The gold standard worked because gold is rare and you have to dig for it in a mine. Instead of laboring in the mines, Bitcoin miners labor by calculating hashes. Nobody can make fraudulent gold out of thin air. Bitcoin employs the same rules, but instead of making the scarce resource gold, they made it computing power. It's possible for a Bitcoin miner to get improbably lucky, find 8 valid blocks in one day, and earn 100 BTC, just like it's possible but improbable to find a massive golden boulder while mining underground one day. Both are wildly improbable, but it is actually impossible for someone to fake a block on the blockchain (the hash would be invalid!) or to fake a golden nugget (you can chemically detect fool's gold!).

Other cryptocurrencies work in different ways. Some use different hashing algorithms. For example, Zcash is based on a mining algorithm called Equihash that is designed to be best mined by the kinds of graphics cards found in gaming computers. Some blockchains aren’t mined at all. Ripple is a coin whose cryptocurrency “token” XRP is mostly controlled by the company itself. All possible XRP tokens already exist and new ones cannot be “minted” into existence, unlike the 12.5 BTC mining reward in Bitcoin, and most XRP tokens are still owned by the Ripple company. Some coins, such as NEO, are not even made valuable by scarcity of mining power at all. Instead of using “proof of work” like Bitcoin, they use “proof of stake” to validate ownership. You get paid for simply having some NEO, and the more you have, the more you get!

Blockchains and cryptocurrencies have become popular buzzwords in the ever-connected worlds of computer science and finance. Blockchain is a creative new application of cryptography, computer networking, and processing power. It's so new that people are still figuring out what else blockchains can be applied to. Digital currency seems to be the current trend, but blockchains could one day revolutionize health care record-keeping or digital elections. Research into blockchain technology has highlighted many weaknesses in the concept; papers have been published on doublespend attacks, selfish mining attacks, eclipse attacks, Sybil attacks, etc. Yet the technology still has great potential. Cryptocurrency mining has already brought up concerns over environmental impact (mining uses a lot of electricity!) and hardware costs (graphics card prices have increased dramatically!), but mining is nevertheless an engaging, fun and potentially profitable way to get involved in the newest technology to change the world.

SOS: Emergency Response in the Smartphone Era

By now, we've all seen or heard stories about a recent scare in Hawai'i, where residents were bombarded (ironically) with an emergency notification warning of a ballistic missile heading toward the isolated island state. Within seconds, the people of Hawai'i panicked, contacting their families, friends, and loved ones, and stopping everything they were doing in what they believed were the final minutes of their lives.

Of course, this warning turned out to be false.

The chaos that ensued in Hawai'i was the result of an accidental warning fired off by a government employee of the Emergency Management Agency. Not only did this employee send off a massive wave of crisis alert notifications to Hawaiians everywhere; in some cases, it took 30+ minutes to signal to people that this was a false alarm. With the rising tensions between the United States and the trigger-happy North Korea, you could imagine that this could be problematic, to put it simply.

The recent mishap in Hawai'i opens up a conversation about phone notifications in crisis situations. While Hawaiians, and more broadly Americans, aren't used to seeing this type of notification appear on their lock screens, it is a common and very effective tool in the Middle East, where Israel uses push notifications to warn of incoming short-range missiles from Syria and the Gaza Strip/West Bank.


In a region full of hostilities and tense situations, with possible threats from all angles, Israel keeps its land and citizens safe using a very effective system called Red Alert, an element of Israel's Iron Dome. According to Raytheon, a partner in developing this system, the Iron Dome "works to detect, assess and intercept incoming rockets, artillery and mortars. Raytheon teams with Rafael on the production of Iron Dome's Tamir interceptor missiles, which strike down incoming threats launched from ranges of 4-70 km." With this system comes Red Alert, which notifies Israelis in highly populated areas of incoming attacks, in case the system can't stop a missile in time. Since its implementation in 2011, and with more people receiving warnings thanks to growing cell phone use, Israelis have been notified promptly and kept safe; the system boasts a 90% success rate and keeps civilian injuries and casualties at very low levels.

If the Hawaiian missile alert had been real, it could have saved many lives. In an instant, everyone was notified, and people took their own precautions to stay aware of the situation at hand. This critical failure in the alert system can be worked on in the future, leading to faster, more effective approaches to missile detection, protection, and warnings, saving lives in the process.

In an era of constant complaint about the ubiquity of cell phone use, some of the most positive implications of our connected world have been obscured. Think back to 1940: London bombing raids came as near-total surprises, with very late warnings and signals, resulting in widespread destruction and many casualties. With more advanced weapons, agencies are designing even more advanced defense notification systems, making sure to reach every possible victim as fast as possible. In an age where just about everyone has a cell phone, saving lives has never been easier.


For more reading, check out these articles on Washington Post and Raytheon:



Types of SSDs and Which Ones to Buy


Photo: partitionwizard.com

By now it's likely you've heard of Solid State Drives, or SSDs: blazing-fast storage drives that can speed up old computers and offer better reliability than the hard disk drives, or HDDs, they replace. But there are countless options available, so what is the best drive?


Photo: Asus

There are several connector types that SSDs use to interface with a computer, including SATA, PCIe, M.2, U.2, mSATA, SATA Express, and even none, as some SSDs now come soldered to the board. For a consumer, the most common options are SATA and M.2. SATA is the old two-connector system that hard drives use, with a SATA power cable and a SATA data cable. SATA-based SSDs are best for older computers that lack newer SSD connector types and have only SATA connections. A great way to boost the speed of an older computer with a spinning hard drive is to clone the drive to an SSD and replace the hard drive with it, increasing the computer's ability to read/write data, possibly tenfold. However, it should be noted that SATA drives are capped at a theoretical maximum transfer speed of 600 MB/s, whereas other, un-bottlenecked SSDs have recently exceeded 3 GB/s, five times the SATA maximum. This means SATA-based SSDs cannot utilize the speed and efficiency of newer controllers such as NVMe.
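Some back-of-the-envelope arithmetic shows what that bandwidth gap means in practice. These are theoretical peak speeds; real-world transfers will be slower:

```python
file_size_gb = 10.0         # moving a 10 GB file

sata_speed_gbps = 0.6       # GB/s, the SATA III theoretical cap
nvme_speed_gbps = 3.0       # GB/s, a fast PCIe NVMe drive

sata_seconds = file_size_gb / sata_speed_gbps
nvme_seconds = file_size_gb / nvme_speed_gbps

print(round(sata_seconds, 1))  # about 16.7 seconds
print(round(nvme_seconds, 1))  # about 3.3 seconds
```

Over years of boots, backups, and game installs, that five-to-one ratio adds up.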


Photo: Amazon.com

NVMe, or Non-Volatile Memory Express, is a new controller protocol that replaces AHCI, or Advanced Host Controller Interface. AHCI is the controller that hard drives traditionally use to interface between a drive's SATA bus and the computer it is connected to. AHCI also imposes a latency bottleneck on SSDs, in the same way the SATA bus imposes a bandwidth bottleneck. The AHCI controller was never intended for use with SSDs, whereas NVMe was built specifically with SSDs in mind. NVMe promises lower latency by operating with higher efficiency, taking advantage of solid state's parallelization by supporting more than two thousand times as many commands to or from the drive as a drive on the AHCI controller. To get optimal performance out of an NVMe drive, make sure it uses PCIe (Peripheral Component Interconnect Express) as a bus, which alleviates all the bottlenecks that come with using SATA.


If the latest and greatest speeds and efficiencies that come with an NVMe SSD are a must-have, then there are a couple of things to keep in mind. First, make sure the computer receiving the drive has the M.2 connector for that type of drive. Most consumer NVMe drives only support the M.2 "M" key (5 pins), which is the M.2 physical edge connector. SATA-based SSDs use the "B" key (6 pins), but there are some connectors that feature "B + M" and can accept both a SATA and an NVMe drive. Second, the computer needs to support and be able to boot to an NVMe drive. Many older computers and operating systems may not support booting to, or even recognize, an NVMe drive due to how new it is. Third, expect to pay a premium. PCIe NVMe drives are the newest and greatest of the SSD consumer market, so cutting edge comes at top price. And finally, make sure an NVMe drive fits the use case. The performance improvement will only be seen with large reads/writes to and from the drive or large amounts of small reads/writes. Computers will boot faster, files will transfer and search faster, programs will launch faster, but it won't make a Facebook page load any faster.

In conclusion, SSDs are quickly becoming ubiquitous in the computing world, and for good reason. Their prices are plummeting, their speeds are unmatched, they are small enough to fit into ever-thinner systems, and they are far less likely to fail, especially after a drop or shake of the device. If you have an old computer with slow load times in need of a performance boost, a SATA SSD is a great speed-augmenting solution. But if cutting-edge speed is what you’re looking for, nothing beats a PCIe NVMe M.2 drive.

Finding a Job in a Digital World


When I listen to a podcast, there is often an ad for ZipRecruiter. ZipRecruiter “is the fastest way to find great people,” or so it says on the homepage of their website. Essentially, employers post a job to ZipRecruiter and the posting gets pushed to all sorts of job-search websites like Glassdoor, job.com, geebo, and a bunch of others I have never heard of before. You fill out the information once and your job gets posted to 200 different sites. That’s kind of cool. But there is a big problem with it: HR now has to deal with hundreds of applications, and if you are applying to a company that uses ZipRecruiter, a robot is probably going through your resume and cover letter looking for words like “manage,” “teamwork,” or “synergize.”
But I don’t want my resume looked at by a bot; I want it looked at by a real human being. I have applied through these websites before and never even received a rejection letter from the company in question, let alone any sign that someone printed out my carefully crafted resume and cover letter and then read them. This is where you hit a hurdle on the path to post-graduation-job-nirvana. I want to find jobs, so I look on Glassdoor, job.com, and geebo, but then I want to stand out from the pack. How do I do that? I have no idea. Instead, I am offering a solution to avoid those websites altogether.

1. The other day I was sitting with a magazine when I realized something great about the thing in my hand: everyone in the industry takes part in it. Let’s say you are a psychology major looking for an internship. Why not pick up the latest issue of Psychology Today and go through the pages checking out the companies that advertise? My point is that your favorite magazines already reflect your passions, so why not page through them to find the company you didn’t think to apply to?

2. Now that you’ve identified where you want to apply, keep a list. There are tutorials on the internet for keeping a proper list of applications, with fields like the application deadline, whether you’ve completed the cover letter, other application materials, and people in the company you may know. I don’t really like those.
I really disagree with that strategy. Most employers announce in advance when postings are going up, and most have already found a match by the deadline. So instead of an “application deadline” field, I prefer a “check during ___ (season)” field. Then, once the application opens, I write the cover letter and send off my resume in one sitting, just to get it out of the way. I don’t need to keep checking in with my checklist.

3. Everyone always says that the only sure way to get a job is through people you know. While I agree that networking is probably the most consistent way to get your foot in the door, it isn’t always possible for everyone. That’s why I’ve been using UMass career fairs as pure networking opportunities. Instead of spamming my resume across the career fair, I talk to a few recruiters that I know are just as passionate as I am about finding a job that’s the right fit.

4. City websites are my other secret weapon to avoid ZipRecruiter. I will search things like “Best Places to work in Seattle” and then I apply to all of those. Or I will search “Businesses with offices in the Prudential Building, Boston” because I dream of one day working there. I am always just looking for more names to put on my list that don’t get hundreds of applicants that all sound exactly like me.

5. I also look at the products around me that I don’t necessarily think about. Odwalla and IMAX are both companies I see all the time, but I would never think to apply to them unless I wrote them down.

There are ways to avoid your resume getting lost in a stack a mile high; it just takes some planning and forethought.

DJI Drones – Which One Is Right for You?

As the consumer drone market becomes increasingly competitive, DJI has emerged as an industry leader of drones and related technologies both on the consumer end, as well as the professional and industrial markets. Today we’re taking a look at DJI’s three newest drones.


First up is the DJI Spark, DJI’s cheapest consumer drone available at the time of writing. The Spark is a very small package, controlled over Wi-Fi with the DJI GO smartphone app. It features a 12-megapixel camera capable of 1080p video at 30 fps, and a removable battery with a 16-minute runtime. Starting at $399, this drone is best suited to amateurs just getting into the drone market. User-friendly and ultra-portable, it is limited in advanced functionality and prone to distance and connectivity problems, but it is an essential travel item for the casual user who wants to take some photos from the sky without the advanced photography and flying skills that some of DJI’s other offerings demand.


DJI’s most recent offering is the DJI Mavic Air, its intermediate option for drone enthusiasts. The Mavic Air is a compact, foldable package, controlled over Wi-Fi with the DJI GO smartphone app in conjunction with a physical controller. It features a 12-megapixel camera capable of 4K video at 30 fps, and a removable battery with a 21-minute runtime. Starting at $799, it is a step up from DJI’s lower-priced offerings but bundles features that cater to both the amateur drone photographer and the hobbyist/enthusiast flyer, such as advanced collision-avoidance sensors, a panorama mode, and internal storage. While heavier and bigger than its smaller sibling the Spark, the Mavic Air’s foldability makes for an unbelievably portable package with user-friendly features and one of the best camera sensors to ship in DJI’s consumer lineup. Also hampered by Wi-Fi limitations, the Mavic Air is an excellent travel drone for more serious photographers and videographers, provided you don’t venture out too far.


One of DJI’s most ambitious and most popular consumer drones is the DJI Mavic Pro, a well-rounded, no-compromise consumer drone with advanced photography and flying abilities. Like the Mavic Air, it is a compact, foldable package, controlled through the DJI GO smartphone app and a controller that uses OcuSync transmission technology to provide a clear, long-range live video feed usually free of interference. It features a 12-megapixel camera capable of 4K video at 30 fps, and a removable battery with a 30-minute runtime. Starting at $999, the Mavic Pro is not cheap, but it is an essential tool for the photographer or drone enthusiast who wants the best flying and capture features in DJI’s best portable drone offering.

My DJI Mavic Pro Sample Footage:

Sample 1: https://www.youtube.com/watch?v=2kI1hoIO4x4
Sample 2: https://www.youtube.com/watch?v=ZQgX5J9WOII
Sample 3: https://www.youtube.com/watch?v=z1mDUZWwwxI
Sample 4: https://www.youtube.com/watch?v=hWiHPu-ld78
Sample 5: https://www.youtube.com/watch?v=TOcKi1xRNoE

Disclaimer: Operation of a drone, regardless of recreational or commercial intent, is subject to rules and regulations outlined by the Federal Aviation Administration (FAA). All drone operators should operate aircraft in compliance with local, state, and federal laws. Compliant and suggested practices include operating aircraft with the presence of a spotter, maintaining line of sight on your aircraft, registering your aircraft with the FAA, sharing airspace with other recreational and commercial aircraft, knowing your aircraft and its impact when operating around people & animals, and not flying your aircraft in FAA restricted zones. For more information, please visit the FAA website on Unmanned Aerial Systems as it pertains to you: https://www.faa.gov/uas/faqs/

Why Macs have taken over College Campuses

If you ever visit a college campus you will notice the plethora of Apple laptops. Apple seems to supply a huge percentage of college students’ laptops, but why?

To start off with, Apple has a brand image that few other companies can match. From my experience in IT, many people think that Apple machines “last longer” and “won’t break as easily” compared to their PC rivals. And in my experience that isn’t necessarily false. In terms of build quality the average Mac will certainly beat the average PC, but it’s not really a fair comparison: Macs cost far more than the average PC, and that higher build quality is priced in. That said, even some higher-end PCs have build quality that seems to degrade over time in a way that Macs’ doesn’t. My guess for why this happens is that PC makers are constantly trying new things to differentiate themselves from a pack of similarly specced competitors, which leads to constant experimentation. Trying new things isn’t bad, but with this throw-everything-at-the-wall mentality there will inevitably be a few products that weren’t truly tested and were pushed out too quickly. Apple, on the other hand, has a handful of laptop models that have mostly been around for years. They have mastered the art of consistently making reliable laptops, and it’s that consistency that is really important. In all likelihood every major laptop manufacturer has made a very reliable computer, but very few have Apple’s track record, and it’s that track record that makes people trust that their new computer will last all four years.

Adding to this reliability, Apple also has the upper hand in physical locations. Every urban area in the U.S. has an Apple Store: somewhere to take your device if it’s acting up, or to check out a new laptop before you buy it. I think this plays a big role in Apple’s success. Being able to try out a product before buying it is a clear advantage; people get to know what it will be like in person, which makes them more likely to buy it. Secondly, knowing there is a physical location where you can bring your device if anything happens to it is very reassuring. Buy an Apple laptop and you no longer have to wait on the phone for three hours trying to get ahold of someone helpful; just walk into the store and you’ll get the assistance you need.

I am not the only one to notice that stores are a big part of Apple’s success: Microsoft has been building more and more stores of its own to compete. They realized that Apple would always have better customer service if Microsoft didn’t open its own stores, and this has only become more important as Microsoft builds up its hardware business.

Finally, one of the biggest reasons in my mind is that people buy Apple laptops because they have Apple phones. It seems logical that you would buy more products from a company if you are satisfied with the one you have, and I think this is exactly what is happening with Apple computers. iPhones are incredibly popular with college-aged kids, so naturally those kids gravitate towards the laptop maker that makes their phones. Furthermore, iPhones and Apple laptops work together in a way that a PC and an iPhone can’t: Apple devices can send iMessages, they integrate seamlessly with iCloud, and they share similar programs, which makes picking up either one faster.


NEWS FLASH: GM Buys Self-Driving Startup

The era of self-driving cars is coming soon, as we all know, and GM has accordingly bought a small startup called Strobe, Inc. for an undisclosed amount. Strobe is a young company that makes lidar (laser radar) sensors, a piece of technology crucial to autonomous driving. As the article says, “…technology is according to many in the incipient self-driving world critical to vehicles that will someday achieve full autonomy and be able to drive themselves with no human input…” Many different players, such as Tesla’s Autopilot, Cadillac’s Super Cruise, and Google’s Waymo, are involved in developing self-driving cars. The race to autonomy is on, and we will soon see the result!

What is Statcast?

The Technological Marvel that is Statcast

Next time you go to a baseball game, look towards the press boxes; you may just spot an inconspicuous black box. That black box is the reason Disney paid $1.1 billion for a third of MLB Advanced Media. It collects data for a program called Statcast.


For those of you that aren’t aware of Statcast, think of it as a way to track everything that goes on in a baseball stadium. And for those of you who are paranoid that you are always being watched, don’t worry: MLB didn’t spend millions of dollars to see how far and how fast you spill your drink. The Statcast tracking system combines two other systems: a Doppler-radar system developed by Trackman (flight paths of baseballs are far easier to track than storms) and a set of cameras that capture the three-dimensional aspect of the game. Statcast improves the fan experience by letting the common viewer see the subtleties that allow each player to make a catch, hit a home run, or fool a batter.

It isn’t just for the viewer, though; it is also for the players and other personnel actively involved in the game. Fielders can use the data to decide where to position themselves, shifting to either side or playing deeper or shallower, and a smaller batter might realize that, at the launch angle they usually produce, hitting the ball just a bit harder would noticeably increase their home run numbers.

For the front office executives who build teams, Statcast helps determine which pitchers throw with a higher spin rate (the rotation the ball picks up between the pitcher’s hand and the catcher’s glove, over a matter of milliseconds), which, regardless of prior results, should trend toward more strikeouts.
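To see why exit velocity and launch angle matter so much together, here is a deliberately idealized sketch of batted-ball carry. It uses the textbook drag-free projectile formula, so the absolute distances come out far longer than a real fly ball travels; only the comparison between swings matters (the function name and numbers are illustrative, not anything Statcast itself computes):

```python
import math

def carry_distance(exit_velo_mph: float, launch_angle_deg: float) -> float:
    """Idealized carry distance in feet, ignoring air drag and spin.
    A real batted ball travels noticeably shorter than this."""
    v = exit_velo_mph * 0.44704            # mph -> m/s
    theta = math.radians(launch_angle_deg)
    g = 9.81                               # gravitational acceleration, m/s^2
    range_m = v ** 2 * math.sin(2 * theta) / g
    return range_m * 3.28084               # m -> ft

# Same launch angle, slightly harder contact, meaningfully more carry:
print(round(carry_distance(100, 28)))
print(round(carry_distance(103, 28)))
```

The takeaway matches the article: a hitter who already squares the ball up at a good launch angle gets outsized returns from a small bump in exit velocity.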


Even though Statcast was introduced in 2013, teams are still figuring out how to take full advantage of the new data. At a sabermetrics seminar (a gathering of some of the brightest minds in baseball), several teams’ executives were actively trying to find the best uses for the hitting portion of Statcast.

Android Auto

Most cars these days offer some form of phone-syncing capability. Usually, though, they don’t offer much support beyond hands-free calling and texting. Any app support is often native to the car’s software and doesn’t interface with the app on your phone, and map support is either a separate in-car navigation system or limited to direction readout from the phone, with no accompanying visual.

Android Auto gives you all these capabilities and more. Although initial native support was small, almost 87 2015-model cars now come with it built in, and there are plans to expand that number to almost 150 for the 2017 models. Many major aftermarket head-unit brands include Android Auto as well, including Pioneer, Kenwood, and Alpine.

Google developed Android Auto to comply with common safety standards, including those of the National Highway Traffic Safety Administration (NHTSA). To that end, every app supported by Android Auto must be checked by Google to ensure it complies with those standards; they currently list 53 compatible apps on the Play Store. A great example of this safety focus is text messages: no texts are displayed on the screen; instead, they are read back to you. In addition, voice commands, including “OK, Google” commands, are heavily relied on to maximize hands-free use.

Right now, to use Android Auto you have to plug your phone into the car via a USB cable. Once connected, all interaction happens either through the car’s stereo or via voice commands (it’s unclear whether the phone’s microphone can be used for this or whether the car has to have one). Google recognized this inconvenience and announced in May that it would work on providing Android Auto through the phone alone. The interface would still look relatively similar, and all the voice commands would remain.

Although the platform is still relatively young, it looks like a promising app for drivers, at a time when distracted driving is a bigger concern than ever.

Datamoshing: What It Is and How It Works

Modern video formats are designed to minimize the storage they take up while maximizing things like resolution and frame rate. To achieve this goal, their designers developed some clever techniques that can look very cool when they don’t work as they should.

Let’s start with frames. Each frame of a video is like a picture, and most videos run between 24 and 60 frames per second. As you can imagine, storing 60 full pictures for a single second of video would take up a huge amount of space. So the developers of modern video formats chose to store a full picture only when absolutely necessary. If you think about it, most frames in a video are just a very similar picture with slight differences from the one before, so many formats simply tell the old pixels on the screen where to move to make the new picture instead of storing a whole new one. This allows much smaller file sizes for videos, and it is also what makes datamoshing possible.

What datamoshing does is remove the full-picture frames and keep only the frames that tell the pixels where to go. The result is either a new video moving according to another video’s directions, or an image from the same video whose pixels drift in directions they’re not supposed to. This process can produce some very cool and unique glitch effects, which have been used to varying degrees across different mediums.
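The mechanism above can be shown with a toy sketch. Real codecs store per-block motion vectors, but the idea survives simplification: here each “frame” is a short list of pixel values, each delta frame is just the per-pixel difference from the previous frame, and “datamoshing” means decoding those deltas against the wrong keyframe (all names here are illustrative):

```python
# Toy delta-frame encoding over 1-D "frames" of pixel values.

def encode(frames):
    """Return the first frame as a keyframe plus a list of delta frames."""
    key = frames[0]
    deltas = [[b - a for a, b in zip(prev, cur)]
              for prev, cur in zip(frames, frames[1:])]
    return key, deltas

def decode(key, deltas):
    """Rebuild frames by applying each delta to the previous frame."""
    frames = [key]
    for d in deltas:
        frames.append([p + dd for p, dd in zip(frames[-1], d)])
    return frames

video = [[10, 10, 10], [12, 10, 9], [14, 10, 8]]
key, deltas = encode(video)
assert decode(key, deltas) == video   # normal playback: lossless round trip

# "Datamosh": swap in a different keyframe and let the old deltas steer it,
# i.e. one image animated by another video's motion.
moshed = decode([90, 50, 20], deltas)
print(moshed)
```

Dropping or replacing the keyframe never breaks decoding; the deltas apply cleanly to whatever image is there, which is exactly why the glitch looks smeared rather than corrupted.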


Top Ten Most Useful Keyboard Shortcuts for Mac

Keyboard shortcuts are a great way to save small bits of time and optimize the time you spend on the computer. In this article, I will cover the ten most useful keyboard shortcuts for Mac. In the spirit of the old OS X naming scheme, the editor is adding cat pictures.

#1 Hide

Hide completely hides the program you are currently in. It doesn’t minimize it or close it; it is simply hidden. To use Hide, just hit Command+H.

#2 Minimize

Minimize does what it says. It will minimize whatever program you are currently using. To use minimize hit Command+M.


#3 Spotlight

Spotlight instantly opens a search bar that looks through all your files and applications, which is extremely useful when hunting for a file or application on the fly. To access Spotlight, hit Command+Spacebar.


#4 Adjust Levels More Precisely

When you adjust volume or brightness on your Mac, you see 16 little rectangles that represent the level. You might think that means only 16 levels of adjustment; that’s where this shortcut comes in, letting you get far more precise. Just hold down Option+Shift, then use the volume or brightness keys as you usually would.


#5 Switching to Last Used Program

Sometimes when multitasking it is useful to switch between two applications quickly. To do so, hit Command+Tab.


#6 Switching Between Programs

Similar to #5, it can often be useful to move to a program that wasn’t the last one used. To do this, press Command+Tab again but keep holding Command down, then use your arrow keys to move left and right.


#7 Force Quit

Sometimes a program will freeze or stop functioning, and often the best fix is to force quit it and reopen it. Unfortunately, a frozen program can make that hard or impossible to do on screen, so it’s useful to know the keyboard shortcut: Option+Command+Escape. To quit a program normally, hit Command+Q.


#8 Taking a Screenshot

Taking a screenshot can be very useful when you are having a technical problem, as well as being a great way to show others things from your perspective. To capture the whole screen, hit Command+Shift+3; to capture only a certain area, use Command+Shift+4.

#9 Adding Emojis

If you’ve ever felt like you needed more emojis, this shortcut is for you: it opens the emoji window on your Mac. Hold Command+Ctrl+Spacebar.


#10 Open Preferences

Each application has its own preferences that let you make it work the way you want; to access them quickly, hit Command+, (the Command key and the comma key at the same time).

5 Cloud-based Applications You Can Host at Home

Do you have an old laptop lying around that you don’t know what to do with? Are you concerned about your data given recent tech-company security breaches? Or maybe you’re just bored and want to fiddle around on some computers. Either way, here are five free applications that you can host yourself:

  1. Nextcloud – For those who don’t have access to unlimited cloud storage, or who aren’t comfortable giving up control of their files, you can host your own cloud storage. Nextcloud provides functionality similar to storage providers like Google Drive and Box, allowing file sharing and online editing. There are client apps for all major phones and computers, and there is even an option to enable a calendar app. Although Nextcloud is relatively new, it is based on the well-established, if less modern, ownCloud.

  2. GitLab – For the developers out there who don’t want to pay for private repositories, there’s GitLab. This is a very mature product packed with features like GitLab Continuous Integration, code snippets, and project wikis. GitLab can also integrate with many external applications, such as Visual Studio, Jenkins, KanBan, and Eclipse. For those without a free computer to run it on, GitLab also provides hosting for both repository storage and continuous-integration runners, although those options cost money.
  3. DokuWiki – If you constantly find yourself looking up the same information, or you just want a place to organize notes, DokuWiki is the app for you. It supports a markup formatting style, multiple namespaces to organize your information, and a diff viewer to see page changes. If the outdated UI doesn’t appeal to you, Confluence is another option; it is geared more towards the enterprise environment, but for $10 (one time, not a subscription) you can host Confluence for up to ten users.

  4. Mail-in-a-Box – There are a lot of email providers out there, but if email is something you’re interested in hosting, Mail-in-a-Box is a great solution. Although setting up the application itself is fairly easy, there isn’t much customization to be done; for a more robust solution, iRedMail might be the way to go. Note that hosting email can be tricky and generally is not possible from a home internet connection.
  5. Subsonic – All the audiophiles will appreciate Subsonic, an alternative to Google Play and iTunes. You can store all your music yourself rather than being restricted to the Google or Apple music clients, and with apps for all computers and phones you can listen to your music wherever you are. Subsonic includes support for playlists, most major music file formats, and customized themes.

Portability and the Effects on Device Internals

With the current trend of ever-shrinking tech devices, we have seen an explosion in the abundance of portable electronics. Fifteen years ago Apple launched the iPod, a device so foreign to people that Steve Jobs had to explain that you could legally transfer your CD collection to your computer and then onto your iPod. Now it is expected that the little (or big) phone in your pocket works as well as any desktop computer, with fully developed applications, and lasts a full day on one charge. Many different advances made this possible, such as shrinking fabrication nodes, increased battery capacity, and much better display options. But I think one change in design philosophy in particular has driven the current trend in tech.

Due to portability requirements, phones have become a microcosm of the tech industry, specifically in the trend of increasing complexity at the cost of repairability. When the first iPhone came out, there was no option to change the battery or the storage configuration, options both available on competitors’ devices. And yet people flocked to Apple’s simpler, less customizable devices, so much so that Google now produces its own phone, the Pixel, with a non-removable battery and no microSD slot. Logic dictates that there must be an outside pressure forcing a competitor to drop a substantial differentiator from other products on the market; I would argue that factor is thinness.

The size of an SD card slot seems pretty inconsequential in a device the size of a desktop computer, but when it takes up 1% of the total space of a device, there are arguments for much better uses of that space. A better cooling system, a larger internal battery, or simply room for a larger PCB could each make the device better than it would have been with the SD card slot. A look at the iPhone’s logic board illustrates the point: there is simply no space for extra components.

Driven by space-saving concerns, complexity increases as smaller and smaller traces are used on the PCB and components have to shrink, shuffle, or be removed. Proof of this is in the design of larger machines such as the MacBook, a 12-inch laptop with a logic board smaller than its touchpad, which features a mobile CPU and no removable storage.

Demand for ultra-portability has led to devices so small that they are almost impossible to repair or upgrade. However, this trend cannot continue indefinitely. Moore’s law has taken a couple of hits in the past few years as Intel struggles to keep pace with it, and PCB manufacturing can only get so small before it becomes impossible to fit all the components. As size becomes less of a differentiator and reaches its physical limits, tech companies will have to look to new innovations to stay relevant, such as increasing battery life or designing new functions for their devices.

How To Find Your Device’s MAC/Physical Address

The physical address of a device is an unchanging combination of numbers and letters that identifies your device on a network. It is also referred to as a Media Access Control address (MAC address). You may need it if you’re having issues with the campus network and UMass IT wants to determine whether the network itself is the problem.
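If you’d rather look the address up programmatically than click through menus, Python’s standard library can report it on any of the platforms below. This is only a sketch: on a machine with several network interfaces, `uuid.getnode()` returns the address of one of them without letting you choose which, and if no hardware address can be read it falls back to a random number.

```python
import uuid

# uuid.getnode() returns a hardware (MAC) address as a 48-bit integer.
# Format it as the familiar six colon-separated hex pairs, most
# significant byte first.
mac = uuid.getnode()
mac_str = ":".join(f"{(mac >> shift) & 0xff:02x}" for shift in range(40, -1, -8))
print(mac_str)
```

For troubleshooting with IT you should still use the OS steps below, since they let you pick the specific (e.g. wireless) adapter.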

To find the MAC/Physical address on a Windows 10 device:

Right click on the Start button to make a menu appear:


Select Command Prompt from the menu.

In the window that appears, type “ipconfig /all” without the quotes:


The resulting text displays information about the parts of your computer that communicate with the network. Find the section headed “Wireless LAN adapter” and look under that heading for the Physical Address:

To find the MAC address of an Apple/Macintosh computer:

Click the Apple menu in the top left of the screen and click System Preferences:


In the window that appears, click “Network”:


Highlight Wi-Fi on the left-hand side and click Advanced:


Navigate to the Hardware tab to find your MAC address:


To Find the MAC address of an iPhone:

Open the Settings app, go to General > About; the MAC address is listed as “Wi-Fi Address”:


To find the MAC address of an Android device:

The location of the MAC address on an Android device varies by device, but almost all versions will show it if you navigate to Settings > Wireless and Network; the MAC address will be listed on that page or in the Advanced section:


You may also be able to find the MAC address in the “About Phone” section of the setting menu:



CyberGIS: Software

(Superbowl Twitter Map — CARTO)

What is CyberGIS (Cyber Geographical Information Science and Systems)? CyberGIS is a domain of geography that bridges computer science and geographic information science and technology (GIST). It involves the development and use of software that integrates cyberinfrastructure, GIS, and spatial analysis and modeling capabilities. In this TechBytes article I will discuss two current and popular pieces of CyberGIS software used in academia, industry, and government.

CARTO: Unlock the potential of your location data. CARTO is the platform for turning location data into business outcomes.

CARTO Builder was created for users with no previous background in coding or in extrapolating patterns from data. A simple user interface built from widgets lets the user upload their data and instantly analyze a specific subset of it (by category, histogram, formula, or time series). From calculating clusters of points, to detecting outliers, to predicting trends and volatility with the press of a button, CARTO Builder is truly made with efficiency and simplicity in mind. While CARTO Builder “can be used in every industry, we are targeting financial services, to help them predict the risk of investments in specific areas, and telecom companies,” says Javier de la Torre, CEO of CARTO.

For more information about CARTO from TechCrunch, click here.


 Mapbox: Build experiences for exploring the world. Add location into any application with our mapping, navigation, and location search SDKs. 

Unlike CARTO, Mapbox was built for developers and cartography enthusiasts. While the graphical interface is easy to navigate (similar to Photoshop or Illustrator), Mapbox’s goal was to “create something equally useful for tech developers who have no idea how to design and designers who have no idea how to code” (Wired). What Mapbox lacks in CARTO Builder’s data-analytics features, it makes up for in its ability to manipulate a map any way the user likes. Based in both DC and San Francisco, Mapbox is partnered with large companies such as The Weather Channel, National Geographic, and CNN, and is optimized for navigation, search, and vector maps rendered in real time.

For more information about Mapbox from Wired, click here.

As a CyberGIS geographer myself, I use both CARTO Builder and Mapbox in my classes and in my research. When I have a dataset that needs to be geo-referenced on a map but not necessarily analyzed, Mapbox is my first choice. The ability not only to alter the color scheme to highlight the various features of the map but also to choose fonts for labeling is something I take for granted. In CARTO Builder those features are present but quite limited, and in ArcGIS Online they are non-existent. If an assignment requires more analysis of a given dataset, CARTO Builder is a simple way to parse the data and run the specific algorithms.

Links to the CyberGIS software:



Thinkpad is known throughout the enterprise and consumer markets as Lenovo’s line of rugged, minimalistic, business-oriented laptops, tablets, and mobile workstations. The line started under International Business Machines (IBM) in 1992; Lenovo acquired the division in 2005 and has owned it ever since. For 25 years, Thinkpads have been beloved by power users, demanding businesses and corporate environments, enthusiasts, and even astronauts on the International Space Station (ISS). Today we take a brief look at the Thinkpad 25 Anniversary Edition and the features that have persisted through one of the longest-running laptop series.

Looking at the Thinkpad 25, there appear to be more similarities with modern Thinkpad laptops than with the older era of Thinkpads it is supposed to be reminiscent of. The Thinkpad 25 comes with a 15 W ULV 7th-generation Intel processor, NVMe storage, a 1080p display, Nvidia 940MX dedicated graphics, the beloved TrackPoint, and the distinctive minimalist black matte finish. It also comes with a separate box of documentation and items that look back on the series’ 25 years of history and development.

Source: laptopmag.com

The biggest difference in the Thinkpad 25 has to be the keyboard. The inclusion of a seven-row keyboard, when almost all modern laptops use six-row keyboards, is nothing short of a nod to the era when the seven-row keyboard reigned supreme. The keyboard also carries other references to earlier models, such as the blue Enter key, dedicated Page Up and Page Down keys, the Delete “reference” key, and traditional, non-island-style (non-chiclet) keys. Omitted from the Thinkpad 25 are several antiquated technologies from over the years, such as the ThinkLight, legacy ports (serial, VGA, ExpressCard), and handle batteries.

To many enthusiasts, the Thinkpad 25 was a letdown: essentially a T470 with a backlit seven-row keyboard. It is also expensive, at nearly $2,000 fully configured, and with such modest specifications many businesses will shy away from it. So who is the Thinkpad 25 meant for, then? It is a limited-quantity device for enthusiasts and collectors who yearn for a nostalgic legacy, for those who stubbornly resist modern design trends such as shiny plastic or brushed aluminum with a certain illuminated fruit. For those who have stood by the Thinkpad line through two and a half decades of cutting-edge innovation and performance, and who are willing to pay the price for a computer that nods to that era of computing, the Thinkpad 25 may be a worthwhile investment.

How To Create A Helpful Tech Tutorial: The Tutorial

Have you ever found yourself watching tech tutorials online? There’s nothing to be ashamed of; everyone runs into an issue they need help solving at some point. Now, have you ever found yourself watching a BAD tech tutorial online? You know, one where the audio sounds like it’s being dragged across concrete and the video is literally a blurry recording of a computer screen? Ironically, it often feels like the people who make tech tutorials need a tech tutorial on how to make good-quality tech tutorials.

So join me, Parker Louison, as I wave my hands around awkwardly for ten minutes while trying my best to give helpful tips for making your tech tutorial professional, clean, and able to stand out among all the low-effort content plaguing the internet!

A Quick Look at Home Theatre PCs

Are you one of those people who loves watching movies or listening to music at home? Do you wish you could access that media anywhere in your home without lugging your laptop around the house and messing with cables? If you answered yes to these questions, then a Home Theater PC, or HTPC, may be for you.

An HTPC is a small computer permanently hooked up to a TV or home theater system that allows you to store, manage, and play your media, whether it is stored locally or streamed from a service like Netflix, Amazon, or Spotify. Although several retailers sell pre-built HTPCs optimized for performance at low power, many people use a Raspberry Pi because it is small, quiet, and relatively inexpensive. These are key features: you don’t want a loud PC with large fans interrupting your media experience, and a large computer won’t fit comfortably on a living-room bookshelf or in an entertainment center.

The HTPC hooks up to your TV via an HDMI cable, which carries both video and audio for watching movies. If you have a home theater system, the HTPC can connect to it to enable surround sound for movies or to stream music throughout your home. An HTPC also requires a network connection to access streaming services. Although Wi-Fi is convenient, a wired Ethernet connection is ideal because it supports the higher speeds and bandwidth that HD media demands.

The back of a typical AV Receiver.


Once you have a basic HTPC set up, you can upgrade your setup with a better TV, speakers, or even a projector for that true movie-theater experience. If you want to access your media in several rooms at once, you can set up multiple HTPCs with Network-Attached Storage, or NAS: a central storage location that connects directly to your router so that every computer on your home network can access it at once. This is more efficient than storing all of your media on each computer separately. A NAS can even be set up with internet access so you can stream your media from anywhere.

What’s KRACK, and Why Should It Bother You?

You may have recently noticed a new headline on the IT Newsreel you see when logging into a UMass service: “Campus Wireless Infrastructure Patched Against New Cybersecurity Threat (Krack Attack)“. It’s good to know that UMass security actively protects us from threats like KRACK, but what is it?

The KRACK exploit is a key reinstallation attack against the WPA2 protocol. That’s a lot of jargon in one sentence, so let’s break it down a little. WPA2 stands for Wi-Fi Protected Access Version 2. It is a security protocol that is used by all sorts of wireless devices in order to securely connect to a Wi-Fi network. There are other Wi-Fi security protocols, such as WPA and WEP, but WPA2 is the most common.

WPA2 is used to secure wireless connections between the client, such as your smartphone or laptop, and the router/access point that transmits the network. If you have a Wi-Fi network at home, then you have a router somewhere that transmits the signal: a small box that connects to a modem (another small box), which may in turn connect to a larger terminal somewhere in your house called the ONT, and which eventually leads to the telephone poles and wiring outside in your neighborhood. Secure connections have to be implemented at every level, from the physical cables maintained by your internet service provider all the way to the web browser running on your computer.

To create a secure connection, the router and the client encrypt the data they send to each other. To do this, the two devices exchange keys when they first connect. They then use these keys to encrypt every message they exchange, so that in transit the traffic looks like gibberish that only the two devices know how to decipher; the same keys are used for the duration of their communications.
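The exchange-keys-then-encrypt idea can be sketched with a toy stream cipher. This is purely illustrative: the function names are invented for this example, and real WPA2 uses a four-way handshake and AES-based encryption (CCMP), not SHA-256 XOR streams.

```python
import hashlib

def keystream(key: bytes, nonce: int, length: int) -> bytes:
    """Stretch a key into a stream of pseudo-random bytes by hashing
    the key with a nonce and a counter. (Toy cipher, not WPA2.)"""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce.to_bytes(8, "big")
                              + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_encrypt(key: bytes, nonce: int, message: bytes) -> bytes:
    """XOR the message with the keystream; the same call decrypts."""
    ks = keystream(key, nonce, len(message))
    return bytes(m ^ k for m, k in zip(message, ks))

shared_key = b"key both devices agreed on"  # exchanged at connect time
ciphertext = xor_encrypt(shared_key, 1, b"hello router")
# Only a holder of the same key and nonce can reverse the XOR:
assert xor_encrypt(shared_key, 1, ciphertext) == b"hello router"
```

Anyone eavesdropping on the air sees only `ciphertext`, which without the key is indistinguishable from noise.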

WPA2 is just a protocol, meaning that it is a set of rules and guidelines that a system must follow in order to support it. WPA2 must be implemented in the software of a wireless device before it can be used, and most modern wireless devices do implement it. If you have a device that can connect to eduroam, the wireless network on the UMass Amherst campus, then that device supports WPA2.

The KRACK exploit is a vulnerability in the WPA2 protocol that was discovered by two Belgian researchers. They were able to trick WPA2-supporting devices into sending the same encrypted information over and over again, then crack the key by deciphering known encrypted content. They were even able to get WPA2-supporting Android and Linux devices to reset their keys to all zeroes, which made cracking the encrypted content easier still.
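Why is forcing a device to reuse its keystream so dangerous? A toy sketch (again with invented names; the real attack against WPA2's AES-CCMP cipher is more involved) shows that an attacker who captures two messages encrypted with the same keystream can cancel the keystream out entirely, without ever learning the key:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Toy keystream generator, standing in for the cipher output
    # that gets repeated when a key reinstallation resets the nonce.
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:length]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

key = b"reinstalled key"
p1 = b"GET /index.html HTTP/1.1"   # a plaintext the attacker can guess
p2 = b"password=hunter2 secret!"   # the secret the victim sends next
ks = keystream(key, len(p1))
c1, c2 = xor(p1, ks), xor(p2, ks)  # same keystream used twice: the flaw

# XORing the two ciphertexts cancels the keystream (c1 ^ c2 == p1 ^ p2),
# so a guessed p1 immediately reveals p2.
p2_recovered = xor(xor(c1, c2), p1)
assert p2_recovered == p2
```

This is why a protocol-level guarantee that keys and nonces are never reused matters so much, and why KRACK's key reinstallation breaks it.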

The real concern is that this is a vulnerability in the WPA2 protocol itself, not in any one implementation of it. Any correct implementation of WPA2 is vulnerable to the exploit, and most are. That means essentially every wireless-enabled device needs to be updated to patch the vulnerability. This is especially cumbersome because many Internet-of-Things devices (think security webcams and web-connected smart-home tools like garage doors) are rarely, if ever, updated; their software is assumed to just work without regular maintenance. All of those devices are vulnerable to attack. This WIRED article addresses the long-term impact that the KRACK exploit may have on the world.

The good news is that patches are already available for your most critical devices. UMass Amherst has already updated all of its wireless access points to protect against the KRACK exploit. Also, with the exception of Android and Linux devices, which are vulnerable to key resets, the vulnerability is not very easy to exploit on most networks. An attacker would generally need to know what they are looking for in order to crack the encryption key, though they may be able to narrow down the possibilities with social cues, such as seeing you at Starbucks shopping for shoes on Amazon.

The general takeaway is that you should update all of your wireless devices as soon as possible. If you are interested in learning more about KRACK, how it works at a technical level, and seeing a demonstration of an attack, check out the researchers’ website.

Use Windows 10 Taskbar for Google Search

Search is a versatile feature in Windows 10. It lets you browse files and programs on your computer, ask basic questions of Cortana, Microsoft’s personal-assistant tool, and search the web. That last feature is what we will focus on in this post. I do not intend this article as a statement about which search engine or browser is better; it is simply a way for users to customize their PC so that it aligns with their search preferences.

Browsing the web is one of the most important activities for a modern PC user, but Microsoft restricts web searches in the taskbar to its own search engine, Bing, and opens any web links in the Microsoft Edge browser by default. Many Windows users install Google Chrome or another alternative to Microsoft’s default browser, and the best way for them to search the web from Windows 10 is with their preferred browser and search engine combination.

This how-to focuses on using the search feature with Google Search in Google Chrome. Again, this is not an endorsement of one browser/search combination over another; I reference Google Chrome specifically because it is the most widely used browser in the United States and can redirect searches using extensions not available in other browsers.

Step 1: Change Default Browser

First, make sure you have the Google Chrome browser installed on your Windows 10 machine.

Next, click the Windows icon at the bottom left to open the Windows search. Type “default” and you should see an icon for “Default app settings.” Alternatively, open the Settings app and navigate to System, then Default Apps.

From here, scroll down to the “Web browser” section, and make sure that Google Chrome is selected.

At this point, any web search through the Windows search feature will open in Google Chrome (or your browser of choice). However, those searches will still go through Bing, while the majority of people use Google as their default. The next step redirects Bing searches to Google via a Google Chrome extension.


Step 2: Download an extension to redirect Bing queries to Google

To re-route searches from Bing to Google in the Windows search bar, you can use a third-party extension, Chrometana. Once installed, Chrometana automatically redirects Bing searches to your preferred search engine whenever you type a query and are presented with the “see search results” option.
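Conceptually, the extension simply rewrites the search URL before the Bing page loads. A minimal Python sketch of that rewrite (the function name is mine for illustration; Chrometana itself runs as browser-extension JavaScript):

```python
from urllib.parse import urlparse, parse_qs, quote_plus

def redirect_bing_to_google(url: str) -> str:
    """Rewrite a Bing search URL into the equivalent Google search.
    Non-Bing URLs pass through unchanged."""
    parts = urlparse(url)
    if parts.netloc.endswith("bing.com") and parts.path == "/search":
        # Pull the search terms out of the q= parameter.
        query = parse_qs(parts.query).get("q", [""])[0]
        return "https://www.google.com/search?q=" + quote_plus(query)
    return url

print(redirect_bing_to_google("https://www.bing.com/search?q=umass+it"))
# → https://www.google.com/search?q=umass+it
```

The browser extension performs the same substitution in-flight, which is why the redirect feels instantaneous.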

That’s it! From now on, any web search in the Windows search bar will open a Google search in Google Chrome. Hopefully you find this feature useful and it lets you browse the web the way that works best for you.