How To Find Your Device’s MAC/Physical Address

The physical address of a device is an unchanging number/letter combination that identifies your device on a network. It is also referred to as a Media Access Control (MAC) address. You may need it if you’re having issues with the campus network and UMass IT wants to see if the network itself is the problem.

To find the MAC/Physical address on a Windows 10 device:

Right click on the Start button to make a menu appear:

[Screenshot: the menu that appears when you right-click the Start button]

Select Command Prompt from the menu.

In the window that appears, type “ipconfig /all” without the quotes:

[Screenshot: a Command Prompt window running ipconfig /all]

The resulting text displays information about the parts of your computer that communicate with the network. You’ll want to find the heading that says “Wireless LAN adapter” and look under it for the Physical Address:

[Screenshot: the Wireless LAN adapter section of the ipconfig output, with the Physical Address highlighted]
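
If you prefer a programmatic route, Python’s standard library can report a MAC address as well. Here is a minimal sketch using uuid.getnode(); note that which network interface it reports is not guaranteed, so treat the output as illustrative:

```python
import uuid

mac = uuid.getnode()  # one interface's MAC as a 48-bit integer
print(":".join(f"{(mac >> shift) & 0xff:02x}" for shift in range(40, -1, -8)))
# e.g. a4:5e:60:xx:xx:xx
```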

To find the MAC address of an Apple/Macintosh computer:

Click the apple menu in the top left of the screen and click System Preferences:

[Screenshot: the Apple menu with System Preferences selected]

In the window that appears, click “Network”:

[Screenshot: the System Preferences window]

Highlight Wi-Fi on the left-hand side and click Advanced:

[Screenshot: the Network pane with Wi-Fi selected and the Advanced button]

Navigate to the Hardware tab to find your MAC address:

[Screenshot: the Hardware tab showing the MAC address]

To Find the MAC address of an iPhone:

In the Settings app, go to General > About; the MAC address is listed as “WiFi Address”:

[Screenshot: the iPhone About screen showing the WiFi Address]

To find the MAC address of an Android device:

The location of the MAC address on an Android device varies by device, but almost all versions will show it if you navigate to Settings > Wireless and Network; the MAC address will be listed on the same page or in the Advanced section:

[Screenshot: Android network settings showing the MAC address]

You may also be able to find the MAC address in the “About Phone” section of the settings menu:

[Screenshot: the WiFi MAC address under About Phone on Android]


CyberGIS: Software

[Image: Superbowl Twitter Map (CARTO)]

What is CyberGIS (Cyber Geographical Information Science and Systems)? CyberGIS is a domain of geography that bridges computer science and geographic information science and technology (GIST). It is the development and use of software that integrates cyberinfrastructure, GIS, and spatial analysis/modeling capabilities. In this TechBytes article I will discuss two current and popular CyberGIS software platforms for academic, industry, and government use.

CARTO: Unlock the potential of your location data. CARTO is the platform for turning location data into business outcomes.

CARTO Builder was created for users with no previous knowledge of coding or of extrapolating patterns in data. A simple user interface comprised of widgets allows users to upload their data and instantly analyze a specific subset of it (by category, histogram, formula, or time series). From calculating clusters of points, to detecting outliers, to predicting trends and volatility with the simple press of a button, CARTO Builder is truly made with efficiency and simplicity in mind. While CARTO Builder “can be used in every industry, we are targeting financial services, to help them predict the risk of investments in specific areas, and telecom companies,” says Javier de la Torre, CEO of CARTO.

For more information about CARTO from TechCrunch, click here.


Mapbox: Build experiences for exploring the world. Add location into any application with our mapping, navigation, and location search SDKs.

Unlike CARTO, Mapbox was built for developers and cartography enthusiasts. While the graphical interface is easy to navigate (similar to Photoshop or Illustrator), Mapbox’s goal was to “create something equally useful for tech developers who have no idea how to design and designers who have no idea how to code” (Wired). While Mapbox lacks the data-analytics features of CARTO Builder, it makes up for it in its ability to manipulate a map any way the user likes. Based in both DC and San Francisco, Mapbox is partnered with large companies such as The Weather Channel, National Geographic, and CNN. Mapbox is optimized for navigation, search, and vector maps rendered in real time.

For more information about Mapbox from Wired, click here.

As a CyberGIS geographer myself, I use both CARTO Builder and Mapbox in my classes and in my research. When I have a dataset that needs to be geo-referenced on a map and not necessarily analyzed, Mapbox is my first choice. The ability not only to alter the color scheme to highlight the various features of the map, but also to choose fonts for labeling, is something I take for granted. In CARTO Builder those features are present but quite limited, and in ArcGIS Online they are non-existent. If an assignment requires more analysis on a given set of data, CARTO Builder is a simple way to parse data and run the specific algorithms.

Links to the CyberGIS software:

Mapbox: https://www.mapbox.com/

Carto: https://carto.com/

The Thinkpad 25 Anniversary Edition

Thinkpad is known throughout the enterprise and consumer markets as Lenovo’s rugged, minimalistic, business-oriented line of laptops, tablets, and mobile workstations. The line started under International Business Machines (IBM) in 1992; Lenovo acquired the division in 2005 and has owned it ever since. For 25 years, Thinkpads have been beloved by power users, demanding businesses and corporate environments, enthusiasts, and even astronauts on the International Space Station (ISS). Today we take a brief look at the Thinkpad 25 Anniversary Edition and the features that have persisted through the years of one of the longest-running laptop series.

Looking at the Thinkpad 25, there appear to be more similarities with modern Thinkpad laptops than with the older era of Thinkpads it is supposed to be reminiscent of. The Thinkpad 25 comes with ULV 15W 7th-gen Intel processors, NVMe storage, a 1080p display, Nvidia 940MX dedicated graphics, the beloved trackpoint, and the distinctive minimalist black matte finish. The Thinkpad 25 also comes with a separate box of documentation and items that look back on the series’ 25 years of history and development.

https://www.laptopmag.com/images/uploads/5280/g/lenovo-thinkpad-25-004.jpg
Source: laptopmag.com

The biggest difference in the Thinkpad 25 has to be the keyboard. The inclusion of a seven-row keyboard, when almost all modern computers use six-row keyboards, is nothing short of an industry nod to the time when the seven-row keyboard reigned supreme. The Thinkpad 25 keyboard also has other references to earlier models, such as the blue Enter key, dedicated page up and down keys, the delete “reference” key, and traditional, non-island-style (non-chiclet) keys. Omitted from the Thinkpad 25 are several antiquated technologies from over the years, such as the Thinklight, legacy ports (serial, VGA, ExpressCard), and handle batteries.

To many enthusiasts, the Thinkpad 25 was a letdown: essentially a T470 with a seven-row backlit keyboard. The Thinkpad 25 is also expensive, at nearly $2,000 fully configured, and with such modest specifications, many businesses will shy away from it. So who is the Thinkpad 25 meant for, then? It is a limited-quantity device for enthusiasts and collectors who yearn for a nostalgic legacy; for those who stubbornly resist modern design and technology trends such as shiny plastic or brushed aluminum with a certain illuminated fruit. For those who have stood by the Thinkpad line through two and a half decades of cutting-edge innovation and performance, and are willing to pay the price for a computer that nods to that era of computing, the Thinkpad 25 may be a worthwhile investment.

How To Create A Helpful Tech Tutorial: The Tutorial

Have you ever found yourself watching tech tutorials online? There’s nothing to be ashamed of, as everyone has run into an issue they need help solving at some point in their lives. Now, have you ever found yourself watching a BAD tech tutorial online? You know, one where the audio sounds like it’s being dragged across concrete and the video is literally a blurry recording of a computer screen? Ironically, it often feels like the people who make tech tutorials need a tech tutorial on how to make good quality tech tutorials.

So join me, Parker Louison, as I wave my hands around awkwardly for ten minutes while trying my best to give helpful tips for making your tech tutorial professional, clean, and stand out among all the low effort content plaguing the internet!

A Quick Look at Home Theatre PCs

Are you one of those people who loves watching movies or listening to music while at home? Do you wish you could access that media anywhere in your home without lugging your laptop around your house and messing with cables? If you answered yes to these questions, then a Home Theater PC, or HTPC, may be for you.

An HTPC is a small computer that you can permanently hook up to a TV or home theater system, letting you store, manage, and use your media whether it is stored locally or streamed from a service like Netflix, Amazon, or Spotify. Although several retailers sell pre-built HTPCs that are optimized for performance at low power, many people use a Raspberry Pi because it is small, quiet, and relatively inexpensive. These are key features: you don’t want a loud PC with large fans interrupting your media experience, and a large computer won’t fit comfortably in a living room bookshelf or entertainment center.

The HTPC hooks up to your TV via an HDMI cord, which transmits both video and audio for watching movies. If you have a home theater system, your HTPC can connect to it to enable surround sound for movies or stream music throughout your home. The HTPC also requires a network connection to access streaming services. Although WiFi is convenient, a wired Ethernet connection is ideal because it supports higher speeds and bandwidth, which is better for HD media.

[Image: The back of a typical AV receiver]


Once you have a basic HTPC set up, you can upgrade your setup with a better TV, speakers, or even a projector for that true movie theater experience. If you want to be able to access your media in several rooms at once, you can set up multiple HTPCs with Network Attached Storage, or NAS. A NAS is a central storage location that connects directly to your router, which all the computers on your home network can access at once. This is more efficient than storing all of your media on each computer separately. It can even be set up with internet access so you can stream your media from anywhere.

What’s KRACK, and Why Should It Bother You?

You may have recently noticed a new headline on the IT Newsreel you see when logging into a UMass service. The headline reads “Campus Wireless Infrastructure Patched Against New Cybersecurity Threat (Krack Attack)“. It’s good to know that UMass security actively protects us from threats like Krack, but what is it?

The KRACK exploit is a key reinstallation attack against the WPA2 protocol. That’s a lot of jargon in one sentence, so let’s break it down a little. WPA2 stands for Wi-Fi Protected Access Version 2. It is a security protocol that is used by all sorts of wireless devices in order to securely connect to a Wi-Fi network. There are other Wi-Fi security protocols, such as WPA and WEP, but WPA2 is the most common.

WPA2 is used to secure wireless connections between the client, such as your smartphone or laptop, and the router/access point that transmits the network. If you have a Wi-Fi network at home, then you have a router somewhere that transmits the signal. It’s a small box that connects to a modem – another small black box – which might connect to a large terminal somewhere in your house called the ONT, and which eventually leads to the telephone poles and wiring outside in your neighborhood. Secure connections have to be implemented at every level of your connection, which can range from the physical cables that are maintained by your internet service provider, all the way to the web browser running on your computer.

To create a secure connection between the router and the client, the two devices have to encrypt the data they send to each other. To encrypt and decrypt that data, the two devices exchange keys when they connect. They then use these keys to encrypt the messages they send to each other, so that in transit the messages look like gibberish and only the two devices know how to decipher them; they use these same keys for the duration of their communications.
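
As a rough picture of that shared-key idea, here is a toy Python sketch using the third-party cryptography package. This is not the WPA2 handshake itself, just an illustration of two parties encrypting traffic with one shared session key:

```python
# pip install cryptography
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

key = AESGCM.generate_key(bit_length=128)  # the shared session key
aesgcm = AESGCM(key)

nonce = os.urandom(12)  # a number used once; must NEVER repeat for a given key
ciphertext = aesgcm.encrypt(nonce, b"hello, access point", None)
print(ciphertext)                               # gibberish in transit
print(aesgcm.decrypt(nonce, ciphertext, None))  # b'hello, access point'
```

That nonce comment is the crux of KRACK: by tricking a device into reinstalling an already-used key, the attack resets the associated packet counters, forcing nonce reuse and making the encrypted traffic crackable.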

WPA2 is just a protocol, meaning that it is a series of rules and guidelines that a system must adhere to in order to support it. WPA2 must be implemented in the software of a wireless device in order to be used. Most modern wireless devices support the WPA2 protocol. If you have a device that can connect to eduroam, the wireless network on the UMass Amherst campus, then that device supports WPA2.

This KRACK exploit is a vulnerability in the WPA2 protocol that was discovered by two Belgian researchers. They were able to get WPA2-supporting devices to send the same encrypted information over and over again and crack the key by deciphering known encrypted text content. They were able to get WPA2-supporting Android and Linux devices to reset their WPA2 keys to all zeroes, which made it even easier to crack encrypted content.

The real concern is that this is a vulnerability in the WPA2 protocol itself, not just one implementation of it. Any correct software implementation of WPA2 is vulnerable to this exploit (newsflash: most are). That means essentially all wireless-enabled devices need to be updated to patch this vulnerability. This is especially cumbersome because many internet-of-things devices (think of security webcams and web-connected smart home tools like garage doors) are rarely, if ever, updated. Their software is assumed to just work without needing regular maintenance. All of those devices are vulnerable to attack. This WIRED article addresses the long-term impact that the KRACK exploit may have on the world.

The good news is that many software implementation patches are already available for your most critical devices. UMass Amherst has already updated all of our wireless access points with a patch to protect against the KRACK exploit. Also, with the exception of Android & Linux devices which are vulnerable to key resets, it is not very easy to exploit this vulnerability on most networks. One would need to generally know what they are looking for in order to crack the encryption key, but an attacker may be able to narrow down possibilities with social cues, such as if they see you at Starbucks shopping for shoes on Amazon.

The general takeaway is that you should update all of your wireless devices as soon as possible. If you are interested in learning more about KRACK, how it works on a technical level, and see a demonstration of an attack, check out the researchers’ website.

Use Windows 10 Taskbar for Google Search

Search is a versatile feature in Windows 10. This tool allows you to browse files and programs on your computer, answer basic questions using Cortana (Microsoft’s personal assistant tool), and browse the web. The latter feature is what we will be focusing on in this blog post. By publishing this article, I do not intend to make a statement about which search engine or browser is better. It is simply a way for users to customize their PC so that it aligns with their search preferences.

Browsing the web is one of the most important activities for a modern PC user, but Microsoft restricts web searches in the taskbar to its own search engine, Bing, and will use the Microsoft Edge browser by default for any web links. Many Windows users install Google Chrome or another alternative to Microsoft’s default browser, and the best way for them to search the web from Windows 10 would be with their preferred browser and search engine combo.

This How-To will mainly focus on using the search feature with Google search on Google Chrome. Again, I do not mean this article as an endorsement of one browser / search combo over another, and will specifically reference Google Chrome, because it is the most widely-used browser in the United States, and can re-direct searches using specific extensions not available on other browsers.

Step 1: Change Default Browser

First make sure you have Google Chrome browser installed on your Windows 10 machine.

Next, go to the bottom left and click the windows icon. From here, you can access the Windows search. Type “default” and you should be provided with an icon for “default app settings.” Alternatively, you can open the settings app and navigate to System, then Default Apps.

From here, scroll down to the “Web browser” section, and make sure that Google Chrome is selected.

At this point, any web search through the Windows search feature will open in Google Chrome (or your browser of choice). However, these searches will still be performed using Bing, while the majority of people use Google as their default. Redirecting Bing searches to Google will be handled via a Google Chrome extension in the next step.


Step 2: Download an extension to redirect Bing queries to Google

To re-route searches from Bing to Google in the Windows search bar, you can use a third-party extension, Chrometana. Chrometana will automatically redirect Bing searches to your preferred search engine when you type in a query and click the option that says “see search results.”

That’s it! From now on, any web search in the Windows search bar will open a new Google search in Google Chrome. Hopefully you find this feature useful and that it lets you browse the web the way that works best for you.

Maximizing your Windows 10 Battery Life

Maximize your Windows 10 Battery Life and Reduce your Device Performance, featuring X1 Carbon 2nd Gen.

Recently I was preparing for a trip to a music festival while taking classes over the summer. I knew that I needed to keep up with my courses but I also knew that I wasn’t going to be able to charge my computer’s battery very often, so I decided to write a short article on how you can maximize your computer’s battery life beyond normal power-saving methods.

After this guide you’ll be saving battery like nobody’s business, and your laptop will be significantly less usable than before! Before we get started it’s important that you’re aware of my computer’s specs; depending on your computer’s specifications and application usage, results may vary.

The make of my computer is Lenovo and the model is the X1 Carbon 2nd Gen.

OS: Windows 10 Pro

Version: 1607, build 14393.1198

Processor: Intel Core i5-4300U at 1.90 GHz (Turbo Boost to 2.49 GHz)

RAM: 8.00 GB (7.68 GB usable) DDR3 at 1600 MHz

Hard Drive: 256GB M.2 SSD eDrive Opal capable

Wireless: Intel Dual Band Wireless-AC 7260 (2×2, 802.11ac) with Bluetooth® 4.0

Integrated Lithium Polymer 8-cell (45Wh) RapidCharge battery

Also note that the only application that I was using was Microsoft Edge – to save battery over using Google Chrome.

First, head over to Device Manager (note: you’ll need internet access for this step). It can be opened from the Windows Power User menu by pressing the Windows Key + X. In Device Manager, go through every device and make sure the drivers for each are up to date. This ensures that all of your devices are using the best possible drivers, which are more efficient for your system’s battery; out-of-date drivers can adversely affect your system’s performance as well.

While in Device Manager we’re also going to make a few more changes. Depending on how you use your machine, you may want to adjust these settings to your needs. Click on the “Network adapters” drop-down menu and double click on the Intel Dual Band Wireless-AC (this may be named differently depending on your device’s wireless card). Click over to the Advanced tab and change “Preferred Band” to 5.2 GHz and “Roaming Aggressiveness” to a lower setting (lower is better unless you’re in a congested wireless area). Now click over to the “Power Management” tab and make sure “Allow the computer to turn off this device to save power” is checked. Click the “OK” button and move on to the “Intel Ethernet Connection I218-LM” (also may be different on your device) and double click on it as well. Make sure “Enable PME” is set to enabled, “Energy Efficient Ethernet” is set to on, and “System Idle Power Saver” is set to enabled. After that, navigate to the “Power Management” tab and again make sure “Allow the computer to turn off this device to save power” is checked.

After going through your drivers, head over to the Power & Sleep settings for your laptop. These can be accessed by pressing the Windows key and navigating to Settings -> System (Display, notifications, apps, power) -> Power & Sleep. I’d recommend setting your screen to turn off after at most 5 minutes and setting your computer to sleep after at most 15 minutes. Then navigate to the bottom of that page and click on Additional power settings. This will bring you to your computer’s Power Options.

You may want to switch over to the Power saver plan, which should automatically drop your computer down to a more efficient battery saving mode, but we want to push that even further. Click on “Change plan settings” to make some changes.

Consider changing “Adjust plan brightness” to the minimum usable brightness, as screen brightness is one of the biggest factors in battery saving. For me, keeping the brightness at the minimum possible level was a must to keep my laptop alive. I primarily used the computer in the early morning or late at night so that I could keep the screen at the minimum brightness while still being able to use the laptop.

After changing your brightness to the minimum, click on “Change advanced power settings”. Here’s where you can adjust the fine controls for different hardware and software battery usage. Make sure that the top drop-down menu says “Power saver [Active]” and move on to the main table of items. I would recommend adjusting this to your own personal preferences, but there are a few major settings I would recommend changing in this panel.

In “Desktop background settings” -> “Slide show” I would recommend setting this to paused while on battery power.

In “Wireless Adapter Settings” -> “Power Saving Mode” switch this over to Maximum Power Saving on battery power as well.

In “Sleep” -> “Sleep after”, make sure these are set to the values you chose earlier, around 5 and 15 minutes respectively for On battery and Plugged in. Also make sure “Allow hybrid sleep” is set to off for both options, because hybrid sleep is more taxing on the battery. In “Hibernate after”, set these to slightly higher values than your “Sleep after” values; hibernation conserves more battery than typical sleep. Also set “Allow wake timers” to disabled on battery power. We don’t want anything taking your laptop away from its beauty sleep.

In “Intel CPPC Energy Efficiency Settings” -> “Enable Energy Efficient Optimization”, set this to enabled for both options. Under “Energy Efficiency Aggressiveness”, set both options to 100%.

In “USB settings” -> “USB selective suspend setting” set both of these options to enabled.

In “Intel Graphics Settings” -> “Intel Graphics Power Plan” set both of these options to maximum battery life.

In “PCI Express” -> “Link State Power Management” set both of these options to Maximum power savings.

In “Processor power management” -> “Minimum processor state” set both options to 5%. This is the minimum percentage that your processor will run at. I wouldn’t recommend setting this to below 5% for minimum operation. Also in “System cooling policy” change both options to Passive cooling, which will slow your CPU before slowing your fans. Also in “Maximum processor state” set this to below 100%. I personally set my computer to a maximum of 50%, but depending on your use case, this will vary.

In “Display”, we’ve already covered most of these settings, but find “Enable adaptive brightness” and disable it. We don’t want the system deciding it wants a brighter screen and eating up valuable battery resources.

In “Battery” I would recommend just making sure that hibernation comes on in your “Critical battery action” settings and that your critical battery level is set to around 7%.
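
If you like to keep an eye on this yourself, here is a hedged sketch using the third-party psutil package to read the battery state; the 7% check mirrors the critical level suggested above:

```python
import psutil  # third-party: pip install psutil

batt = psutil.sensors_battery()
if batt is None:
    print("No battery detected (desktop?)")
else:
    state = "plugged in" if batt.power_plugged else "on battery"
    print(f"{batt.percent:.0f}% remaining ({state})")
    if not batt.power_plugged and batt.percent <= 7:
        print("Critical battery: time to hibernate!")
```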

One additional change I made was to lower the screen resolution so the computer isn’t rendering content at the X1’s native 2K. This depends on the machine you are using, however, and your preference for how you want your machine’s screen to look.

Now there are a few things left to change, assuming I haven’t missed anything in Windows 10. For these you’ll want to shut down your computer and enter its BIOS settings. On the X1 Carbon that I was using, this is done by hitting Enter repeatedly after pressing the power button.

BIOS settings interfaces tend to vary dramatically across computers and manufacturers, but for the X1 Carbon that I was working with, it looked something like this:

[Image: a Lenovo BIOS setup screen (not from a Gen 2, but a very similar interface)]

The BIOS I was working with doesn’t recognize mouse or trackpad input, so you’ll likely have to navigate with the arrow keys, Enter, and Escape; bear with me.

Navigate over to the “Config” tab and arrow down to the “USB” option. Make sure “USB UEFI BIOS Support” is enabled, “Always on USB” is disabled, and “USB 3.0 Mode” is set to auto. Now hit Escape and arrow down to the “Power” option. Hit Enter; I would recommend switching all of the settings over to battery-optimized values. For this X1 specifically, make sure “Intel SpeedStep technology” is set to enabled, and that “Mode for AC” and “Mode for Battery” are both set to battery optimized. Also, under “Adaptive Thermal Management”, make sure “Scheme for AC” and “Scheme for Battery” are both set to balanced. Under “CPU Power Management”, make sure this is set to enabled, and make sure “Intel Rapid Start Technology” is set to disabled. After modifying all these settings, hit Escape again.

Depending on your personal use, you can head over to the “> Virtualization” settings and disable the Intel Virtualization and VT-d features, although this may adversely affect performance and prevent operating system virtualization entirely, so use at your discretion.

Thanks for bearing with me until now. You should now have a remarkably effective battery-saving laptop that performs significantly worse than it did before. This worked out great for me for doing course assignments while on a camping trip. I hope it works out well for you too!

Setting Roam Aggression on Windows Computers

What is Wireless Roaming?

Access Points

To understand what roaming is, you first have to know what device makes the software function necessary.

If you are only used to household internet setups, the idea of roaming might be a little strange to think about. In your house you have your router, which you connect to, and that’s all you need to do. You may have the option of choosing between the 2.4GHz and 5GHz bands, but that’s as complicated as it gets.

Now imagine that your house is very large, let’s say the size of UMass Amherst. Your router in your living room, the DuBois Library, might be a little difficult to connect to from all the way up in your bedroom on Orchard Hill. Obviously in this situation one router will never suffice, and so a new component is needed.

An Access Point (AP for short) provides essentially the same function as a router, except that multiple APs used in conjunction project a Wi-Fi network further than a single router ever could. All APs are tied back to a central hub, which you can think of as a very large, powerful modem, which provides the internet signal via cable from the Internet Service Provider (ISP) out to the APs, and then in turn to your device.

On to Roaming

So now that you have your network set up, with your central hub in DuBois (your living room) and an AP in your bedroom (Orchard Hill), what happens when you want to move between the two? The network is the same, but how is your computer supposed to know that the AP in Orchard Hill no longer has the strongest signal when you’re in DuBois? This is where roaming comes in. Based on what ‘aggressiveness’ your WiFi card is set to roam at, your computer will test the connection to determine which AP has the strongest signal from your location, and then connect to it. The network is set up so that it can tell the computer that all the APs are on the same network, allowing your computer to transfer its connection without making you input your credentials every time you move.
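
Conceptually, roam aggressiveness is just a switching threshold. Here is a toy Python model of that decision (the AP names and dBm figures are made up); a lower margin means more aggressive roaming:

```python
def pick_ap(current_ap, visible_aps, margin_db):
    """visible_aps maps AP name -> signal strength in dBm (closer to 0 is stronger)."""
    best = max(visible_aps, key=visible_aps.get)
    if best != current_ap and visible_aps[best] >= visible_aps.get(current_ap, -100) + margin_db:
        return best       # another AP beats ours by enough: roam to it
    return current_ap     # otherwise, stick with the current AP

aps = {"DuBois": -45, "OrchardHill": -80}
print(pick_ap("OrchardHill", aps, margin_db=10))  # -> DuBois
```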

What is Roam Aggressiveness?

The ‘aggressiveness’ with which your computer roams determines how frequently and how likely it is for your computer to switch APs. If you have it set very high, your computer could be jumping between APs frequently. This can be a problem as it can cause your connection to be interrupted frequently as your computer authenticates to another AP. Having the aggressiveness set very low, or disabling it, can cause your computer to ‘stick’ to one AP, making it difficult to move around and maintain a connection. The low roaming aggression is the more frequent problem people run into on large networks like eduroam at UMass. If you are experiencing issues like this, you may want to change the aggressiveness to suit your liking. Here’s how:

How to Change Roam Aggressiveness on Your Device:

First, navigate to the Control Panel which can be found in your Start menu. Then click on Network and Internet.

From there, click on Network and Sharing Center. 

Then, you want to select Wi-Fi next to Connections. Note: You may not have eduroam listed next to Wi-Fi if you are not connected or connected to a different network.

Now, select Properties and agree to continue when prompted for Administrator permissions.

Next, select Configure for your wireless card (your card will differ from the one shown in the image above).

Finally, navigate to Advanced, and then under Property select Roaming Sensitivity Level. From there you can change the Value based on what issue you are trying to address.

And that’s all there is to it! Now that you know how to navigate to the Roaming settings, you can experiment a little to find what works best for you. Depending on your model of computer, you may have more than just High, Middle, Low values.

Changing roaming aggressiveness can be helpful for stationary devices, like desktops, too. Perhaps someone near you has violated UMass’ wireless airspace policy and set up a hotspot network or a wireless printer. Their setup may interfere with the AP closest to you, which can cause packet loss or latency (ping) spikes. You may not even be able to connect for a brief time. Changing roaming settings can help your computer move to the next-best AP while the interference is occurring, resulting in a more continuous experience for you.

Scout’s Honor: Will the Rise of Sabermetrics and Data Replace the Role of Baseball Scouts?

As popularized by the book-turned-movie Moneyball, a large portion of baseball now relies on sabermetrics, a newer analysis of statistics used to gauge players and teams. Where in the past RBIs and batting average were heavily relied on as be-all indicators, newer statistics have emerged as more accurate measures of ability, since the game has changed so much in the 100+ years since its inception. With all these new statistics and measurements, software has been developed over the past 30 years to calculate and simulate different scenarios. If you are curious, Out of the Park Baseball (OOTP) is a yearly video game series in which you manage a baseball team, simulating games and poring over stats just like many front offices do. Alongside the software, there are devices in most ballparks measuring every pitch (called PITCHF/x), and even tracking players on the field, collecting all the data possible for use.
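
To get a feel for these newer rate stats, here is a quick Python sketch contrasting old-school batting average with on-base percentage (OBP), which also credits walks and hit-by-pitches; the stat line below is made up for illustration:

```python
# Batting average ignores walks; OBP counts every time a player reaches base.
def batting_average(hits, at_bats):
    return hits / at_bats

def on_base_percentage(hits, walks, hbp, at_bats, sac_flies):
    # OBP = (H + BB + HBP) / (AB + BB + HBP + SF)
    return (hits + walks + hbp) / (at_bats + walks + hbp + sac_flies)

# A hypothetical season: 150 hits, 70 walks, 5 HBP in 550 at-bats, 5 sac flies.
print(f"AVG: {batting_average(150, 550):.3f}")               # 0.273
print(f"OBP: {on_base_percentage(150, 70, 5, 550, 5):.3f}")  # 0.357
```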

[Image: an example of a spread chart from PITCHF/x]

With all the tools now available to measure just about every fine detail about a player, some teams are cutting back on their scouting staff. Scouts have been essential to baseball teams since the beginning of the sport: looking at prospects, or talent on other teams, measuring them, and seeing if they are of any interest to the team. The Astros just cut eight scouts from their team, so scouts around the league are becoming wary about the future. Some teams still rely heavily on scouts and find them indispensable, with teams like the Brewers using them for players in the lower minor leagues, where there are not enough stats to fully screen players. There is also some information that simply can’t be measured through data and film alone.

[Image: scouts with their radar guns ready]

In a similar vein, all this technology also frees up the scouts and allows them to just watch the game for qualitative factors. Whenever a scout is recording the reading off a radar gun, or writing down the time for a sprint, they aren’t looking at the game. The issue is the same for sports reporters: whenever you’re writing something down, you’re not paying attention to the play as it happens. With the advancement of PITCHF/x and the like, scouts don’t need to spend time doing the busy work of recording numbers that devices can capture automatically. This frees the scouts from tedious tasks and lets them watch players in detail as they play and interact. They can see how other players react to a play not directed at them, see their energy while playing, and observe their general dispositions. Used properly, modern baseball technology might free up and help scouts rather than replace them entirely.

Politweets: The Growing Use of Twitter and Social Media for Politics

Now more than ever, politicians turn to Twitter and other Social Media platforms.

This isn’t anything new. Since the beginning of Twitter, Facebook, and other forms of social media, politicians have increased their presence on the internet to reach out to potential supporters and keep up with constituents. Today, when running a political campaign, it is almost necessary to have a web presence in order to make your name and positions on key issues known to voters. But can there be too much Twitter? Too much of a presence online? And could social media be hurting the game of politics for the future?

(Source: New York Observer, 2016)

During the 2016 election cycle, Twitter became the go-to platform for ranting about and discussing politics with endless users doing the same exact thing. Politicians noticed this and ran with it, tweeting non-stop and even directly at their political opponents. Most notably, the three main candidates, Donald Trump, Hillary Clinton, and Bernie Sanders, all used Twitter heavily to rally support, attack each other, and tweet their stances on many issues. Even our previous president, Barack Obama, has an active web presence, logging 95 million followers as the most-followed notable politician worldwide.

Politicians like President Trump and Hillary Clinton have taken politics on Twitter to the next level. Not only would they tweet about the latest story to rile up their respective sides, but they would also use the platform to directly mud-sling each other. Whether it takes the form of Trump’s long rants about “Crooked Hillary” or Hillary’s simple but effective “Delete your account” response, these tweets start a flood of supporters from each side going at it in the replies and in the Twitter universe in general.

As the British newspaper The Guardian points out, Twitter is relatively small in the political sphere, mainly used by politicians’ keenest and sturdiest supporters to help push an agenda; politicians use Twitter to start a discussion and get into the mainstream news on TV, in newspapers, in magazines, and even on other websites. Their sometimes outrageous claims and tweets make it onto all of these platforms, furthering discussion of their agendas and somehow still making it into the minds of people who don’t even use social media at all.

This trend isn’t limited to twitter, as this carries over to Facebook and even YouTube as well. Facebook has also become a hot-bed for political debates and agenda-pushing. Despite the negative stigma around social media and politics, it seems to be working. According to the Pew Research Center, “one-in-five social media users have changed their minds about a political issue or about a candidate for office, because of something they saw on social media”. 

That number is astounding. A simple post supporting one candidate, one policy, or one movement could have a huge ripple effect. Theoretically, if someone has 500 Facebook friends/Twitter followers and they make a post concerning a topic of political discussion or supporting a candidate, a good portion of those followers would see that post. Say 100 people out of the 500 see the post; 20 of those 100 might change their minds on an issue or candidate.
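
Worked out as back-of-envelope arithmetic (every rate here is an assumption for illustration):

```python
followers = 500          # hypothetical friends/followers
see_rate = 0.20          # assume 1 in 5 followers actually sees the post
persuasion_rate = 0.20   # Pew: about one-in-five users changed their minds

viewers = followers * see_rate          # 100 people see the post
persuaded = viewers * persuasion_rate   # about 20 could change their minds
print(f"{persuaded:.0f} of {followers} followers potentially persuaded")
```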

Whether you like it or not, politicians’ use of social media to further the political discussion is working, and it is here to stay. President Trump will continue to push his agenda; his opponents and supporters will continue to spread their beliefs across the platform, and tweets by any politician will filter through the world of social media into everyday news outlets. This trend is only expected to grow in coming elections and years, and it could potentially either help or hurt the political sphere as a whole.

For more reading, you could visit either of these sources based around the political discussion that I used to research this article:

1. https://www.theguardian.com/technology/2016/jul/31/trash-talk-how-twitter-is-shaping-the-new-politics

2. The tone of social media discussions around politics

The Scroll Lock Key, A Brief History

Long ago, in the year 1981, IBM released its original PC keyboard. It came with a set of 83 keys, which have since become industry standard. However, the keyboard has evolved much over the 36 years since its creation: most have changed key placements, adding, combining, or removing keys to meet evolving technological needs.

But there is one key that has managed to hang on through history, despite most people not knowing what it even does. Today, we explore the Scroll Lock key.

[Image: the Scroll Lock key]

The Original Purpose

Back when the Scroll Lock key was first invented, mice and graphical operating systems were not yet mainstream like they are today. Today, when typing documents, we can use our mouse to point and click to move the typing cursor. Back then, the arrow keys were used to either move the typing cursor or scroll the page. Toggling the Scroll Lock key would disable scrolling with the arrow keys and allow you to move your typing cursor through the page.

But mice are widespread now, so why is the key still there?


Uses
There are two very popular uses for the Scroll Lock key today:

Microsoft Excel
In Excel, the arrow keys navigate between cells by default. However, when Scroll Lock is toggled, the arrow keys instead scroll the entire spreadsheet vertically or horizontally. This allows more advanced users to keep both hands on the keyboard at all times, decreasing the time it takes to use the spreadsheet.

“Free” Key
Another popular use for the Scroll Lock key is as a “free” key. This means that people remap the key to perform other functions and macros. For example, if I wanted to open a New Incognito Window in Google Chrome, I could hit CTRL+SHIFT+N, or I could remap the whole shortcut to Scroll Lock and have it done in one press.
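
Here is a hedged sketch of that remapping idea in Python, using the third-party keyboard package (pip install keyboard); key names and required permissions vary by operating system, so treat it as illustrative:

```python
import keyboard  # third-party: pip install keyboard

# One press of Scroll Lock now sends Chrome's Ctrl+Shift+N shortcut.
keyboard.add_hotkey("scroll lock", lambda: keyboard.send("ctrl+shift+n"))

keyboard.wait()  # keep the script running so the remap stays active
```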

The Scroll Lock key is a vestige of an older time that has remained a standard since the dawn of the keyboard, and managed to carve out relevance, staying useful long after its original purpose expired. In a world of constantly evolving technology, where many feel the need to update their skill-sets to fit new fads and trends, we can all learn a lot from the Scroll Lock key, finding new and exciting ways to apply our talents.

RRAM: A Retrospective Analysis of the Future of Memory

Mechanisms of Memory

Since the dawn of digital computation, the machine has only known one language: binary.  This strange concoction of language and math has existed physically in many forms since the beginning.  In its simplest form, binary represents numerical values using only two digits, 1 and 0.  This makes mathematical operations very easy to perform with switches.  It also makes it very easy to store information in a very compact manner.
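
For a quick feel of that two-value representation, here is the same number written three ways in Python:

```python
print(bin(42))           # '0b101010'  -> integer to binary string
print(0b101010)          # 42         -> binary literal back to an integer
print(int("101010", 2))  # 42         -> parse a binary string
```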

Early iterations of data storage employed some very creative thinking and some strange properties of materials.


IBM 80-Column Punch Card

One of the older (and simpler) methods of storing computer information was on punch cards.  As the name suggests, punch cards would have sections punched out to indicate different values.  Punch cards allowed for the storage of binary as well as decimal and character values.  However, they had an extremely low capacity, occupied a lot of space, and were subject to rapid degradation.  For these reasons, punch cards were phased out along with black-and-white TV and drive-in movie theaters.

Macroscopic Image of Ferrite Memory Cores

Digital machines had the potential to view and store data using far less intuitive methods.  The king of digital memory from the 1960s into the mid-to-late ’70s was magnetic core memory.  By far one of the prettiest things ever made for the computer, this form of memory was constructed from a lattice of interconnected ferrite beads.  These beads could be magnetized momentarily when a current of electricity passed near them.  Upon demagnetizing, they would induce a current in nearby wire.  This current could be used to measure the binary value stored in that bead: current flowing = 1, no current = 0.

Even more peculiar was the delay-line memory used in the 1960s.  Though occasionally implemented on a large scale, delay-line units were primarily used in smaller computers, as there is no way they were even remotely reliable…  Data was stored in the form of pulsing twists through a long coil of wire.  This meant that data could be corrupted if one of your fellow computer scientists slammed the door to the laboratory or dropped his pocket protector near the computer.  It also meant that the data in the coil had to be constantly read and refreshed every time the twists traveled all the way through the coil, which, as anyone who has ever played with a spring before knows, does not take long.

Delay-Line Memory from the 1960s

This issue of constant refreshing may seem like an issue of days past, but DDR memory, the kind that is used in modern computers, also has to do this.  DDR actually stands for double data rate, referring to the way data is transferred on both the rising and falling edges of every clock cycle; meanwhile, the underlying DRAM cells must be constantly refreshed, which reduces the amount of useful work per clock cycle that a DDR memory unit can do.  Furthermore, only 64 bits of the 72-bit DIMM connection used for ECC DDR memory actually carry data (the rest are for Hamming error correction).  So a good chunk of the work that DDR memory does is overhead, and it is still unreliable enough that we dedicate a whole 8 bits to error correction; perhaps this explains why most computers now come with three levels of cache memory whose sole purpose is to guess what data the processor will need, in the hopes of reducing the processor’s need to access the RAM.

DDR Memory Chip on a Modern RAM Stick

Even SRAM (the faster and more stable kind of memory used in cache) is not perfect, and it is extremely expensive.  A MB of data on a RAM stick will run you about one cent, while a MB of cache can cost as much as $10.  What if there were a better way of making memory, one more similar to those ferrite cores I mentioned earlier?  What if this new form of memory could also be written and read at speeds orders of magnitude greater than DDR RAM or SRAM cache?  What if this new memory also shared characteristics with human memory and neurons?


Enter: Memristors and Resistive Memory

As silicon-based transistor technology looks to be slowing down, there is something new on the horizon: resistive RAM.  The idea is simple: there are materials out there whose electrical properties can be changed by having a voltage applied to them.  When the voltage is taken away, these materials are changed and that change can be measured.  Here’s the important part: when an equal but opposite voltage is applied, the change is reversed and that reversal can also be measured.  Sounds like something we learned about earlier…

The change that takes place in these magic materials is in their resistivity.  After the voltage is applied, the extent to which these materials resist a current of electricity changes.  This change can be measured, and therefore binary data can be stored.

A Microscopic Image of a Series of Memristors

Also at play in the coming resistive memory revolution is speed.  Every transistor ever made is subject to something called propagation delay: the amount of time required for a signal to traverse the transistor.  As transistors get smaller and smaller, this time is reduced.  However, transistors cannot get very much smaller because of quantum uncertainty in position: a switch is no use if the thing you are trying to switch on and off can just teleport past the switch.  This is the kind of behavior common among very small transistors.

Because the memristor does not use any kind of transistor, we could see near-speed-of-light propagation delays.  This means resistive RAM could be faster than DDR RAM, faster than cache, and someday maybe even faster than the registers inside the CPU.

There is one more interesting aspect here.  Memristors also have a tendency to “remember” data long after it has been erased and overwritten.  Modern memory does this too, but because the resistance of the memristor changes, large arrays of memristors could develop sections with lower resistance due to frequent accessing and overwriting.  This behavior is very similar to the human brain: memory that’s accessed a lot tends to be easy to… well… remember.

Resistive RAM looks to be, at the very least, a part of the far-reaching future of computing.  One day we might have computers which can not only recall information with near-zero latency, but possibly even know the information we’re looking for before we request it.

5 Microsoft OneNote Features that Make You a Productivity Machine

You may or may not know that all UMass students get access to Microsoft Office 365 for free! Sign up is super simple and can be found here: https://www.umass.edu/it/software/microsoft-office-365-education

Microsoft OneNote is a versatile note taking software that has transformed the way I participate in class and take notes. Maybe it can do the same for you!

Here are some of its features:

  1. Sync Notes on All Devices – Notes you take in class on your computer can appear on your phone and iPad almost instantly, and the other way around!  There is no need to worry about your laptop dying half way through class if you can pull your tablet out and continue right where you left off! To make it better, you don’t even need the app.  OneNote has a web browser version as well!  Now you can access and add to your notes on ANY device by logging in with your UMass account.  Study sessions can happen anywhere at any time.
  2. Complete worksheets and Syllabi Digitally – I present to you now my favorite feature of Microsoft OneNote: Insert File PrintOut.  Any assignments posted on Moodle can be inserted directly into your OneNote notebook, next to your notes, and completed right on your computer. Then you can print out the completed sheet.  Or how about putting the class syllabus and assignment schedule right in the front of your digital notes.  No more clogging up your downloads folder with ClassSyllabus(8).pdf!
  3. Hand Write Notes – Many laptops are touch and stylus enabled!  Digital notes are often criticized because studies show that handwriting information is a superior way to commit it to memory.  If you have a Microsoft Surface, an HP Spectre, a Lenovo ThinkPad or YogaBook, or a number of other models, you might be able to handwrite notes directly into OneNote!
  4. Create To-Do Lists Right Next to Today’s Class Notes – Among the one million other ways OneNote lets you format your info is a to-do list.  After taking your class notes, make a home work to-do list right where you leave off.
  5. Share your notes with others – Finally, sharing your notes, to-do lists, work sheets, or even entire notebooks is super easy.  You can email specific pages, invite other OneNote Users to collaborate on the same page, and take screenshots and share them quickly, with no hassle!


I was always a traditional pen and paper student until I found Microsoft OneNote. Now all of my notes are taken either by typing or handwriting with my laptop’s stylus, and I can access them quickly on my phone, iPad, or any web browser.

Anti-Virus on Linux

Do I even need one?

Linux has many benefits that make people want to use it as their main operating system. One of these benefits is strong security. This security mostly stems from the fact that programs are typically run as a user instead of as root (admin), so the damage a malicious program can do is somewhat limited. It also stems from Linux’s very nature: it’s an open-source operating system to which many people contribute their time to improve, and packages are not rushed out by a central corporate authority before they are truly finished. Linux is not often targeted with malicious programs, and the average user will likely never encounter one during their Linux use. Nevertheless, having an anti-virus that can scan both your Linux OS and a Windows installation, among other things, can be very useful.

What else?

While you yourself may not encounter malicious programs that will affect your Linux machine, you could encounter ones that could affect others’ machines. To that end, some anti-virus programs support scanning Windows based machines (as well as others on the same network), scanning E-mail attachments before you forward/send them to others, and any other files that you plan on sharing otherwise.

Okay, so what do I use?

There are many anti-virus programs available through whatever package manager you may be using. Some popular ones include:

  • ClamAV
  • AVG Antivirus
  • Avast! Linux Home Edition
  • Comodo Antivirus for Linux
  • BitDefender Antivirus

Installing these programs is very straightforward: just go to your package manager and install them. Alternatively, you can refer to their respective websites and use the terminal.
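
Once installed, most of these scanners can also be driven from the terminal or from scripts. As a hedged sketch (assuming ClamAV’s clamscan command is installed; the path is just an example), here is how you might kick off a scan from Python:

```python
import subprocess

result = subprocess.run(
    ["clamscan", "-r", "--infected", "/home"],  # -r: recurse, --infected: list hits only
    capture_output=True, text=True,
)
print(result.stdout)
# clamscan exit codes: 0 = clean, 1 = virus found, 2 = an error occurred
print("Infected files found!" if result.returncode == 1 else "No threats detected.")
```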

It should be noted that some anti-virus programs on Linux do not have a GUI (graphical user interface), so they must be accessed through terminal commands. When choosing an anti-virus program, make sure you’re choosing one with a user interface that you’re comfortable with.

You should now be well on your way to improving the security of both your system and those of the people around you. Farewell and browse safely!

What is S.M.A.R.T?

Have you ever thought your computer might be dying, but you didn’t know what was wrong? Symptoms people might be familiar with include slowing down, increased startup time, programs freezing, constant disk usage, and audible clicking. While these symptoms happen to a lot of people, they don’t necessarily mean the hard drive is circling the drain. With a practically unlimited number of other things that could make the computer slow down and become unusable, how are you supposed to find out exactly what the problem is? Fortunately, the most common part to fail in a computer, the hard drive (or data drive), has built-in testing technology that users can employ to diagnose their machines without handing over big bucks to a computer repair store, or having to buy an entire new computer if theirs is out of warranty.

Enter SMART (Self-Monitoring, Analysis and Reporting Technology). SMART is a monitoring suite that checks computer drives for a list of parameters that may indicate drive failure. SMART collects and stores data about the drive, including errors, failures, times to spin up, reallocated sectors, and read/write abilities. While many of these attributes may be confusing in definition and even more confusing in their recorded numerical values, SMART software can predict a drive failure and even notify the user that it has detected a failing drive. The user can then look at the results to verify or, if unsure, bring the machine to a computer repair store for verification and a drive replacement.

So how does one get access to SMART? Many computers include built-in diagnostic suites that can be accessed via a boot option when the computer first turns on. Other manufacturers require that you download an application within your operating system that can run a diagnostic test. These diagnostic suites will usually check the SMART status, and if the drive is in fact failing, report that the drive is failing or has failed. However, most of these manufacturer diagnostics will simply say passed or failed; if you want access to the specific SMART data, you will have to use a Windows program such as CrystalDiskInfo, a Linux program such as GSmartControl, or SMART Utility for Mac OS.
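
On Linux (and on a Mac with smartmontools installed), the same SMART data is also available from the command line. Here is a hedged sketch wrapping the smartctl tool from Python; /dev/sda is an example device path, and reading it usually requires root privileges:

```python
import subprocess

result = subprocess.run(
    ["smartctl", "-H", "-A", "/dev/sda"],  # -H: overall health, -A: attribute table
    capture_output=True, text=True,
)
print(result.stdout)
```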

These SMART monitoring programs are intelligent enough to detect when a drive is failing and give you ample time to back up your data. Remember: computer parts can always be replaced; lost data is lost forever. However, it should be noted that SMART doesn’t always detect when a drive fails. If a drive suffers a catastrophic failure, like a physical drop or water damage, SMART cannot predict it, and the manufacturer is not at fault. Therefore, while SMART is a good tool for assessing whether a drive is healthy, it is strongest when used in tandem with a good, reliable backup system, not as a standalone protection against data loss.

Multiple Desktops in Windows 10

The concept of using multiple desktops isn’t new. Apple incorporated this feature back in 2007 starting with OS X 10.5 Leopard in the form of Spaces, allowing users to have up to 16 desktops at once. Since then, PC users have wondered if/when Microsoft would follow suit. Now, almost a decade later, they finally have.

Having more than one desktop allows you to separate your open windows into different groups and only focus on one group at a time. This makes it much easier to juggle working on multiple projects at once, giving each one a dedicated desktop. It’s also useful for keeping any distractions out of sight as you try to get your work done, while letting you easily shift into break mode at any time.

If you own a Windows computer and didn’t know about multiple desktops, you’re not alone! Microsoft didn’t include the feature natively until Windows 10, and even then they did it quietly with virtually no advertising for it at all. Here’s a quick guide on how to get started.

To access the desktops interface, simply hold the Windows Key and then press Tab. This will bring you to a page which lists the windows you currently have open. It will look something like this:

Here, you can see that I’ve got a few different tasks open. I’m trying to work on my art in MS Paint, but I keep getting distracted by YouTube videos and Moodle assignments. To make things a little easier, I can create a second desktop and divide these tasks up to focus on one at a time.

To create a new desktop, click the New desktop button in the bottom right corner of this screen. You will see the list of open desktops shown at the bottom:

Now you can see I have a clean slate on Desktop 2 to do whatever I want. You can select which desktop to enter by clicking on it. Once you are in a desktop, you can open up new pages there and it will only be open in that desktop. You can also move pages that are already open from one desktop to another. Let’s move my MS Paint window over to Desktop 2.

On the desktops interface, hovering over a desktop will bring up the list of open windows on that desktop. So, since I want to move a page from Desktop 1 to Desktop 2, I hover over Desktop 1 so I can see the MS Paint window. To move pages around, simply click and drag them to the desired desktop.

I dragged my MS Paint window over from Desktop 1 to Desktop 2. Now, when I open up Desktop 2, the only page I see is my beautiful artwork.

Finally, I can work on my art in peace without distractions! And if I decide I need a break and want to watch some YouTube videos, all I have to do is press Windows+Tab and select Desktop 1 where YouTube is already open.
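Once you're comfortable with the interface, Windows 10 also includes keyboard shortcuts for all of these actions, which make juggling desktops even faster:

  • Windows+Tab: open the desktops interface (Task View)
  • Windows+Ctrl+D: create a new desktop and switch to it
  • Windows+Ctrl+Left/Right: switch to the adjacent desktop
  • Windows+Ctrl+F4: close the current desktop (its open windows move to the next one)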

If you’re still looking for a reason to upgrade to Windows 10, this could be the one. The feature really is super useful once you get the hang of it and figure out how to best use it for your needs. My only complaint is that we don’t have the ability to rename desktops, but this is minor and I’m sure it will be added in a future update.

 

An Introduction to Discord: the Latest and Greatest in VoIP for Gamers

PC gaming continues to grow annually as one of the primary platforms for gamers to enjoy their favorite titles. E-Sports (think MLB/NFL/NBA/NHL-level skills, commentary, and viewership, but for video games) also continue to grow, creating a generation of hyper-competitive gamers all vying to rise above the rest. Throughout the history of PC gaming, players have used a variety of voice communication programs to communicate with their teammates. Skype, Mumble, Ventrilo, and Teamspeak are just a few of the clients still used today, but in late 2015, a new challenger appeared: Discord!

You heard them. It’s time to ditch Skype and Teamspeak!

Discord was created to serve as a VoIP platform that can host many users at a time for voice, text, image, and file sharing. It's the perfect solution for users looking for a voice chat program that is easy to use, resource-light, and capable of just about anything.

Here's what Discord looks like once you're logged in. In the center of the screen, users can use Discord like they would any typical messenger program to send files, links, texts, images, and videos. Slightly to the left, you can connect to channels to communicate with others over voice chat.

Traveling even further to our left is a list of Discord servers you can join. These are specific groups of channels that you typically have to be invited to, usually filled with members of various online communities. It's a great way to chat with people who share similar interests! Many subreddits and YouTube communities have dedicated Discord servers.

Discord's popularity is exploding, with over 45 million users as of May 2017. Its ability to provide these services in an easy-to-use (and free!) platform, where others have failed in the past, makes it a strong contender for the best VoIP program to date. It even boasts fairly robust security features, such as requiring you to confirm a login via email every time you try to log in to Discord from a new IP address.

To get started, head on over to https://discord.gg to sign up. Discord is also available as a client application for desktop machines, as well as for mobile devices running iOS and Android.

 

My Top 5 Google Chrome Extensions

Google Chrome extensions are like apps for your phone, except they're for your browser. Each extension adds a specific piece of functionality to Chrome. In this article I will go over the top five extensions that I find myself using the most.

Imagus.
Many websites such as Reddit and Twitter make it very hard to see pictures without clicking on them; this is where Imagus comes in. Imagus is an extension that makes it easier to see pictures that are too small or cropped due to the layout of the website. When you move your cursor over an image, Imagus opens it up to full size next to the cursor, which makes it much easier to see. Not only that, Imagus lets you keep the image open without keeping your cursor on it: simply hit Enter. To make it disappear, hit Enter again. Check it out here: https://goo.gl/dm1Q4d.

Magic Actions for Youtube.
Magic Actions adds a lot of much-needed features to an already great site: YouTube. Magic Actions adds the ability to fullscreen a video within a tab, something I constantly find myself doing. It also allows YouTube to be switched to a dark mode, and lets users take quick screenshots of YouTube videos. Check it out here: https://goo.gl/jPHA7f.

Grammarly.
Writing can be hard, especially when many websites don't have a built-in grammar and spell checker. This is where Grammarly comes in. Grammarly brings a spell checker to every text box on the internet. Not only that, Grammarly can also catch less obvious errors, such as a missing comma or a misplaced modifier. Check it out here: https://goo.gl/kUSVvZ.

Tab for a Cause.
Almost everyone wants to help those in need, but it can often be financially difficult to give money to charity. Tab for a Cause makes it easy to help out. Simply enable the extension, and Tab for a Cause becomes the screen that appears every time you open a new tab. On that screen is a small ad that generates revenue for charity, so every new tab you open raises money. If you are like me and constantly open tabs, you will raise a lot of money for charity by simply browsing the web. Check it out here: https://goo.gl/sSqhWQ.

goo.gl URL Shortener.
Almost every day I copy and paste a URL, whether it is to send to someone, to put in a document, or to save for later. The problem with standard URLs is that they are often long and not very pretty to look at. goo.gl URL Shortener makes it easy to use Google's URL-shortening service with one click on the icon at the top of Google Chrome. A shortened URL looks like https://goo.gl/B8J7I5 and can be generated for any web page. In fact, I've been using it for every link so far. So check it out here: https://goo.gl/DUrXQ.
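Under the hood, the extension is just calling Google's URL Shortener web API. If you'd rather script it yourself, here is a rough sketch against the v1 endpoint as it was documented at the time of writing; the API key is a placeholder you would get from the Google Developers Console:

    # Rough sketch: shorten a URL with Google's URL Shortener API (v1).
    # YOUR_API_KEY is a placeholder; requests is a third-party library.
    import requests

    API_KEY = "YOUR_API_KEY"

    def shorten(long_url):
        resp = requests.post(
            "https://www.googleapis.com/urlshortener/v1/url",
            params={"key": API_KEY},
            json={"longUrl": long_url},
        )
        resp.raise_for_status()
        return resp.json()["id"]  # the goo.gl short URL

    print(shorten("https://www.umass.edu/it/"))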

Welcome Class of 2021!

We at IT User Services would like to extend a warm welcome to all new and returning students!

As you learn and re-learn your way around campus your first month back, many of you will become acquainted with the technology and resources available to UMass students.

We at IT are here to enable your success by making technology the last thing on your mind while you make a home here at UMass, and begin or resume your studies. If you need us (or rather, when), we will be there to answer your questions, remove your malware, and fix your computer. The Help Center, the campus mothership for tech support, is located in room A109 of the Lederle Graduate Research Center (the cream-colored low-rise located across the street from the Northeast Residential Area). The Help Center is open from 8:30AM to 4:45PM Monday through Friday. We have extended service hours at the Technical Support desk in the Learning Commons. Our consultants are available for assistance there as late as midnight, depending on Library hours.


Transit by Wire – Automating New York’s Aging Subways

When I left New York in January, the city was in high spirits about its extensive subway system. After almost 50 years of construction, and almost 100 years of planning, the shiny, new Second Avenue subway line had finally been completed, bringing direct subway access to one of the few remaining underserved areas in Manhattan. The city rallied around the achievement. I myself stood with fellow elated riders as the first Q train pulled out of the 96th Street station, Governor Andrew Cuomo's voice crackling over the train's PA system assuring riders that he was not driving the train.

In a rather ironic twist of fate, the brand-new line was plagued, on its first ever trip, by an issue that has been affecting the entire subway system since its inception: the ever-present subway delay.

A small group of transit workers gathered in the tunnel in front of the stalled train to investigate a stubborn signal.  The signal was seeing its first ever train, yet its red light seemed as though it had been petrified by 100 years of 24-hour operation, just like the rest of them.

Track workers examine malfunctioning signal on Second Avenue Line

When I returned to New York to participate in a summer internship at an engineering firm near Wall Street, the subway seemed to be falling apart. Having lived in the city for almost 20 years and dealt with frequent subway delays on my daily commute to high school, I had no reason to believe my commute to work would be any better… or any worse. However, I started to see things I had never seen before: stations at rush hour with no arriving trains queued on the station's countdown clock, trains so packed that not a single person could board any car, and new conductors whose sole purpose was to signal to the train engineers when it was safe to close the doors, because platforms had become too consistently crowded to reliably see down.

At first, I was convinced I was imagining all of this. I had been living in the wide-open and sparsely populated suburbs of Massachusetts, and maybe I had simply forgotten the hustle and bustle of the city. After all, the daily ridership of the New York subway is roughly double the entire population of Massachusetts. However, I soon learned that the New York Times had been cataloging the recent and rapid decline of the city's subway. In February, the Times reported a massive jump in the number of train delays, from 28,000 per month in 2012 up to 70,000 per month at the time of publication.

What on earth had happened? Some New Yorkers have been quick to blame Mayor Bill de Blasio. However, the Metropolitan Transportation Authority, the entity which owns and operates the city subway, is controlled by the state and thus falls under the jurisdiction of Governor Andrew Cuomo. Then again, it's not really Mr. Cuomo's fault either. In fact, it's no one person's fault at all! The subway has been dealt a dangerous cocktail of severe overcrowding and rapidly aging infrastructure.

 

Thinking Gears that Run the Trains

Anyone with an interest in early computer technology is undoubtedly familiar with the mechanical computer. Before Claude Shannon showed how electronic circuitry could process information in binary, all we had to process information were large arrays of gears, springs, and primitive analog circuits, finely tuned to complete very specific tasks. Some smaller mechanical computers could be found aboard fighter jets to help pilots compute projectile trajectories. If you saw The Imitation Game, you may recall the large electromechanical machine Alan Turing built to decode encrypted radio transmissions during the Second World War.

Interlocking machine similar to that used in the NYC subway

New York's subway had one of these big, mechanical monsters after the turn of the century; in fact, New York still has it. Its name is the interlocking machine, and its job is simple: make sure two subway trains never end up in the same place at the same time. Yes, this big, bombastic hunk of metal is all that stands between the train dispatchers and utter chaos. Its worn metal handles are connected directly to signals, track switches, and little levers designed to trip the emergency brakes of trains that roll past red lights.

The logic followed by the interlocking machine is about as complex as engineers could make it in 1904:

  • Sections of track are divided into blocks, each with a signal and an emergency brake-trip at its entrance.
  • When a train enters a block, a mechanical switch is triggered, and the interlocking machine turns the signal at the entrance of the block red and activates the brake-trip.
  • After the train leaves the block, the interlocking machine switches the track signal back to green and deactivates the brake-trip.

Essentially a very large finite-state machine, the interlocking machine was revolutionary back at the turn of the century. Back then, however, some things were also acting in the machine's favor; for instance, there were only three and a half million people living in New York at the time, they were all only five feet tall, and the machine was brand new.
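To make the finite-state-machine analogy concrete, here is a toy sketch in Python of the block logic described above. This is purely illustrative; the real machine implements the same rules in gears and levers, not software:

    # Toy model of fixed-block signaling: one train per block,
    # with a red signal and an armed brake-trip behind an occupied block.
    class Block:
        def __init__(self, name):
            self.name = name
            self.occupied = False
            self.signal = "green"
            self.brake_trip = False  # stops any train that rolls past a red light

        def train_enters(self):
            self.occupied = True
            self.signal = "red"
            self.brake_trip = True

        def train_leaves(self):
            self.occupied = False
            self.signal = "green"
            self.brake_trip = False

    track = [Block("Block %d" % i) for i in range(3)]
    track[0].train_enters()                  # a train rolls into the first block
    print(track[0].signal)                   # "red": no second train may enter
    track[0].train_leaves()                  # the train moves up a block...
    track[1].train_enters()
    print(track[0].signal, track[1].signal)  # "green red"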

As time moved on, the machine aged, and so too did the society around it. After the Second World War, we replaced the bumbling network of railroads with an even more extensive network of interstate highways. The train signal block, occupied by only one train at a time, was replaced by a simpler mechanism: the speed limit.

However, the MTA and the New York subways have lagged behind. The speed and frequency of train service remain limited by how many train blocks were physically built into the interlocking machines (yes, in full disclosure, there is more than one interlocking machine, but they all share the same principles of operation). This has made it extraordinarily difficult for the MTA to improve train service; all the MTA can do is maintain the aging infrastructure. The closest thing the MTA has to a system-wide software update is a lot of WD-40.

 

Full-Steam Ahead

There is an exception to the constant swath of delays… two, actually. In the 1990s, and then again recently, the MTA did yank the old signals and interlocking machines from two subway lines and replace them with a fully automated fleet of trains, controlled remotely by a digital computer. In an odd twist of fate, the subway evolved straight from its nineteenth-century roots to Elon Musk's age of self-driving vehicles.

The two lines selected were easy targets: both serve large swaths of suburb in Brooklyn and Queens, and both are two-track lines, meaning they have no express service. This made the switch to automated trains easy and very effective for moving large numbers of New Yorkers. And the switch was effective! Of all the lines in New York, the two automated lines have seen the smallest reduction in on-time train service. The big switch also had some more proactive benefits, like the addition of accurate countdown clocks in stations, a smoother train ride (especially when stopping and taking off), and the ability for train engineers to play Angry Birds during their shifts (yes, I have seen this).

The first to receive the update was the city's then-obscure L line. The L is one of only two trains to traverse the width of Manhattan Island and is the transportation backbone for many popular neighborhoods in Brooklyn. In recent years, these neighborhoods have seen a spike in population due, in part, to frequent and reliable train service.

L train at its terminal station in Canarsie, Brooklyn

The contrast between the automated lines and the gear-box-controlled lines is astounding.  A patron of the subway can stand on a train platform waiting for an A or C train for half an hour… or they could stand on another platform and see two L trains at once on the same stretch of track.

The C line runs the oldest trains in the system, most of them over 50 years old.

The city also elected to upgrade the 7 line, the only other line in the city to traverse the width of Manhattan and one of only two main lines to run through the center of Queens. Work on the 7 is set to finish soon, and the results look promising.

Unfortunately for the rest of the city's system, the switch to automatic train control for those two lines was neither cheap nor quick. In 2005, it was estimated that a system-wide transition to computer-controlled trains would not be completed until 2045. Some other cities, most notably London, made the switch to automated trains years ago. It is tough to say why New York has lagged behind, but it most likely has to do with the immense ridership of the New York system.

New York is the largest American city by population and by land area. This makes other forms of transportation far less viable when traveling through the city. After the public image of highways in the city was ruined in the 1960s by the destruction of large swaths of the South Bronx, many of the city's neighborhoods have been left nearly inaccessible by car. Although New York is a very walkable city, its massive size makes commuting by foot from the suburbs to Manhattan impractical as well. Thus the subways must run every day and for every hour of the day. If the city wants to shut down a line to do repairs, it often can't. Oftentimes, lines are only closed for repairs on nights and weekends, and only for a few hours.

 

Worth the Wait?

Even though it may take years for the subway to upgrade its signals, the city has no other option. As discussed earlier, the interlocking machine can only support so many trains on a given length of track. On the automated lines, transponders are placed every 500 feet, supporting many more trains on the same length of track. Trains can also be stopped instantly instead of having to travel to the next red-signaled block. With the number of derailments and stalled trains climbing, this unique ability of the remote-controlled trains is invaluable. Additionally, automated trains running on four-track lines with express service could re-route instantly to adjacent tracks to completely bypass stalled trains. Optimization algorithms could be implemented to maintain a constant and dynamic flow of trains. Trains could be controlled more precisely during acceleration and braking to conserve power and prolong the life of the train.

For the average New Yorker, these changes would mean shorter wait times, less frequent train delays, and a smoother, more pleasant ride. In the long term, the MTA would most likely save millions of dollars in repair costs without the clunky interlocking machines. New Yorkers would also save entire lifetimes' worth of time on their commutes. The cost may be high, but unless the antiquated interlocking machines are put to rest, New York will be paying for it every day.

Cross-Platform Learning: Opinion

Last semester, my Moodle looked a little barren. Only two of my classes actually had Moodle pages. This would be okay if only two of my classes had websites. But all of them did. In fact, most of the classes I took had multiple websites that I was expected to check, memorize, and be a part of throughout the semester. This is the story of how I kept up with:

  1. courses.umass.edu
  2. people.umass.edu
  3. moodle.umass.edu
  4. owl.oit.umass.edu
  5. piazza.com
  6. Flat World Learn On
  7. SimNet
  8. TopHat
  9. Investopedia
  10. Class Capture

 

The Beginning

At the beginning of the semester it was impossible to make a calendar. My syllabi (which weren't given out in class) were difficult to find. Because I didn't have a syllabus from which I could get the link to the teacher's page, I had to remember the individual links to each professor's class. This was a total waste of my time. I couldn't just give up, either, because the syllabus is where the class textbook was listed. I felt trapped by the learning curve of new URLs being slung at me. I had moments where I questioned my ability to use computers. Was I so bad that I couldn't handle a few new websites? Had technology already left me in the past?


The Semester

One of the classes I am taking is on technology integration into various parts of your life. The class is an introductory business class with a tech focus. This class is the biggest culprit of too many websites. For homework we need website A, for class we use website B, for lab we use website C, the tests are based on the information from website D, and everything is poorly managed by website E.

Another class is entirely a pen-on-paper note-taking class. In the middle of lecture, my professor will reference something on the website and then quickly go back to dictating notes. Reflecting on it, this teacher had a method of using online resources that I enjoyed. Everything I needed to learn for the tests was given to me in class, and if I didn't understand a concept, there was in-depth help on the website.

One class has updates on Moodle that just direct me toward the online OWL course. This wasn't terrible. I am OK with classes that give me a Moodle dashboard so I have one place to start my search for homework and textbooks. The OWL course also included the textbook. This was really nice: one-stop shopping for one class.

My last class (I know, I am a slacker that only took 4 classes this semester) never used its online resource, which meant I never got practice using it. This was a problem when I finally needed to.


The End

I got over the learning curve of the 10 websites for the 4 classes I was taking. But next semester I will just have to go through the same thing. I wish that professors at UMass all had a Moodle page that would at least have the syllabus and a link to their preferred website. But they don't do that.

Automation with IFTTT


"If This, Then That", or IFTTT, is a powerful and easy-to-use automation tool that can make your life easier. IFTTT is an easy way to automate tasks that would otherwise be repetitive or inconvenient. It operates on the fundamental idea of if statements from programming. Users can create "applets", which are essentially simple scripts that trigger when an event occurs. These applets can be as simple as "If I take a picture on my phone, upload it to Facebook", or they can be much more complex. IFTTT is integrated with over 300 different channels, including major services such as Facebook, Twitter, Dropbox, and many others, which makes automating your digital life incredibly easy.
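In code terms, every applet has the same shape as a plain if statement. The sketch below is purely illustrative (the function names are made up; this is not IFTTT's actual internals), using the picture-to-Facebook applet from above:

    # Illustrative only: the "if this, then that" shape of an applet.
    def photo_taken(photo):
        return photo is not None  # the trigger: "I take a picture on my phone"

    def upload_to_facebook(photo):
        print("Uploading", photo, "to Facebook...")  # the action, stubbed out

    photo = "IMG_0042.jpg"
    if photo_taken(photo):        # if this...
        upload_to_facebook(photo) # ...then that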

Getting Started with IFTTT and Your First Applet

Getting started with IFTTT is very easy. Simply head over to the IFTTT website and sign up. After signing up, you'll be ready to start automating by creating your first applet. In this article, we will build a simple example applet that sends a text message of the day's weather report every morning.

In order to create an applet, click on “My Applets” at the top of the page, and select “New Applet”.

Now you need to select a service by clicking the "this" keyword. In our example, we want to send a text message of the weather every morning, which means the trigger will come from a weather service like Weather Underground. Hundreds of services are connected through IFTTT, so the possibilities are almost limitless. You can create applets based on something happening on Facebook, or even on your Android/iOS device.

Next, you need to select a trigger. Again, our sample applet just sends a text message of the weather report in the morning, so the trigger is simply "Today's weather report". Triggers often have additional fields that need to be filled out; in this particular one, it's the time of the report.

Next, an action service must be selected. This is the “that” part of IFTTT. Our example applet is going to send a text message, so the action service is going to fall under the SMS category.

Like triggers, there are hundreds of action services that can be used in your applets. In this particular action, you can customize the text message using variables called "ingredients".

Ingredients are simply variables provided by the trigger service. In this example, since we chose Weather Underground as the trigger service, we are able to customize our text message using weather-related variables provided by Weather Underground, such as temperature or condition.

After creating an action, you simply need to review your applet. In this case, we’ve just created an applet that will send a text message about the weather every day. If you’re satisfied with what it does, you can hit finish and IFTTT will trigger your applet whenever the trigger event occurs. Even from this simple applet, it is easy to see that the possibilities of automation are limitless!
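Applets don't have to start from one of the built-in services, either. IFTTT's Webhooks (formerly Maker) channel lets your own code fire a trigger by making an HTTP request. Here is a minimal sketch; the event name is made up for illustration, and the key is the one shown on your own Webhooks service page:

    # Minimal sketch: firing an IFTTT Webhooks trigger from a script.
    # EVENT is a hypothetical event name; KEY comes from your Webhooks settings.
    import requests

    EVENT = "weather_report"
    KEY = "YOUR_WEBHOOKS_KEY"

    requests.post(
        "https://maker.ifttt.com/trigger/%s/with/key/%s" % (EVENT, KEY),
        json={"value1": "72F", "value2": "Sunny"},  # optional "ingredients"
    )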

Water Damage: How to prevent it, and what to do if it happens

Getting your tech wet is one of the most common things people worry about when it comes to their devices. Rightfully so: water damage is often excluded from manufacturer warranties, can permanently ruin technology under the right circumstances, and is one of the easiest things to do to a device without realizing it.

But what if I told you that water is actually one of the least likely things to ruin your device, if you react to it properly?

Don't get me wrong; water damage is no laughing matter. It's the second most common way that tech ends up kicking the bucket, the most common being drops (but not for the reason you might think). While water can quite easily ruin a device within minutes, most, if not all, of its harm can be prevented if one follows the proper steps when a device does end up getting wet.

My goal with this article is to highlight why water damage isn't as bad as it sounds, and most importantly, how to react properly when your shiny new device ends up the victim of either a spill… or an unfortunate swan dive into a toilet.

_________________

Water, in its purest form, is pretty awful at conducting electricity. However, because most of the water we encounter on a daily basis is chock-full of dissolved ions, it's conductive enough to cause serious damage to technology if not addressed properly.

If left alone, the conductive ions in the water will bridge together several points on your device, potentially sending harmful bursts of electricity to places that would result in the death of your device.

While that does sound bad, here’s one thing about water damage that you need to understand: you can effectively submerge a turned-off device in water, and as long as you fully dry the whole thing before turning it on again, there’s almost no chance that the water will cause any serious harm.


You need to react fast, but right. The worst thing you can do to your device once it gets wet is try to turn it on or ‘see if it still works’. The very moment that a significant amount of water gets on your device, your first instinct should be to fully power off the device, and once it’s off, disconnect the battery if it features a removable one.

As long as the device is off, it's very unlikely that the water will be able to do anything significant, even less so if you unplug the battery. The amount of time you have to turn off your device before the water does any real damage is, honestly, down to complete luck. It depends on where the water seeps in, how conductive the water is, and what the electricity ends up short-circuiting, if a short occurs at all. Remember, short circuits are not innately harmful; it's just a matter of what ends up getting shocked.

Once your device is off, your best chance for success is to be as thorough as you possibly can when drying it. Dry any visible water off the device, and try to let it sit out in front of a fan or something similar for at least 24 hours (though please don’t put it near a heater).

Rice is also great at drying your devices, especially smaller ones. Simply submerge the device in (unseasoned!) rice, and leave it again for at least 24 hours before attempting to power it on. Since rice is so great at absorbing liquids, it helps to pull out as much water as possible.


If the device in question is a laptop or desktop computer, bringing it down to us at the IT User Services Help Center in Lederle A109 is an important option to consider. We can take the computer back into the repair center and take it apart, making sure that everything is as dry as possible so we can see if it’s still functional. If the water did end up killing something in the device, we can also hopefully replace whatever component ended up getting fried.

Overall, there are three main points to be taken from this article:

Number one, spills are not death sentences for technology. As long as you follow the right procedures, making sure to immediately power off the device and not attempt to turn it back on until it’s thoroughly dried, it’s highly likely that a spill won’t result in any damage at all.

Number two: when it comes to water damage, speed is your best friend. The single biggest thing to keep in mind is that the faster you get the device turned off and the battery disconnected, the sooner it will be safe from short-circuiting itself.

Lastly, a step that many of us forget about when it comes to stuff like this: take your time. A powered-off device that was submerged in water has a really good chance of being usable again, but that chance goes out the window if you try to turn it on too early. I'd suggest that smartphones and tablets, at the very least, get a thorough air drying followed by at least 24 hours in rice. For laptops and desktops, however, your best bet is to either open it up yourself, or bring it down to the Help Center so we can open it up and make sure it's thoroughly dry. You have all the time in the world to dry it off, so don't ruin your shot at fixing it by testing it too early.

I hope this article has helped you understand why you shouldn't be afraid of spills, and what to do if one happens. By following the procedures I outlined above, and with a little bit of luck, it's very likely that any waterlogged device you end up with will survive its unfortunate dip.

Good luck!