The University of Massachusetts Amherst
Categories
Hardware iOS Mac OSX

What’s New With AirPods 2?

Apple’s AirPods have quickly become the best-selling wireless headphones and are now the second-best-selling Apple product. The small white buds have become ubiquitous across the U.S. and are many people’s go-to wireless earbud option. This week, Apple refreshed the AirPods with a newer model, giving them additional features. These new second-generation AirPods look identical to the first generation on the outside, but on the inside much has changed. Utilizing Apple’s new H1 chip (as opposed to the W1 chip inside the first generation), the new AirPods pair to your iPhone more quickly than ever and can switch between devices in a much shorter time frame (a common complaint with the first-generation AirPods). Additionally, the new AirPods offer lower latency, which means audio will be better in sync with videos and games. Battery life has also seen an improvement, with talk time now up to 3 hours on a single charge.

Perhaps the biggest feature of these new AirPods has nothing to do with the earbuds themselves. The case that the new AirPods ship with is now wireless-charging enabled. This means that AirPods can now be charged wirelessly using any Qi-enabled wireless charging pad. Additionally, the new AirPods with Wireless Charging Case will be compatible with Apple’s upcoming AirPower mat, which will charge an iPhone, Apple Watch, and AirPods, all wirelessly. For those of you with first-generation AirPods, don’t fret! Apple is looking to share the wireless charging features with all AirPods owners. The Wireless Charging Case is cross-compatible with both generations of AirPods and is available for separate purchase at a reduced price. This means that if you already own a pair of AirPods, you can purchase the new Wireless Charging Case individually and use it with your first-generation AirPods.

With the continued success of AirPods and the continued removal of analog headphone ports from mobile devices, the wireless headphone market is one that will continue to evolve rapidly for the foreseeable future. It will be interesting to see what features Apple adds to future AirPods to entice customers to keep buying them, and how its competitors in the space improve their products to compete.

Categories
Operating System

My Phone’s Battery Drains Too Fast! Let’s Fix That.

It seems to me that my phone’s battery drains way too fast sometimes. I use it semi-regularly throughout the day, but still, by the evening I’m at 15% when I think there’s no good reason for me to be. Fortunately, there’s an explainable reason this happens. Let’s first take a look at why the phone needs power:


Everything your phone does requires what’s called a process. A process is all the calculations and tasks the phone has to do in the background so that you can enjoy it the way it was meant to be used. Processes can build up quickly, especially if you’re like me and have a lot of apps on your phone that you switch between.

For instance, your phone is making sure you can receive calls; there’s a process for that. It is checking that the screen is at the correct brightness; there’s a process for that. It is looking out for new text messages, Snapchats, Facebook notifications, and Instagram updates. They all require processes, and they’re all running even when you lock the screen.

I have some tips that will allow your battery to remain as charged as possible:

1. Disable the fancy settings.

This is one of the easiest ways to increase battery life. Your phone came to you with all sorts of features that, on the surface, are fun to use and make your experience better. However, they all require processes that will eat away at your battery life. Fancy settings include, but are not limited to: Bluetooth, location services, auto-rotate, auto-brightness, NFC, Hey Siri/OK Google, and gestures.

2. Lower the brightness.

I know, I know, you want to be able to see your screen in its most amazing clarity. But that requires power, unfortunately. Setting the screen to a low brightness when it’s dark in your surroundings will help you conserve power. The screen is one of the most power-draining parts of the phone because of the energy required to light it up. If you can handle a dimly lit display, you’ll really reduce battery consumption.

3. Close apps not in use.

The apps you open throughout the day have an impact on battery life even after you’re done using them. Try to remember to close apps once you’re finished with them.

4. Uninstall apps you don’t use.

Some apps have 24/7 processes to check for notifications. Snapchat and Facebook are examples of these. If you have other apps like them that you simply don’t use anymore, uninstall them to make sure they aren’t draining power unnecessarily.

5. Keep a battery bank with you.

If all else fails, having a battery bank with you will allow you to charge your device on the go.

Categories
Windows

How to Add Languages to Your Windows 10 Keyboard

Are you beginning to type in a foreign language? Do you often find yourself copy-and-pasting special characters like é and wish there was an easy shortcut? Thankfully, Windows 10 allows users to easily add and switch between different languages without having to buy a separate physical keyboard.

Personally, I often use the French and Japanese keyboards on my laptop. The French keyboard allows me to quickly enter letters with diacritics (à, ê, ï, etc.). The Japanese keyboard automatically converts Latin characters into hiragana (ひらがな), katakana (カタカナ), or kanji (漢字).

The following instructions will help you add new languages to Windows 10.

  1. Navigate to Windows Settings by clicking on the gear on the left side of the Start Menu.
  2. Click on “Time & Language”, then click on “Region & language” in the left sidebar.
  3. Under “Languages”, click “Add a language”.
  4. Find the language that you would like to add. After clicking on it, you may be asked to specify a regional dialect. You will be returned to the “Region & language” page.

Once you have followed these steps, a new icon will appear next to the date and time on the bottom-right of your screen. Most likely, it will say “ENG” for English, the current keyboard language. Click on this icon to open a window listing the currently added languages. From here, you can select a language to change your keyboard’s settings. You may also hold down the Windows logo key and press Space to quickly change languages.

By default, some languages use a different keyboard layout than the QWERTY layout used for US English keyboards. Once you have switched to the new language, test it out by typing in Word, Notepad, or any other program that allows you to enter text. If the keys you type do not match the letters on the screen, the following instructions can help you fix this issue.

  1. On the “Region & language” page, under “Languages”, click the language you just added, then click “Options”.
  2. Scroll down to “Keyboards”, then click “Add a keyboard”.
  3. Scroll down to “United States-International” and click on it. This keyboard follows the QWERTY layout, but also supports some special characters in other languages.
  4. Under “Keyboards”, click the other keyboard, then click “Remove”.

Congratulations! You have now added another language’s keyboard to your computer. Feel free to add as many additional languages as you would like.

Here are a few diacritics you can type using the United States-International keyboard:

  • Acute accent (é) – Type an apostrophe (‘), followed by a letter.
  • Grave accent (à) – Type a grave accent (`), followed by a letter.
  • Diaeresis (ü) – Type a double quote (“) by pressing Shift + ‘, followed by a letter.
  • Circumflex (î) – Type a circumflex/caret (^) by pressing Shift + 6, followed by a letter.
  • Tilde (ñ) – Type a tilde (~) by pressing Shift + `, followed by a letter.
Categories
Operating System

3D Printing: A Multitude of Machines & Materials-SLA/DLP Printing


3D printing comes in more forms than you may realize. In a previous article we focused on FDM (Fused Deposition Modeling) 3D printing, the most common and popular form of 3D printing. I’d like to introduce you to a more complex and precise method of 3D printing which is also available to consumers. Let’s talk about Stereolithography (SLA) and Digital Light Processing (DLP) 3D printing.

The basic idea of both processes is that a photosensitive resin is selectively hardened and adhered to a gradually moving platform. Let’s break that down a bit, shall we? Like FDM printing, SLA and DLP printing work on the premise of building up layer after layer of material in order to create an object. Unlike FDM printing, which takes solid plastic, melts it into a liquid, then cools it back into a solid, SLA and DLP printing turn a liquid resin into a solid using light. Both SLA and DLP use some form of light to harden their photosensitive resin: SLA uses a laser to draw out each layer in a sort of winding path, while DLP exposes an entire layer of the model to the light at once using a specialized projector. If you are interested in the intricacies of the two processes, I suggest looking at this article from Formlabs.

Seen above: The Form 2, an example of an SLA printer. https://formlabs.com/3d-printers/form-2/

Let’s talk materials. Whereas FDM printing can print in a variety of plastics and hybrid filaments, SLA and DLP printers are far more limited. The resins used in SLA and DLP can be had in many generic colors, and in a few different transparencies, but “exotic” resins akin to metal/wood hybrid FDM filaments have yet to become available.

How about print area? Most consumer-available SLA/DLP printers have print areas noticeably smaller than their FDM counterparts. In general, hobbyist FDM printers (sub-$1,000 range) have print areas from 4”x4”x4” to 8”x8”x8”, while most consumer-available resin printers are in the ballpark of 4”x4”x4” to 6”x6”x6”. Note that these measurements are by no means exact. Resin printers also often have rectangular print areas (as opposed to the more common square print areas). If you want to print anything massive, stick to FDM; your sanity and wallet will thank you later. You don’t need the amount of detail that resin printing offers on something larger than a softball, which is why resin printing is used mostly for very intricate pieces.

Seen above: The AnyCubic Photon, an example of a DLP resin printer. http://www.anycubic3d.com/products/show/1359.html

Another consideration with resin printing is the higher cost compared to FDM. Though comparing FDM and resin printing is already like comparing apples to oranges, let’s do our best not to throw any bananas into the mix. For this comparison, let’s focus on the costs associated with using generic resins and generic PLA filament.

A 1kg spool of generic PLA plastic for FDM printing can be had for ~$20. SLA/DLP resins commonly come in 500g bottles; prices vary a bit, but you can expect to spend ~$50 per 500g bottle. That works out to roughly $0.02 per gram of PLA versus $0.10 per gram of resin, about five times the material cost. In both cases, buying in bulk can save money, whereas fancy colors/effects bump up the price (these numbers are derived from a quick search of Amazon for both products; a more thorough study of the costs of different printing types can be found at this link by All3DP). But how far does this material get you? This question is hard to answer, as changing the smallest print setting can drastically affect the amount of material used for a print. Infill percentage, infill type, type of external support structure, wall thickness: these are just a few settings which can affect the amount of material used. The point being, resin printing is generally slower, prints smaller things, and is more expensive compared to FDM printing.

So why would you ever use a printer which is slower, less versatile, and more expensive to own and use? The most significant pro for resin printing is the resolution at which it can print. If you recall from my previous article, FDM printers are in general capable of .1mm, or 100-micron, printing, meaning they can produce layers which are 100 microns thick. The thinner the layers, the more layers are required, which means more time, but also more detail. An average FDM printer can print 100-micron layers and an expensive FDM printer ~50-micron layers, whereas resin printers can print ~25-micron layers. For a sense of scale, a print 50mm tall needs 500 layers at 100 microns but 2,000 layers at 25 microns. This means that you can get more detail into your print where it counts. Why might you need this extra level of detail, you ask?

There are several applications/use cases where you might want/need this high level of detail. One of these applications is tabletop game figurines/pieces. If you find yourself engaging in a game of DnD, for example, players can design their characters and have accurate physical representations of them for playing the game. Though you can print these models with an FDM printer, their details may not be accurately recreated due to the inaccuracies and limitations of FDM printing, and due to the small scale of the figures desired.

Another high detail application is the creation of jewelry. When a high level of dimensional accuracy is key, especially on a small scale, resin printing is appropriate. Whether you are printing a piece which will be used in the casting of jewelry (in which case metal will replace the plastic and the form will be an exact copy), or as an example of the final product, you want that piece to be an accurate representation of the final product. This same mentality can be applied to the prototyping of small mechanical devices where the dimensions of parts must be exact.

A third example of high-detail resin printing is medical applications. The most common application for this type of 3D printing in the medical field is making dental aligners, those plastic retainer devices. Each patient’s mouth is different, meaning that their teeth are in different positions and in need of different levels of correction. A scan or mold (which can then be scanned) is made of the patient’s mouth and turned into an alignment device that is custom printed for the patient. This article by CNN details how a college student did just this, saving himself tons of money.

So, resin printing is not only more expensive and more limited in its niche of uses; it has another significant factor to consider. Where FDM printing requires that you remove the scaffolding (support material which allows overhangs to be printed), resin printing requires this step and more to finalize a print. Most resins require that you clean the print gently with isopropyl alcohol, and once you’ve done this, you still have another step: most resins also require that you cure them with UV light before they are ready to use/display. Hobbyists have done this by setting their prints outside or by a window on a sunny day. Others have used UV lamp devices (commonly used to set manicure products) to accomplish the same thing. High-end products do exist which are effectively a large version of one of those UV nail polish curing stations, but they allow for the speedy curing of larger prints.

So, is resin printing for you? That I can’t really say, but hopefully this information has helped you decide if ponying up the extra cash for a resin printer and its accompanying tools is worth it for you. If high levels of detail are your goal, and you don’t mind the smelly resins and cleaning solutions and the accompanying price tag, maybe pick one up and give it a try.

Categories
Operating System

What is Decentralization On The Web and Why Does It Matter?

These days, there are a few large technology companies that handle most of the web’s information. Amazon, Google, Facebook, and others have ownership over the lion’s share of our data. Many of these companies have been in hot water recently over data privacy violations for misusing the vast amounts of data they have on their customers. Furthermore, these companies’ business models depend on gathering as much data as possible to sell ads against.

Many years ago, the web was much less centralized around these huge companies. For instance, before Gmail it was much more common to host your own email or use a much smaller service, and you had much more control over it. Today, your data isn’t in your hands; it’s in Google’s or Facebook’s. Furthermore, they can kick you off their platform for any number of reasons without warning. There are also political reasons for not wanting all of your information in these centralized silos. Being a part of these platforms means that you must conform to their rules and guidelines, no matter how much you dislike them.

Decentralized systems fix this by giving you control over your information. Instead of one centralized company with one running copy of the service, decentralized services work a lot like email. Anyone can run their own email server and have control over their own information. This has been true about email since it was formed. But for social networks and other sites, this kind of distributed model is now becoming an option as well.

Mastodon is a Twitter-like social network based on federation and decentralization. Federation means that individual instances of the service run by different people can talk to one another. This means that I can follow someone with an account on Mastodon.com from my account at Mastodon.xyz. This works very similarly to email: you can email anyone from your Gmail account, not just other Gmail accounts. Federation also means that I can run my own server with my own rules if I want to. I can choose to allow certain content or people and know that my data is in my control.

Many people are starting to call decentralized technology “Web 3.0”. While Web 2.0 saw people using the internet for more and more things, this came at the cost of consolidation and large companies taking over much of the control of the web. With decentralization and federation, the web can once again be for the people, and not only for large companies.

 

Categories
Linux

Quick Guide to Patching Linux Icon Packs

One of the nice advantages of using Linux is the wealth of customization options available. A key area of that customization is app icons. For those unfamiliar, it is similar to how icons change across Android versions, even though the apps themselves are often the same.

Now, I could go over how to install icon packs, but there are countless sites that already explain the process very well, such as Tips on Ubuntu. As for obtaining icon packs, a great site is OpenDesktop.org, which among other things hosts a wide variety of free-to-use icon packs.

So, now onto something less commonly covered: patching icon sets (no coding or art skills required). First, a little background: if you use one of the more popular icon packs, you’ll likely have no issues. However, many of the smaller icon packs either only support certain distros or lack icons for lesser-known programs.

Take, for example, my all-time favorite icon pack, Oranchelo, which is geared mainly towards Ubuntu. Lately I’ve been using Fedora 28, and while some icons work like they’re supposed to, others, which work in Ubuntu, do not. So why is that?

Each program has an icon name, and while many programs keep a consistent icon name across distros, this is not always the case, such as with some of the default Gnome apps in Fedora vs Ubuntu. To fix this we need to either change the app’s icon name or create symbolic links to the current icon name, the latter generally being the better option.

First we need to find the current icon name. For all common Linux desktop environments (and probably for the uncommon ones as well), the way to do this is:

cd /usr/share/applications/

Here you will see (using the ls command) a bunch of .desktop files. These are config files that set how a program will be shown on the desktop, such as what it will be called in English or in Polish, as well as what its icon should be.
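
For reference, a stripped-down .desktop file looks roughly like this (the fields below are illustrative, not copied from a real file):

[Desktop Entry]
Type=Application
Name=IceCat
Name[pl]=IceCat
Exec=icecat %u
Icon=icecat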

I’ve already patched Oranchelo for most of my applications, but I’ve yet to do it for the IceCat browser (located towards the bottom of the screenshot), so I’ll use it as an example. Since we already have our terminal located in the right folder, let’s just search for the right .desktop file:

ls | grep -i icecat

From that command we now know that the full name of the file is “icecat.desktop”. Now that we see the file, we just need to find what its icon name is:

cat icecat.desktop | grep -i icon

Now that we know what the icon is called (“icecat”; icon names aren’t always this simple), we need to open up our installed icon pack. If you’re not sure where that is, look back at the install instructions you used for your pack and find where you placed it.

I generally prefer to look at the icons in a graphical file manager, so that I can make sure I pick the right one.

So now that we’re in the folder, we want to go into apps, then scalable. Here we have all our icons; once we find one that we like, we need to create a symbolic link.

Since Oranchelo doesn’t currently have an Icecat logo, I will use the firefox nightly icon.

Now we need to open a terminal at this location and run the following command (depending on where you installed your icons, you may or may not need to use sudo):

ln -s icon.svg desiredApp.svg

Or in my case:

ln -s firefox-nightly-icon.svg icecat.svg

Now all we need to do is toggle the icons: simply switch to the system default icon set and then switch back, and the correct icons will show.
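
If you’re on Gnome, you can also do that toggle from the terminal with gsettings (I’m assuming the Adwaita default here, and Oranchelo as the pack; substitute whatever your setup uses):

gsettings set org.gnome.desktop.interface icon-theme "Adwaita"

gsettings set org.gnome.desktop.interface icon-theme "Oranchelo"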

As you can see, the desired icon is now set for IceCat.

However, this is not the only way to patch icon sets. Some icon sets are designed to replace only a small number of icons, such as folders. These often use inheritance to fill the void; however, you may not always like the set from which they inherit the remainder, or you may simply prefer that they inherit from a different set.

One of my favorite Gnome themes, Canta, comes with its own icon pack that replaces folders but inherits the rest from the Numix icon pack. Since I haven’t installed Numix, it defaults to the system default. However, I do have the Flat Remix pack installed, so I’ll make it inherit from that instead.

As before, we need to go into the icon pack folder. Once we’re in the Canta icon pack, we need to open up the index.theme file with a text editor (as before, depending on where it is installed, you may or may not need sudo).

A few lines from the bottom you will see an “Inherits=” line. For Canta it is:

"Inherits=Numix-Circle,Adwaita,gnome,hicolor"

So if we want it to inherit from Flat Remix (provided Flat Remix is installed correctly), all we need to do is add it in, changing the line to:

"Inherits=Flat-Remix,Numix-Circle,Adwaita,gnome,hicolor"

Once you save the file, all the missing icons should be automatically inherited from Flat Remix.

Best of luck with all your Linux customization.

Categories
Operating System

Handling Media Files in MatLab

You might be wondering: does anyone love anything as much as I love MatLab? I get it, another MatLab article… Well, this one is pretty cool. Handling media files in MatLab is not only extremely useful but also rewarding. To the programming enthusiast, it can be hard to learn about data structures and search algorithms and have only the facilities to apply this knowledge to text documents and large arrays of numbers. Learning how to handle media files allows you to see how computation affects pictures, and hear how it affects music. Paired with some of the knowledge from my last two articles, one can begin to see how a variety of media-processing tools can be created using MatLab.

 

Audio

Audio is, perhaps, the simplest place to start. MathWorks provides two built-in functions for handling audio: audioread() and audiowrite(). As the names suggest, audioread can read an audio file from your machine and turn it into a matrix; audiowrite can take a matrix and write it to your computer as a new audio file. Both functions can tolerate most conventional audio file formats (WAV, FLAC, M4A, etc.); however, there is an asymmetry between the two functions in that, while audioread can read MP3 files, audiowrite cannot write them. Still, there are a number of good, free MP3 encoders out there that can turn your WAV or FLAC file into an MP3 after you’ve created it.

So let’s get into some details. audioread has only one input argument (actually, it can be used with more than one, but for our purposes, you only have to use one): the filename. Please note, filename here means the directory too (C:\TheDirectory\TheFile.wav). If you want to select the file off your computer, you can use uigetfile for this.

The audioread function has two output arguments: the matrix of samples from the audio file and the sample rate. I would encourage the reader to save both, since the sample rate will prove to be important in basically every useful process you could perform on the audio. Sample values in the audio matrix are represented by doubles and are normalized (the maximum value is 1).
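
Putting that together, here’s a minimal sketch (the file filter and variable names are just illustrative):

% Let the user pick an audio file, then read it into a matrix
[name, folder] = uigetfile('*.wav;*.flac;*.m4a');  % file chooser; returns the file name and its directory
[y, Fs] = audioread(fullfile(folder, name));       % y = normalized sample matrix, Fs = sample rate in Hz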

Once you have the audio file read into MatLab, you can do a whole host of things to it. MatLab has built-in filtering and other digital signal processing tools that you can use to modify the audio. You can also make plots of the audio magnitude, as well as its frequency content, using the fft() function. The plot shown below is of the frequency content of All Star by Smashmouth.

Once you’re finished processing the audio, you can write it back to a file on your computer. This is done using the audiowrite() function. The input arguments to audiowrite are the filename, the audio matrix in MatLab, and the sample rate. Once again, the filename should include the directory you want to save in. This time, the filename should also include the file extension (.wav, .ogg, .flac, .m4a, .mp4). With only this information, MatLab will produce a usable audio file that can then be played through any of your standard media players.
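
As a rough sketch of both steps, here is one way to plot the frequency content and then write the result back out (the output path is hypothetical):

% Plot the magnitude of the frequency content up to the Nyquist frequency
N = length(y);                  % number of samples
Y = abs(fft(y));                % magnitude spectrum, computed per channel
f = (0:N-1) * (Fs / N);         % frequency axis in Hz
half = floor(N / 2);
plot(f(1:half), Y(1:half, :));
xlabel('Frequency (Hz)'); ylabel('Magnitude');

% Write the (possibly modified) audio back to disk
audiowrite('C:\TheDirectory\Processed.flac', y, Fs);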

The audiowrite function also allows some more parameters to be specified when creating your audio file. Name-value pairs can be passed as arguments to the function (after the filename, matrix, and sample rate) and can be used to set a number of different parameters. For example, ‘BitsPerSample’ allows you to specify the bit depth of the output file (the default is 16 bits, the standard for audio CDs). ‘BitRate’ allows you to specify the amount of compression if you’re creating an .m4a or .mp4 file. You can also use these arguments to put in song titles and artist names for use with software like iTunes.
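
For example (a sketch; the paths and metadata values are illustrative, not recommendations):

% 24-bit WAV instead of the 16-bit default
audiowrite('C:\TheDirectory\Song.wav', y, Fs, 'BitsPerSample', 24);

% 256 kbit/s M4A with title and artist metadata for software like iTunes
audiowrite('C:\TheDirectory\Song.m4a', y, Fs, 'BitRate', 256, ...
    'Title', 'All Star', 'Artist', 'Smashmouth');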

 

Images

Yes, MatLab can also do pictures. There are two functions associated with handling images: imread() and imwrite(). I think you can surmise from the names of these two functions which one reads in images and which one writes them out. With images, samples exist in space rather than in time, so there is no sample rate to worry about. Images still do have a bit depth and, in my own experience, it tends to differ a lot more from image to image than it does for audio files.

When you import an image into MatLab, the image is represented by a three-dimensional matrix. For each color channel (red, green, and blue), there is a two-dimensional matrix with the same vertical and horizontal resolution as your photo. When you display the image, the three channels are combined to produce a full-color image.

By the way, if you want to display an image in MatLab, use the image() function.
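
A quick sketch of reading, inspecting, and displaying an image (the file name is hypothetical):

img = imread('C:\TheDirectory\Photo.jpg');  % three-dimensional matrix: height x width x 3
size(img)                                   % prints the vertical resolution, horizontal resolution, and channel count
red = img(:,:,1);                           % the two-dimensional red channel on its own
image(img);                                 % display the full-color image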

MathWorks provides a good deal of image-processing features built-into MatLab so if you are interested in doing some crazy stuff to your pictures, you’re covered!

Categories
Linux Security Software

Hiding in Plain Sight with Steganography

Steganography is the process of hiding one file inside another, most popularly, hiding a file within a picture. If you’re a fan of Mr. Robot you are likely already somewhat familiar with this.

Although hiding files inside pictures may seem hard, it is actually rather easy. All files at their core are just bytes, so hiding one file in another is just a case of inserting the contents of one file into the other.

Even though this is possible on all platforms, it is easiest to accomplish on Linux (although the following commands will probably work on macOS as well).

There are many different ways to hide different types of files, however the easiest and most versatile method is to use zip archives.

Once you create your own zip archive, you can append it to the end of an image file, such as a PNG.
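
If you’ve never made one, creating the archive is a one-liner (the files going in here are hypothetical):

zip deathstarplans.zip plans.pdf schematics.png

With the archive in hand, appending it to the image looks like this: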

cat deathstarplans.zip >> r2d2.png

If you’re wondering what just happened, let me explain. cat prints out a file (deathstarplans.zip in this instance). Instead of letting it print to the terminal, >> tells your shell to append the output to the end of the specified file: r2d2.png.

We could have also used a single >; however, that would replace the contents of r2d2.png with the zip data instead of appending to it. The appended file from >> still opens and displays like a normal PNG, whereas an overwritten file would no longer be a viewable image and would be easily recognized as a zip file, defeating the entire purpose.

Getting the file(s) out is also easy: simply run unzip r2d2.png. Unzip will throw a warning that “x extra bytes” precede the zip file, which you can ignore; it basically just restates that we hid the zip inside the png file. And so the files pop out.

So why zip? Tar tends to be more popular on Linux; however, tar has a problem with this method. Tar does not parse through the file to find the actual start of the archive, whereas zip does so automatically. That isn’t to say it’s impossible to get tar to work; it simply requires some extra work (aka scripting). However, there is another, more advanced option: steghide.

Unlike zip, steghide does not come preinstalled on most Linux distros, but it is in most default repositories, including those for Arch and Ubuntu/Linux Mint.

sudo pacman -S steghide – Arch

sudo apt install steghide – Ubuntu/Linux Mint

Steghide does have its ups and downs. One upside is that it is a lot better at hiding and can easily hide any file type. It does so by using an advanced algorithm to hide the data within the image (or audio) file without changing the look (or sound) of the file. This also means that without using steghide (or at least the same mathematical approach as steghide), it is very difficult to extract the hidden files from the image.

However, there is a big drawback: steghide only supports a limited number of ‘cover’ file types: JPEG, BMP, WAV, and AU. But since JPEG files are a common image type, it isn’t a large drawback, and a JPEG won’t look out of place.

To hide the file, the command would be steghide embed -cf clones.jpg -ef order66.pdf

At which point steghide will prompt you to enter a password. Keep in mind that if you lose the password you will likely never recover the embedded file.

To extract the file, we run steghide extract -sf clones.jpg; assuming we use the correct password, the hidden file is revealed.

All that being said, both methods leave the ‘secret’ file untouched and only hide a copy. Assuming the goal is to hide the file, the copies in the open need to be securely removed. shred is a good command for this; it overwrites the file multiple times to make it as difficult to recover as possible.

shred -z order66.pdf

or to delete it automatically

shred -zu order66.pdf

Categories
Operating System

October Apple Event Preview

Today Apple sent out invitations for an event on October 30th in New York City. The event, titled “There’s more in the Making”, hints at a creative- and pro-focused event, which is further suggested by the venue: the Howard Gilman Opera House. There are several rumored devices that may be launched at this event.

The headline product rumored to be announced is an update to the iPad Pro line. The line, which is made up of two models, is rumored to gain many of the features from the iPhone X line of phones, including smaller bezels and Face ID to replace the fingerprint reader. The devices are also said to switch from the proprietary Lightning connector to the more standard USB-C, which will allow the iPad to connect to external displays and other accessories much more easily. The iPad and the iPhone are some of the only devices in the industry that haven’t switched over to USB-C, so this transition will help the industry converge on a single port type.

There are also rumored to be new Macs at this event. The Mac mini hasn’t been updated in over 4 years and is long overdue for a refresh. The new minis are rumored to be smaller and aimed more at the pro market, which makes sense given the overall theme of the event. Apple is also rumored to be introducing a new low-end Mac laptop at around the $1,000 price point. This would replace the aging MacBook Air that Apple is still selling. This is by far Apple’s highest-volume price range, so it’s important to have a modern, compelling option.

Is there anything else that Apple will announce next week? What are your predictions?

Categories
Operating System

Are Self-Driving Cars Safe?

Self-driving cars promise to revolutionize driving by removing human error from the equation altogether. No more drunk or tired driving, great reductions in traffic, and even the possibility of being productive on the commute to work. But what are the consequences of relying on algorithms and hardware to accomplish this vision? Software can be hacked or tricked, electrical components can be damaged. Can we really argue that it is safer to relinquish control to a computer than to operate a motor vehicle ourselves? Ultimately, this question cannot be answered with confidence until we conduct far more testing. Data analysis is key to understanding how these vehicles will perform and specifically how they will anticipate and react to the kind of human error which they exist to eliminate. But “the verdict isn’t out yet” is hardly a satisfying answer, and for this reason I would argue that despite concerns about ‘fooling’ self-driving cars, this technology is safer than human drivers.

The article “Slight Street Sign Modifications Can Completely Fool Machine Learning Algorithms” details how researchers tricked a computer vision algorithm into misinterpreting street signs. Researchers were able to achieve these results by training their classifier program with public road sign data, and then adding new entries of a modified street sign with their own classifiers. Essentially, the computer is “taught” how to analyze a specific image and, after numerous trial runs, will eventually be able to recognize recurring elements in specific street signs and match them with a specific designation/classifier. The article mainly serves to explore how these machines could be manipulated, but it only briefly touches upon a key safety feature which would prevent real-world trickery: redundancy. Redundancy is key in any self-driving car; using GPS locations of signs and data from past users could ensure that signs are not incorrectly classified by the computer vision algorithm.

The article “The Long, Winding Road for Driverless Cars” focuses less on the safety ramifications of self-driving vehicles and more on how likely it is that we will see fully autonomous cars in the near future. The author touches upon the idea that selling current vehicles (such as Teslas) with self-driving abilities marketed as “autopilot” might be misleading, as these current solutions still require a human to be attentive behind the wheel. She presents the hurdle that in order to replace human drivers, self-driving vehicles cannot just be “better” than human drivers but near perfect. While these are all valid concerns, addressing them will only result in benefits for consumers. Mistrust of new tech means that companies and regulatory authorities will go through rigorous trials to ensure that these vehicles are ready for the road and maintain consumer confidence. We have already accepted many aspects of car automation (stopping when an object is detected, hands-free parallel parking, and lane detection) to make our lives easier, and perhaps some time in the near future self-driving cars will be fully tested and ready for mass deployment.

Categories
Operating System Software

What is Docker and How Does it Work?

Docker is a very popular tool in the world of enterprise software development. However, it can be difficult to understand what it’s really for. Here we will take a brief look at why software engineers, and everyday users, choose Docker to quickly and efficiently manage their computer software.

Categories
Operating System Security

Password Security on Github

“The password you provided has been reported as compromised due to re-use of that password on another service by you or someone else. GitHub has not been compromised directly. To increase your security, please change your password as soon as possible.”

I thought this was funny when I first saw this message from GitHub, a website that has over 28 million users and 57 million repositories. I knew I was receiving this message because I used a very similar password for my IBM intern account and my personal account.

So I was telling my coworkers in IT about it, and they pointed out to me in horror – “That means they’re storing passwords in plaintext…”

Well, it turns out this isn’t true. In fact, they use a fairly secure key-derivation function (KDF) called bcrypt.
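
To get a feel for what a bcrypt hash looks like, you can generate one locally with Apache’s htpasswd tool (this is just an illustration, not how GitHub’s system works, and the username/password here are made up):

htpasswd -nbB someuser hunter2

The output is the username followed by a string starting with $2y$: the bcrypt-scrambled password. The point of a KDF is that turning a password into that string is easy, but going backwards is designed to be as slow and expensive as possible.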

For obvious reasons, storing passwords in plaintext would be scary. The responsible practices for password storage are, well, complicated. It’s a combination of hashing, or the more secure key-derivation function, both of which basically scramble up the user’s password so that not just anyone can decode it, and a careful implementation of where and how those scrambled passwords are stored. If a company isn’t using proper security for user data, there’s an increased risk of getting hacked. And realistically, if someone managed to snag the password to your GitHub account, they’d likely be able to get into at least a few of your other accounts as well.

If you want to learn about this more in depth, you can read this interesting thread.

Categories
Operating System

The Future of the Mac

There have been two major rumors in the past month about the future of the Mac. It’s clear in the past several years that much of Apple’s development effort has been geared towards Apple’s mobile operating system, iOS, which powers iPhones and iPads. Apple has also been introducing new platforms, such as Apple Watch and HomePod. Through all of this, the Mac has been gaining features at a snail’s pace. It seems like Apple will only add features when it feels it must in order to match something it introduces first on iOS. But these recent rumors point to a Mac platform that could be revitalized.

The first major rumor is a shared development library between iOS and the Mac. What does this mean for non-developers? It means that we could very well see iOS apps such as Snapchat or Instagram on the Mac. macOS uses a development framework called AppKit. This framework stems back many years to when Apple bought a company called NeXT, whose systems are what eventually became the Mac, and the underlying framework has stayed largely the same since then. Obviously, there have been changes and many additions, but it is still different from what developers use to make iOS apps for iPhones and iPads. iOS uses a framework called UIKit, which is very different in key areas. Basically, it means that developing an app for both the iPhone and the Mac takes twice the development effort. Supposedly, Apple is working on a framework for the Mac that is virtually identical to UIKit, meaning developers could port their apps to the Mac with basically no work. In theory, the number of apps on the Mac would increase as developers port over their iOS apps, and many communication apps such as Snapchat and Instagram could become usable desktop apps.

What Apple’s future macOS framework could look like.

The second major rumor is that Apple is expected to switch from Intel-provided CPUs to its own ARM-based architecture. Apple switched to Intel CPUs in 2006 after using PowerPC chips for many years. That transition brought along an almost 2x increase in performance compared to the PowerPC chips they were using. In the last few years, Intel hasn’t seen the year-over-year performance increases that it used to have. Additionally, Intel has been delaying new architectures as manufacturing smaller chips gets harder and harder. This means Apple is dependent on Intel’s schedule to introduce new features. On the other hand, Apple has been producing industry-leading ARM chips for use in its iPhones and iPads. These chips are starting to benchmark at or above some of the Intel chips that Apple is using in its Mac line. Rumors say that the low-power Macs could see these new ARM-based chips as soon as 2020. The major caveat with this transition is that developers could have to re-write some of their applications for the new architecture. This means it might take some time for applications to be compatible, and some older applications might never get updated.

It’s clear that Apple’s focus in the past several years has been on its mobile platforms and not on its original platform, the Mac. But these two rumors show that Apple is still putting serious engineering work into its desktop operating system. These new features could lead to a thriving Mac ecosystem in the years to come.

Categories
Hardware Library Mac OSX Software Windows

A Reflection on Winning The Vive

By Parker Louison 

The Views and Opinions Expressed in This Article Are Those of Parker Louison and Do Not Necessarily Reflect the Official Policy or Position of UMass Amherst IT 

A Note of Intention

I want to start off this article by explaining that I’m not making this in an effort to gloat or brag, and I certainly hope it doesn’t come across that way. I put all of the creative energy I had left this semester into the project I’m about to dissect and discuss, so sadly I won’t be publishing a video this semester (as I’ve done for the past two semesters). One of the reasons I’m making this is because a lot of the reaction towards what I made included people asking how I made it and how long it took me, and trust me, we’ll go in depth on that.

My First Taste

My first experience with high-grade virtual reality was a few weeks before the start of my sophomore year at UMass when my friend Kyle drove down to visit me, bringing along his HTC Vive after finding out that the only experience I’d had with VR was a cheap $20 adapter for my phone. There’s a consensus online that virtual reality as a concept is better pitched through firsthand experience rather than by word of mouth or marketing. The whole appeal of VR relies on subjective perception and organic optical illusions, so I can understand why a lot of people think the whole “you feel like you’re in the game” spiel sounds like nothing but a load of shallow marketing. Remember when Batman: Arkham Asylum came out and nearly every review of it mentioned that it made you feel like Batman? Yeah, well now there’s actually a Batman Arkham VR game, and I don’t doubt it probably does make you actually feel like you’re Batman. The experience I had with VR that night hit me hard, and I came to understand why so many people online were making it out to be such a big deal. Despite my skeptical mindset going in, I found that it’s just as immersive as many have made it out to be. 

This wasn’t Microsoft’s Kinect, where the action of taking away the remote actually limited player expression. This was a genuinely deep and fascinating technological breakthrough that opens the door for design innovations while also requiring programmers to master a whole new creative craft. The rulebook for what does and doesn’t work in VR is still being written, and despite the technology still being in its early stages, I wanted in. I wanted in so badly that I decided to try and save up my earnings over the next semester in an effort to buy one. That went about as well as you’d expect; not just because I was working within a college student’s budget, but also because I’m awful with my money. My Art-Major friend Jillian would tell you it’s because I’m a Taurus, but I think it has more to do with me being a giant man-child who impulse-purchases stupid stuff because the process of waiting for something to arrive via Amazon feels like something meaningful in my life. It’s no wonder I got addicted to Animal Crossing over Spring Break… 

The Task

Anyway, I was sitting in my Comp-Lit discussion class when I got the email about the Digital Media Lab’s new Ready Player One contest, with the first place winner taking home an HTC Vive Headset. I’m not usually one for contests, and I couldn’t picture myself actually winning the thing, but something about the challenge piqued my interest. The task involved creating a pitch video, less than one minute in length, in which I’d have to describe how I would implement Virtual Reality on campus in a meaningful way. 

With Virtual Reality, there are a lot of possible implementations relating to different departments. In the Journalism department, we’ve talked at length in some of my classes about the potential applications of VR, but all of those applications were either for the benefit of journalists covering stories or the public consuming them. The task seemed to indicate that the idea I needed to pitch had to be centered more on benefiting the average college student, rather than benefiting a specific major (at least, that’s how I interpreted it). 

One of my original ideas was a virtual stress-relief dog, but then I realized that people with anxiety would likely only get even more stressed out with having to put on some weird giant headset… and real-life dogs can give hecking good nuzzles that can’t really be simulated. You can’t substitute soft fur with hard plastic. 

I came to college as a journalism major, and a day rarely goes by when I don’t have some doubts about my choice. In high school I decided on journalism because I won this debate at a CT Youth Forum thing and loved writing and multimedia, so I figured it seemed like a safe bet. Still, it was a safe bet that was never pitched to me. I had no idea what being a journalist would actually be like; my whole image of what being a reporter entailed came from movies and television. I thought about it for a while, about how stupid and hormonal I was and still am, and realized that I’m kind of stuck. If I hypothetically wanted to switch to chemistry or computer science, I’d be starting from scratch with even more debt to bear. Two whole years of progress would be flushed down the toilet, and I’d have nothing to show for it. College is a place for discovery; where your comfortable environment is flipped on its head and you’re forced to take care of yourself and make your own friends. It’s a place where you work four years for a piece of paper to make your resume look nicer when you put it on an employer’s desk, and you’re expected to have the whole rest of your life figured out when you’re a hormonal teenager who spent his savings on a skateboard he never learned how to ride.

And so I decided that, in this neo-cyberpunk dystopia we’re steadily developing into, it would make sense for simulations to come before rigorous training. Why not create simulated experiences where people could test the waters for free? Put themselves in the shoes of whatever career path they want to explore to see if the shoes fit right, you know?

I mentioned “cyberpunk” there earlier because I have this weird obsession with cyberpunk stuff at the moment and I really wanted to give my pitch video some sort of tongue-in-cheek retrograde 80s hacker aesthetic to mask my cynicism as campy fun, but that had to be cut once I realized I had to make this thing under a minute long.

Gathering My Party and Gear

Anyway, I wrote up a rough script and rented out one of the booths in the Digital Media Lab. With some help from Becky Wandel (the News Editor at WMUA) I was able to cut down my audio to just barely under the limit. With the audio complete, it came time to add visual flair. I originally wanted to do a stop-motion animated thing with flash-cards akin to the intros I’ve made for my Techbytes videos, but I’m slow at drawing and realized that it’d take too much time and effort, which is hilarious because the idea I settled on was arguably even more time-consuming and draining.

I’m the proud owner of a Nikon D80, a hand-me-down DSLR from my mom, which I bring with me everywhere I go, mostly because I like taking pictures, but also because I think it makes me seem more interesting. A while back I got a speck of dust on the sensor, which requires special equipment to clean (basically a glorified turkey baster). I went on a journey to the Best Buy at the Holyoke Mall with two friends to buy said cleaning equipment while documenting the entire thing using my camera. Later, I made a geeky stop-motion video out of all those photos, which I thought ended up looking great, so I figured doing something similar for the pitch video would be kind of cool. I messaged a bunch of my friends, and in a single day I managed to shoot the first 60% of the photos I needed. I then rented out the Vive in the DML and did some photoshoots there. 

At one point while I was photographing my friend Jillian playing theBlu, she half-jokingly mentioned that the simulation made her want to study Marine Biology. That kind of validated my idea and pushed me to make sure I made this video perfect. The opposite effect happened when talking to my friend Rachael, who said she was going to pitch something for disability services, to which I immediately thought “damn, she might win with that.”

I then knew what I had to do. It was too late to change my idea or start over, so I instead decided that my best shot at winning was to make my video so stylistically pleasing and attention-grabbing that it couldn’t be ignored. If I wasn’t going to have the best idea, then gosh darn it (I can’t cuss because this is an article for my job) I was going to have the prettiest graphics I could muster.   

The Boss Fight 

I decided to use a combination of iMovie and Photoshop, programs I’m already familiar with, because teaching myself how to use more efficient software would ironically be less efficient given the short time frame I had to get this thing out the door. Using a drawing tablet I borrowed from my friend Julia, I set out to create the most complicated and ambitious video project I’ve ever attempted to make. 

A few things to understand about me: when it comes to passion projects, I’m a bit of a perfectionist and extremely harsh on myself. I can’t even watch my Freshman Year IT video because I accidentally made it sound like a $100 investment in some less than amazing open back headphones was a reasonable decision on my part, and my other IT video makes me cringe because I thought, at the time, it’d be funny to zoom in on the weird hand motions I make while I talk every five seconds.

So in this case, I didn’t hold back and frequently deleted whole sections of my video just because I didn’t like how a single brush stroke animated (with the exception of the way my name is lopsided in the credits, which will haunt me for the rest of my life). For two weeks, I rigorously animated each individual frame in Photoshop, exported it, and imported it into iMovie. 

(Above) A visual representation of all the files it took to create the video

(Above) Frame by frame, I lined up my slides in iMovie

The most demanding section was, without a doubt, the one involving my friend Matthew, which I spent one of the two weeks entirely focused on. For that section, I needed it to animate faster than 0.04 seconds per frame, which is impossible because 0.04 seconds is the shortest you can make a frame in iMovie’s streamlined interface. So I ended up creating a whole new project file, slowing down my audio to half-speed, editing the frames of that section relative to that slowed-down audio, then exporting it, putting it into the original project file, and doubling its speed just to get it to animate smoothly.

 (Above) Some sections required me to find loopholes in the software to get them to animate faster than iMovie would allow

(Above) Some of the scrap paper I scribbled notes on while editing the video together

Each individual border was drawn multiple times with slight variations and all the on-screen text (with the exception of the works cited) was handwritten by me multiple times over so that I could alternate between the frames of animation to make sure everything was constantly moving. 

(Above) Borders were individually drawn and cycled through in order to maintain visual momentum

This was one of my major design philosophies during the development of this project: I didn’t want there to be a single moment in the 59 seconds where nothing was moving. I wanted my video to grab the viewer’s attention, and I feared that losing momentum in the visual movement would cause me to lose the viewer’s interest. The song LACool by DJ Grumble came on my Spotify radio coincidentally right when I was listening over the audio for the section I was editing, and I thought it fit so well I bought it from iTunes on the spot and edited it in.

I finished my video on Monday, March 26th, turned it in to the Digital Media Lab, stumbled back to my dorm, and went to bed at 6:00 PM by accident.

The Video

(Above) The final video submission 

The winner wouldn’t be announced until Wednesday, so for two days I nervously waited until 6:00 PM on March 28th, when I sat on my bed in my dorm room refreshing the Digital Media Lab website every 7 seconds like a stalker on an ex’s Facebook page waiting for the winner to finally be posted. At 6:29 PM I got a call from an unrecognized number from Tallahassee, Florida, and almost didn’t answer because I thought it was a sales call. Turns out it was Steve Acquah, the coordinator of the Digital Media Lab, who informed me that my video won. Soon after, the Digital Media Lab Website was also updated with the announcement.

(Above) A screenshot taken of the announcement on the Digital Media Lab Website 

Thank You

Along with the raw joy and excitement came a sort of surreal disbelief. Looking back on those stressful weeks of work, it all felt like it flew by faster than I could’ve realized once I got that phone call. I’m so grateful for not only the reward but the experience. Making that video was a stressful nightmare, but it also forced me to push myself to my creative limits and challenge myself in so many ways. On a night where I would’ve probably just gone home and watched Netflix by myself, I sprinted around campus to meet up with and take photos of my friends. This project got me to get all my friends together and rent out the Vive in the DML, basically forcing me to play video games and have fun with the people I love. While the process of editing it all together drove me crazy, the journey is definitely going to be a highlight of my time at UMass. 

I’m grateful to all of my friends who modeled for me, loaned me equipment, got dinner with me while I was stressing out over editing, played Super Hot VR with me, gave me advice on my audio, pushed me to not give up, and were there to celebrate with me when I won. I’m also immensely grateful to the staff and managers of the DML for providing me with this opportunity, as well as for their compliments and praise for the work I did. This was an experience that means a lot to me and it’s one I won’t soon forget. Thank you.

Epilogue

I picked up my prize the other day at the DML (see photo above the title of this article)! Unfortunately, I have a lot of work going on, so it’s going to be locked up in a safe place until that’s done. Still, it’s not like I could use it right now if I wanted to. My gaming PC hasn’t been touched in ages (since I don’t bring it with me to college) so I’m going to need to upgrade the GPU before I can actually set up the Vive with it. It’s a good thing there isn’t a spike in demand for high-end GPUs at the moment for cryptocurrency mining, right?

(Above) A visual representation of what Bitcoin has done to the GPU market (and my life)

…Oh.

Regardless of when I can actually use the prize I won, this experience was one I’m grateful to have had. The video I made is one I’m extremely proud of, and the journey I went on to create it is one I’ll think about for years to come.

Categories
Operating System

SoFi the Robotic Fish

Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have created a Soft Robotic Fish (nicknamed SoFi) which is able to swim and blend in with real fish while observing them and gathering data. This remarkable bot is not only cool and adorable; it also paves the way for the future of lifelike artificial intelligence.

Think about it: we have already reached the point where we can create a robotic fish capable of fooling real fish into thinking that it's one of them. Granted, fish aren't the smartest creatures on this planet, but they can usually tell when something is out of the ordinary and quickly swim away. SoFi, however, seems to be accepted as one of their own. How long will it take for us to create a robot that can fool more intelligent species? Specifically, how long will it be until soft robotic humans are roaming the streets as if they weren't born yesterday? Perhaps more importantly, is this something that we actually want?

The benefits of a robotic animal like SoFi are obvious: it allows us to get up close and personal with these foreign species and learn more about them. This benefit of course translates to other wild animals like birds, bees, lions, etc. We humans can't swim with the fishes, roost with the birds, visit the hive with the bees, or roar with the lions, but a robot like SoFi sure can. So it makes sense to invest in this type of technology for research purposes. But when it comes to replicating humanity, things get a bit trickier. I'm pretty confident in saying that most humans in this world would not appreciate being secretly observed in their daily lives "for science." Of course, it's still hard to say whether or not this would even be possible, but the existence of SoFi and the technology behind it leads me to believe we may be closer than most of us think.

Regardless of its possibly concerning implications, SoFi is a truly amazing feat of engineering. If nothing else, these soft robots will bring an epic evolution to the nature-documentary genre. For more information about the tech behind SoFi, check out the video at the top from MITCSAIL.

Categories
Operating System

Building a Better Bracket: Beating the Odds with Machine Learning

Like most other fans of college basketball, I dedicated an unhealthy amount of time to the sport the week after Selection Sunday (March 11th): hours spent filling out brackets, researching rosters, injuries, and FiveThirtyEight's statistical predictions to fine-tune my perfect bracket, followed by around 30 games watched over the course of four days. I made it a full six hours into the tournament before my whole bracket busted. The three-punch combo of Buffalo (13) over Arizona (4), Loyola Chicago (11) beating Miami (6), and, most amazingly, the UMBC Retrievers (16) crushing the overall one-seed and tournament favorite, UVA, spelled the end for my predictions. After these three upsets, everyone's brackets were shattered. The ESPN leaderboards looked like a post-war battlefield. No one was safe.

The UMBC good boys became the only 16th seed to beat a 1st seed in NCAA tournament history

The odds against picking a perfect bracket are astronomical. Estimates range from 1 in 9.2 quintillion (if you treat all 63 games as coin flips) to 1 in 128 billion for a knowledgeable picker. Warren Buffett offers $1 million a year for life to any Berkshire Hathaway employee who picks a perfect bracket; needless to say, no one has been able to cash in on the prize. Picking a perfect bracket is nearly impossible, and is (in)famous as one of the most unlikely feats in gambling.
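
Where does the 9.2 quintillion figure come from? With 63 games and two possible winners each, a one-line Python sanity check does the math:

print(2 ** 63)  # 9223372036854775808 – about 9.2 quintillion possible brackets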

The Yin and Yang of March Madness

To make the chances of building a perfect bracket somewhat feasible, a competition has been set up to see who can beat the odds with machine learning. Hosted by Kaggle, an online competition platform for modeling and analytics that was purchased by Google's parent company, Alphabet, the competition has entrants build models that predict which team will win each game based on prior data. A model that predicts a winner correctly with 99% confidence scores better than one that does so with 95% confidence, and so on; the prize is $100,000, split among the teams that made the top 3 brackets. Entrants are provided with the results of every men's and women's tournament game since 1985, the year the tournament first expanded to 64 teams, as well as every play in the tournament since 2009. Despite all this data, prediction remains very hard: in the five years the competition has run, the best bracket got 39 games right. Unquantifiable factors, such as hot streaks and team chemistry, play a large role in the difficulty, so it looks like we're still years away from having our computers pick the perfect bracket.
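
To make that confidence-weighted scoring concrete, here is a minimal Python sketch of log loss, the kind of metric such competitions use (a sketch under my own assumptions: the exact Kaggle formula may differ in its details, and the probabilities below are invented for illustration):

import math

def log_loss(predictions):
    # predictions: list of (predicted win probability, actual outcome) pairs.
    # Lower is better: confident correct picks add almost nothing,
    # while confident wrong picks are punished severely.
    total = 0.0
    for prob, won in predictions:
        total += math.log(prob) if won else math.log(1.0 - prob)
    return -total / len(predictions)

print(log_loss([(0.99, True)]))   # ~0.01 – 99% confident and right
print(log_loss([(0.95, True)]))   # ~0.05 – 95% confident and right scores worse
print(log_loss([(0.99, False)]))  # ~4.61 – 99% confident and wrong is disastrous

Under a rule like this, a model that hedged on UVA would have weathered the UMBC upset far better than one that picked the one-seed with near-certainty.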

Categories
Operating System

[Sidebar] How to Resize a VirtualBox .vdi

Congratulations! You've made a VirtualBox VM of your favorite Linux distro. But now you want to download a picture of your cat, only to find that you've run out of disk space.
Image: habrahabr.ru

Rather than free up space by deleting the other pics of Snuffles, you decide you'd rather just give the virtual machine more disk space. But you'll quickly find that Oracle has not made this super easy to do. The process isn't obvious, but it is simple if you follow these steps:

Open the Command Prompt on your Windows machine. (Open Start and type cmd)

Then navigate to your VirtualBox installation folder. Its default location is C:\Program Files\Oracle\VirtualBox\

Once there, type this command to resize the .vdi file:

VBoxManage modifymedium LOCATION --resize SIZE

Replace LOCATION with the absolute file path to your .vdi image (just drag the .vdi file from File Explorer into your cmd window) and replace SIZE with the new size you want, measured in MB (1 GB = 1000 MB). On older versions of VirtualBox, the command verb is modifyhd instead of modifymedium.
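
For example, to grow a hypothetical Ubuntu VM's disk to roughly 50 GB (this path is purely illustrative; yours will differ):

VBoxManage modifymedium "C:\Users\you\VirtualBox VMs\Ubuntu\Ubuntu.vdi" --resize 50000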

Now your .vdi is resized, but the new space is still unallocated inside the virtual machine, so you'll need to resize the partition as well. To do this, download GParted Live and make a new virtual machine that boots from the GParted Live ISO. It will simulate a live-CD boot where you can modify your virtual partition.

If your filesystem is ext4, like mine was when I did this, you'll need to delete the linux-swap partition located in between your main partition and the unallocated space. Make sure you leave at least 4 GB of unallocated space so that you can add the linux-swap partition back later.

After you’ve resized your partition, you’ll be done. Boot into the virtual machine as normal and you’ll notice you have more space for Snuffles.

Image: wideopenpets.com
Categories
Operating System

Is Artificial Intelligence like J.A.R.V.I.S. Possible?

If you are a fan of Marvel Comics or the Marvel Cinematic Universe, you are likely aware of J.A.R.V.I.S., Tony Stark's personal artificial intelligence (AI) program. J.A.R.V.I.S. helps Tony Stark reach his full potential as Iron Man by running operations and diagnostics on the Iron Man suit, as well as gathering information and running simulations. J.A.R.V.I.S. also has a distinct personality, sometimes displaying sarcasm and wit, no doubt programmed in by Stark. With artificial intelligence and machine learning developing at a breakneck pace, it's worth asking if an AI like J.A.R.V.I.S. is even possible.

One of the most prominent AI programs in use right now is IBM Watson. Watson made its debut in 2011 as a contestant on Jeopardy! in a special broadcast against two of the show's best contestants – and won. Commercial use of Watson began in 2013. Watson is now being used for a variety of functions, from tracking elevator use in support of maintenance efforts to planning irrigation systems for farms. (For more stories about Watson's many jobs, look here.)

As far as hardware is concerned, Watson relies on a cluster of 90 IBM Power 750 servers with 3.5 GHz processors and a combined 16 terabytes of RAM. This allows Watson to process the equivalent of one million books per second. The estimated cost of Watson's hardware was around 3 million dollars.
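
As a back-of-envelope check on that claim (assuming an average plain-text book is roughly half a megabyte, which is my assumption, not an IBM spec):

# One million ~0.5 MB books per second works out to roughly 500 GB/s.
book_mb = 0.5                  # assumed size of a plain-text book, in MB
books_per_second = 1_000_000
print(f"{book_mb * books_per_second / 1000:,.0f} GB/s")  # -> 500 GB/s

A throughput like that is only achievable because everything lives in RAM, which leads to the next point.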

When Watson competed on Jeopardy!, all of the information it had access to had to be stored in the machine's RAM, because it would not have been able to retrieve answers within a competitive time frame from the machine's hard drives. Since Watson's bout on Jeopardy!, solid-state drives have emerged, which allow frequently used information to be accessed at a faster rate than from a standard hard drive. With further advances in storage technology, information could be accessed at even faster rates.

IBM's Watson appears to be a step in the direction of an AI similar to J.A.R.V.I.S. With quantum computing as an expanding frontier, processing speeds could become even faster, making something like J.A.R.V.I.S. more attainable. Personally, I believe such a feat is possible, and could even be achieved in our lifetime.

Categories
Operating System

Which Computer Is Right for You: A Beginner’s Guide

People always ask me, "Are Macs better than PCs?" or "What kind of computer should I buy?" so I'm here to clear up some confusion and misconceptions about computers and hopefully help you find the computer best suited to your purposes.

Computers can generally be separated into two large operating system groups: MacOS and Windows. There are also Linux users (with distributions like Ubuntu), but the majority of consumers will never use these operating systems, so I'll focus on the big two for this article. Computers can also be separated into two physical categories: desktops and laptops.

Desktops, as the name suggests, sit on top of (or under) your desk, and are great for a number of reasons. Firstly, they are generally the most cost-efficient. With the ability to custom-build a desktop, you're able to get the best bang for your buck, and even if you choose to buy a prebuilt, the cost differences nowadays between prebuilts and custom builds are small. Desktops are also very powerful machines with the best performance, as they aren't constrained by physical size the way laptops are. Many laptop parts have to be altered to fit the limited space, but desktops have as much space as the case has to offer. More space within the case means bigger, more powerful parts, better ventilation for cooling, etc. Additionally, desktops are generally more future-proof: if a hard drive runs out of space, you can buy and install another; if your graphics card can't support modern games anymore, you can order one that fits your budget and just replace the old one. Overall, desktops are ideal… as long as you don't want to move them around a lot. A full setup consisting of a tower, monitor, and peripherals can be very heavy and inconvenient to move, not including the many cables required to connect everything together. If you are looking for a good machine that will last the years, and don't need to move it around often, then you might be looking for a desktop. I will go over the details of operating systems further down.

If you're looking for a portable machine, then you're looking for a laptop. But here too there's a lot of variety. You have Chromebooks, which are incredibly fast, light, and (importantly) cheap machines that use ChromeOS for very basic functionality. Unlike other OSes, ChromeOS is designed to be used while connected to the internet, with documents and files in the cloud, and its applications are limited to what's available in the Chrome store. If all you need a laptop to do is use the internet and edit things on Google Drive, then a Chromebook might be perfect for you.

Next are your middle-of-the-line to high-end laptops, which make up the majority of laptops. This is where you'll find your MacBooks and your ultrabooks: the all-around laptops suited to most functions. This is what most people will prefer, as they can do the most while retaining portability. There is also a ton of variety within this group: touch screens, super-bendable hinges, different I/O ports, etc. Here, what it's going to come down to is personal preference. There are too many options to write about, but I encourage everyone to try out a number of different computers before deciding which they like best.

Lastly, I'd like to discuss operating systems, primarily MacOS and Windows. I did briefly mention ChromeOS, but that's only really for Chromebooks and it's a very basic system. With MacOS, what people like is the convenience. Apple has created an "ecosystem" of devices: if you are a part of this ecosystem, everything works in harmony. MacOS is very user-friendly and easy to pick up, and if you own an iPhone, an Apple Watch, an iPad, or any iOS device, you can connect it to your computer and use everything in sync. iMessage, Photos, and iCloud are all there to keep your devices connected and make it super easy to swap between them. Windows doesn't have an "ecosystem," but what it lacks in user-friendliness it makes up for in versatility and user power. Windows is highly customizable: you have a lot more freedom when it comes to making changes. This comes back to the device it's on. Mac devices have top-of-the-line build quality. They're constructed beautifully and are extremely good at what they do, but they come with a high price tag, and they are built in a way that discourages user modification like adding storage or memory. Windows laptops range from $150 well into the thousands for gaming machines, whereas the common MacBooks start near $1,000. If you're looking for gaming, Windows is also the way to go; if you aren't choosing a desktop, there are many gaming laptops for sale. Although you won't find the same performance per dollar as a desktop, they are laptops, and portable.

With this, hopefully you have everything you need to pick the perfect computer the next time you're shopping for one.

Categories
Operating System

Should smart watches be allowed in professional sports?

With the advent of smart technology, the relative ease with which we access information is changing. The smart watch puts much of what a person does on their phone onto their wrist, and onto the internet. While we make these technological advances, some things remain constant, like professional sports. With the exception of some minor rule changes here and there, many of the most-watched games in the U.S. have remained the same. Recently, the Red Sox allegedly used smart watches to steal signs from the Yankees, which raises an important question: should smart watches be allowed in professional sports?

Most smart watches can monitor the wearer's heart rate. This data could be useful in monitoring players' condition so the coach knows when to make substitutions, but it could also be used for medical research. If every professional athlete wore a smart device during games and workouts, the amount of data that could be made available to medical professionals in one year would be astounding. This data could lead to a better understanding than we have now of the human body at work.

While wearing smart watches in professional sports holds potential societal gain, the reality of the situation is not as optimistic. Many sports involve physical contact, which brings a risk of either the smart watch breaking or of increased injury from contact with a smart watch on a player's wrist. There is also an increased risk of cheating if players and coaches can view text messages on their wrists.

In my opinion, sports would be better off without smart technology becoming part of any game. The beauty of sporting matches is that they are meant to display the raw athletic abilities of players in competition. Adding smart technology to the game could lead to records that have asterisks by them, similar to home run records set by players who used steroids.

Categories
Operating System

An Extensive Guide to Keyboard Shortcuts

In this day and age, it's safe to assume that most of you know a thing or two about how to use a computer, one of those things being keyboard shortcuts. Keyboard shortcuts, for the uninitiated, are really handy combinations of buttons, usually two or three, that perform functions which would otherwise take somewhat longer to do manually with just the mouse. For example, highlighting a piece of text and pressing Control (CTRL) + C copies the text to your clipboard, and subsequently pressing CTRL + V pastes that copied text wherever you're entering text.

Most people tend to know copy and paste, as well as a handful of other shortcuts, but beyond those lies an abundance of shortcuts that can save time and make your computing experience that much more convenient. In this article, I'll go over some commonly known keyboard shortcuts as well as several that are most likely not very well known.

These keyboard shortcuts are primarily for Windows, although some also apply on Mac, usually substituting the Command key for CTRL.

General shortcuts:

CTRL + C – As mentioned above, copies any highlighted text to the clipboard.

CTRL + V – Also mentioned above, pastes any copied text into any active text field.

CTRL + X – Cuts any highlighted text; as the wording suggests, instead of just copying the text, it will “cut” it and remove it from the text field. Essentially rather than copying, the text will be moved to the clipboard instead.

CTRL + Z – Undo an action. An action can be just about anything; since this is a fairly universal shortcut, an action can be what you last typed in Microsoft Word, a line/shape drawn in Photoshop, or just any “thing” previously done in an application.

CTRL + Y – Redo an action. For example, if you changed your mind about undoing the last action, you can use this shortcut to bring that back.

CTRL + A – Selects all items/text in a document or window, i.e. highlights them.

CTRL + D – Deletes the selected file and moves it to the Recycle Bin (in File Explorer).

CTRL + R – Refreshes the active window. Generally you’ll only use this in the context of Internet browsers. Can also be done with F5.

CTRL + Right Arrow – Moves the cursor to the beginning of the next word.

CTRL + Left Arrow – Moves the cursor to the beginning of the previous word.

CTRL + Down Arrow – Moves the cursor to the beginning of the next paragraph.

CTRL + Up Arrow – Moves the cursor to the beginning of the previous paragraph.

Alt + Tab – Displays all open applications; while holding down Alt, pressing Tab cycles through which application to switch to, from left to right.

CTRL + Alt + Tab – Displays all open applications. Using the arrow keys and Enter, you can switch to another application.

CTRL + Esc – Opens the Start Menu, can also be done with Windows Key.

Shift + Any Arrow Key – When editing text, selects text character by character in the direction corresponding to the arrow key.

CTRL + Shift + Any arrow key – When editing text, selects a block of text, i.e. a word.

CTRL + Shift + Esc – Opens Task Manager directly.

Alt + F4 – Close the active item or exit the active application.

CTRL + F4 – In applications that are full screen and let you have multiple documents open, closes the active document, instead of the entire application.

Alt + Enter – Displays the properties for a selected file.

Alt + Left Arrow – Go back, usually in the context of Internet browsers.

Alt + Right Arrow – Go forward, same as above.

Shift + Delete – Deletes a selected file without moving it to the Recycle Bin first, i.e. deletes it permanently.

Windows Logo Key Shortcuts:

Windows logo key + D – Displays and hides the desktop.

Windows logo key + E – Opens File Explorer.

Windows logo key + I – Opens Windows Settings.

Windows logo key + L – Locks your PC or switches accounts.

Windows logo key + M – Minimizes all open windows/applications.

Windows logo key + Shift + M – Restores minimized windows/applications on the desktop.

Windows logo key + P – When connecting your computer to a projector or second monitor, opens a menu to select how Windows is displayed on the secondary display. You can select from PC screen only (uses only the computer's screen), Duplicate (mirrors your computer's screen on the secondary display), Extend (extends the desktop, letting you move applications/windows to the secondary display while keeping the primary screen's content separate), and Second screen only (uses only the secondary display).

Windows logo key + R – Opens the Run dialog box. Typing and entering an application's file name will open that file/application, which is useful in troubleshooting scenarios.

Windows logo key + T – Cycles through open applications on the taskbar; pressing Enter will switch to the selected application.

Windows logo key + Comma (,) – Temporarily peeks at the desktop.

Windows logo key + Pause Break – Displays the System Properties window in Control Panel. You can find useful information here about your computer, such as the version of Windows you are running and general info about the hardware.

Windows logo key + Tab – Opens Task View, which is similar to CTRL + Alt + Tab.

Windows logo key + Up/Down – Maximizes or minimizes a window/application, respectively.

Windows logo key + Left/Right – Snaps a window to the left or right half of the screen.

Windows logo key + Shift + Left/Right – When you have more than one monitor, moves a window/application from one monitor to another.

Windows logo key + Space bar – When you have more than one keyboard/input method installed (usually for typing in different languages), switches between installed input methods.

That just about covers the most common keyboard shortcuts you can use on a Windows computer. The list goes on, however: there are many more keyboard shortcuts and functions you can perform, and many applications add their own shortcuts on top of these when they are in use.

You might end up never using half of the keyboard shortcuts on this list, much less all keyboard shortcuts in general, favoring the good old-fashioned way of using the mouse and clicking, and that's fine. The amount of time saved using a keyboard shortcut versus clicking your way through things is arguably negligible, and most of the time it's just a quality-of-life preference. But depending on how you use your computer and what kind of work you do on it, chances are picking up some of these keyboard shortcuts could save you a lot of frustration down the line.

Categories
Operating System

How Do Games Get on Steam?

While it may seem like a strange question to ask, there is an interesting history behind the largest storefront for video games, online or brick-and-mortar. The control Steam exerts over its market has wide-ranging implications for both consumers and developers. The availability of indie games is a relatively recent development in Steam's history, as are the current trends pushing the near-exponential growth of the Steam library.

Back when Steam launched, the library selection was very limited, relying on the IP (Intellectual Property) that Valve (Steam's parent company) had built up over the previous half-decade. For the first 2 years of Steam's life you could only find games created and published by Valve (Half-Life and Counter-Strike 1.6 being the most notable), but in late 2005 that changed as Steam inked a deal with Strategy First, a small Canadian publisher, and third-party games started flowing onto the service. For the next 5 years the Steam library remained fairly limited, as generally only large or influential publishers were able to get their games on Steam. This created tension in the Steam community, as many people wanted indie games to be featured and make their way onto the storefront. The tension broke when Steam agreed to allow indie games on the platform.

By 2010, the issues were obvious: Steam had no way to discern which indie games people wanted and which were not suitable for the platform. Two years later, in response to these concerns, Steam implemented the Greenlight system, designed to let the community vote quality indie games onto Steam. Initially, Greenlight was received positively. Black Mesa (a popular mod that ported Valve's original Half-Life to the Half-Life 2 engine) and other quality releases inspired confidence. All seemed good. Fast forward to late 2015: several disturbing trends had begun to emerge.

An enterprising "developer" realized that you could buy assets from the Unity engine's asset store and, with minimal effort, create a "game" that could pass Greenlight. These "games" were often just stock Unity assets with AI zombies that would slowly follow you around, providing little to no engaging content and hardly qualifying as games at all. They should never have made it through Greenlight, but the developers got creative in getting people to vote for them. Some gave away "review" keys in exchange for a vote or a good review on their page, while others promised actual monetary profit through Steam's trading-card economy.

Asset flips are just one example of how Greenlight was exploited (not to mention the cartel-like behavior of some of the asset flippers). By 2016, Steam was in full damage control as the effects of Greenlight became apparent: the curated garden that once was Steam had become overgrown, flooded with sub-par games. So overabundant was the flow of content that nearly 40% of Steam's entire library was released in 2016 alone. Thirteen years of content control and managing customers' expectations were nullified in the span of a year. (The uptick began in 2014, but 2016 was the real breaking point.)

Steam, now in damage-control mode, decided to abandon content control in favor of an open marketplace that uses algorithms to recommend games to consumers. This "fix" has only hidden the glut of sub-par games that now make up most of the Steam library. And while an algorithm can recommend games, it often ends up recommending the same types of games, creating an echo-chamber effect: you are only shown the games you already express interest in, not those that might appeal to you the most.

In 2017, Steam abandoned Greenlight in favor of Steam Direct, an updated method of allowing developers to publish games, this time without community interaction. Steam re-assumed the mantle of gatekeeper, taking back responsibility for quality control, albeit with standards so low one can hardly call it vetting. (Some approved games don't even include an .exe in the download.)


Categories
Mac OSX

Restoring a MacBook with an Erased Hard Drive


If you're anything like me, you will (or already have) accidentally wiped your MacBook's SSD. It may seem like you just bricked your MacBook, but luckily there is a remedy.

The way forward is the built-in Internet Recovery, which can be triggered on startup by holding "cmd + R".

There is a bit of a catch: if you do this straight away, there is a good chance that the Mac will get stuck and throw up an error – error -3001F, in my personal experience. This tends to happen because the Mac assumes it is already connected to Wi-Fi (when it's not) and errors out after failing to reach Apple's servers. If your MacBook instead lets you select a Wi-Fi network during this process, you're in the clear and can skip the next paragraph.

Luckily there is another way to connect: Apple's boot menu. To get there, press the power button and, very soon after, hold the Option key. Eventually you will see a screen where you can pick a Wi-Fi network.

Unfortunately, if you're at UMass, eduroam (or UMASS) won't work; however, you can easily connect to any typical home Wi-Fi network or a mobile hotspot (although you should make sure you have unlimited data first).

Once you're connected, hit "cmd + R" from that boot screen. Do not restart the computer. If you were able to connect without the boot menu, you should already be in Internet Recovery and do not need to press anything.

Now that the Wi-Fi is connected, you need to wait. Eventually you will see the MacBook's recovery tools. The first thing to do is open Disk Utility, select your MacBook's hard drive, and hit Erase – this may seem redundant, but I'll explain in a moment. Then go back to the main recovery menu by closing Disk Utility.

Unless you created a "time machine" backup, you'll want to pick the Reinstall Mac OS X option. After clicking through for a bit, you will see a page asking you to select a drive. If you properly erased the hard drive a few moments before, you will be able to select it and continue on. If you hadn't erased the drive again, there is a good chance no drive will appear in the selection. To fix that, erase the drive again with the Disk Utility mentioned earlier – the one catch is that you can only get back to the recovery tools by restarting the computer and starting Internet Recovery again, which, as you may have noticed, is a slow process.

Depending on the age of your MacBook, there is a solid chance you will end up with an old version of Mac OS. If you have two-step verification enabled, you may have issues updating to the latest Mac OS version.

In my own experience, OS X Mavericks will not let you log in to the App Store if you have two-step verification – but I would recommend trying; your luck could be better than mine. The reason we need the App Store is that it is required to upgrade to High Sierra, the present version of OS X.

If you were unable to log in, there is a workaround: OS X Mavericks will let you make a new Apple ID, which luckily is free. Since you will be creating this account purely for the sake of updating the MacBook, I wouldn't recommend using your primary email or adding any form of payment to the account.

Once you're logged in, you should be free to update, and after some more loading screens you will have a fully up-to-date MacBook. The last thing remaining (if you had to create a new Apple ID) is to log out of the App Store and log in with your personal Apple ID.

Categories
Operating System

Smartphone Fingerprint Scanners

The next generation of smartphone security is here! Mostly transparent fingerprint sensors can now be embedded behind or under the screen. There has been a huge push in phones this year to make the bezels as tiny as possible, which of course means finding a new place for the fingerprint scanner. Many phones have moved it to the back. LG was the first to do it, and it was relatively well executed. Samsung followed suit, and many complain theirs is too hard to tell apart from the camera bump. The Pixel and Pixel 2 have one on the back that works well and supports gestures! To minimize the bezel, the iPhone X removed the scanner altogether, and instead hid a plethora of sensors inside its iconic notch to usher in the era of Face ID.


But now two Android phones are being released that place the fingerprint scanner, almost completely invisibly, under the screen. The first, the Vivo X20 Plus UD, won an award for best in show at CES 2018. The sensor is a small pad where a traditional scanner would be. Any time that area of the phone is touched, the screen above the sensor flashes brightly, and the sensor reads the light reflected off of your finger. Check it out here:

Vivo's concept phone takes the idea a bit further, with the fingerprint scanner occupying a larger pad, allowing you to touch anywhere on roughly a third of the screen. The concept phone also pushes the bezel-less design to another level by moving the selfie cam onto a piece of plastic that slides in and out of the top of the phone. Is this the future?


Limitations:

It's a bit "slow" right now (it takes about a second), but the cool animation should be enough to hold you over. Keep in mind this is also the first generation of the product; it will only get quicker with time.

The phone needs to have an OLED screen. While not uncommon, many phones, iPhones included, have LCD displays. OLED screens allow individual pixels to turn on and off, rather than lighting the entire panel with a backlight as LCD displays require.

And finally, yes, at very specific lighting conditions and viewing angles, you can see the sensor through the screen.

Categories
Operating System

Microcontrollers and the Maker Movement

The maker movement is a growing trend in the DIY world that involves using microcontroller technologies such as Arduino to develop and create small- or large-scale projects such as home automation, gadgets, robotics, and electronic devices. No prior knowledge is needed!

Projects vary from home automation to robotics but can be applied to pretty much anything; automatic door locks, phone-controlled sprinklers, and even portable chargers are just a few examples of the endless possibilities. With all the information available over the internet, virtually anyone can create simple projects without deep knowledge of electricity or programming. Most products come preconfigured and open source, and all the documentation is available online. The movement brings collaboration to the front line of development and projects the work you do inside a computer onto the outside, physical world.

Unlike in the past, starting your own project is easy and highly accessible. No longer does the mystique of engineering and computer science wizardry prevent you from making your own garage opener. The increase in demand and the growing interest in DIY projects drove up manufacturing and brought down prices: cables, resistors, and transistors sell for less than a dollar each, and microcontrollers such as the Arduino Uno can cost as little as $3. Start your own project for less than $5 and make that pocket change your next adventure!

Microcontrollers have also been heavily integrated into hackathons in recent years. A hackathon is a design-sprint-like event, usually lasting two to three days, in which people collaborate intensively on software projects within the time limit. These days hackathons also include hardware competition categories such as robotics and home automation. So if you're looking for a way to win your first hackathon, or hoping to land an internship through one, the microcontroller categories are somewhat easier to compete in, since you aren't up against years' worth of specialized knowledge.

Furthermore, since so many people began working on projects, a community was needed to support and help people out, so in addition to the online community, physical hubs started to pop up. These are called "makerspaces": a makerspace is an environment that provides individuals with the tools and knowledge to excel at their task and complete their goal. Even here at UMass Amherst, work is in progress to build a makerspace where students can come and get introduced to the topic.

In conclusion, the maker movement combined with Arduino technologies creates endless possibilities for projects and provides a new, visual way for anyone to learn physics, programming, and circuit design. It is a way for people to express their creativity.