Handling Media Files in MATLAB

You might be wondering: does anyone love anything as much as I love MATLAB?  I get it, another MATLAB article… Well, this one is pretty cool.  Handling media files in MATLAB is not only extremely useful but also rewarding.  For the programming enthusiast, it can be discouraging to learn about data structures and search algorithms and have only text documents and large arrays of numbers to apply that knowledge to.  Learning how to handle media files lets you see how computation affects pictures, and hear how it affects music.  Paired with some of the knowledge from my last two articles, one can begin to see how a variety of media-processing tools can be created using MATLAB.

 

Audio

Audio is, perhaps, the simplest place to start.  MathWorks provides two built-in functions for handling audio: audioread() & audiowrite().  As the names suggest, audioread can read an audio file from your machine and turn it into a matrix; audiowrite can take a matrix and write it to your computer as a new audio file.  Both functions handle most conventional audio file formats (WAV, FLAC, M4A, etc.); however, there is an asymmetry between the two functions: while audioread can read MP3 files, audiowrite cannot write them.  Still, there are a number of good, free MP3 encoders out there that can turn your WAV or FLAC file into an MP3 after you've created it.

So let's get into some details… audioread needs only one input argument (it can be used with more, but for our purposes, one is enough): the filename.  Please note, filename here includes the directory too (C:\TheDirectory\TheFile.wav).  If you want to select the file off your computer interactively, you can use uigetfile for this.

The audioread function has two output arguments: the matrix of samples from the audio file & the sample rate.  I would encourage the reader to save both, since the sample rate will prove to be important in basically every useful process you could perform on the audio.  Sample values in the audio matrix are represented by doubles and are normalized to the range -1 to 1.
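To make that concrete, here is a minimal sketch (the file path is hypothetical; substitute your own):

% Read an audio file into a matrix of samples (one column per channel)
[y, fs] = audioread('C:\TheDirectory\TheFile.wav');

% Or let the user pick the file with a dialog box instead
[fname, fpath] = uigetfile('*.wav;*.flac;*.mp3', 'Pick an audio file');
[y, fs] = audioread(fullfile(fpath, fname));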

Once you have the audio file read into MATLAB, you can do a whole host of things to it.  MATLAB has built-in filtering and other digital signal processing tools that you can use to modify the audio.  You can also plot the audio's magnitude as well as its frequency content using the fft() function.  The plot shown below is of the frequency content of All Star by Smash Mouth.

Once you're finished processing the audio, you can write it back to a file on your computer.  This is done using the audiowrite() function.  The input arguments to audiowrite are the filename, the audio matrix in MATLAB, and the sample rate.  Once again, the filename should include the directory you want to save in.  This time, the filename should also include the file extension (.wav, .ogg, .flac, .m4a, .mp4).  With only this information, MATLAB will produce a usable audio file that can then be played through any of your standard media players.
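A rough sketch of both steps (assuming y and fs came from audioread as above, and the output path is hypothetical):

% Plot the magnitude of the frequency content of the first channel
N = length(y(:,1));
Y = fft(y(:,1));
f = (0:N-1) * fs / N;                          % frequency axis in Hz
plot(f(1:floor(N/2)), abs(Y(1:floor(N/2))))    % plot up to the Nyquist frequency
xlabel('Frequency (Hz)')
ylabel('Magnitude')

% Write the processed audio back out to a new file
audiowrite('C:\TheDirectory\TheNewFile.wav', y, fs)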

The audiowrite function also allows some more parameters to be specified when creating your audio file.  Name-value pairs can be passed as arguments (after the filename, matrix, and sample rate) to set a number of different options.  For example, 'BitsPerSample' lets you specify the bit depth of the output file (the default is 16 bits, the standard for audio CDs).  'BitRate' lets you specify the amount of compression if you're creating an .m4a or .mp4 file.  You can also use these arguments to embed song titles and artist names for use with software like iTunes.
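For example (the file paths and tag values here are just illustrative):

% Write a 24-bit FLAC with title and artist metadata
audiowrite('C:\TheDirectory\Song.flac', y, fs, 'BitsPerSample', 24, 'Title', 'All Star', 'Artist', 'Smash Mouth')

% Write a compressed .m4a at a specified bit rate (in kbit/s)
audiowrite('C:\TheDirectory\Song.m4a', y, fs, 'BitRate', 256)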

 

Images

Yes, MATLAB can also do pictures.  There are two functions associated with handling images: imread() and imwrite().  I think you can surmise from the names of these two functions which one reads in images and which one writes them out.  With images, samples exist in space rather than in time, so there is no sample rate to worry about.  Images do still have a bit depth and, in my own experience, it tends to differ a lot more from image to image than it does for audio files.

When you import an image into MATLAB, the image is represented by a three-dimensional matrix.  For each color channel (red, green, and blue), there is a two-dimensional matrix with the same vertical and horizontal resolution as your photo.  When you display the image, the three channels are combined to produce a full-color image.

By the way, if you want to display an image in MATLAB, use the image() function.
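A quick sketch of reading, displaying, and poking at the channels (file name hypothetical):

% Read an image into an m-by-n-by-3 matrix (one layer per color channel)
img = imread('C:\TheDirectory\ThePicture.jpg');

% Display the full-color image
image(img)
axis image          % keep the pixels square

% Keep only the red channel by zeroing out green and blue
redOnly = img;
redOnly(:,:,2:3) = 0;
figure
image(redOnly)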

MathWorks provides a good deal of image-processing features built into MATLAB, so if you are interested in doing some crazy stuff to your pictures, you're covered!

Is Wireless Charging the Future?

The idea of powering devices wirelessly has been around for more than a century, ever since Nikola Tesla built a tower that could light up lamps about 2 km away using electromagnetic induction. Wireless charging devices can be traced back to electric toothbrushes that used a relatively primitive form of inductive charging, decades before Nokia announced integrated inductive charging in its breakthrough Lumia 920 model in 2012. This marked the rise of the Qi standard, which at that time was still contending for the much-coveted universal/international standard spot. Now it seems like wireless charging is right around the corner, and with Apple and Google launching Qi-compatible phones, the message is clear and simple: 'Wireless is the future, and the future is here.' Or is it?

Qi (Mandarin for 'material energy' or 'inner strength') is a near-field energy transfer technology that works on the principle of electromagnetic induction. Simply put, the base station (charging mat, pad, or dock) has a transmitting coil which, when connected to an active power source, induces a current in the receiver coil in the phone, which in turn charges the battery. In its early stages, Qi used 'guided positioning', which required the device to be placed in a certain alignment on the base station. With some rapid developments over time, this has been effectively replaced by 'free positioning', which is standard in almost all recent Qi charging devices. There's a catch here: the device must have a back surface that the field can pass through. Glass is currently the most viable option, and most Qi-compatible smartphones have glass backs. This has its implications though, the obvious one being significantly reduced durability.

Come to think of it, the fact that the device has to be within at most an inch of the base station in order to charge sounds counterproductive.  Besides, if the base needs to be connected to a power source, that's still one cable. So… what's the point?  Currently, the mobility part is more of a grey area, since this technology is still in its transitional phase. The majority of Qi-compatible smartphones still come with a traditional adapter by default, and the wireless dock needs to be purchased separately. There are several other issues with near-field charging that need to be addressed, such as:

  •  Longer charging times
  •  Reduced efficiency (around 60-70%)
  •  High manufacturing costs
  •  Higher energy consumption, which could lead to increased electricity costs
  •  Residual electromagnetic waves, a potential health risk
  •  Devices heating up faster than with traditional adapters, wasting energy as heat
  •  A higher probability of software updates causing bugs

Over the past decade, people have come up with interesting solutions for this, including a charging phone case and even a battery-less phone powered by ambient radio waves and Wi-Fi signals. But the most promising option is the startup Pi, which hopes to fix the range issue by allowing devices to pair with a charging pad within a range of a foot in any direction. The concept is still in its experimental stages, and it's going to be a while before mid-to-long-range wireless charging technology becomes a pervasive standard for smartphones and other IoT devices. Assuming further progress is made down that road, wireless charging hotspots could be a possibility in the not-very-distant future.

Despite all its shortcomings, the Qi standard has had considerable success in the market, and it looks like it's here to stay for the next few years. A green light from both Apple and Google has given it the necessary boost towards profitability, and wireless pads are gradually finding their way into cafes, libraries, restaurants, airports, etc. Furniture retailers such as Ikea have even started manufacturing desks and tables with inductive charging pads/surfaces built in.  However, switching completely to, and relying solely on, inductive wireless charging wouldn't be the most practical option right now unless upgrades are made that address the major concerns surrounding it. Going fully wireless would mean remodelling the very foundations of conventional means of transmitting electricity. In short, the current Qi standard is not the endgame; it is better seen as a stepping stone towards mid-to-long-range charging hotspots.

Is AI journalism the future?

Artificial intelligence in news media is being used in many new ways, from speeding up research to accumulating and cross-referencing data and beyond.
You might be wondering: how does AI do something as complex as writing the news?

AI writes the news by sifting through huge amounts of data and finding the useful parts by categorizing them. The AI tool then uses this data to train itself to imitate human writers. It also helps human reporters avoid grunt work, such as tracking scores or updating a breaking news story.
Automated journalism is everywhere, from Google News to Facebook's fake-news checker, and major publications use AI tools as well. Narrative Science's Quill, for example, puts together reports and stories based purely on raw data.  The kicker is that in one study, most people couldn't tell the difference between articles written by software and those written by real journalists.

Some in the news industry predict that 90% of articles will be written by AI within the next 10-15 years. The industry is making a huge push towards automated news generation because of the huge amounts of data we are amassing.

While some of us might be scared about machines taking over and influencing our minds, the reality couldn't be further from it. These AI tools can only write fact-based articles, which are much closer to a computer reading a list of facts to you than to a qualitative article written by a real journalist. These tools don't have the power to sway most people, and checks are being put in place to make sure they aren't used to spread "fake news".

 

Hiding in Plain Sight with Steganography

Steganography is the process of hiding one file inside another, most popularly, hiding a file within a picture. If you’re a fan of Mr. Robot you are likely already somewhat familiar with this.

Although hiding files inside pictures may seem hard, it is actually rather easy. All files at their core are just bytes, so hiding one file inside another is just a case of inserting the bytes of one file into the other.

Even though this is possible on all platforms, it is easiest to accomplish on Linux (although the following commands will probably work on macOS as well).

There are many different ways to hide different types of files, however the easiest and most versatile method is to use zip archives.

Once you've created your own zip archive (for example, zip deathstarplans.zip deathstarplans.txt), you can append it to the end of an image file, such as a png.

cat deathstarplans.zip >> r2d2.png

If you're wondering what just happened, let me explain. cat prints out the contents of a file (deathstarplans.zip in this instance). Instead of letting it print to the terminal, >> tells your shell to append that output to the end of the specified file: r2d2.png.

We could have also used a single >, however that would replace the contents of the specified file entirely. r2d2.png would then be nothing but the zip data: the image would no longer display, and the file would be easily recognized as a zip, defeating the entire purpose.

Getting the file(s) out is also easy: simply run unzip r2d2.png. unzip will throw a warning that "x extra bytes" precede the zip archive, which you can ignore; it basically just restates that we hid the zip at the end of the png file. And so the files pop out.

So why zip? tar tends to be more popular on Linux… however, tar has a problem with this method. tar does not scan through the file to find the actual start of the archive, whereas unzip does so automatically. That isn't to say it's impossible to get tar to work, it would simply require some extra work (aka scripting). However, there is another, more advanced option: steghide.

Unlike zip, steghide does not come preinstalled on most Linux distros, but it is in most default repositories, including those for Arch and Ubuntu/Linux Mint.

sudo pacman -S steghide – Arch

sudo apt install steghide – Ubuntu/Linux Mint

Steghide does have its ups and downs. One upside is that it is a lot better at hiding and can easily hide any file type. It does so by using an advanced algorithm to embed the file within the image (or audio) file without changing the look (or sound) of it. This also means that without using steghide (or at least the same mathematical approach as steghide), it is very difficult to extract the hidden files from the image.

However, there is one big drawback: steghide only supports a limited set of 'cover' file types: JPEG, BMP, WAV, and AU. But since JPEG is a common image type, it isn't a large drawback, and your cover file will not look out of place.

To hide the file, the command would be steghide embed -cf clones.jpg -ef order66.pdf

At which point steghide will prompt you to enter a password. Keep in mind that if you lose the password you will likely never recover the embedded file.

To extract the file, we run steghide extract -sf clones.jpg; assuming we enter the correct password, the hidden file is revealed.

All that being said, both methods leave the 'secret' file untouched and only hide a copy. Assuming the goal is to hide the file, the copies in the open need to be securely removed. shred is a good command for this: it overwrites the file multiple times to make it as difficult to recover as possible.

shred -z order66.pdf

or, to overwrite and then delete the file in one step:

shred -zu order66.pdf

How to Google!

Google, the world's most popular search engine, usually does a great job finding what we need with little information from us. But what about when Google isn't giving us the hits we need?
This article will go over commonly overlooked tips that will help refine your search and tell Google exactly what you're searching for. It will also go over some fun new features of Google.

 

 

1. Filter Results by Time
Users can now browse only the most recent results. After you search, a 'Tools' button will appear on the right below the search bar. If you click on 'Tools', 'Any time' and 'All results' menus will appear under the search bar. Under 'Any time' there are options to show results ranging from the past hour to the past year.

 

2. Search Websites for Specific Words
If you are searching through a specific website, you can search it for keywords. For example, to see how many times Forbes mentioned Kylie Jenner, you would simply type "Kylie Jenner site:Forbes.com".

 

3. Search Exact Phrases and Quotes
A more commonly used trick is typing quotation marks around words or phrases to tell Google to only show results containing the exact words in quotes.

 

4. Omit Certain Words Using the Minus Sign
In contrast to the last tip, adding "-word" will omit results containing the word right after the minus sign. For example, typing "Apple -iPhone" will return results about Apple while getting rid of those that mention iPhone.

 

5. Use Google as a timer
Google now has a stopwatch and timer feature that will show up just by searching "set timer". No need to mess around with apps when you can just pull it up on the internet!

 

6. Search Newspaper Archives from the 1800s
Search "google news archive search" and the first link will bring you to a page with the names of hundreds of newspapers. You can browse issues of newspapers by date and name.

 

7.  Use Google to Flip a Coin
Need help making a decision? Simply search "flip a coin" and Google will flip a virtual coin and give you an answer of heads or tails.

 

8. Search Through Google’s Other Sites
Google has other search engines for specific types of results. For example, if you’re searching for a blog use “Google Blog Search” or if you want to search for a patent use “Google Patent Search”, etc.

 

Now with these Google tips you can search Google like a pro!

October Apple Event Preview

Today Apple sent out invitations for an event on October 30th in New York City. The event, titled "There's more in the making", hints at a creative- and pro-focused event, which is further suggested by its venue, the Howard Gilman Opera House. There are several rumored devices that may be launched at this event.

The headline product rumored to be announced is an update to the iPad Pro line. The line, which is made up of two models, is rumored to gain many of the features from the iPhone X line of phones. This includes smaller bezels and Face ID to replace the fingerprint reader. The devices are also said to switch from the proprietary Lightning connector to the more standard USB-C, which would allow the iPad to connect to external displays and other accessories much more easily. The iPad and the iPhone are some of the only devices in the industry that haven't switched over to USB-C, so this transition would help the industry converge on a single port type.

There are also rumored to be new Macs at this event. The Mac mini hasn't been updated in over 4 years and is long overdue for a refresh. The new minis are rumored to be smaller and aimed more at the pro market, which makes sense given the overall theme of the event. Apple is also rumored to be introducing a new low-end Mac laptop at around the $1000 price point to replace the aging MacBook Air that Apple is still selling. This is by far Apple's highest-volume price range, so it's important to have a modern, compelling option there.

Is there anything else that Apple will announce next week? What are your predictions?

Are Self-Driving Cars Safe?

Self-driving cars promise to revolutionize driving by removing human error from the equation altogether. No more drunk or tired driving, great reductions in traffic, and even the possibility of being productive on the commute to work. But what are the consequences of relying on algorithms and hardware to accomplish this vision? Software can be hacked or tricked; electrical components can be damaged. Can we really argue that it is safer to relinquish control to a computer than to operate a motor vehicle ourselves? Ultimately, this question cannot be answered with confidence until we conduct far more testing. Data analysis is key to understanding how these vehicles will perform, and specifically how they will anticipate and react to the kind of human error which they exist to eliminate. But "the verdict isn't out yet" is hardly a satisfying answer, and for this reason I would argue that despite concerns about 'fooling' self-driving cars, this technology is safer than human drivers.

The article "Slight Street Sign Modifications Can Completely Fool Machine Learning Algorithms" details how researchers tricked a computer vision algorithm into misinterpreting street signs. Researchers achieved these results by training their classifier program with public road-sign data and then adding new entries of a modified street sign with their own labels. Essentially, the computer is "taught" how to analyze a specific image and, after numerous trial runs, will eventually be able to recognize recurring elements in specific street signs and match them with a specific designation/classifier. The article mainly serves to explore how these machines could be manipulated, but only briefly touches upon a key safety feature which would prevent real-world trickery: redundancy. Redundancy is key in any self-driving car; using GPS locations of signs and data from past users could ensure that signs are not incorrectly classified by the computer vision algorithm alone.

The article "The Long, Winding Road for Driverless Cars" focuses less on the safety ramifications of self-driving vehicles and more on how likely it is that we will see fully autonomous cars in the near future. The author touches upon the idea that selling current vehicles (such as Teslas) with self-driving features marketed as "autopilot" might be misleading, as these current systems still require a human to be attentive behind the wheel. She presents the hurdle that in order to replace human drivers, self-driving vehicles cannot just be "better" than human drivers but near perfect. While these are all valid concerns, they will only result in benefits for consumers. Mistrust of new tech means that companies and regulatory authorities will go through rigorous trials to ensure that these vehicles are ready for the road and maintain consumer confidence. We have already accepted many aspects of car automation (stopping when an object is detected, hands-free parallel parking, and lane detection) to make our lives easier, and perhaps some time in the near future self-driving cars will be fully tested and ready for mass deployment.

A Brief Introduction to Creating Functions in MATLAB

Hey wow, look at this!  I've finally rallied myself to write a blog article about something that is not digital audio!  Don't get too excited though; this is still going to be a MATLAB article and, although I am not going to get too deep into any DSP, the fundamental techniques outlined in this article can be applied to a wide range of problems.

Now, let me go on record here and say I am not much of a computer programmer.  Thus, if you are looking for a guide to functional programming in general, this is not the place for you!  However, if you are perhaps an engineering student who's learned MATLAB for school and is maybe interested in learning what this language is capable of, this is a good place to start.  Alternatively, if you are familiar with other function-centric languages (*cough cough* Python), then this article may help you start transposing your knowledge to a new language.

So What are Functions?

I am sure that, depending on who you ask, there are a lot of definitions for what a function actually is.  Functions in MATLAB more or less follow the standard signals-and-systems model of a system; this is to say they have a set of inputs and a corresponding set of outputs.  There we go, article finished, we did it!

Joking aside, there is not much more to be said about how functions are used in MATLAB; they are excellently simple.  Functions in MATLAB do provide great flexibility though because they can have as many inputs and outputs as you choose (and the number of inputs does not have to be the same as the number of outputs) and the relationship between the inputs and outputs can be whatever you want it to be.  Thus, while you can make a function that is a single-input-single-output linear-time-invariant system, you can also make literally anything else.

How to Create and Use Functions

Before you can think about functions, you'll need a MATLAB script in which to call your function(s).  If you are familiar with an object-oriented language (*cough cough* Java), the script is similar to your main method.  Below, I have included a simple script where we create two numbers and send them to a function called noahFactorial.

Simple Script Example
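(The original screenshot isn't reproduced here, but a minimal sketch consistent with the description, with arbitrary values for X and Y, might look like this:)

% Simple script: create two numbers and send them to noahFactorial
X = 5;
Y = 3;
Z = noahFactorial(X, Y);

fprintf('Inputs %d and %d gave output %d\n', X, Y, Z);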

It doesn’t really matter what noahFactorial does, the only thing that matters here is that the function has two inputs (here X and Y) and one output (Z).

Our actual call to the noahFactorial function happens on line 4.  On the same line, we also assign the output of noahFactorial to the variable Z.  Line 6 has a print statement that will print the inputs and outputs to the console along with some text.

Now looking at noahFactorial, we can see how we define and write a function.  We start by writing 'function' and then defining the function output.  Here, the output is just a single variable, but if we were to change 'output' to '[output1, output2]', our function would return two output values.

Simple Function Example
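(Again, a sketch rather than the original screenshot; the body of noahFactorial here is my own invention, since it doesn't matter what it actually computes:)

function output = noahFactorial(X, Y)
    % Hypothetical body: combine the factorials of the two inputs
    output = factorial(X) + factorial(Y);
end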

Some of you more seasoned programmers might notice that ‘output’ is not given a datatype.  This will undoubtedly make some of you feel uncomfortable but I promise it’s okay; MATLAB is pretty good at knowing what datatype something should be.  One benefit of this more laissez-faire syntax is that ‘output’ itself doesn’t even have to be a single variable.  If you can keep track of it, you can make ‘output’ a 2×1 array and treat the two values like two separate outputs.

Once we write our output, we put an equals sign down (as you might expect), write the name of our function, and put (in parentheses) the input(s) to our function.  Once again, the typing on the inputs is pretty soft so those too can be arrays or single values.

In all, a function declaration should look like:

function output = functionName(input)

or…

function [output1, output2, …, outputN] = functionName(input1, input2, …, inputM)

And just to reiterate, N and M here do not have to be the same.

Once inside our function, we can do whatever MATLAB is capable of.  Unlike in Java, return statements are not used to send anything to the output; rather, they are used to stop the function in its tracks.  Usually, I will assign an output for error messages; if something goes wrong, I will assign a value to the error output and follow that with 'return'.  Doing this sends back the error message and stops the function at the return statement.
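A small sketch of that error-output pattern (the function and message here are hypothetical):

function [result, errMsg] = safeInvert(M)
    errMsg = '';
    result = [];
    if det(M) == 0
        errMsg = 'Matrix is singular; cannot invert.';
        return;   % stop here; the outputs keep whatever values they hold
    end
    result = inv(M);
end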

So, if we don't use return statements, then how do we send values to the output?  We make sure that our function has variables with the same names as the outputs, and we assign those variables values in the function.  When the function ends, whatever values are in the output variables are sent to the output.

For example, if we define an output called X and somewhere in our function we write ‘X=5;’ and we don’t change the value of X before the function ends, the output X will have the value: 5.  If we do the same thing but make another line of code later in the function that says ‘X=6;’, then the value of X returned will be: 6.  Nice and easy.

 

…And it's that simple.  The thing I really love about functions is that they do not have to be associated with a particular script or with an object; you can just whip one up and use it.  Furthermore, if you find you need to perform some mathematical operation often, write one function and use it with as many different scripts as you want!  This flexibility allows for some insane problem-solving capability.

Once you get the hang of this, you can do all sorts of things.  Usually, when I write a program in MATLAB, I have my main script (sometimes a .fig file if I’m writing a GUI) in one folder, maybe with some assorted text and .csv files, and a whole other folder full of functions for all sorts of different things.  The ability to create functions and some good programming methodology can allow even the most novice of computer programmers to create incredibly useful programs in MATLAB.

 

NOTE: For this article, I used Sublime Text to write out the examples.  If you have never used MATLAB before and you open it for the first time and it looks completely different, don't be alarmed!  MATLAB comes pre-packaged with its own editor, which is quite good, but you can also write MATLAB code in another editor, save it as a .m file, and then open it in the MATLAB editor or run it through MATLAB later.

Password Security on GitHub

“The password you provided has been reported as compromised due to re-use of that password on another service by you or someone else. GitHub has not been compromised directly. To increase your security, please change your password as soon as possible.”

I thought this was funny when I first saw this message from GitHub, a website that has over 28 million users and 57 million repositories. I knew I was receiving this message because I used a very similar password for my IBM intern account and my personal account.

So I was telling my coworkers in IT about it, and they pointed out to me in horror – “That means they’re storing passwords in plaintext…”

Well, it turns out this isn't true. In fact, GitHub uses a fairly secure key-derivation function (KDF) called bcrypt.

Storing passwords in plaintext would be scary for obvious reasons. The responsible practices for password storage are, well, complicated. It's a combination of hashing, or the more secure key-derivation function approach, both of which basically scramble up the user's password so that not just anyone can decode it, and a careful implementation of where and how those scrambled passwords are stored. If a company isn't using proper security for user data, there's an increased risk of getting hacked. And realistically, if someone managed to snag the password to your GitHub account, they'd likely be able to get into at least a few of your other accounts as well.

If you want to learn about this more in depth, you can read this interesting thread.

The Future of the Mac

There have been two major rumors in the past month about the future of the Mac. It’s clear in the past several years that much of Apple’s development effort has been geared towards Apple’s mobile operating system, iOS, which powers iPhones and iPads. Apple has also been introducing new platforms, such as Apple Watch and HomePod. Through all of this, the Mac has been gaining features at a snail’s pace. It seems like Apple will only add features when it feels it must in order to match something it introduces first on iOS. But these recent rumors point to a Mac platform that could be revitalized.

The first major rumor is a shared development library between iOS and the Mac. What does this mean for non-developers? It means that we could very well see iOS apps such as Snapchat or Instagram on the Mac. macOS uses a development framework called AppKit, which stems back many years to when Apple bought a company called NeXT; NeXT's systems are what eventually became the modern Mac, and the underlying framework has stayed largely the same since then. Obviously, there have been changes and many additions, but it is still different from what developers use to make iOS apps for iPhones and iPads. iOS uses a framework called UIKit, which is very different in key areas. Basically, this means that developing an app for both the iPhone and the Mac takes twice the development effort. Supposedly, Apple is working on a framework for the Mac that is virtually identical to UIKit, which would let developers port their apps to the Mac with very little work. In theory, the number of apps on the Mac would increase as developers port over their iOS apps, meaning many communication apps such as Snapchat and Instagram could become usable desktop apps.

What Apple’s future macOS framework could look like.

The second major rumor is that Apple is expected to switch from Intel CPUs to its own ARM-based architecture. Apple switched to Intel CPUs in 2006 after using PowerPC chips for many years, a transition that brought an almost 2x increase in performance over the PowerPC chips they were using. In the last few years, though, Intel hasn't seen the year-over-year performance increases it used to have. Additionally, Intel has been delaying new architectures as manufacturing smaller chips gets harder and harder, which leaves Apple dependent on Intel's schedule to introduce new features. On the other hand, Apple has been producing industry-leading ARM chips for use in its iPhones and iPads. These chips are starting to benchmark at or above some of the Intel chips that Apple is using in its Mac line. Rumors say that the low-power Macs could see these new ARM-based chips as soon as 2020. The major caveat with this transition is that developers could have to re-write some of their applications for the new architecture. This means it might take some time for applications to be compatible, and some older applications might never get updated.

It's clear that Apple's focus in the past several years has been on its mobile platforms and not on its original platform, the Mac. But these two rumors show that Apple is still putting serious engineering work into its desktop operating system. These new features could lead to a thriving Mac ecosystem in the years to come.

A Reflection on Winning The Vive

By Parker Louison 

The Views and Opinions Expressed in This Article Are Those of Parker Louison and Do Not Necessarily Reflect the Official Policy or Position of UMass Amherst IT 

A Note of Intention

I want to start off this article by explaining that I’m not making this in an effort to gloat or brag, and I certainly hope it doesn’t come across that way. I put all of the creative energy I had left this semester into the project I’m about to dissect and discuss, so sadly I won’t be publishing a video this semester (as I’ve done for the past two semesters). One of the reasons I’m making this is because a lot of the reaction towards what I made included people asking how I made it and how long it took me, and trust me, we’ll go in depth on that.

My First Taste

My first experience with high-grade virtual reality was a few weeks before the start of my sophomore year at UMass when my friend Kyle drove down to visit me, bringing along his HTC Vive after finding out that the only experience I’d had with VR was a cheap $20 adapter for my phone. There’s a consensus online that virtual reality as a concept is better pitched through firsthand experience rather than by word of mouth or marketing. The whole appeal of VR relies on subjective perception and organic optical illusions, so I can understand why a lot of people think the whole “you feel like you’re in the game” spiel sounds like nothing but a load of shallow marketing. Remember when Batman: Arkham Asylum came out and nearly every review of it mentioned that it made you feel like Batman? Yeah, well now there’s actually a Batman Arkham VR game, and I don’t doubt it probably does make you actually feel like you’re Batman. The experience I had with VR that night hit me hard, and I came to understand why so many people online were making it out to be such a big deal. Despite my skeptical mindset going in, I found that it’s just as immersive as many have made it out to be. 

This wasn’t Microsoft’s Kinect, where the action of taking away the remote actually limited player expression. This was a genuinely deep and fascinating technological breakthrough that opens the door for design innovations while also requiring programmers to master a whole new creative craft. The rulebook for what does and doesn’t work in VR is still being written, and despite the technology still being in its early stages, I wanted in. I wanted in so badly that I decided to try and save up my earnings over the next semester in an effort to buy one. That went about as well as you’d expect; not just because I was working within a college student’s budget, but also because I’m awful with my money. My Art-Major friend Jillian would tell you it’s because I’m a Taurus, but I think it has more to do with me being a giant man-child who impulse-purchases stupid stuff because the process of waiting for something to arrive via Amazon feels like something meaningful in my life. It’s no wonder I got addicted to Animal Crossing over Spring Break… 

The Task

Anyway, I was sitting in my Comp-Lit discussion class when I got the email about the Digital Media Lab’s new Ready Player One contest, with the first place winner taking home an HTC Vive Headset. I’m not usually one for contests, and I couldn’t picture myself actually winning the thing, but something about the challenge piqued my interest. The task involved creating a pitch video, less than one minute in length, in which I’d have to describe how I would implement Virtual Reality on campus in a meaningful way. 

With Virtual Reality, there are a lot of possible implementations relating to different departments. In the Journalism department, we’ve talked at length in some of my classes about the potential applications of VR, but all of those applications were either for the benefit of journalists covering stories or the public consuming them. The task seemed to indicate that the idea I needed to pitch had to be centered more on benefiting the average college student, rather than benefiting a specific major (at least, that’s how I interpreted it). 

One of my original ideas was a virtual stress-relief dog, but then I realized that people with anxiety would likely only get even more stressed out with having to put on some weird giant headset… and real-life dogs can give hecking good nuzzles that can’t really be simulated. You can’t substitute soft fur with hard plastic. 

I came to college as a journalism major, and a day rarely goes by when I don't have some doubts about my choice. In high school I decided on journalism because I won this debate at a CT Youth Forum thing and loved writing and multimedia, so I figured it seemed like a safe bet. Still, it was a safe bet that was never pitched to me. I had no idea what being a journalist would actually be like; my whole image of what being a reporter entailed came from movies and television. I thought about it for a while, about how stupid and hormonal I was and still am, and realized that I'm kind of stuck. If I hypothetically wanted to switch to chemistry or computer science, I'd be starting from scratch with even more debt to bear. Two whole years of progress would be flushed down the toilet, and I'd have nothing to show for it. College is a place for discovery; where your comfortable environment is flipped on its head and you're forced to take care of yourself and make your own friends. It's a place where you work four years for a piece of paper to make your resume look nicer when you put it on an employer's desk, and you're expected to have the whole rest of your life figured out when you're a hormonal teenager who spent his savings on a skateboard he never learned how to ride.

And so I decided that, in this neo-cyberpunk dystopia we’re steadily developing into, it would make sense for simulations to come before rigorous training. Why not create simulated experiences where people could test the waters for free? Put themselves in the shoes of whatever career path they want to explore to see if the shoes fit right, you know?

I mentioned “cyberpunk” there earlier because I have this weird obsession with cyberpunk stuff at the moment and I really wanted to give my pitch video some sort of tongue-in-cheek retrograde 80s hacker aesthetic to mask my cynicism as campy fun, but that had to be cut once I realized I had to make this thing under a minute long.

Gathering My Party and Gear

Anyway, I wrote up a rough script and rented out one of the booths in the Digital Media Lab. With some help from Becky Wandel (the News Editor at WMUA) I was able to cut down my audio to just barely under the limit. With the audio complete, it came time to add visual flair. I originally wanted to do a stop-motion animated thing with flash-cards akin to the intros I’ve made for my Techbytes videos, but I’m slow at drawing and realized that it’d take too much time and effort, which is hilarious because the idea I settled on was arguably even more time-consuming and draining.

I’m the proud owner of a Nikon D80, a hand-me-down DSLR from my mom, which I bring with me everywhere I go, mostly because I like taking pictures, but also because I think it makes me seem more interesting. A while back I got a speck of dust on the sensor, which requires special equipment to clean (basically a glorified turkey baster). I went on a journey to the Best Buy at the Holyoke Mall with two friends to buy said cleaning equipment while documenting the entire thing using my camera. Later, I made a geeky stop-motion video out of all those photos, which I thought ended up looking great, so I figured doing something similar for the pitch video would be kind of cool. I messaged a bunch of my friends, and in a single day I managed to shoot the first 60% of the photos I needed. I then rented out the Vive in the DML and did some photoshoots there. 

At one point while I was photographing my friend Jillian playing theBlu, she half-jokingly mentioned that the simulation made her want to study Marine Biology. That kind of validated my idea and pushed me to make sure I made this video perfect. The opposite effect happened when talking to my friend Rachael, who said she was going to pitch something for disability services, to which I immediately thought “damn, she might win with that.”

I then knew what I had to do. It was too late to change my idea or start over, so I instead decided that my best shot at winning was to make my video so stylistically pleasing and attention-grabbing that it couldn’t be ignored. If I wasn’t going to have the best idea, then gosh darn it (I can’t cuss because this is an article for my job) I was going to have the prettiest graphics I could muster.   

The Boss Fight 

I decided to use a combination of iMovie and Photoshop, programs I’m already familiar with, because teaching myself how to use more efficient software would ironically be less efficient given the short time frame I had to get this thing out the door. Using a drawing tablet I borrowed from my friend Julia, I set out to create the most complicated and ambitious video project I’ve ever attempted to make. 

A few things to understand about me: when it comes to passion projects, I’m a bit of a perfectionist and extremely harsh on myself. I can’t even watch my Freshman Year IT video because I accidentally made it sound like a $100 investment in some less than amazing open back headphones was a reasonable decision on my part, and my other IT video makes me cringe because I thought, at the time, it’d be funny to zoom in on the weird hand motions I make while I talk every five seconds.

So in this case, I didn’t hold back and frequently deleted whole sections of my video just because I didn’t like how a single brush stroke animated (with the exception of the way my name is lopsided in the credits, which will haunt me for the rest of my life). For two weeks, I rigorously animated each individual frame in Photoshop, exported it, and imported it into iMovie. 

(Above) A visual representation of all the files it took to create the video

(Above) Frame by frame, I lined up my slides in iMovie

The most demanding section was, without a doubt, the one involving my friend Matthew, which I spent one out of the two weeks entirely focused on. For that section, I needed it to animate at a speed faster than 0.04 seconds, which is impossible because 0.04 seconds is the shortest you can make a frame in iMovie’s streamlined interface, so I ended up creating a whole new project file, slowing down my audio by half-speed, editing the frames of that section relative to that slowed down audio before exporting it, putting it into the original project file and doubling its speed just to get it to animate smoothly. 

 (Above) Some sections required me to find loopholes in the software to get them to animate faster than iMovie would allow

(Above) Some of the scrap paper I scribbled notes on while editing the video together

Each individual border was drawn multiple times with slight variations and all the on-screen text (with the exception of the works cited) was handwritten by me multiple times over so that I could alternate between the frames of animation to make sure everything was constantly moving. 

(Above) Borders were individually drawn and cycled through in order to maintain visual momentum

This was one of my major design philosophies during the development of this project: I didn’t want there to be a single moment in the 59 seconds where nothing was moving. I wanted my video to grab the viewer’s attention, and I feared that losing momentum in the visual movement would cause me to lose the viewer’s interest. The song LACool by DJ Grumble came on my Spotify radio coincidentally right when I was listening over the audio for the section I was editing, and I thought it fit so well I bought it from iTunes on the spot and edited it in.

I finished my video on Monday, March 26th, turned it in to the Digital Media Lab, stumbled back to my dorm, and went to bed at 6:00 PM by accident. 

The Video

(Above) The final video submission 

The winner wouldn’t be announced until Wednesday, so for two days I nervously waited until 6:00 PM on March 28th, when I sat on my bed in my dorm room refreshing the Digital Media Lab website every 7 seconds like a stalker on an ex’s Facebook page waiting for the winner to finally be posted. At 6:29 PM I got a call from an unrecognized number from Tallahassee, Florida, and almost didn’t answer because I thought it was a sales call. Turns out it was Steve Acquah, the coordinator of the Digital Media Lab, who informed me that my video won. Soon after, the Digital Media Lab Website was also updated with the announcement.

(Above) A screenshot taken of the announcement on the Digital Media Lab Website 

Thank You

Along with the raw joy and excitement came a sort of surreal disbelief. Looking back on those stressful weeks of work, it all felt like it flew by faster than I could’ve realized once I got that phone call. I’m so grateful for not only the reward but the experience. Making that video was a stressful nightmare, but it also forced me to push myself to my creative limits and challenge myself in so many ways. On a night where I would’ve probably just gone home and watched Netflix by myself, I sprinted around campus to meet up with and take photos of my friends. This project got me to get all my friends together and rent out the Vive in the DML, basically forcing me to play video games and have fun with the people I love. While the process of editing it all together drove me crazy, the journey is definitely going to be a highlight of my time at UMass. 

I’m grateful to all of my friends who modeled for me, loaned me equipment, got dinner with me while I was stressing out over editing, played Super Hot VR with me, gave me advice on my audio, pushed me to not give up, and were there to celebrate with me when I won. I’m also immensely grateful to the staff and managers of the DML for providing me with this opportunity, as well as for their compliments and praise for the work I did. This was an experience that means a lot to me and it’s one I won’t soon forget. Thank you.

Epilogue

I picked up my prize the other day at the DML (see photo above the title of this article)! Unfortunately, I have a lot of work going on, so it’s going to be locked up in a safe place until that’s done. Still, it’s not like I could use it right now if I wanted to. My gaming PC hasn’t been touched in ages (since I don’t bring it with me to college) so I’m going to need to upgrade the GPU before I can actually set up the Vive with it. It’s a good thing there isn’t a spike in demand for high-end GPUs at the moment for cryptocurrency mining, right?

(Above) A visual representation of what Bitcoin has done to the GPU market (and my life)

…Oh.

Regardless of when I can actually use the prize I won, this experience was one I’m grateful to have had. The video I made is one I’m extremely proud of, and the journey I went on to create it is one I’ll think about for years to come.

SoFi the Robotic Fish

Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have created a Soft Robotic Fish (nicknamed SoFi) which is able to swim and blend in with real fish while observing and gathering data from them. This remarkable bot is not only cool and adorable, but it also paves the way for the future of lifelike artificial intelligence.

Think about it: We have already reached the point where we can create a robotic fish which is capable of fooling real fish into thinking that it’s a real fish. Granted, fish aren’t the smartest of the creatures on this planet, but they can usually tell when something is out of the ordinary and quickly swim away. SoFi, however, seems to be accepted as one of their own. How long will it take for us to create a robot that can fool more intelligent species? Specifically, how long will it be until Soft Robotic Humans are roaming the streets as if they weren’t born yesterday? Perhaps more importantly, is this something that we actually want?

The benefits of a robotic animal like SoFi are obvious: it allows us to get up close and personal with these foreign species and learn more about them. This benefit of course translates to other wild animals like birds, bees, lions, etc. We humans can't swim with the fishes, roost with the birds, visit the hive with the bees, or roar with the lions, but a robot like SoFi sure can. So it makes sense to invest in this type of technology for research purposes. But when it comes to replicating humanity, things get a bit trickier. I'm pretty confident in saying that most humans in this world would not appreciate being secretly observed in their daily lives "for science." Of course, it's still hard to say whether or not this would even be possible, but the existence of SoFi and the technology behind it leads me to believe we may be closer than most of us think.

Regardless of its possible concerning implications, SoFi is a truly amazing feat of engineering. If nothing else, these Soft Robots will bring an epic evolution to the Nature Documentary genre. For more information about the tech behind SoFi, check out the video at the top from MITCSAIL.

Building a Better Bracket: Beating the Odds with Machine Learning

Like most other fans of college basketball, I spent an unhealthy amount of time dedicated to the sport the week after Selection Sunday (March 11th). I spent hours filling out brackets, researching rosters, injuries, and FiveThirtyEight's statistical predictions to fine-tune my perfect bracket, then watched around 30 games over the course of four days. I made it a full six hours into the tournament before my whole bracket busted. The three-punch combo of Buffalo (13) over Arizona (4), Loyola Chicago (11) beating Miami (6), and, most amazingly, the UMBC Retrievers (16) crushing the overall one-seed and tournament favorite, UVA, spelled the end for my predictions. After these three upsets, everyone's brackets were shattered. The ESPN leaderboards looked like a post-war battlefield. No one was safe.

The UMBC good boys became the only 16th seed to beat a 1st seed in NCAA tournament history

The odds against picking a perfect bracket are astronomical. Depending on the assumptions, the probability ranges from 1 in 9.2 quintillion to 1 in 128 billion. Warren Buffett offers $1 million a year for life to any Berkshire Hathaway employee who picks a perfect bracket. Needless to say, no one has been able to cash in on the prize. Picking a perfect bracket is nearly impossible and is (in)famous for being one of the most unlikely statistical feats in gambling.

The Yin and Yang of March Madness

To make the chances of producing a perfect bracket somewhat feasible, a competition has been set up to see who can beat the odds with machine learning. Hosted by Kaggle, an online competition platform for modeling and analytics that was purchased by Google's parent company, Alphabet, the competition has people building models to predict which team will win each game based on prior data. A model that predicts a correct outcome with 99% confidence scores better than one that predicted it with 95% confidence, and so on. The prize is $100,000, split among the teams that make the top 3 brackets. Teams are provided with the results of every men's and women's game in the tournament since 1985, the year the tournament first expanded to 64 teams, as well as every play in the tournament since 2009. Despite all this data, prediction is still very hard: the best bracket in this competition, which has been hosted for five years, predicted 39 games correctly. Many unquantifiable factors, such as hot streaks and team chemistry, play a large role in the difficulty, so it looks like we're still years away from having our computers pick the perfect bracket.
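For what it's worth, the metric Kaggle uses for this kind of confidence-weighted scoring is (to my understanding) the standard log loss:

\[ \text{LogLoss} = -\frac{1}{n}\sum_{i=1}^{n}\big[y_i \log(\hat{y}_i) + (1 - y_i)\log(1 - \hat{y}_i)\big] \]

where \(y_i\) is 1 if the first team won game \(i\) (and 0 otherwise) and \(\hat{y}_i\) is the predicted probability that they would win. A 99% confident pick that turns out wrong costs far more than a cautious 55% pick, which is why confident, correct models rise to the top.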

[Sidebar] How to Resize a VirtualBox .vdi

Congratulations! You've made a VirtualBox VM of your favorite Linux distro. But now you want to download a picture of your cat, and you find out that you've run out of disk space. 
Image: habrahabr.ru

Rather than free up space by deleting the other pics of Snuffles, you decide you'd rather just give the virtual machine more disk space. But you'll find out quickly that Oracle has not made this super easy to do. The process is not simple, but it can be if you just follow these steps:

Open the Command Prompt on your Windows machine. (Open Start and type cmd.)

Then navigate to your VirtualBox installation folder. Its default location is C:\Program Files\Oracle\VirtualBox\

Once there, type this command to resize the .vdi file:

VBoxManage modifyhd LOCATION --resize SIZE

Replace LOCATION with the absolute file path to your .vdi image (just drag the .vdi file from File Explorer into your cmd window) and replace SIZE with the new size you want, measured in MB (1 GB = 1000 MB). For example, to grow a hypothetical disk to 20 GB: VBoxManage modifyhd "C:\VMs\snuffles.vdi" --resize 20000

Now your .vdi is resized, but the extra disk space is unallocated in the virtual machine, so you'll need to resize the partition. To do this, download a GParted Live ISO and make a new virtual machine that boots from it. This simulates a live-CD boot from which you can modify your virtual partition.

If your filesystem is ext4, like mine was when I did this, you'll need to delete the linux-swap partition located in between your partition and the unallocated space. Make sure you leave at least 4 GB of unallocated space so that you can add the linux-swap partition back later.

After you’ve resized your partition, you’ll be done. Boot into the virtual machine as normal and you’ll notice you have more space for Snuffles.

Image: wideopenpets.com

Is Artificial Intelligence like J.A.R.V.I.S. Possible?

If you are a fan of Marvel Comics or the Marvel Cinematic Universe, you are likely aware of J.A.R.V.I.S., Tony Stark's personal artificial intelligence (AI) program. J.A.R.V.I.S. helps Tony Stark reach his full potential as Iron Man by running operations and diagnostics on the Iron Man suit, as well as gathering information and running simulations. J.A.R.V.I.S. also has a distinct personality, sometimes displaying sarcasm and wit, no doubt programmed in by Stark. With artificial intelligence and machine learning developing at a breakneck pace, it's worth asking if an AI like J.A.R.V.I.S. is even possible.

One of the most prominent AI programs in use right now is IBM Watson. Watson made its debut in 2011 as a contestant on Jeopardy in a special broadcast against two of the show's best contestants, and won. Commercial use of Watson began in 2013. Watson is now being used for a variety of functions, from tracking elevator use in support of maintenance efforts to planning irrigation systems for farms. (For more stories about Watson's many jobs, look here.)

As far as hardware is concerned, Watson relies on a cluster of 90 IBM Power 750 servers, each with a 3.5 GHz POWER7 processor, and 16 terabytes of RAM in total. This allows Watson to process the equivalent of one million books per second. The estimated cost of Watson's hardware was around three million dollars.

When Watson competed on Jeopardy, all of the information Watson had access to had to be stored in the machine's RAM, because it would not have been able to access it within a competitive time frame if it were stored on the machine's hard drives. Since Watson's bout on Jeopardy, solid-state drives have become mainstream, allowing frequently used information to be accessed at a faster rate than if the same information were stored on a standard hard drive. With further advances in memory and storage technology, information could be accessed at ever faster rates.

IBM's Watson appears to be a step in the direction of an AI similar to J.A.R.V.I.S. With quantum computing as an expanding frontier, processing speeds could become even faster, making something like J.A.R.V.I.S. more attainable. Personally, I believe such a feat is possible, and could even be achieved in our lifetime.

Which Computer Is Right for You: A Beginner’s Guide

People always ask me, “Are Macs better than PCs?” or “What kind of computer should I buy?” so I’m here to clear some confusion and misconceptions about computers and hopefully help you find the computer best suited to your purposes.

Computers can generally be separated into two large operating-system groups: MacOS and Windows. There are also Linux users (running distributions such as Ubuntu), but the majority of consumers will never use these operating systems, so I’ll focus on the big two for this article. Computers can also be separated into two physical categories: desktops and laptops.

Desktops, as the name suggests, sit on top of (or under) your desk, and are great for a number of reasons. Firstly, they are generally the most cost-efficient. With the ability to custom-build a desktop, you’re able to get the best bang for your buck, and even if you choose to buy a prebuilt, the cost difference nowadays between prebuilts and custom builds is small. Desktops are also very powerful machines with the best performance, as they aren’t constrained by physical size the way laptops are. Many laptop parts have to be altered to fit the limited space, but desktops have as much space as the case has to offer. More space within the case means bigger and more powerful parts, better ventilation for cooling, etc. Additionally, desktops are generally more future-proof. If a hard drive runs out of space, you can buy and install another. If your graphics card can’t support modern games anymore, you can order one that fits your budget and just replace the old one. Overall, desktops are ideal… as long as you don’t want to move them around a lot. A full setup consisting of a tower, monitor, and peripherals can be very heavy and inconvenient to move, not to mention the many cables required to connect everything together. If you are looking for a good machine that will last you years, and you don’t need to move it around often, then you might be looking for a desktop. I will go over the details of operating systems further down.

If you’re looking for a portable machine, then you’re looking for a laptop. But here too there’s a lot of variety. You have Chromebooks, which are incredibly fast, light, and (importantly) cheap machines that use ChromeOS for very basic functionality. Unlike other OSes, ChromeOS is designed to be used while connected to the internet, with documents and files living in the cloud. Applications are limited to what’s available in the Chrome Web Store. If all you need a laptop to do is browse the internet and edit things on Google Drive, then a Chromebook might be perfect for you.

Next are your middle-of-the-line to high-end laptops, which make up the majority of laptops. This is where you’ll find your MacBooks and your ultrabooks: the all-around laptops suited to most tasks. This is what most people will prefer, as these machines can do the most while retaining portability. There is also a ton of variety within this group: touch screens, super-bendable hinges, different selections of I/O ports, etc. Here, what it’s going to come down to is personal preference. There are too many options to write about, but I encourage everyone to try out a number of different computers before deciding which they like best.

Lastly, I’d like to discuss operating systems, primarily MacOS and Windows. I did briefly mention ChromeOS, but that’s only really for Chromebooks and it’s a very basic system. With MacOS, what people like is the convenience. Apple has created an “ecosystem” of devices such that, if you are a part of this ecosystem, everything works in harmony. MacOS is very user-friendly and easy to pick up, and if you own an iPhone, an Apple Watch, an iPad, or any iOS device, you can connect it to your computer and keep everything in sync. iMessage, Photos, and iCloud are all there to keep your devices connected and make it super easy to swap between them. Windows doesn’t have an “ecosystem,” but what it lacks in user-friendliness it makes up for in versatility and user power. Windows is good at being customizable; you have a lot more freedom when it comes to making changes. This comes back to the device it’s on. Mac devices have top-of-the-line build quality. They’re constructed beautifully and are extremely good at what they do, but they come with a high price tag, and they are built in ways that discourage user modification like adding storage or memory. Windows laptops range from $150 well into the thousands for gaming machines, whereas the common MacBooks start near $1000. If you’re looking to game, Windows is also the way to go. If you aren’t choosing a desktop, there are many gaming laptops for sale; although you won’t find the same performance per dollar, they are laptops, and portable.

With this, hopefully you have everything you need to pick the perfect computer the next time you’re shopping for one.

A [Mathematical] Analysis of Sample Rates and Audio Quality

 

Digital audio again? Ah yes… only in this article, I will set out to examine a simple yet complicated question: how does the sampling rate of digital audio affect its quality? If you have no clue what the sampling rate is, stay tuned and I will explain. If you know what sampling rate is and want to know more about it, also stay tuned; this article will go over more than just the basics. And if you own a recording studio and insist on recording every second of audio at the highest possible sampling rate to get the best quality, read on, and I hope to inform you of the mathematical benefits of doing so…

What is the Sampling Rate?

In order for your computer to be able to process, store, and play back audio, the audio must be in a discrete-time form. What does this mean? It means that, rather than the audio being stored as a continuous sound-wave (as we hear it), the sound-wave is broken up into a long sequence of individual points. This way, the discrete-time audio can be represented as a list of numerical values in the computer’s memory. This is all well and good, but some work needs to be done to turn a continuous-time (CT) sound-wave into a discrete-time (DT) audio file; that work is called sampling.

 

Sampling is the process of observing and recording the value of a complex signal at uniform intervals of time. Figure 1(a) shows ‘analog’ sampling, where the recorded value is not modified by the sampling process; figure 1(b) shows digital sampling, where the recorded value is quantized so it can be represented with a binary word.

During sampling, the amplitude (loudness) of the CT wave is measured and recorded at regular intervals to create the list of values that make up the DT audio file. The inverse of this sampling interval is known as the sample rate and has a unit of Hertz (Hz). By far, the most common sample rate for digital audio is 44100 Hz; this means that the CT sound-wave is sampled 44100 times every second.
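
To make this concrete, here is a quick MATLAB sketch of sampling in action; the one-second duration and the 440 Hz tone are just illustrative choices of mine:

fs = 44100;            % sample rate (Hz)
t = 0:1/fs:1-1/fs;     % one second of uniformly spaced sample instants
x = cos(2*pi*440*t);   % the sampled (discrete-time) sound-wave: 44,100 values
sound(x, fs);          % play the samples back at the same rate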

This is a staggering number of data points! On an audio CD, each sample is represented by two bytes per channel; with two stereo channels, that means one second of audio takes up over 170 KB of space (44,100 samples × 2 bytes × 2 channels = 176,400 bytes). Why is all this necessary? you may ask…

The Nyquist-Shannon Sampling Theorem

Some of you more interested readers may have already heard of the Nyquist-Shannon Sampling Theorem (some of you may also know it simply as the Nyquist Theorem). The Nyquist-Shannon Theorem asserts that any CT signal can be sampled, turned into a DT file, and then converted back into a CT signal with no loss of information, so long as one condition is met: the CT signal must be band-limited below the Nyquist frequency (half the sample rate). Let’s unpack this…

Firstly, what does it mean for a signal to be band-limited? Every complex sound-wave is made up of a whole myriad of different frequencies. To illustrate this point, below is the frequency spectrum (the graph of all the frequencies in a signal) of All Star by Smash Mouth:

Smash Mouth is band-limited! How do we know? Because the plot of frequencies ends. This is what it means for a signal to be band-limited: it does not contain any frequencies beyond a certain point. Human hearing is band-limited too; most humans cannot hear any frequencies above 20,000 Hz!

So, can we take this to mean that, if the Nyquist frequency is just right, any audible sound can be represented in digital form with no loss of information? By this theorem, yes! Now, you may ask, what does the Nyquist frequency have to be for this to happen?

For the Nyquist-Shannon Sampling Theorem to hold, the sample rate must be greater than twice the highest frequency being sampled; equivalently, the Nyquist frequency must sit above the highest frequency in the signal. For sound, the highest audible frequency is 20 kHz, and thus the sample rate required to capture sound with no loss of information is anything above… 40 kHz. What was that sample rate I mentioned earlier? You know, the one that is so common that basically all digital audio uses it? It was 44.1 kHz. Huzzah! Basically all digital audio is a perfect representation of the original sound! Well…
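
In symbols, if B is the highest frequency present in the signal and fs is the sample rate, the condition is fs > 2·B; equivalently, the Nyquist frequency fs/2 must exceed B. For CD audio: 44,100 / 2 = 22,050 Hz, comfortably above the 20,000 Hz limit of human hearing.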

Aliasing: the Nyquist Theorem’s Complicated Side-Effect

Just because we cannot hear sound above 20 kHz does not mean it does not exist; there are plenty of sound-waves at frequencies higher than humans can hear.

So what happens to these higher sound-waves when they are sampled? Do they just not get recorded? Unfortunately no…

A visual illustration of how under-sampling a frequency results in some unusual side-effects. This unique kind of error is known as ‘aliasing’

So if these higher frequencies do get recorded, but frequencies above the Nyquist frequency cannot be sampled correctly, then what happens to them? They are falsely interpreted as lower frequencies and superimposed over the correctly sampled frequencies. The distance between the high frequency and the Nyquist frequency governs which lower frequency these high-frequency signals will be interpreted as.
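
To put a number on it: a tone at frequency f above the Nyquist frequency fN (but below the sample rate) comes back reflected about fN, landing at 2·fN - f. With CD audio’s Nyquist frequency of 22.05 kHz, for instance, an inaudible 30 kHz tone would alias down to 2 × 22.05 - 30 = 14.1 kHz, squarely inside the audible range.

To illustrate the point, here is an extreme example…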

Say we are trying to sample a signal that contains two frequencies: 1 Hz and 3 Hz. Due to poor planning, the Nyquist frequency is selected to be 2 Hz (meaning we are sampling at a rate of 4 Hz). Further complicating things, the 3 Hz cosine-wave is offset by 180° (meaning the waveform is essentially multiplied by -1). So we have the following two waveforms…

1 Hz cosine waveform

3 Hz cosine waveform with 180° phase offset

When the two waves are superimposed to create one complicated waveform, it looks like this…

Superimposed waveform constructed from the 1 Hz and 3 Hz waves

Pretty, right? Well unfortunately, if we try to sample this complicated waveform at 4 Hz, do you know what we get? Nothing! Zero! Zilch! Why is this? Because when the 3 Hz cosine wave is sampled and reconstructed, it is falsely interpreted as a 1 Hz wave! Its frequency is reflected about the Nyquist frequency of 2 Hz. Since the original 1 Hz wave is below the Nyquist frequency, it is interpreted with the correct frequency. So we have two 1 Hz waves but one of them starts at 1 and the other at -1; when they are added together, they create zero!

Another way we can see this phenomenon is by looking at the graph. Since we are sampling at 4 Hz, we are observing and recording four evenly-spaced points between zero and one, another four between one and two, and so on. Take a look at the above graph and try to find four evenly-spaced points between zero and one (starting at zero, but not including one). You will find that every single one of these points corresponds to a value of zero! Wow!
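
If you’d rather not hunt for points on a graph, here is a quick MATLAB sketch of the same experiment; it samples the combined waveform at 4 Hz and shows that every sample comes out as zero (to within floating-point rounding):

fs = 4;                             % sample rate (Hz); the Nyquist frequency is 2 Hz
t = 0:1/fs:2;                       % two seconds of sample instants
x = cos(2*pi*1*t) - cos(2*pi*3*t);  % 1 Hz wave plus the 180-degree-offset 3 Hz wave
disp(max(abs(x)))                   % prints ~1e-16: zero, up to rounding error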

So aliasing can be a big issue! However, designers of digital audio recording and processing systems are aware of this and actually provision special filters (called anti-aliasing filters) to get rid of these unwanted effects.

So is That It?

Nope! These filters are good, but they’re not perfect. Analog filters cannot just chop off all frequencies above a certain point; they have to attenuate them more or less gradually. This leaves designers with a choice: either let some high frequencies through and risk distortion from aliasing, or start rolling off audible frequencies before they’re even recorded.

And then there’s noise… Noise is everywhere, all the time, and it never goes away. Modern electronics are rather good at reducing the amount of noise in a signal, but they are far from perfect. Furthermore, noise tends to be concentrated at higher frequencies: exactly the frequencies that end up getting aliased…

What effect would this have on the recorded signal? Well, if random signal noise really were present at all frequencies (above and below the Nyquist frequency), infinitely many bands of noise would fold down into the audible range and mask our original signal under a layer of aliased noise. Fortunately for digitally recorded music, the noise does stop at very high frequencies due to transmission-line effects (a much more complicated topic).

What can be Learned from All of This?

The end result of this analysis is that the sample rate alone does not tell the whole story about what’s being recorded. Although 44.1 kHz (the standard sample rate for CDs and MP3 files) can represent frequencies up to about 22 kHz, in practice a signal sampled at 44.1 kHz will pick up distortion in the higher frequencies from high-frequency noise beyond the Nyquist frequency.

So then, what can be said about recording at higher sample rates? Some new analog-to-digital converters for music recording sample at 192 kHz; most, if not all, of the audio recording I do is done at a sample rate of 96 kHz. The benefit of recording at higher sample rates is that you can record high-frequency noise without it causing aliasing and distortion in the audible range. With 96 kHz, you get a full 28 kHz of bandwidth beyond the audible range (the 48 kHz Nyquist frequency minus the 20 kHz limit of hearing) where noise can exist without causing problems. Since signals with frequencies up to around 9.8 MHz can exist in a 10-foot cable before transmission-line effects kick in, this headroom is extremely important!

And with that, a final correlation can be stated: the greater the sample rate, the less noise will alias into the audible spectrum. To those of you out there who have insisted that higher sample rates sound better, maybe now you’ll have some heavy-duty math to back up your claims!

Should smart watches be allowed in professional sports?

With the advent of smart technology, the ease with which we access information is changing. The smart watch puts much of what a person does on their phone on their wrist, and on the internet. While we make these technological advances, some things remain constant, like professional sports. With the exception of some minor rule changes here and there, many of the most-watched games in the U.S. have remained the same. Recently, the Red Sox allegedly used smart watches to steal signs from the Yankees, which raises an important question: should smart watches be allowed in professional sports?

Most smart watches can monitor the wearer’s heart rate. This data could be useful in monitoring players’ condition so the coach knows when to make substitutions, but it could also be used for medical research. If every professional athlete wore a smart device during games and workouts, the amount of data that could be made available to medical professionals in one year would be astounding. This data could lead to a better understanding than we have now of the human body at work.

While wearing smart watches in professional sports holds potential societal gain, the reality of the situation is not as optimistic. Many sports involve physical contact, which brings a risk of the smart watch breaking, or of increased injury from contact with a smart watch on a player’s wrist. There is also an increased risk of cheating if players and coaches can view text messages on their wrists.

In my opinion, sports would be better off without smart technology becoming part of any game. The beauty of sporting matches is that they are meant to display the raw athletic abilities of players in competition. Adding smart technology to the game could lead to records that have asterisks by them, similar to home run records set by players who used steroids.

An Extensive Guide to Keyboard Shortcuts

In this day and age, it’s safe to assume that most of you know a thing or two about how to use a computer, one of those things being keyboard shortcuts. Keyboard shortcuts, for the uninitiated, are really handy combinations of buttons, usually two or three, that perform functions which would otherwise take somewhat longer to do manually with just the mouse. For example, highlighting a piece of text and pressing Control (CTRL) + C copies the text to your clipboard, and subsequently pressing CTRL + V pastes that copied text wherever you’re entering text.

Most people know copy and paste, as well as a handful of other shortcuts, but beyond them is an abundance of shortcuts that can save time and make your computing experience that much more convenient. In this article, I’ll go over some commonly known keyboard shortcuts as well as several that are most likely less well known.

Most of these keyboard shortcuts are primarily for Windows, although some also apply on Mac, usually substituting the Command key for CTRL.

General shortcuts:

CTRL + C – As mentioned above, copies any highlighted text to the clipboard.

CTRL + V – Also mentioned above, pastes any copied text into any active text field.

CTRL + X – Cuts any highlighted text; as the wording suggests, instead of just copying the text, it will “cut” it and remove it from the text field. Essentially rather than copying, the text will be moved to the clipboard instead.

CTRL + Z – Undo an action. An action can be just about anything; since this is a fairly universal shortcut, an action can be what you last typed in Microsoft Word, a line/shape drawn in Photoshop, or just any “thing” previously done in an application.

CTRL + Y – Redo an action. For example, if you changed your mind about undoing the last action, you can use this shortcut to bring that back.

CTRL + A – Selects all items/text in a document or window, i.e. highlights them.

CTRL + D – Deletes the selected file and moves it to the Recycle Bin (in File Explorer; in most browsers this bookmarks the current page instead).

CTRL + R – Refreshes the active window. Generally you’ll only use this in the context of Internet browsers. Can also be done with F5.

CTRL + Right Arrow – Moves the cursor to the beginning of the next word.

CTRL + Left Arrow – Moves the cursor to the beginning of the previous word.

CTRL + Down Arrow – Moves the cursor to the beginning of the next paragraph.

CTRL + Up Arrow – Moves the cursor to the beginning of the previous paragraph.

Alt + Tab – Displays all open applications; while holding down Alt, each press of Tab cycles through them from left to right, and releasing Alt switches to the selected application.

CTRL + Alt + Tab – Displays all open applications. Using the arrow keys and Enter, you can switch to another application.

CTRL + Esc – Opens the Start Menu, can also be done with Windows Key.

Shift + Any Arrow Key – When editing text, selects text in the direction of the arrow key: character by character with left/right, line by line with up/down.

CTRL + Shift + Any arrow key – When editing text, selects a block of text, i.e. a word.

CTRL + Shift + Esc – Opens Task Manager directly.

Alt + F4 – Close the active item or exit the active application.

CTRL + F4 – In applications that are full screen and let you have multiple documents open, closes the active document, instead of the entire application.

Alt + Enter – Displays the properties for a selected file.

Alt + Left Arrow – Go back, usually in the context of Internet browsers.

Alt + Right Arrow – Go forward, same as above.

Shift + Delete – Deletes a selected file without moving it to the Recycle Bin first, i.e. deletes it permanently.

Windows Logo Key Shortcuts:
Windows logo key ⊞ + D – Displays and hides the desktop.

Windows logo key ⊞ + E – Opens File Explorer.

Windows logo key ⊞ + I – Opens Windows Settings.

Windows logo key ⊞ + L – Locks your PC or switches accounts.

Windows logo key ⊞ + M – Minimize all open windows/applications.

Windows logo key ⊞ + Shift + M – Restore minimized windows/applications on the desktop.

Windows logo key ⊞ + P – When connecting your computer to a projector or second monitor, opens up a menu to select how you want Windows to be displayed on the secondary display. You can select from PC screen only (uses only the computer’s screen), Duplicate (shows what is on your computer screen on the secondary display), Extend (Extends the desktop, allowing you to move applications/windows to the secondary display, and keep content on the primary screen off the secondary display), and Second Screen Only (Only the secondary display will be used).

Windows logo key ⊞ + R – Opens the Run dialog box. Typing an application or file name and pressing Enter will open that file or application; useful for troubleshooting scenarios.

Windows logo key ⊞ + T – Cycle through open applications on the taskbar; pressing Enter will switch to the selected application.

Windows logo key ⊞ + Comma (,) – Temporarily peeks at the desktop.

Windows logo key ⊞ + Pause Break – Displays System Properties window in Control Panel. You can find useful information here about your computer such as the version of Windows you are running, general info about the hardware of the computer, etc.

Windows logo key ⊞ + Tab – Opens Task view, which is similar to CTRL + Alt + Tab.

Windows logo key ⊞ + Up/Down – Maximizes or minimizes a window/application respectively.

Windows logo key ⊞ + Left/Right – Snaps a window to the left or right half of the screen.

Windows logo key ⊞ + Shift + Left/Right – When you have more than one monitor, moves a window/application from one monitor to another.

Windows logo key ⊞ + Space bar – When you have more than one keyboard/input method installed (usually for typing in different languages), switches between installed input methods.

That just about covers the most common keyboard shortcuts you can use on a Windows computer. The list goes on, however; there are many more keyboard shortcuts and functions you can perform, and even more when you take into account that certain applications have their own shortcuts while in use.

You might end up never using half of the keyboard shortcuts on this list, much less all keyboard shortcuts in general, favoring the good old-fashioned way of using the mouse and clicking, and that’s fine. The amount of time you save using a keyboard shortcut versus clicking your way through things is arguably negligible, and most of the time it comes down to a quality-of-life preference. But depending on how you use your computer and what kind of work you do on it, chances are picking up some of these keyboard shortcuts could save you a lot of frustration down the line.

How Do Games Get on Steam?

While it may seem like a strange question to ask, there is an interesting history behind the largest storefront for video games, online or brick-and-mortar. The control Steam exerts over its market has wide-ranging implications for both consumers and developers. The availability of indie games is a relatively recent development in Steam’s history; so are the current trends pushing the near-exponential growth of the Steam library.

Back when Steam launched, the library selection was very limited, relying on the IP (intellectual property) that Valve (Steam’s parent company) had built up over the previous half-decade. For the first two years of Steam’s life you could only find games created and published by Valve (Half-Life and Counter-Strike 1.6 being the most notable), but in late 2005 that changed: Steam inked a deal with Strategy First, a small Canadian publisher, and third-party games started flowing onto the service. For the next five years the Steam library remained fairly limited, as generally only large or influential publishers were able to get their games on Steam. This created tension in the Steam community, as many people wanted indie games to be featured and make their way onto the storefront. The tension broke when Steam agreed to allow indie games on the platform.

By 2010, the issue was obvious: Steam had no way to discern which indie games people wanted and which were not suitable for the platform. Two years later, in response to these concerns, Steam implemented the Greenlight system, designed to get quality indie games on Steam. Initially, Greenlight was received positively. Black Mesa (a popular mod that ported Valve’s original Half-Life to the Half-Life 2 engine) and other quality releases inspired confidence. All seemed good. Fast forward to late 2015: several disturbing trends had begun to emerge.

An enterprising “developer” realized that you could buy assets from the Unity engine’s asset store and, with very minimal effort, create a “game” that you could get onto Greenlight. These “games” were often just the Unity assets with AI zombies that would slowly follow you around, providing little to no engaging content; they could hardly be considered games at all. They should never have made it through Greenlight, but the developers got creative in getting people to vote for them. Some would give “review” keys away pending a vote or good review on their page, while others promised actual monetary profit through Steam’s trading-card economy.

Asset flips are just one example of how Greenlight was exploited (not to mention the cartel-like behavior behind some of the asset flippers). By 2016 Steam was in full damage control as the effects of Greenlight became apparent: the curated garden that once was Steam had become overgrown, flooded with sub-par games. So overabundant was the flow of content that nearly 40% of Steam’s entire library was released in 2016 alone. Thirteen years of content control and managing customers’ expectations were nullified in the span of a year. (The uptick began in 2014, but 2016 was the real breaking point.)

Steam, still in damage-control mode, decided to abandon content control in favor of an open marketplace that uses algorithms to recommend games to consumers. This “fix” has only hidden the sheer volume of sub-par games that now make up most of the Steam library. And while an algorithm can recommend games, it will often end up recommending the same types of games, creating an echo-chamber effect: you are only recommended the kinds of games you have already expressed interest in, not the ones that might appeal to you the most.

In 2017, Steam abandoned Greenlight in favor of Steam Direct, an updated method of allowing developers to publish games, this time without community interaction. Steam re-assumed the mantle of gatekeeper, taking back responsibility for quality control, albeit with standards so low that one can hardly call it vetting. (Some approved games don’t even include an .exe in the download.)

 

Restoring a MacBook with an Erased Hard Drive


If you’re anything like me, you will accidentally wipe your MacBook’s SSD one day (or already have). It may seem like you just bricked your MacBook, but luckily there is a remedy.

The way forward is the built-in “Internet Recovery” mode, which can be triggered on startup by holding Cmd + R.

There is a bit of a catch: if you do this straight away, there is a good chance the Mac will get stuck and throw up an error (error -3001F, in my personal experience). This tends to happen because the Mac assumes it is already connected to Wi-Fi (when it’s not) and fails once it can’t reach Apple’s servers. If your MacBook instead lets you select a Wi-Fi network during this process, you’re in the clear and can skip the next paragraph.

Luckily, there is another way to connect: Apple’s boot menu. To get there, hit the power button and, very soon after, hold the Option key. Eventually you will see a screen where you can pick a Wi-Fi network.

Unfortunately, if you’re at UMass, eduroam (or UMASS) won’t work; however, you can easily connect to any typical home Wi-Fi network or a mobile hotspot (although you should make sure you have unlimited data first).

Once you’re connected, hit Cmd + R from that boot screen. Do not restart the computer. If you were able to connect without the boot menu, you should already be in Internet Recovery and do not need to press anything.

Now that the Wi-Fi is connected, you need to wait. Eventually you will see the MacBook’s recovery tools. The first thing to do is open Disk Utility, select your MacBook’s hard drive, and hit Erase (this may seem redundant, but I’ll explain in a moment). Then go back to the main recovery menu by closing Disk Utility.

Unless you created a Time Machine backup, you’ll want to pick the “Reinstall Mac OS X” option. After clicking through for a bit, you will see a page asking you to select a drive. If you properly erased the hard drive a few moments before, you will be able to select it and continue on. If you hadn’t erased the drive again, there is a good chance no drive will appear in the selection. To fix that, erase the drive again with the Disk Utility mentioned earlier; the one catch is that you can only get back to the recovery tools by restarting the computer and starting Internet Recovery again, which, as you may have noticed, is a slow process.

Depending on the age of your MacBook, there is a solid chance you will end up with an old version of Mac OS. If you have two-step verification enabled, you may have issues updating to the latest Mac OS version.

In my own experience, OS X Mavericks will not let you log in to the App Store if you have two-step verification enabled, but I would recommend trying; your luck could be better than mine. The App Store matters here because it is required to upgrade to High Sierra (or whatever the present version of Mac OS is).

If you were unable to log in, there is a workaround: OS X Mavericks will let you make a new Apple ID, which is luckily free. Since you will be creating this account purely for the sake of updating the MacBook, I wouldn’t recommend using your primary email or adding any form of payment to the account.

Once you’re logged in, you should be free to update, and after some more loading screens, you will have a fully up-to-date MacBook. The last thing remaining (if you had to create a new Apple ID) is to log out of the App Store and log in with your personal Apple ID.

Smartphone Fingerprint Scanners

The next generation of smartphone security is here! Mostly transparent fingerprint sensors can now be embedded behind or under the screen. There has been a huge push in phones this year to make the bezels as tiny as possible, which of course means finding a new place for the fingerprint scanner. Many phones have moved it to the back: LG was the first to do it, and it was relatively well executed; Samsung followed suit, and many complain its sensor is too hard to tell apart from the camera bump. The Pixel and Pixel 2 have one on the back that works well and supports gestures! To minimize the bezel, the iPhone X removed the scanner altogether, and instead hid a plethora of sensors inside its iconic notch to usher in the era of Face ID.

 

But now two Android phones are being released that place the fingerprint scanner, almost completely invisibly, under the screen. The first, the Vivo X20 Plus UD, won an award for Best in Show at CES 2018. The sensor is a small pad where a traditional scanner would be. Any time that area of the phone is touched, it flashes brightly, and the sensor reads the light reflected off of your finger. Check it out here:

Vivo’s concept phone takes the idea a bit further, with the fingerprint scanner occupying a larger pad, allowing you to touch anywhere on roughly a third of the screen. This concept phone also pushes the bezel-less concept to another level by moving the selfie cam to a piece of plastic that extends in and out of the top of the phone. Is this the future?

 

Limitations:

It’s a bit “slow” right now (it takes about a second), but the cool animation should be enough to hold you over. Keep in mind this is also the first generation of the product; it will only get quicker with time.

The phone needs to have an OLED screen. While OLED is not uncommon, many phones, iPhones included, have LCD displays. OLED screens allow individual pixels to turn on and off independently, rather than lighting the whole screen or none of it, as LCD backlights require.

And finally, yes, under very specific lighting conditions and viewing angles, you can see the sensor through the screen.

What’s Going on with Cambridge Analytica?

If you’ve paid attention to the news this week, you may have heard the name “Cambridge Analytica” tossed around, or something about a “Facebook data breach.” At a glance, it may be hard to tell what these events are all about and how they relate to you. The purpose of this article is to clarify those points and to explain what personal information one puts on the internet when using Facebook. We will also look at what you can do as a user to protect your data.

The company at the heart of this Facebook data scandal is Cambridge Analytica: a private data analytics firm based in Cambridge, UK, specializing in strategic advertising for elections. They have worked on LEAVE.EU (a pro-Brexit election campaign), as well as Ted Cruz’s and Donald Trump’s 2016 presidential election campaigns. Cambridge Analytica uses “psychographic analysis” to predict and target the kind of people who are most likely to respond to their advertisements. “Psychographic analysis,” simply put, is gathering data on individuals’ psychological profiles and using it to develop and target ads. They get their psychological data from online surveys that determine the personality traits of individuals. They compare this personality data with data from survey-takers’ Facebook profiles, and extrapolate the correlations between personality traits and more readily accessible info (likes, friends, age group) onto Facebook users who have never even taken the survey. According to CEO Alexander Nix, “Today in the United States we have somewhere close to four or five thousand data points on every individual […] So we model the personality of every adult across the United States, some 230 million people.” This wealth of data is extremely powerful in their business, because they know exactly what kind of people could be swayed by a political ad. By targeting individuals across the US, they can sway whole elections.

Gathering data on individuals who have not signed away their information may sound shady, and in fact it breaks Facebook’s terms and conditions. Facebook allows its users’ data to be collected for academic purposes, but prohibits the sale of that data to “any ad network, data broker or other advertising or monetization-related service.” Cambridge Analytica bought their data from Global Science Research, a private business analytics research company. The data in question was collected through a personality survey (a Facebook app called “thisisyourdigitallife,” a quiz that appears similar to the silly quizzes one often sees while browsing Facebook). This app, with its special academic privileges, was able to harvest data not just from the users who took the personality quiz, but from all of their friends as well. This was entirely legal under Facebook’s terms and conditions, and was not a “breach” at all. Survey-takers consented before taking the quiz, but their friends were never notified about their data being used. Facebook took down thisisyourdigitallife in 2015 and requested that Cambridge Analytica delete the data; however, ex-Cambridge Analytica employee Christopher Wylie says, “literally all I had to do was tick a box and sign it and send it back, and that was it. Facebook made zero effort to get the data back.”

This chain of events makes it clear that data analytics companies (as well as malicious hackers) are not above breaking rules to harvest your personal information, and Facebook alone will not protect it. In order to know how your data is being used, you must be conscious of who has access to it.

What kind of data does Facebook have?

If you go into your Facebook settings, there is an option to download a copy of your data. My file is about 600 MB, and contains all my messages, photos, and videos, as well as my friends list, advertisement data, all the events I’ve ever been invited to, phone numbers of contacts, posts, likes, even my facial recognition data! What is super important in the realm of targeted advertising (though not the only info people are interested in) are the ad data, friends list, and likes. The “Ads Topics” section, a huge list of topics I may be interested in that determines what kinds of ads I see regularly, has my character pinned down. Though some of these are admittedly absurd (Organism? Mason, Ohio? Carrot?), knowing I’m interested in computer science, cooperative businesses, Brian Wilson, UMass, and LGBT issues, plus the knowledge that I’m from Connecticut and friends with mostly young adults, says a lot about my character even without “psychographic analysis”; so imagine what kind of in-depth record they have of me up at Cambridge Analytica! I implore you, if interested, to download this archive yourself and see what kind of person the ad-brokers of Facebook think you are.

Is there a way to protect my data on Facebook?

What’s out there is out there, and from the Cambridge Analytica episode we know third-party companies may not delete data they’ve already harvested, and Facebook isn’t particularly interested in getting it back, so even being on Facebook at all could be considered a risk. However, it is relatively easy to remove applications that have access to your information, and that is a great way to start protecting your data from shady data harvesters. These applications are anything that requires you to sign in with Facebook. That can mean other social networks that link with Facebook (like Spotify, SoundCloud, or Tinder), or Facebook-hosted applications (things like Truth Game, What You Would Look Like As The Other Gender, or Which Meme Are You?). In Facebook’s settings you can view and remove applications that seem a little shady.

You can do so by visiting this link, or by going into settings, then going into Apps.

After that you will see a screen like this, and you can view and remove apps from there.

However, according to Facebook, “Apps you install may retain your info after you remove them from Facebook,” and they recommend you “Contact the app developer to remove this info.” There is a lot to learn from the events surrounding Facebook and Cambridge Analytica this month, and one lesson is to be wary of who you allow to access your personal information.

Creating and Remembering Long Passwords – The Roman Room Concept

Comic courtesy of xkcd by Randall Munroe

If you are anything like me, you have numerous passwords to keep track of. I can also safely assume that, unless you are in the vast minority of people, you have autofill/remembered passwords turned on for all of your accounts. I’m here to tell you that there is an easy way to remember your passwords so that these convenient insecurities can be avoided.

The practice that I use and advocate for creating and remembering passwords is called the Roman Room. I’ll admit, this concept is not my own; I borrowed it from a TV show called Leverage. I found it to be a neat concept, and I have employed it ever since. The practice works as follows: imagine a room; it can be factual or fictional. Now imagine specific, detailed items that you can either “place” in the room, or that exist in the room in real life. This place could be your bedroom, your family’s RV, really anywhere that you have a vivid memory of and can recall easily. I suggest thinking of items that you know very well, as this will make describing them later easier: a piece of artwork, a unique piece of furniture, or a vacation souvenir. Something that makes a regular appearance in the same spot, or that has a permanence about it.

Now comes the challenging part: creating the password. The difficulty lies in crafting a password that fulfills the password requirements at hand. This technique is most useful when you have the option of a longer password (16+ characters), as that adds security and allows for a more memorable, unique password. Let’s say, for example, that I often store my bicycle by hanging it on my bedroom wall. It’s a black and red mountain bike with 7 speeds. I could conjure up the password “Black&RedMountain7Sp33d”.

Editor: This is not Tyler's bike.

Image: bicyclehabitat.com

Alternatively, I could create a password that describes the state of the bike as opposed to its appearance. This example reminds me of how the bike looks when it’s hung on the wall: it looks like it’s floating, which reminds me of that scene from E.T. I could then create the password “PhoneHomeB1cycle”, or something along those lines. This technique is just something I find useful when it comes time to create a new password, and a means to remember passwords easily that also keeps me from lazily using the same password again and again. Though this method doesn’t always generate the most secure password (by that I mean a gibberish-looking password), it is a means to help you create better passwords and remember them without having to store them behind yet another password (in a password manager). What good is a password if you can’t remember it or have to write it down?