The world has seen several generations of wireless technology, and now comes the fifth generation – 5G. With each generation faster and more capable than the last, 5G is one of the fastest and most powerful wireless technologies the world has seen.
Most phones today support 4G, so the high speeds you enjoy on your smartphone are powered by a 4G network. 5G will provide an even greater speed boost. According to Open Signal, the average 4G speed is about 16.9 megabits per second (Mbps); 5G promises to deliver gigabit speeds (>1 Gbps). This leap will likely allow users to stream not only HD but 4K HDR content and much more with ease, all thanks to its speed.
Speed is a big part of 5G, but 5G is not all about speed. The new technology also changes the number of cell sites required for coverage and the number of devices that can connect to a single cell site. As technology advances, the number of devices each person owns increases, and radical changes in cars (such as self-driving cars) mean they too need network connectivity. As a result, more devices need connectivity in smaller regions. A 5G cell site will be able to connect more devices to the network in a small area; however, providing high speed to that many devices requires a greater number of 5G cell sites.
A current issue with today’s networks is latency, defined as “the delay before a transfer of data begins following an instruction for its transfer”. One of the goals of 5G is to reduce latency, which can mean an improved experience for gamers and virtual reality users. Latency is also a very important factor in the automotive industry: in the future, cars may communicate with each other over the 5G network, and those conversations could prevent crashes when incorporated into crash-avoidance systems. Reduced latency could be very effective in preventing accidents, especially with the upcoming technology of self-driving cars.
Cryptocurrencies have taken a seemingly permanent foothold in the world of technology and banking; more and more people are reaching out and investing or making transactions with Bitcoin and similar online coins. The potential impact that these decentralized coins have on our society is enormous for laypeople and tech enthusiasts alike.
Why is decentralization a big deal?
Throughout history, from the great Roman Empire to the modern-day United States, money has been backed, printed, and controlled by a governing body of the state. Artificial inflation rates, adjustable interest rates, and rapid economic collapses and booms are all side effects of a governing body with an agenda controlling the money and its supply.
Bitcoin, for example, is one of many online cryptocurrencies, and has no official governing entity. This is completely uncharted territory, as not only is it not being manipulated artificially, but it is not associated with any governing body or any regulations and laws that may come with it. The price is determined solely on the open market – supply and demand.
No other currency has ever been as free of a governing body and state as cryptocurrencies are today. The major effect of this is what it will do to the banking industry. Banks rely on governments to control interest rates, and they rely on there being a demand for money, specifically a demand for money to be spent and saved. Banks are intertwined with our identities: it is assumed that everyone has a checking account at a large bank, and thus forfeits the privacy and personal information that goes along with creating a bank account and identity. The opportunity to choose whether or not to be part of a bank, and further to be your own bank and hold your own cryptocurrencies in your own locked vault, is a privilege none of our ancestors were ever granted.
The implications of masses of people deciding to be their own banks would be catastrophic for banking entities. Purchases and transactions would become more secure and more private. People could no longer be tracked by where they swiped their credit card: Bitcoin is pseudonymous by its very nature, with addresses that are not tied to real-world identities by default (though the ledger itself is public). If enough people choose to take this route, the demand for banks will go down, changing the entire workings of the very foundation of our financial system.
What’s the catch?
A heated discussion is currently underway about the usability of cryptocurrency in today’s world. This topic is under heavy scrutiny, as it will ultimately determine whether cryptocurrencies can become a major player in today’s economy.
The cons of cryptocurrency currently lie in its usability for small and/or quick transactions. For Bitcoin to be used, it must be supported by both the buyer and the seller. That means business owners must have a certain threshold of tech-savviness to even entertain the thought of accepting bitcoin as payment.
In conjunction with needing support on both ends, transaction fees are determined by how quickly the transaction needs to “go through” the network – see this article on how bitcoin transactions work on the tech side – and by the transaction’s data size, not its dollar amount. For example, a $100 transaction that needs to reach the other person in 20 minutes will likely be significantly more expensive than a $100 transaction that can take 24 hours. This spells trouble for small transactions, like those at your local coffee shop. If a coffee shop wants to accept bitcoin, it has two options. It can either take the gamble and allow a longer period of time for transactions to process – running the risk of someone not actually sending a transaction and skimming a free service – or require a quick 20-minute transaction with higher fees for the buyer, and in turn a possible drop in sales via bitcoin.
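To make the fee dynamics concrete, here is a minimal sketch in Python (the 140-vbyte transaction size and the fee rates are made-up illustrative numbers; real fees are quoted in satoshis per virtual byte and move with network congestion):

```python
# Sketch of Bitcoin's fee model: fee = transaction size * bid fee rate.
# The sizes and rates below are assumptions for illustration only.
def estimate_fee(tx_vbytes: int, sat_per_vbyte: float) -> int:
    """Return the fee in satoshis for a given transaction size and fee rate."""
    return round(tx_vbytes * sat_per_vbyte)

# The same $100 payment costs very different fees depending on urgency:
rush_fee = estimate_fee(140, 50)     # bid high to confirm in the next block
patient_fee = estimate_fee(140, 2)   # willing to wait hours for confirmation
print(rush_fee, patient_fee)         # prints: 7000 280
```

Note that the dollar value sent never enters the calculation: a $5 coffee and a $5,000 transfer of the same data size pay the same fee at the same fee rate, which is exactly why tiny purchases get squeezed.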
The last point is crucial to understanding and predicting the future of cryptocurrencies in our world. If the fees and time for transactions to complete are lowered and made more efficient, Bitcoin will almost inevitably take a permanent resting place in our society as a whole, and perhaps be the most used currency, changing the game and freeing money up from regulation, agendas, and politics.
Artificial intelligence in news media is being used in many new ways, from speeding up research to accumulating and cross-referencing data and beyond.
You might be wondering: how does AI do something as complex as writing the news?
AI writes the news by sifting through huge amounts of data and categorizing it to find what is useful. The AI tool then uses this data to train itself to imitate human writers. It also helps human reporters avoid grunt work such as tracking scores or updating breaking news.
Automated journalism is everywhere, from Google News to Facebook’s fake-news checker, and major publications use AI tools as well. Narrative Science’s Quill, for example, puts together reports and stories based purely on raw data. The kicker: in one study, most people couldn’t tell the difference between articles written by the software and those written by real journalists.
The news industry predicts that 90% of articles will be written by AI within the next 10-15 years. The industry is going through a huge push toward automated news generation because of the huge amounts of data we are amassing.
While some of us might be scared of machines taking over and influencing our minds, the reality couldn’t be further from it. These AI tools can only write fact-based articles, which read much more like a computer reciting a list of facts than like a qualitative article written by a real journalist. These tools don’t have the power to sway most people, and checks are being put in place to make sure they aren’t used to spread “fake news”.
Steganography is the process of hiding one file inside another, most popularly, hiding a file within a picture. If you’re a fan of Mr. Robot you are likely already somewhat familiar with this.
Although hiding files inside pictures may seem hard, it is actually rather easy. All files at their core are just streams of bytes, so hiding one file inside another is largely a matter of appending one file’s bytes to the end of the other.
Even though this is possible on all platforms, it is easiest to accomplish on Linux (although the following commands will probably work on macOS as well).
There are many different ways to hide different types of files, however the easiest and most versatile method is to use zip archives.
Once you have created a zip archive, you can append it to the end of an image file, such as a png:
cat deathstarplans.zip >> r2d2.png
If you’re wondering what just happened, let me explain. cat prints out the contents of a file (deathstarplans.zip in this instance). Instead of printing to the terminal, >> tells your shell to append that output to the end of the specified file, r2d2.png.
We could have also used > instead, but that would replace the contents of r2d2.png entirely. The result would still be a working zip archive with a .png extension, but the image would no longer display, and the file would be easily recognized as containing a zip, defeating the entire purpose.
Getting the file(s) out is also easy: simply run unzip r2d2.png. Unzip will warn that “x extra bytes” precede the zip file, which you can ignore; it basically just restates that we hid the zip inside the png. And so the files pop out.
So why zip? Tar tends to be more popular on Linux; however, tar has a problem with this method. Tar does not scan through the file to find the actual start of the archive, whereas zip does so automatically. That isn’t to say it’s impossible to get tar to work, it simply requires some extra work (i.e. scripting). However, there is another, more advanced way: steghide.
Unlike zip, steghide does not come preinstalled on most Linux distros, but it is in most default repositories, including those for Arch and Ubuntu/Linux Mint.
sudo pacman -S steghide – Arch
sudo apt install steghide – Ubuntu/Linux Mint
Steghide does have its ups and downs. One upside is that it is a lot better at hiding and can easily hide any file type. It does so by using an advanced algorithm to hide it within the image (or audio) file without changing the look (or sound) of the file. This also means that without using steghide (or at least the same mathematical approach as steghide) it is very difficult to extract the hidden files from the image.
However, there is a big drawback: steghide only supports a limited set of ‘cover’ file types – JPEG, BMP, WAV, and AU. But since JPEG is a common image type, this isn’t a large drawback, and the file will not look out of place.
To hide the file the command would be steghide embed -cf clones.jpg -ef order66.pdf
At which point steghide will prompt you to enter a password. Keep in mind that if you lose the password you will likely never recover the embedded file.
To extract the file, run steghide extract -sf clones.jpg. Assuming you enter the correct password, the hidden file is revealed.
All that being said, both methods leave the ‘secret’ file untouched and only hide a copy. Assuming the goal is to hide the file, the copies left in the open need to be securely removed. shred is a good command for this: it overwrites the file multiple times to make it as difficult as possible to recover.
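A minimal cleanup might look like this (the file name is a stand-in; shred is part of GNU coreutils, so it is already on most Linux systems):

```shell
printf 'secret plans' > deathstarplans.zip   # stand-in for the leftover original
shred -u -z -n 3 deathstarplans.zip          # 3 overwrite passes, a final zeroing pass, then delete
```

One caveat worth knowing: shred’s overwriting guarantees only hold on filesystems that overwrite data in place, so it is less effective on journaling or copy-on-write filesystems and on SSDs.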
Google, the world’s most popular search engine, usually does a great job finding what we need with little information for us. But what about when Google isn’t giving us the hits we need?
This article will go over some lesser-known tips that will help refine your searches and tell Google exactly what you’re searching for. It will also go over some of Google’s fun newer features.
1. Filter Results by Time
Users can browse only the most recent results. After you search, a ‘Tools’ button will appear on the right below the search bar. If you click on ‘Tools’, ‘Any time’ and ‘All Results’ options will appear under the search bar. Under ‘Any time’ there are options to show results ranging from the past hour to the past year.
2. Search Websites for Specific Words
If you are searching through a specific website you can now search for keywords. Ex: to see how many times Forbes mentioned Kylie Jenner you would simply type “Kylie Jenner site:Forbes.com”.
3. Search Exact Phrases and Quotes
A more commonly used trick is typing quotation marks around words or phrases to tell Google to only show results containing the exact words in quotes.
4. Omit Certain Words Using the Minus Sign
In contrast to the last tip, placing a minus sign directly before a word will omit results containing that word. For example, typing “Apple -iPhone” will get rid of all Apple results that mention iPhone.
5. Use Google as a timer
Google now has stopwatch and timer features that show up when you simply search “set timer”. No need to mess around with apps when you can just pull it up in your browser!
6. Search Newspaper Archives from the 1800s
Search “google news archive search” and the first link will bring you to a page with the names of hundreds of newspapers. You can browse issues of newspapers by date and name.
7. Use Google to Flip a Coin
Need help making a decision? Simply search “flip a coin” and Google will flip a virtually generated coin and give you an answer of heads or tails.
8. Search Through Google’s Other Sites
Google has other search engines for specific types of results. For example, if you’re searching for a blog use “Google Blog Search” or if you want to search for a patent use “Google Patent Search”, etc.
Now with these Google tips you can search Google like a pro!
Hey wow, look at this! I’ve finally rallied myself to write a blog article about something that is not digital audio! Don’t get too excited though: this is still going to be a MATLAB article and, although I am not going to get too deep into any DSP, the fundamental techniques outlined in this article can be applied to a wide range of problems.
Now, let me go on record here and say I am not much of a computer programmer. Thus, if you are looking for a guide to programming in general, this is not the place for you! However, if you are perhaps an engineering student who’s learned MATLAB for school and are maybe interested in learning what this language is capable of, this is a good place to start. Alternatively, if you are familiar with other scripting languages (*cough cough* Python), then this article may help you start transposing your knowledge to a new language.
So What are Functions?
I am sure that, depending on who you ask, there are a lot of definitions for what a function actually is. Functions in MATLAB more or less follow the standard signals-and-systems model of a system; this is to say they have a set of inputs and a corresponding set of outputs. There we go, article finished, we did it!
Joking aside, there is not much more to be said about how functions are used in MATLAB; they are excellently simple. Functions in MATLAB do provide great flexibility though because they can have as many inputs and outputs as you choose (and the number of inputs does not have to be the same as the number of outputs) and the relationship between the inputs and outputs can be whatever you want it to be. Thus, while you can make a function that is a single-input-single-output linear-time-invariant system, you can also make literally anything else.
How to Create and Use Functions
Before you can think about functions, you’ll need a MATLAB script in which to call your function(s). If you are familiar with an object oriented language (*cough cough* Java), the script is similar to your main method. Below, I have included a simple script where we create two numbers and send them to a function called noahFactorial.
It doesn’t really matter what noahFactorial does; the only thing that matters here is that the function has two inputs (here X and Y) and one output (Z).
Our actual call to the noahFactorial function happens on line 4. On the same line, we also assign the output of noahFactorial to the variable Z. Line 6 has a print statement that will print the inputs and outputs to the console along with some text.
Now, looking at noahFactorial, we can see how to define and write a function. We start by writing ‘function’ and then defining the function output. Here, the output is just a single variable, but if we were to change ‘output’ to ‘[output1, output2]’, our function would return two separate output values.
Some of you more seasoned programmers might notice that ‘output’ is not given a datatype. This will undoubtedly make some of you feel uncomfortable but I promise it’s okay; MATLAB is pretty good at knowing what datatype something should be. One benefit of this more laissez-faire syntax is that ‘output’ itself doesn’t even have to be a single variable. If you can keep track of it, you can make ‘output’ a 2×1 array and treat the two values like two separate outputs.
Once we write our output, we put an equals sign down (as you might expect), write the name of our function, and put (in parentheses) the input(s) to our function. Once again, the typing on the inputs is pretty soft so those too can be arrays or single values.
In all, a function declaration should look like:
function output = functionName(input)
function [output1, output2, …, outputN] = functionName(input1, input2, …, inputM)
And just to reiterate, N and M here do not have to be the same.
Once inside our function, we can do whatever MATLAB is capable of. Unlike in Java, return statements are not used to send anything to the output; rather, they are used to stop the function in its tracks. Usually, I will assign an output for error messages: if something goes wrong, I assign a value to the error output and follow that with ‘return’. Doing this sends back the error message and stops the function at the return statement.
So, if we don’t use return statements, how do we send values to the output? We make sure that our function has variables with the same names as the outputs, and we assign those variables values inside the function. When the function ends, whatever values the output variables hold are the values sent to the output.
For example, if we define an output called X and somewhere in our function we write ‘X=5;’ and we don’t change the value of X before the function ends, the output X will have the value: 5. If we do the same thing but make another line of code later in the function that says ‘X=6;’, then the value of X returned will be: 6. Nice and easy.
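Putting it all together, a complete function file might look like the sketch below. The body is my own invention (the article never shows what noahFactorial actually computes); all that matters is the two-inputs-one-output shape:

```matlab
% noahFactorial.m -- saved as its own file, named after the function
function output = noahFactorial(input1, input2)
    % Assign the output variable directly; its final value is returned
    % automatically when the function ends (no return statement needed).
    output = factorial(input1) + factorial(input2);
end
```

Calling it from a script is then just Z = noahFactorial(4, 2);, which with this hypothetical body gives 24 + 2 = 26.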
…And it’s that simple. The thing I really love about functions is that they do not have to be associated with a script or with an object, you can just whip one up and use it. Furthermore, if you find you need to perform some mathematical operation often, write one function and use it with as many different scripts as you want! This insane flexibility allows for some insane problem-solving capability.
Once you get the hang of this, you can do all sorts of things. Usually, when I write a program in MATLAB, I have my main script (sometimes a .fig file if I’m writing a GUI) in one folder, maybe with some assorted text and .csv files, and a whole other folder full of functions for all sorts of different things. The ability to create functions and some good programming methodology can allow even the most novice of computer programmers to create incredibly useful programs in MATLAB.
NOTE: For this article, I used Sublime Text to write out the examples. If you have never used MATLAB before and you turn it on for the first time and it looks completely different, don’t be alarmed! MATLAB comes pre-packaged with its own editor, which is quite good, but you can also write MATLAB code in another editor, save it as a .m file, and then open it in the MATLAB editor or run it through the MATLAB kernel later.
Docker is a very popular tool in the world of enterprise software development. However, it can be difficult to understand what it’s really for. Here we will take a brief look at why software engineers, and everyday users, choose Docker to quickly and efficiently manage their computer software.
The Views and Opinions Expressed in This Article Are Those of Parker Louison and Do Not Necessarily Reflect the Official Policy or Position of UMass Amherst IT
A Note of Intention
I want to start off this article by explaining that I’m not making this in an effort to gloat or brag, and I certainly hope it doesn’t come across that way. I put all of the creative energy I had left this semester into the project I’m about to dissect and discuss, so sadly I won’t be publishing a video this semester (as I’ve done for the past two semesters). One of the reasons I’m making this is because a lot of the reaction towards what I made included people asking how I made it and how long it took me, and trust me, we’ll go in depth on that.
My First Taste
My first experience with high-grade virtual reality was a few weeks before the start of my sophomore year at UMass when my friend Kyle drove down to visit me, bringing along his HTC Vive after finding out that the only experience I’d had with VR was a cheap $20 adapter for my phone. There’s a consensus online that virtual reality as a concept is better pitched through firsthand experience rather than by word of mouth or marketing. The whole appeal of VR relies on subjective perception and organic optical illusions, so I can understand why a lot of people think the whole “you feel like you’re in the game” spiel sounds like nothing but a load of shallow marketing. Remember when Batman: Arkham Asylum came out and nearly every review of it mentioned that it made you feel like Batman? Yeah, well now there’s actually a Batman Arkham VR game, and I don’t doubt it probably does make you actually feel like you’re Batman. The experience I had with VR that night hit me hard, and I came to understand why so many people online were making it out to be such a big deal. Despite my skeptical mindset going in, I found that it’s just as immersive as many have made it out to be.
This wasn’t Microsoft’s Kinect, where the action of taking away the remote actually limited player expression. This was a genuinely deep and fascinating technological breakthrough that opens the door for design innovations while also requiring programmers to master a whole new creative craft. The rulebook for what does and doesn’t work in VR is still being written, and despite the technology still being in its early stages, I wanted in. I wanted in so badly that I decided to try and save up my earnings over the next semester in an effort to buy one. That went about as well as you’d expect; not just because I was working within a college student’s budget, but also because I’m awful with my money. My Art-Major friend Jillian would tell you it’s because I’m a Taurus, but I think it has more to do with me being a giant man-child who impulse-purchases stupid stuff because the process of waiting for something to arrive via Amazon feels like something meaningful in my life. It’s no wonder I got addicted to Animal Crossing over Spring Break…
Anyway, I was sitting in my Comp-Lit discussion class when I got the email about the Digital Media Lab’s new Ready Player One contest, with the first place winner taking home an HTC Vive Headset. I’m not usually one for contests, and I couldn’t picture myself actually winning the thing, but something about the challenge piqued my interest. The task involved creating a pitch video, less than one minute in length, in which I’d have to describe how I would implement Virtual Reality on campus in a meaningful way.
With Virtual Reality, there are a lot of possible implementations relating to different departments. In the Journalism department, we’ve talked at length in some of my classes about the potential applications of VR, but all of those applications were either for the benefit of journalists covering stories or the public consuming them. The task seemed to indicate that the idea I needed to pitch had to be centered more on benefiting the average college student, rather than benefiting a specific major (at least, that’s how I interpreted it).
One of my original ideas was a virtual stress-relief dog, but then I realized that people with anxiety would likely only get even more stressed out with having to put on some weird giant headset… and real-life dogs can give hecking good nuzzles that can’t really be simulated. You can’t substitute soft fur with hard plastic.
I came to college as a journalism major, and a day rarely goes by when I don’t have some doubts about my choice. In High School I decided on journalism because I won this debate at a CT Youth Forum thing and loved writing and multimedia, so I figured it seemed like a safe bet. Still, it was a safe bet that was never pitched to me. I had no idea what being a journalist would actually be like; my whole image of what being a reporter entailed came from movies and television. I thought about it for a while, about how stupid and hormonal I was and still am, and realized that I’m kind of stuck. If I hypothetically wanted to switch to chemistry or computer science, I’d be starting from scratch with even more debt to bear. Two whole years of progress would be flushed down the toilet, and I’d have nothing to show for it. College is a place for discovery; where your comfortable environment is flipped on its head and you’re forced to take care of yourself and make your own friends. It’s a place where you work four years for a piece of paper to make your resume look nicer when you put it on an employer’s desk, and you’re expected to have the whole rest of your life figured out when you’re a hormonal teenager who spent his savings on a skateboard he never learned how to ride.
And so I decided that, in this neo-cyberpunk dystopia we’re steadily developing into, it would make sense for simulations to come before rigorous training. Why not create simulated experiences where people could test the waters for free? Put themselves in the shoes of whatever career path they want to explore to see if the shoes fit right, you know?
I mentioned “cyberpunk” there earlier because I have this weird obsession with cyberpunk stuff at the moment and I really wanted to give my pitch video some sort of tongue-in-cheek retrograde 80s hacker aesthetic to mask my cynicism as campy fun, but that had to be cut once I realized I had to make this thing under a minute long.
Gathering My Party and Gear
Anyway, I wrote up a rough script and rented out one of the booths in the Digital Media Lab. With some help from Becky Wandel (the News Editor at WMUA) I was able to cut down my audio to just barely under the limit. With the audio complete, it came time to add visual flair. I originally wanted to do a stop-motion animated thing with flash-cards akin to the intros I’ve made for my Techbytes videos, but I’m slow at drawing and realized that it’d take too much time and effort, which is hilarious because the idea I settled on was arguably even more time-consuming and draining.
I’m the proud owner of a Nikon D80, a hand-me-down DSLR from my mom, which I bring with me everywhere I go, mostly because I like taking pictures, but also because I think it makes me seem more interesting. A while back I got a speck of dust on the sensor, which requires special equipment to clean (basically a glorified turkey baster). I went on a journey to the Best Buy at the Holyoke Mall with two friends to buy said cleaning equipment while documenting the entire thing using my camera. Later, I made a geeky stop-motion video out of all those photos, which I thought ended up looking great, so I figured doing something similar for the pitch video would be kind of cool. I messaged a bunch of my friends, and in a single day I managed to shoot the first 60% of the photos I needed. I then rented out the Vive in the DML and did some photoshoots there.
At one point while I was photographing my friend Jillian playing theBlu, she half-jokingly mentioned that the simulation made her want to study Marine Biology. That kind of validated my idea and pushed me to make sure I made this video perfect. The opposite effect happened when talking to my friend Rachael, who said she was going to pitch something for disability services, to which I immediately thought “damn, she might win with that.”
I then knew what I had to do. It was too late to change my idea or start over, so I instead decided that my best shot at winning was to make my video so stylistically pleasing and attention-grabbing that it couldn’t be ignored. If I wasn’t going to have the best idea, then gosh darn it (I can’t cuss because this is an article for my job) I was going to have the prettiest graphics I could muster.
The Boss Fight
I decided to use a combination of iMovie and Photoshop, programs I’m already familiar with, because teaching myself how to use more efficient software would ironically be less efficient given the short time frame I had to get this thing out the door. Using a drawing tablet I borrowed from my friend Julia, I set out to create the most complicated and ambitious video project I’ve ever attempted to make.
A few things to understand about me: when it comes to passion projects, I’m a bit of a perfectionist and extremely harsh on myself. I can’t even watch my Freshman Year IT video because I accidentally made it sound like a $100 investment in some less than amazing open back headphones was a reasonable decision on my part, and my other IT video makes me cringe because I thought, at the time, it’d be funny to zoom in on the weird hand motions I make while I talk every five seconds.
So in this case, I didn’t hold back and frequently deleted whole sections of my video just because I didn’t like how a single brush stroke animated (with the exception of the way my name is lopsided in the credits, which will haunt me for the rest of my life). For two weeks, I rigorously animated each individual frame in Photoshop, exported it, and imported it into iMovie.
(Above) A visual representation of all the files it took to create the video
(Above) Frame by frame, I lined up my slides in iMovie
The most demanding section was, without a doubt, the one involving my friend Matthew, which I spent one out of the two weeks entirely focused on. For that section, I needed it to animate at a speed faster than 0.04 seconds, which is impossible because 0.04 seconds is the shortest you can make a frame in iMovie’s streamlined interface, so I ended up creating a whole new project file, slowing down my audio by half-speed, editing the frames of that section relative to that slowed down audio before exporting it, putting it into the original project file and doubling its speed just to get it to animate smoothly.
(Above) Some sections required me to find loopholes in the software to get them to animate faster than iMovie would allow
(Above) Some of the scrap paper I scribbled notes on while editing the video together
Each individual border was drawn multiple times with slight variations and all the on-screen text (with the exception of the works cited) was handwritten by me multiple times over so that I could alternate between the frames of animation to make sure everything was constantly moving.
(Above) Borders were individually drawn and cycled through in order to maintain visual momentum
This was one of my major design philosophies during the development of this project: I didn’t want there to be a single moment in the 59 seconds where nothing was moving. I wanted my video to grab the viewer’s attention, and I feared that losing momentum in the visual movement would cause me to lose the viewer’s interest. The song LACool by DJ Grumble came on my Spotify radio coincidentally right when I was listening over the audio for the section I was editing, and I thought it fit so well I bought it from iTunes on the spot and edited it in.
I finished my video on Monday, March 26th, turned it in to the Digital Media Lab, stumbled back to my dorm, and went to bed at 6:00 PM by accident.
(Above) The final video submission
The winner wouldn’t be announced until Wednesday, so for two days I nervously waited until 6:00 PM on March 28th, when I sat on my bed in my dorm room refreshing the Digital Media Lab website every 7 seconds like a stalker on an ex’s Facebook page waiting for the winner to finally be posted. At 6:29 PM I got a call from an unrecognized number from Tallahassee, Florida, and almost didn’t answer because I thought it was a sales call. Turns out it was Steve Acquah, the coordinator of the Digital Media Lab, who informed me that my video won. Soon after, the Digital Media Lab Website was also updated with the announcement.
(Above) A screenshot taken of the announcement on the Digital Media Lab Website
Along with the raw joy and excitement came a sort of surreal disbelief. Looking back on those stressful weeks of work, it all felt like it flew by faster than I could’ve realized once I got that phone call. I’m so grateful for not only the reward but the experience. Making that video was a stressful nightmare, but it also forced me to push myself to my creative limits and challenge myself in so many ways. On a night where I would’ve probably just gone home and watched Netflix by myself, I sprinted around campus to meet up with and take photos of my friends. This project got me to get all my friends together and rent out the Vive in the DML, basically forcing me to play video games and have fun with the people I love. While the process of editing it all together drove me crazy, the journey is definitely going to be a highlight of my time at UMass.
I’m grateful to all of my friends who modeled for me, loaned me equipment, got dinner with me while I was stressing out over editing, played Super Hot VR with me, gave me advice on my audio, pushed me to not give up, and were there to celebrate with me when I won. I’m also immensely grateful to the staff and managers of the DML for providing me with this opportunity, as well as for their compliments and praise for the work I did. This was an experience that means a lot to me and it’s one I won’t soon forget. Thank you.
I picked up my prize the other day at the DML (see photo above the title of this article)! Unfortunately, I have a lot of work going on, so it’s going to be locked up in a safe place until that’s done. Still, it’s not like I could use it right now if I wanted to. My gaming PC hasn’t been touched in ages (since I don’t bring it with me to college) so I’m going to need to upgrade the GPU before I can actually set up the Vive with it. It’s a good thing there isn’t a spike in demand for high-end GPUs at the moment for cryptocurrency mining, right?
(Above) A visual representation of what Bitcoin has done to the GPU market (and my life)
Regardless of when I can actually use the prize I won, this experience was one I’m grateful to have had. The video I made is one I’m extremely proud of, and the journey I went on to create it is one I’ll think about for years to come.
Digital audio again? Ah yes… only in this article, I will set out to examine a simple yet complicated question: how does the sampling rate of digital audio affect its quality? If you have no clue what the sampling rate is, stay tuned and I will explain. If you know what sampling rate is and want to know more about it, also stay tuned; this article will go over more than just the basics. If you own a recording studio and insist on recording every second of audio at the highest possible sampling rate to get the best quality, read on and I hope to inform you of the mathematical benefits of doing so…
What is the Sampling Rate?
In order for your computer to be able to process, store, and play back audio, the audio must be in a discrete-time form. What does this mean? It means that, rather than the audio being stored as a continuous sound-wave (as we hear it), the sound-wave is broken up into a long sequence of individual points called samples. This way, the discrete-time audio can be represented as a list of numerical values in the computer’s memory. This is all well and good, but some work needs to be done to turn a continuous-time (CT) sound-wave into a discrete-time (DT) audio file; that work is called sampling.
During sampling, the amplitude (loudness) of the CT wave is measured and recorded at regular intervals to create the list of values that make up the DT audio file. The inverse of this sampling interval is known as the sample rate and has a unit of Hertz (Hz). By far, the most common sample rate for digital audio is 44100 Hz; this means that the CT sound-wave is sampled 44100 times every second.
This is a staggering number of data points! On an audio CD, each sample is represented by two bytes per channel; with two stereo channels, that means one second of audio takes up over 170 KB of space! Why is all this necessary? you may ask…
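The arithmetic behind that figure is easy to check. A quick sketch, assuming the standard CD format of 16-bit (two-byte) samples and two stereo channels:

```python
# Data rate of CD-quality audio
sample_rate = 44100      # samples per second (Hz)
bytes_per_sample = 2     # 16-bit samples
channels = 2             # CD audio is stereo

bytes_per_second = sample_rate * bytes_per_sample * channels
print(bytes_per_second)          # 176400 bytes
print(bytes_per_second / 1024)   # just over 172 KB per second of audio
```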
The Nyquist-Shannon Sampling Theorem
Some of you more interested readers may have heard already of the Nyquist-Shannon Sampling Theorem (some of you may also know this theorem simply as the Nyquist Theorem). The Nyquist-Shannon Theorem asserts that any CT signal can be sampled, turned into a DT file, and then converted back into a CT signal with no loss in information so long as one condition is met: the CT signal is band-limited at the Nyquist Frequency. Let’s unpack this…
Firstly, what does it mean for a signal to be band-limited? Every complex sound-wave is made up of a whole myriad of different frequencies. To illustrate this point, below is the frequency spectrum (the graph of all the frequencies in a signal) of All Star by Smash Mouth:
Smash Mouth is band-limited! How do we know? Because the plot of frequencies ends. This is what it means for a signal to be band-limited: it does not contain any frequencies beyond a certain point. Human hearing is band-limited too; most humans cannot hear any frequencies above 20,000 Hz!
So, I suppose we can take this to mean that, if the Nyquist frequency is just right, any audible sound can be represented in digital form with no loss in information? By this theorem, yes! Now, you may ask, what does the Nyquist frequency have to be for this to happen?
For the Shannon-Nyquist Sampling Theorem to hold, the sample rate must be greater than twice the highest frequency being sampled; equivalently, the Nyquist frequency (half the sample rate) must exceed the highest frequency in the signal. For sound, the highest audible frequency is 20 kHz; and thus, the minimum sample rate required to capture audible sound with no loss in information is… 40 kHz. What was that sample rate I mentioned earlier? You know, the one that is so common that basically all digital audio uses it? It was 44.1 kHz. Huzzah! Basically all digital audio is a perfect representation of the original sound it is representing! Well…
Aliasing: the Nyquist Theorem’s Complicated Side-Effect
Just because we cannot hear sound above 20 kHz does not mean it does not exist; there are plenty of sound-waves at frequencies higher than humans can hear.
So what happens to these higher sound-waves when they are sampled? Do they just not get recorded? Unfortunately no…
So if these higher frequencies do get recorded, but frequencies above the Nyquist frequency cannot be sampled correctly, then what happens to them? They are falsely interpreted as lower frequencies and superimposed over the correctly sampled frequencies. The distance between the high frequency and the Nyquist frequency governs what lower frequency these high-frequency signals will be interpreted as. To illustrate this point, here is an extreme example…
Say we are trying to sample a signal that contains two frequencies: 1 Hz and 3 Hz. Due to poor planning, the Nyquist frequency is selected to be 2 Hz (meaning we are sampling at a rate of 4 Hz). Further complicating things, the 3 Hz cosine-wave is offset by 180° (meaning the waveform is essentially multiplied by -1). So we have the following two waveforms….
When the two waves are superimposed to create one complicated waveform, it looks like this…
Pretty, right? Well unfortunately, if we try to sample this complicated waveform at 4 Hz, do you know what we get? Nothing! Zero! Zilch! Why is this? Because when the 3 Hz cosine wave is sampled and reconstructed, it is falsely interpreted as a 1 Hz wave! Its frequency is reflected about the Nyquist frequency of 2 Hz. Since the original 1 Hz wave is below the Nyquist frequency, it is interpreted with the correct frequency. So we have two 1 Hz waves but one of them starts at 1 and the other at -1; when they are added together, they create zero!
Another way we can see this phenomenon is by looking at the graph. Since we are sampling at 4 Hz, we are observing and recording four evenly-spaced points in each second: between zero and one, one and two, two and three, and so on… Take a look at the above graph and try to find four evenly-spaced points between zero and one (but not including one). You will find that every single one of these points corresponds with a value of zero! Wow!
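The whole example can be verified numerically. Here is a minimal sketch, writing the 180° offset on the 3 Hz cosine as a negation:

```python
import math

def signal(t):
    # 1 Hz cosine plus a 3 Hz cosine offset by 180 degrees (i.e. negated)
    return math.cos(2 * math.pi * 1 * t) - math.cos(2 * math.pi * 3 * t)

fs = 4  # sample rate in Hz; the Nyquist frequency is fs / 2 = 2 Hz
samples = [signal(n / fs) for n in range(16)]

# Every sample is (numerically) zero: the 3 Hz wave aliases down to 1 Hz
# and exactly cancels the real 1 Hz wave at the sample instants.
print(all(abs(s) < 1e-9 for s in samples))  # True
```

Between the sample instants the signal is very much nonzero, which is exactly the information that sampling below the required rate throws away.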
So aliasing can be a big issue! However, designers of digital audio recording and processing systems are aware of this and actually provision special filters (called anti-aliasing filters) to get rid of these unwanted effects.
So is That It?
Nope! These filters are good, but they’re not perfect. Analog filters cannot just chop off all frequencies above a certain point; they have to, more or less, gradually attenuate them. So this means designers have a choice: either leave some high frequencies and risk distortion from aliasing, or roll off audible frequencies before they’re even recorded.
And then there’s noise… Noise is everywhere, all the time, and it never goes away. Modern electronics are rather good at reducing the amount of noise in a signal, but they are far from perfect. Furthermore, noise tends to be mostly present at higher frequencies; exactly the frequencies that end up getting aliased…
What effect would this have on the recorded signal? Well if we believe that random signal noise is present at all frequencies (above and below the Nyquist frequency), then our original signal would be masked with a layer of infinitely-loud aliased noise. Fortunately for digitally recorded music, the noise does stop at very high frequencies due to transmission-line effects (a much more complicated topic).
What can be Learned from All of This?
The end result of this analysis on sample rate is that the sample rate alone does not tell the whole story about what’s being recorded. Although 44.1 kHz (the standard sample rate for CDs and MP3 files) may be able to record frequencies up to 22 kHz, in practice a signal being sampled at 44.1 kHz will have distortion in the higher frequencies due to high frequency noise beyond the Nyquist frequency.
So then, what can be said about recording at higher sample rates? Some new analog-to-digital converters for musical recording sample at 192 kHz. Most, if not all, of the audio recording I do is done at a sample rate of 96 kHz. The benefit of recording at higher sample rates is that you can record high-frequency noise without it causing aliasing and distortion in the audible range. With 96 kHz, you get a full 28 kHz of bandwidth beyond the audible range where noise can exist without causing problems. Since signals with frequencies up to around 9.8 MHz can exist in a 10-foot cable before transmission-line effects kick in, this is extremely important!
And with that, a final correlation can be predicted: the greater the sample rate, the less noise will result in aliasing in the audible spectrum. To those of you out there who have insisted that the higher sample rates sound better, maybe now you’ll have some heavy-duty math to back up your claims!
You’ve probably heard of Bitcoin. Maybe you’ve even heard of other cryptocurrencies, like Ethereum. Maybe you’ve heard that these cryptocurrencies are mined, but maybe you don’t understand how exactly a digital coin could be mined. We’re going to discuss what cryptocurrency miners do and why they do it. We will be discussing the Bitcoin blockchain in particular, but keep in mind that Bitcoin has grown several orders of magnitude greater in the 9-10 years it’s been around. Though other cryptocurrencies change some things up a bit, the same general concepts apply to most blockchain-based cryptocurrencies.
What is Bitcoin?
Bitcoin is the first and the most well-known cryptocurrency. Bitcoin came about in 2009 after someone (or someones, nobody really knows) nicknamed Satoshi Nakamoto released a whitepaper describing a concept for a decentralized peer-to-peer digital currency based on a distributed ledger called a blockchain, and created by cryptographic computing. Okay, those are a lot of fancy words, and if you’ve ever asked someone what Bitcoin is then they’ve probably thrown the same word soup at you without much explanation, so let’s break it down a bit:
Decentralized means that the system works without a main central server, such as a bank. Think of a farmer’s market versus a supermarket; a supermarket is a centralized produce vendor whereas a farmer’s market is a decentralized produce vendor.
Peer-to-peer means that the system works by each user communicating directly with other users. It’s like talking to someone face-to-face instead of messaging them through a middleman like Facebook. If you’ve ever used BitTorrent (to download Linux distributions and public-domain copies of the U.S. Constitution, of course), you’ve been a peer on a peer-to-peer BitTorrent network.
Blockchain is a hot topic right now, but it’s one of the harder concepts to describe. A blockchain performs the job of a ledger at a bank, keeping track of what transactions occurred. What makes blockchain a big deal is that it’s decentralized, meaning that you don’t have to trust a central authority with the list of transactions. Blockchains were first described in Nakamoto’s Bitcoin whitepaper, but Bitcoin itself is not equivalent to blockchain. Bitcoin uses a blockchain. A blockchain is made up of a chain of blocks. Each block contains a set of transactions, and the hash of the previous block, thus chaining them together.
Hashing is the one-way (irreversible) process of converting any input into a fixed-length string of bits. Hashing is useful in computer science and cryptography because it’s really easy to get the hash of something, but it’s almost impossible to find out what input originally made a particular hash. The same input will always produce the same output, but any little difference in the input will make a completely different hash. For example, in SHA-256, the hashing algorithm that Bitcoin uses, the hash of “UMass” will always be:
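You can try this yourself with Python’s standard library. A small sketch of both properties, determinism and the avalanche effect:

```python
import hashlib

# Hashing the same input always produces the same 256-bit digest
h1 = hashlib.sha256(b"UMass").hexdigest()
h2 = hashlib.sha256(b"UMass").hexdigest()
print(h1 == h2)   # True: same input, same hash
print(len(h1))    # 64 hex characters = 256 bits

# A tiny change to the input (one letter's case) gives a totally different digest
h3 = hashlib.sha256(b"uMass").hexdigest()
print(h1 == h3)   # False
```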
Those are the general details that you need to know to understand cryptocurrency. Miners are just one kind of participant in cryptocurrency.
Who are miners?
Anybody with a Bitcoin wallet address can participate in the blockchain, but not everybody who participates has to mine. Miners are the ones with the big, beefy computers that run the blockchain network. Miners run a mining program on their computer. The program connects to other miners on the network and constantly requests the current state of the blockchain. The miners all race against each other to make a new block to add to the blockchain. When a miner successfully makes a new block, they broadcast it to the other miners in the network. The winning miner gets a reward of 12.5 BTC for successfully adding to the blockchain, and the miners begin the race again.
Okay, so what are the miners doing?
Miners can’t just add blocks to the blockchain whenever they want. This is where the difficulty of cryptocurrency mining comes from. Miners construct candidate blocks and hash them. They compare that hash against a target.
Now get ready for a little bit of math: Remember those 256-bit hashes we talked about? They’re a big deal because there are 2^256 possible hashes (that’s a LOT!), ranging from all 0’s to all 1’s. The Bitcoin network has a difficulty value that changes over time to make finding a valid block easier or harder. Every time a miner hashes a candidate block, they look at the binary value of the hash, and in particular, how many 0’s the hash starts with. If the number of 0’s at the start of the hash is at least the target amount specified by the difficulty, then the block is valid! When a candidate block fails to meet the target, as they usually do, the mining program tries to construct a different block.
Remember that changing the block in any way makes a completely different hash, so a block with a hash one 0 short of the target isn’t any closer to being valid than another block with a hash a hundred 0’s short of the target. The unpredictability of hashes makes mining similar to a lottery: every candidate block has as good a chance of having a valid hash as any other block, just as any lottery ticket has the same odds of winning as any other ticket. However, having more computing power is like holding more tickets; in one 10-minute period, a supercomputer will be able to hash far more candidate blocks than a laptop, so it has better odds of finding a valid one.
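That race can be sketched as a toy proof-of-work loop. This is a simplification, not Bitcoin’s actual block format: the “candidate block” here is just a string plus a counter (a nonce), and the target is a count of leading zero bits:

```python
import hashlib

def leading_zero_bits(digest: bytes) -> int:
    # Count how many 0 bits the 256-bit digest starts with
    value = int.from_bytes(digest, "big")
    return 256 - value.bit_length()

def mine(block_data: str, target_bits: int) -> int:
    # Try nonce after nonce until the block's hash starts with enough 0 bits
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).digest()
        if leading_zero_bits(digest) >= target_bits:
            return nonce
        nonce += 1

winning_nonce = mine("some transactions", 16)  # ~65,000 attempts on average
print(winning_nonce)
```

Real Bitcoin mining double-hashes a structured block header and expresses the target as a full 256-bit number rather than a bit count, but the trial-and-error loop is the same idea; raising `target_bits` by one doubles the expected work.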
Can I become a miner?
You probably won’t be able to productively mine Bitcoin alone. It’s like buying 1 lottery ticket when other people are buying millions. Nowadays, most Bitcoin miners pool their mining power together into mining pools. They mine Bitcoin together to increase the chances that one of them finds the next block, and if one of the miners gets the 12.5 BTC reward, they split their earnings with the rest of the pool pro-rata: based on the computing power (number of lottery tickets) contributed.
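The pro-rata split itself is simple to sketch; here each miner’s contributed hash power plays the role of their lottery-ticket count (the names and numbers below are made up, and real pools use fancier payout schemes like pay-per-share):

```python
def split_reward(reward_btc, contributions):
    # Pay each pool member in proportion to the hash power they contributed
    total = sum(contributions.values())
    return {miner: reward_btc * power / total
            for miner, power in contributions.items()}

# Hypothetical pool: alice contributed 3x the hash power bob did
payouts = split_reward(12.5, {"alice": 300, "bob": 100})
print(payouts)  # {'alice': 9.375, 'bob': 3.125}
```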
The U.S. dollar used to be tied to the supply of gold. A U.S. dollar bill was essentially an I.O.U. from the U.S. Federal Reserve for some amount of gold, and you could exchange paper currency for gold at any time. The gold standard worked because gold is rare and you have to labor to dig it out of the ground. Instead of laboring with picks and shovels, Bitcoin miners labor by calculating hashes. Nobody can make fraudulent gold out of thin air. Bitcoin employs the same rules, but instead of making the scarce resource gold, it made the scarce resource computing power. It’s possible for a Bitcoin miner to get improbably lucky and find 8 valid blocks in one day and earn 100 BTC, just like it’s possible but improbable to stumble upon a massive golden boulder while mining one day. Both are wildly improbable, but it is actually impossible for someone to fake a block on the blockchain (the hash would be invalid!) or to fake a golden nugget (you can chemically detect fool’s gold!).
Other cryptocurrencies work in different ways. Some use different hashing algorithms. For example, Zcash is based on a mining algorithm called Equihash that is designed to be best mined by the kinds of graphics cards found in gaming computers. Some blockchains aren’t mined at all. Ripple’s cryptocurrency token, XRP, is mostly controlled by the Ripple company itself: all possible XRP tokens already exist and new ones cannot be “minted” into existence, unlike the 12.5 BTC mining reward in Bitcoin, and most XRP tokens are still owned by the company. Some coins, such as NEO, are not made valuable by scarcity of mining power at all. Instead of using “proof of work” like Bitcoin, they use “proof of stake” to validate ownership. You get paid simply for holding some NEO, and the more you have, the more you get!
Blockchains and cryptocurrencies have become popular buzzwords in the ever-connected worlds of computer science and finance. Blockchain is a creative new application of cryptography, computer networking, and processing power. It’s so new that people are still figuring out what else blockchains can be applied to. Digital currency seems to be the current trend, but blockchains could one day revolutionize health care record-keeping or digital elections. Research into blockchain technology has highlighted many weaknesses in the concept; papers have been published on doublespend attacks, selfish mining attacks, eclipse attacks, Sybil attacks, etc. Yet the technology still has great potential. Cryptocurrency mining has already brought up concerns over environmental impact (mining uses a lot of electricity!) and hardware costs (graphics card prices have increased dramatically!), but mining is nevertheless an engaging, fun and potentially profitable way to get involved in the newest technology to change the world.
The last time we talked about Android Studio, we learned about the layout of an Android app, and how different parts of the app are organized. You can find our previous discussion of Android Studio here. Now that we are familiar with using Android Studio and navigating around the guts of an Android app, let’s get started with making our first app.
The era of self-driving cars is coming soon, as we all know, and GM accordingly bought a small startup called Strobe, Inc., which now has a very large influence and is considered a dominant force in the movement towards autonomous driving. Strobe is a very young startup that recently sold its lidar (laser radar) technology, which is crucial to the autonomy of the self-driving car, to General Motors for an undisclosed amount. As the article says, “…technology is according to many in the incipient self-driving world critical to vehicles that will someday achieve full autonomy and be able to drive themselves with no human input…” We can see that many different efforts, such as Tesla’s Autopilot, Cadillac’s Super Cruise, and Google’s Waymo, are involved in the process of developing self-driving cars. The race to autonomy is on, and we will soon see the result!
Modern video formats have been designed in such a way as to minimize the storage they take up while maximizing things like resolution and frame rate. To achieve this goal they have developed some clever techniques that can look very cool when they don’t work as they should.
Let’s start with frames. Each frame of a video is like a picture. Most videos vary between 24 and 60 frames per second, and as you can imagine, storing 60 full pictures for only one second of video would take up a huge amount of space. So what the developers of modern video formats did was only store full pictures when absolutely necessary. If you think about it, a lot of the frames in a video are just very similar pictures with slight differences. So what many formats do is simply tell the old pixels on the screen where to go to make the new picture instead of storing a whole new picture. This process allows for much smaller file sizes for videos, and it is also what makes datamoshing possible.
What datamoshing does is remove the full-picture frames (the keyframes) and keep only the frames that tell the pixels where to go. What results is one video moving based on another video’s directions, or an image from the same video where the pixels go in directions they’re not supposed to. This process can lead to some very cool and unique glitch effects that have been used to various degrees within different mediums.
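Real codecs store motion vectors that tell blocks of pixels where to move, but the underlying idea of keyframes plus difference frames can be sketched with an even simpler pixel-difference scheme (frames here are just flat lists of pixel values):

```python
def delta_encode(prev_frame, next_frame):
    # Store only the pixels that changed, as (index, new_value) pairs
    return [(i, new) for i, (old, new) in enumerate(zip(prev_frame, next_frame))
            if old != new]

def delta_apply(frame, deltas):
    # Rebuild the next frame by applying the changes to a previous frame
    frame = list(frame)
    for i, value in deltas:
        frame[i] = value
    return frame

keyframe   = [0, 0, 5, 5, 9, 9]   # a full picture, stored outright
next_frame = [0, 0, 5, 7, 9, 9]   # nearly identical to the keyframe

deltas = delta_encode(keyframe, next_frame)
print(deltas)                                        # [(3, 7)]: one changed pixel
print(delta_apply(keyframe, deltas) == next_frame)   # True
```

Datamoshing amounts to applying `deltas` to the wrong starting frame: the reconstruction still “works”, but the moving pixels come from a picture they were never meant for.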
Do you have an old laptop lying around that you don’t know what to do with? Are you concerned about your data given recent tech company security breaches? Or maybe you’re just bored and want to fiddle around with some computers. Either way, here are five free applications that you can host yourself:
Nextcloud – For those who don’t have access to unlimited cloud storage, or those who aren’t comfortable giving up control of their files, you can host your own cloud storage. Nextcloud provides functionality similar to storage providers like Google Drive and Box, allowing for file sharing and online editing. There are client apps for all major phones and computers, and there is even the option to enable a calendar app. Although Nextcloud is relatively new, it is based on ownCloud, which is well established, although not quite as modern.
Gitlab – For the developers out there who don’t want to pay for private repositories, there’s GitLab. This is a very mature product that is packed full of features like GitLab Continuous Integration, code snippets, and project wikis. GitLab can integrate with many external applications as well, such as Visual Studio, Jenkins, Kanban boards, and Eclipse. For those who don’t have a free computer to run it on, GitLab also provides hosting for both repository storage and continuous-integration runners, although those options do cost money.
Docuwiki – If you constantly find yourself looking up the same information, or you just want a place to organize notes, DokuWiki is the app for you. It supports a markup formatting style, multiple namespaces to organize your information, and a diff viewer to see page changes. If the outdated UI doesn’t really appeal to you, then Confluence is another option. It is geared more towards the enterprise environment, but for $10 (one time, not a subscription) you can host Confluence for up to ten users.
Mail-in-a-Box – There are a lot of email providers out there, but if this is something you’re interested in hosting, Mail-in-a-Box is a great solution. Although the setup of the application itself is fairly easy, there isn’t much customization that can be done. For a more robust solution, iRedMail might be the way to go. Note that hosting email can be tricky, and generally is not possible from home internet connections.
Subsonic – All the audiophiles will appreciate Subsonic, an alternative to Google Play and iTunes. You can now store all your music yourself rather than being restricted to the Google or Apple music clients. With apps for all computers and phones you can listen to your music wherever you are. Subsonic includes support for playlists, most major music file formats, and customized themes.
Have you ever found yourself watching tech tutorials online? Nothing to be ashamed of, as everyone has run into an issue they need help solving at some point in their lives. Now, have you ever found yourself watching a BAD tech tutorial online? You know, one where the audio sounds like it’s being dragged across concrete and the video is literally a blurry recording of a computer screen? It ironically feels like a lot of the time the people who make tech tutorials need a tech tutorial on how to make good quality tech tutorials.
So join me, Parker Louison, as I wave my hands around awkwardly for ten minutes while trying my best to give helpful tips for making your tech tutorial professional, clean, and stand out among all the low effort content plaguing the internet!
The concept of using multiple desktops isn’t new. Apple incorporated this feature back in 2007 starting with OS X 10.5 Leopard in the form of Spaces, allowing users to have up to 16 desktops at once. Since then, PC users have wondered if/when Microsoft would follow suit. Now, almost a decade later, they finally have.
Having more than one desktop allows you to separate your open windows into different groups and only focus on one group at a time. This makes it much easier to juggle working on multiple projects at once, giving each one a dedicated desktop. It’s also useful for keeping any distractions out of sight as you try to get your work done, while letting you easily shift into break mode at any time.
If you own a Windows computer and didn’t know about multiple desktops, you’re not alone! Microsoft didn’t include the feature natively until Windows 10, and even then they did it quietly with virtually no advertising for it at all. Here’s a quick guide on how to get started.
To access the desktops interface, simply hold the Windows Key and then press Tab. This will bring you to a page which lists the windows you currently have open. It will look something like this:
Here, you can see that I’ve got a few different tasks open. I’m trying to work on my art in MS Paint, but I keep getting distracted by YouTube videos and Moodle assignments. To make things a little easier, I can create a second desktop and divide these tasks up to focus on one at a time.
To create a new desktop, click the New desktop button in the bottom right corner of this screen. You will see the list of open desktops shown at the bottom:
Now you can see I have a clean slate on Desktop 2 to do whatever I want. You can select which desktop to enter by clicking on it. Once you are in a desktop, you can open up new pages there and it will only be open in that desktop. You can also move pages that are already open from one desktop to another. Let’s move my MS Paint window over to Desktop 2.
On the desktops interface, hovering over a desktop will bring up the list of open windows on that desktop. So, since I want to move a page from Desktop 1 to Desktop 2, I hover over Desktop 1 so I can see the MS Paint window. To move pages around, simply click and drag them to the desired desktop.
I dragged my MS Paint window over from Desktop 1 to Desktop 2. Now, when I open up Desktop 2, the only page I see is my beautiful artwork.
Finally, I can work on my art in peace without distractions! And if I decide I need a break and want to watch some YouTube videos, all I have to do is press Windows+Tab and select Desktop 1 where YouTube is already open.
If you’re still looking for a reason to upgrade to Windows 10, this could be the one. The feature really is super useful once you get the hang of it and figure out how to best use it for your needs. My only complaint is that we don’t have the ability to rename desktops, but this is minor and I’m sure it will be added in a future update.
“If This, Then That”, or IFTTT, is a powerful and easy to use automation tool that can make your life easier. IFTTT is an easy way to automate tasks that could be repetitive or inconvenient. It operates on the fundamental idea of if statements from programming. Users can create “applets”, which are simply just scripts, that trigger when an event occurs. These applets can be as simple as “If I take a picture on my phone, upload it to Facebook”, or range to be much more complex. IFTTT is integrated with over 300 different channels, including major services such as Facebook, Twitter, Dropbox, and many others, which makes automating your digital life incredibly easy.
Getting Started with IFTTT and Your First Applet
Getting started with IFTTT is very easy. Simply head over to the IFTTT website and sign up. After signing up, you’ll be ready to start automating by creating your first applet. In this article, we will build a simple example applet that sends a text message of today’s weather report every morning.
In order to create an applet, click on “My Applets” at the top of the page, and select “New Applet”.
Now you need to select a trigger service by clicking the “this” keyword. In our example, we want to send a text message of the weather every morning. This means that the trigger will come from a weather service like Weather Underground. Hundreds of services are connected through IFTTT, so the possibilities are almost limitless. You can create applets that are based on something happening on Facebook, or even on your Android/iOS device.
Next, you need to select a trigger. Again, our sample applet just sends you a text message of the weather report in the morning, so the trigger is simply “Today’s weather report”. Triggers often have additional fields that need to be filled out; in this particular one, the time of the report needs to be specified.
Next, an action service must be selected. This is the “that” part of IFTTT. Our example applet is going to send a text message, so the action service is going to fall under the SMS category.
Like triggers, there are hundreds of action services that can be used in your applets. In this particular action, you can customize the text message using variables called “ingredients”.
Ingredients are simply variables provided by the trigger service. In this example, since we chose Weather Underground as the trigger service, then we are able to customize our text message using weather related variables provided by Weather Underground such as temperature or condition.
After creating an action, you simply need to review your applet. In this case, we’ve just created an applet that will send a text message about the weather every day. If you’re satisfied with what it does, you can hit finish and IFTTT will trigger your applet whenever the trigger event occurs. Even from this simple applet, it is easy to see that the possibilities of automation are limitless!
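The “if this, then that” structure at the heart of every applet is easy to sketch in code. The snippet below is only an illustration of the concept; the event fields and names are hypothetical, and real applets run on IFTTT’s servers rather than on your machine.

```python
# Toy sketch of IFTTT's trigger -> action model. All names here are
# hypothetical illustrations, not IFTTT's actual API.

class Applet:
    def __init__(self, trigger, action):
        self.trigger = trigger  # predicate: True when the event fires
        self.action = action    # callable run with the trigger's "ingredients"

    def run(self, event):
        if self.trigger(event):        # "if this..."
            return self.action(event)  # "...then that"
        return None                    # trigger didn't fire; do nothing

# Trigger: today's weather report is ready. Action: format an SMS using
# "ingredients" (variables) supplied by the trigger service.
weather_applet = Applet(
    trigger=lambda e: e.get("type") == "daily_weather_report",
    action=lambda e: f"Today: {e['condition']}, high of {e['high_temp']}F",
)

msg = weather_applet.run(
    {"type": "daily_weather_report", "condition": "Sunny", "high_temp": 72}
)
print(msg)  # -> Today: Sunny, high of 72F
```

The real service adds persistence, authentication, and hundreds of integrations, but the control flow is exactly this simple.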
Whether you came to college with an old laptop, or want to buy a new one without breaking the bank, making our basic computers faster is something we’ve all thought about at some point. This article will show you some software tips and tricks to improve your gaming experience without losing your shirt, and at the end I’ll mention some budget hardware changes you can make to your laptop. First off, we’re going to talk about in-game settings.
All games have built in settings to alter the individual user experience from controls to graphics to audio. We’ll be talking about graphics settings in this section, primarily the hardware intensive ones that don’t compromise the look of the game as much as others. This can also depend on the game and your individual GPU, so it can be helpful to research specific settings from other users in similar positions.
V-Sync, or Vertical Synchronization, allows a game to synchronize its framerate with your monitor’s refresh rate. Enabling this setting will increase the smoothness of the game. However, on lower-end computers, you may be happy just to run the game at a stable FPS that is less than your monitor’s refresh rate. (Note: most monitors have a 60 Hz refresh rate, which corresponds to 60 FPS.) For that reason, you may want to disable V-Sync to allow for more stable performance at lower framerates.
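To make those framerate numbers concrete, each frame has a fixed time budget: at a given FPS, the GPU must finish rendering within 1000 / FPS milliseconds or the game stutters. A quick sketch of the arithmetic:

```python
# Frame-time budget: how long the GPU has to render each frame at a target
# framerate. Missing this budget repeatedly is what you perceive as stutter.
def frame_budget_ms(fps):
    return 1000.0 / fps

for fps in (30, 60, 144):
    print(f"{fps} FPS -> {frame_budget_ms(fps):.1f} ms per frame")
# 30 FPS -> 33.3 ms, 60 FPS -> 16.7 ms, 144 FPS -> 6.9 ms
```

This is why a stable 30 FPS can feel better than an unstable 60: the budget is generous enough that the hardware rarely misses it.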
Anti-Aliasing, or AA for short, is a rendering option that reduces the jaggedness of lines in-game. Unfortunately, the additional smoothness heavily impacts hardware usage, and disabling it while keeping settings like texture quality or draw distance higher can bring big performance improvements without hurting a game’s appearance too much. Additionally, there are many different kinds of AA that games might have settings for. MSAA (Multisample AA) and the even more intensive TXAA (Temporal AA) are both better smoothing processes that have an even bigger impact on performance, so turning these off on lower-end machines is almost always a must. FXAA (Fast Approximate AA) uses the least processing power and can therefore be a nice setting to leave on if your computer can handle it.
Depth of Field (DoF):
This setting adds depth of field to a game by making things farther away from your character blurrier. Making things blurrier might seem like it would make things faster; however, it actually puts a greater strain on your system, as it needs to make additional calculations to create the effect. Shutting this off can yield improvements in performance, and some players even prefer it off, as it allows them to see distant objects more clearly.
While the aforementioned are the heaviest hitters in terms of performance, changing some other settings can help increase stability and performance too (beyond just simple texture quality and draw distance tweaks). Shadows and reflections are often unnoticed compared to other effects, so while you may not need to turn them off, turning them down can definitely make an impact. Motion blur should be turned off completely, as it can make quick movements result in heavy lag spikes.
The guide above is a good starting point for graphics settings; because there are so many different hardware configurations, there is an equally large number of possible setting combinations. From this point, you can start to increase settings slowly to find the sweet spot between performance and quality.
Before we talk about some more advanced tips, it’s good practice to close applications that you are not using to free up CPU, memory, and disk bandwidth. This alone will help immensely in allowing games to run better on your system.
Task Manager Basics:
Assuming you’ve tried to game on a slower computer, you’ll know how annoying it is when the game is running fine and suddenly everything slows down to slideshow speed and you fall off a cliff. Chances are that this kind of lag spike is caused by other “tasks” running in the background, preventing the game you are running from using the power it needs to keep going. Or perhaps your computer has been on for a while, so when you start the game, it runs slower than its maximum speed. Even though you hit the “X” button on a window, what’s called the “process tree” may not have been completely terminated. (Think of this like cutting down a weed but leaving the roots.) This can result in more resources being taken up by idle programs that you aren’t using right now.

It’s at this point that Task Manager becomes your best friend. To open Task Manager, simply press CTRL + SHIFT + ESC, or press CTRL + ALT + DEL and select Task Manager from the menu. When it first appears, you’ll notice that only the programs you have open are shown; click the “More Details” button at the bottom of the window to expand Task Manager. Now you’ll see a series of tabs, the first one being “Processes”, which gives you an excellent overview of everything your CPU, Memory, Disk, and Network are crunching on. Clicking any of these column headers will bring the processes using the most of that resource to the top of the column. Now you can see what’s really using your computer’s processing power.

It is important to realize that many of these processes are part of your operating system and therefore cannot be terminated without causing system instability. However, things like Google Chrome and other applications can be closed by right-clicking and hitting “End Task”. If you’re ever unsure whether you can safely end a process, a quick Google search of the process in question will most likely point you in the right direction.
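What Task Manager does when you click a column header amounts to sorting processes by a resource. The process list below is invented purely for illustration:

```python
# Illustration of Task Manager's column sort: order processes by memory use.
# The process data here is made up for the example.
processes = [
    {"name": "chrome.exe", "memory_mb": 850,  "cpu_pct": 12.0},
    {"name": "game.exe",   "memory_mb": 2100, "cpu_pct": 55.0},
    {"name": "steam.exe",  "memory_mb": 400,  "cpu_pct": 2.0},
]

# Highest memory consumer first, like clicking the "Memory" column header.
by_memory = sorted(processes, key=lambda p: p["memory_mb"], reverse=True)
print([p["name"] for p in by_memory])  # -> ['game.exe', 'chrome.exe', 'steam.exe']
```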
Here is where you can really make a difference to your computer’s overall performance, not just for gaming. From Task Manager, if you select the “Startup” tab, you will see a list of all programs and services that can start when your computer is turned on. Task Manager gives an impact rating of how much each task slows down your computer’s boot time. The gaming app Steam, for example, can noticeably slow down a computer on startup. A good rule of thumb is to allow virus protection to start with Windows; everything else is up to individual preference. Disabling these processes on startup prevents unnecessary tasks from ever being opened and leaves more hardware resources available for gaming.
You probably know that unlike desktops, laptops contain a battery. What you may not know is that you can alter your battery’s behavior to increase performance, as long as you don’t mind it draining a little faster. On the taskbar, which is by default located at the bottom of your screen, you will notice a collection of small icons next to the date and time on the right, one of which looks like a battery. Left-clicking it brings up a quick battery menu; right-clicking it, however, brings up a menu with a “Power Options” entry.
Clicking this will bring up a settings window which allows you to change and customize your power plan for your needs. By default it is set to “Balanced”, but changing to “High Performance” can increase your computer’s gaming potential significantly. Be warned that battery duration will decrease on the High Performance setting, although it is possible to change the battery’s behavior separately for when your computer is using the battery or plugged in.
Unlike desktops, laptops do not have many upgrade paths. However, one option exists for almost every laptop that can have a massive effect on performance, if you’re willing to spend a little extra.
Hard Disk (HDD) to Solid State (SSD) Drive Upgrade:
Chances are that if you have a budget computer, it probably came with a traditional spinning hard drive. For manufacturers, this makes sense, as they are cheaper than solid-state drives and work perfectly well for light use. Games, however, demand that the drive store and recall data very quickly, which can sometimes cause an HDD to fall behind. Additionally, laptops have motion sensors built in that restrict read/write operations while the computer is in motion, to prevent damage to the spinning disk inside the HDD. An upgrade to an SSD not only eliminates this restriction, but also offers much faster read/write times thanks to the lack of any moving parts. Although SSDs can get quite expensive depending on the size you want, companies such as Crucial or Kingston offer a comparatively cheap alternative to Samsung or Intel while still giving you the core benefits of an SSD. Although there are a plethora of tutorials online demonstrating how to install a new drive into your laptop, make sure you’re comfortable with all the risks before attempting it, or simply take your laptop to a repair store and have them do it for you. It’s worth mentioning that when you install a new drive, you will need to reinstall Windows and all your applications from your old drive.
Memory Upgrade (RAM):
Some laptops have an extra memory slot, or just ship with a lower capacity than what they are capable of holding. Most budget laptops will ship with 4GB of memory, which is often not enough to support both the system, and a game.
Upgrading or increasing memory can give your computer more headroom to process and store data without lagging up your entire system. Unlike with SSD upgrades, memory is very specific, and it is very easy to buy a new stick that fits in your computer but does not function with its other components. It is therefore critical to do your research before buying any more memory for your computer; that includes finding out your model’s maximum capacity, speed, and generation. The online technology store Newegg has a memory-finder service that can help you find compatible memory types for your machine.
While these tips and tricks can help your computer to run games faster, there is a limit to what hardware is capable of. Budget laptops are great for the price point, and these user tricks will help squeeze out all their potential, but some games will simply not run on your machine. Make sure to check a game’s minimum and recommended specs before purchasing/downloading. If your computer falls short of minimum requirements, it might be time to find a different game or upgrade your setup.
Fun fact: You can type the “é” character on Mac OS by holding down the “e” key until the following menu pops up:
From there, simply select the second option with your mouse and you’ll be right as rain. I’m only telling you this because the application I’ll be discussing today is called Glitché, not “Glitche”.
Glitché is an app that provides users with “a full range of tools and options to turn images into masterpieces of digital art.” That description is from the app’s official website; a website which also proudly displays the following quote:
Either this quote is outdated or Mr. Knight is putting more emphasis on the word “compared” than I’m giving him credit for. While yes, one could argue that contextually a $0.99 application would comparatively seem like a free download to someone purchasing a nearly $400 post-production suite, I might be more inclined to ask how you define the word “free”.
You see, Glitché is actually $0.99…unless you want the other features. Do you want Hi-Res exports? That’ll be $2.99. Do you want to be able to edit videos? Another $2.99, please. Do you want camera filters? $2.99 it is!
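For readers keeping score, the totals work out like this (prices as quoted above):

```python
# Glitché's pricing, using the figures quoted in this article.
base = 0.99
addons = [2.99, 2.99, 2.99]  # Hi-Res export, video editing, camera filters
total = round(base + sum(addons), 2)
print(f"Full experience: ${total}")  # -> Full experience: $9.96
```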
So Glitché is actually more like $9.96, but that doesn’t sound as good as $0.99, does it? You might argue that I’m making a big deal out of this, but I’m just trying to put this all in perspective for you. From here on out, I want you to understand that the program I’m critiquing charges about $10 for the full experience, which is fairly expensive for a phone application.
Another issue I have with this quote and the description given by the website is that Glitché isn’t trying to compete with Adobe Photoshop. Glitché isn’t a replacement for your post-production suite nor is it your one-stop-shop for turning images into masterpieces of digital art; rather, Glitché strives to give you a wide selection of tools to achieve a very specific look. This aesthetic can best be described as a mixture of To Adrian Rodriguez, With Love and a modern take on cyberpunk. Essentially the app warps and distorts a given image to make it look visually corrupted, glitched, or of VHS quality. It’s a bit hard to describe, so here’s a few examples of some of the more interesting filters.
Unedited photo for reference
The “GLITCH” filter. Holding down your finger on the screen causes the flickering and tearing to increase. Tapping once stops the flickering.
The “CHNNLS” filter. Dragging your finger across the screen sends a wave of rainbow colors across it. The color of the distortion can be changed.
The “SCREEN” filter works like the “CHNNLS” filter, only it distorts the entire image.
The “GRID” filter turns your image into a 3D abstract object akin to something one might see in an EDM music video.
The “LCD” filter lets you move the colors with your thumb while the outline of your image remains fixed.
The “VHS” filter applies VHS scan lines and warps more aggressively if you press your thumb down on the image.
The “DATAMOSH” filter. The direction of the distortion depends on the green dot you press in the center reticle. The reticle disappears once the image is saved.
The “EDGES” filter can be adjusted using both the slider below your image and with your thumb.
The “FISHEYE” filter creates a 3D fisheye overlay you can move around on your image with your thumb.
The “TAPE” filter works in a similar fashion to the “VHS” filter, only moving your thumb across it creates a more subtle distortion.
Listing off some of the individual filters admittedly isn’t doing the app justice. While you are able to use a singular filter, the app also allows you to combine and overlay multiple filters to achieve different effects. Here’s something I made using a combination of five filters:
You can also edit video in a similar fashion (after paying the required $2.99).
The interface itself is simplistic and easy to navigate, though the application lacks certain features one might expect. You can’t save and load projects, you can’t favorite filters, and you can’t perform any complex video editing outside of applying a filter. The app has crashed on me a few times in the past, though this is a rare occurrence. The app is regularly updated with new features and filters.
So, $0.99 gets you 33 filters and limits you to Lo-Res and GIF exports. $9.96 gets you 33 filters, the ability to export in Hi-Res, the ability to export to GIF, the ability to edit videos, and the ability to record video in the app itself while using said filters.
I keep bringing this back to the cost of the app because that’s really the only place where opinions may vary. The app does what it sets out to do, but the price for the full package leaves a lot to be desired. There are definitely people out there who would gladly pay $10 for this aesthetic, and there are plenty more who would shake their head at it. If any of the filters or images I’ve shown you seem worth $10, then I think you’ll enjoy Glitché. However, if you think this app is a bit too simplistic and overpriced for what it is, I recommend you spend your money elsewhere. It really all boils down to the cost, as the app itself works fine for what it is. In my opinion, the app would be a great deal at $3 or even $5; however, $10 is a bit much to ask for in return for a few nifty filters.
Since the dawn of time, humans have been attempting to record music. For the vast majority of human history, this has been really, really difficult. Early cracks at getting music out of the hands of the musician involved mechanically triggered pianos whose instructions for what to play were imprinted onto long scrolls of paper. These player pianos were difficult to manufacture and not really viable for casual music listening. There was also the all-important phonograph, which recorded sound itself mechanically onto the surface of a wax cylinder.
If it sounds like the aforementioned techniques were difficult to use and manipulate, they were! Hardly anyone owned a phonograph, since they were expensive, recordings were hard to come by, and they really didn’t sound all that great. Without microphones or any kind of amplification, bits of dust and debris that ended up on these phonograph records could completely obscure the original recording behind a wall of noise.
Humanity had a short stint with recording sound as electromagnetic impulses on magnetic tape. This proved to be one of the best ways to reproduce sound (and do some other cool and important things too). Tape was easy to manufacture, came in all different shapes and sizes, and offered a whole universe of flexibility for how sound could be recorded onto it. Since tape recorded an electrical signal, carefully crafted microphones could be used to capture sounds with impeccable detail and loudspeakers could be used to play back the recorded sound at considerable volumes. Also at play were some techniques engineers developed to reduce the amount of noise recorded onto tape, allowing the music to be front and center atop a thin floor of noise humming away in the background. Finally, tape offered the ability to record multiple different sounds side-by-side and play them back at the same time. These side-by-side sounds came to be known as ‘tracks’ and allowed for stereophonic sound reproduction.
Tape was not without its problems, though. Cheap tape would distort and sound poor. Additionally, tape would deteriorate over time and fall apart, leaving many original recordings completely unlistenable. Shining bright on the horizon in the late 1970s was digital recording. This new format allowed for low-noise, low-cost, and long-lasting recordings. The first pop record to be recorded digitally was Ry Cooder’s Bop Till You Drop in 1979. Digital had a crisp and clean sound that was rivaled only by the best of tape recording. Digital also allowed for near-zero degradation of sound quality once something was recorded.
Fast-forward to today. After 38 years of Moore’s law, digital recording has become cheap and simple. Small audio recorders are available at low cost with hours and hours of storage for recording. Also available are more hefty audio interfaces which offer studio-quality sound recording and reproduction to any home recording enthusiast.
Basic Components: What you Need
Depending on what you are trying to record, your needs may vary from the standard recording setup. For most users interested in laying down some tracks, you will need the following.
Audio Interface (and Preamplifier): this component is arguably the most important, as it connects everything together. The audio interface contains both analog-to-digital and digital-to-analog converters; these allow it to turn sound into the language of your computer for recording, and to turn the language of your computer back into sound for playback. These magical little boxes come in many shapes and sizes; I will discuss them in a later section, just be patient.
Digital Audio Workstation (DAW) Software: this software will allow your computer to communicate with the audio interface. Depending on what operating system you have running on your computer, there may be hundreds of DAW software packages available. DAWs vary greatly in complexity, usability, and special features; all will allow you the basic feature of recording digital audio from an audio interface.
Microphone: perhaps the most obvious element of a recording setup, the microphone is one of the most exciting choices you can make when setting up a recording rig. Microphones, like interfaces and DAWs, come in all shapes and sizes. Depending on what sound you are looking for, some microphones may be more useful than others. We will delve into this momentarily.
Monitors (and Amplifier): once you have set everything up, you will need a way to hear what you are recording. Monitors allow you to do this. In theory, you can use any speaker or headphone as a monitor. However, some speakers and headphones offer more faithful reproduction of sound without excessive bass and can be better for hearing the detail in your sound.
Audio Interface: the Art of Conversion
The audio interface can be one of the most intimidating elements of recording. The interface contains the circuitry to amplify the signal from a microphone or instrument, convert that signal into digital information, and then convert that information back to an analog sound signal for listening on headphones or monitors.
Interfaces come in many shapes and sizes but all do similar work. These days, most interfaces offer multiple channels of recording at one time and can record in uncompressed CD-audio quality or better.
Once you step into the realm of digital audio recording, you may be surprised to find a lack of mp3 files. Turns out, mp3 is a very special kind of digital audio format and cannot be recorded to directly; mp3 can only be created from existing audio files in non-compressed formats.
You may be asking yourself: what does it mean for audio to be compressed? As an electrical engineer, it may be hard for me to explain this in a way that humans can understand, but I will try my best. Audio takes up a lot of space. Your average iPhone or Android device may only have 32 GB of space, yet most people can keep thousands of songs on their device. This is done using compression. Compression is the computer’s way of listening to a piece of music and removing all the bits and pieces that most people won’t notice. Soft and infrequent noises, like the sound of a guitarist’s fingers scraping a string, are removed, while louder sounds, like the sound of the guitar itself, are left in. This is done using the Fourier transform and a bunch of complicated mathematical algorithms that I don’t expect anyone reading this to care about.
When audio is uncompressed, a few things are true: it takes up a lot of space, it is easy to manipulate with digital effects, and it often sounds very, very good. Examples of uncompressed audio formats are: .wav on Windows, .aif and .aiff on Macintosh, and .flac for all the free people of the Internet. Uncompressed audio comes in many different forms but all have two numbers which describe their sound quality: ‘word length’ or ‘bit depth’ and ‘sample rate.’
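The space cost of uncompressed audio follows directly from those two numbers plus the channel count. A minimal sketch of the arithmetic, assuming raw PCM with no container overhead:

```python
# Uncompressed audio size: sample_rate * bytes_per_sample * channels * seconds.
def pcm_size_mb(sample_rate, bit_depth, channels, seconds):
    return sample_rate * (bit_depth // 8) * channels * seconds / 1_000_000

# A 4-minute stereo track at CD quality (16-bit / 44.1 kHz):
print(f"{pcm_size_mb(44_100, 16, 2, 240):.1f} MB")  # -> 42.3 MB
# The same track at 24-bit / 96 kHz is more than 3x larger:
print(f"{pcm_size_mb(96_000, 24, 2, 240):.1f} MB")  # -> 138.2 MB
```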
The information for digital audio is contained in a long list of numbers which indicate the loudness, or volume, of the sound at specific points in time. The sample rate tells you how many times per second the loudness value is captured. This number needs to be at least twice the highest audible frequency; otherwise, the computer will perceive high frequencies as being lower than they actually are. This is a consequence of the Nyquist-Shannon sampling theorem, which I, again, don’t expect most of you to want to read about. Most audio is captured at 44.1 kHz, making the highest frequency it can capture 22.05 kHz, comfortably above the limits of human hearing.
The word length tells you how many bits are used to represent different levels of loudness. The number of distinct loudness values is 2^(word length). CDs represent audio with a word length of 16 bits, allowing for 65,536 different values. Most audio interfaces are capable of recording audio with a 24-bit word length, allowing for exquisite detail. There are some newer systems which allow for recording with a 32-bit word length, but these are, for the most part, not available at low cost to consumers.
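Both the number of loudness steps and the dynamic range fall out of the word length directly; a common rule of thumb is about 6.02 dB of dynamic range per bit:

```python
# Word length -> number of loudness steps and approximate dynamic range.
def loudness_levels(bits):
    return 2 ** bits

def dynamic_range_db(bits):
    return 6.02 * bits  # rule of thumb: ~6 dB per bit

for bits in (16, 24):
    print(f"{bits}-bit: {loudness_levels(bits):,} levels, "
          f"~{dynamic_range_db(bits):.0f} dB dynamic range")
# 16-bit: 65,536 levels, ~96 dB; 24-bit: 16,777,216 levels, ~144 dB
```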
I would like to add a quick word about USB. There is a stigma in the business against USB audio interfaces. Many interfaces employ connectors with higher bandwidth, like FireWire and Thunderbolt, and charge a premium for it. It may seem logical: faster connection, better-quality audio. Hear this now: no audio interface will ever be sold with a connector that is too slow for the quality of audio it can record. That is to say, USB can handle 24-bit audio at a 96 kHz sample rate, no problem. If you notice latency in your system, it comes from the digital-to-analog and analog-to-digital converters as well as the speed of your computer; latency in your recording setup has nothing to do with what connector your interface uses. It may seem like I am beating a dead horse here, but many people believe this, and it’s completely false.
One last thing before we move on to the DAW. I mentioned earlier that frequencies above half the sample rate will be perceived by your computer as lower frequencies. These lower frequencies can show up in your recording and cause distortion. This phenomenon has a name: aliasing. Aliasing doesn’t just happen with audible frequencies; it can happen with ultrasonic sound too. For this reason, it is often advantageous to record at higher sample rates to avoid having these higher frequencies fold down into the audible range. Most audio interfaces allow for recording 24-bit audio at a 96 kHz sample rate. Unless you’re worried about taking up too much space, this format sounds excellent and offers the most flexibility and sonic detail.
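The fold-down described above can be computed directly. This is a sketch assuming an ideal sampler with no anti-aliasing filter in front of the converter:

```python
# Where a tone lands after sampling: frequencies above fs/2 "fold" back
# into the band below fs/2 (ideal sampler, no anti-aliasing filter).
def alias_frequency(f, fs):
    f = f % fs                         # sampling wraps frequencies around fs
    return fs - f if f > fs / 2 else f

fs = 44_100
print(alias_frequency(10_000, fs))  # below Nyquist: passes through, 10000 Hz
print(alias_frequency(30_000, fs))  # ultrasonic 30 kHz aliases to 14100 Hz
```

Recording at 96 kHz pushes the fold-over point up to 48 kHz, which is why ultrasonic content is far less likely to land in the audible band.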
Digital Audio Workstation: all Out on the Table
The digital audio workstation, or DAW for short, is perhaps the most flexible element of your home studio. There are many, many DAW software packages out there, ranging in price and features. For those of you looking to just get into audio recording, Audacity is a great DAW to start with. The software is free and simple. It offers many built-in effects and can handle the full recording capability of any audio interface, which is to say: if you record something well on this simple, free software, it will sound mighty good.
Here’s the catch with many free or lower-level DAWs like Audacity or Apple’s GarageBand: they do not allow for non-destructive editing of your audio. This is a fancy way of saying that once you make a change to your recorded audio, you might not be able to un-make it. Higher-end DAWs like Logic Pro and Pro Tools will allow you to make all the changes you want without permanently altering your audio. This lets you play around a lot more with your sound after it’s recorded. More expensive DAWs also tend to come with a better-sounding set of built-in effects. This is most noticeable with more subtle effects like reverb.
There are so many DAWs out there that it is hard to pick out a best one. Personally, I like Logic Pro, but that’s just preference; many of the effects I use are compatible with different DAWs so I suppose I’m mostly just used to the user-interface. My recommendation is to shop around until something catches your eye.
The Microphone: the Perfect Listener
The microphone, for many people, is the most fun part of recording! They come in many shapes and sizes and color your sound more than any other component in your setup. Two different microphones can occupy polar opposites in the sonic spectrum.
There are two common types of microphones out there: condenser and dynamic microphones. I can get carried away with physics sometimes so I will try not to write too much about this particular topic.
Condenser microphones are a more recent invention and offer the best sound quality of any microphone. They employ a charged parallel-plate capacitor to measure vibrations in the air. This is a fancy way of saying that the element in the microphone which ‘hears’ the sound is extremely light and can move freely even when motivated by extremely quiet sounds.
Because of the nature of their design, condenser microphones require a small amplifier circuit built-into the microphone. Most new condenser microphones use a transistor-based circuit in their internal amplifier but older condenser mics employed internal vacuum-tube amplifiers; these tube microphones are among some of the clearest and most detailed sounding microphones ever made.
Dynamic microphones, like condenser microphones, also come in two varieties, both emerging from different eras. The ribbon microphone is the earlier of the two and observes sound with a thin metal ribbon suspended in a magnetic field. These ribbon microphones are fragile but offer a warm yet detailed quality-of-sound.
The more common vibrating-coil dynamic microphone is the most durable and is used most often for live performance. The prevalence of the vibrating-coil microphone means that the vibrating-coil is often dropped from the name (sometimes the dynamic is also dropped from the name too); when you use the term dynamic mic, most people will assume you are referring to the vibrating-coil microphone.
With the wonders of globalization, all types of microphones can be purchased at similar costs. Though there is usually a small premium for condenser microphones over dynamic mics, costs can remain comfortably around $100-150 for studio-quality recording mics. This means you can use many brushes to paint your sonic picture. Oftentimes, dynamic microphones are used for louder instruments like snare and bass drums, guitar amplifiers, and louder vocalists. Condenser microphones are more often used for detailed sounds like stringed instruments, cymbals, and breathier vocals.
Monitors: can You Hear It?
When recording, it is important to be able to hear the sound that your system is hearing. Most people don’t think about it, but there are many kinds of monitors out there: the screen on our phones and computers which allow us to see what the computer is doing, to the viewfinder on a camera which allows us to see what the camera sees. Sound monitors are just as important.
Good monitors will reproduce sound as neutrally as possible and will only distort at very very high volumes. These two characteristics are important for monitoring as you record, and hearing things carefully as you mix. Mix?
Once you have recorded your sound, you may want to change it in your DAW. Unfortunately, the computer can’t always guess what you want your effects to sound like, so you’ll need to make changes to settings and listen. This could be as simple as changing the volume of one recorded track or it could be as complicated as correcting an offset in phase of two recorded tracks. The art of changing the sound of your recorded tracks is called mixing.
If you are using speakers as monitors, make sure they don’t have ridiculously loud bass, like most consumer speakers do. Mixing should be done without the extra bass; otherwise, someone playing back your track on ‘normal’ speakers will be underwhelmed by a thinner sound. Sonically neutral speakers make it very easy to hear what your finished product will sound like on any system.
It’s a bit harder to do this with headphones as their proximity to your ears makes the bass more intense. I personally like mixing on headphones because the closeness to my ear allows me to hear detail better. If you are to mix with headphones, your headphones must have open-back speakers in them. This means that there is no plastic shell around the back of the headphone. With no set volume of air behind the speaker, open-back headphones can effortlessly reproduce detail, even at lower volumes.
Monitors aren’t just necessary for mixing; they also help you hear what you’re recording as you record it. Remember when I was talking about the number of different loudness values you can have for 16-bit and 24-bit audio? Well, when you make a sound louder than the loudest volume you can record, you get digital distortion. Digital distortion does not sound like Jimi Hendrix, and it does not sound like Metallica; it sounds abrasive and harsh. Digital distortion, unless you are creating some post-modern masterpiece, should be avoided at all costs. Monitors, as well as the volume meters in your DAW, allow you to avoid this. A good rule of thumb is: if it sounds like it’s distorting, it’s distorting. Sometimes you won’t hear the distortion in your monitors; this is where the little loudness bars in your DAW software come in. Those bad boys should never hit the top.
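Hard clipping is exactly what happens at the top of those meters: samples that would exceed the largest representable value are flattened. A sketch for 16-bit audio:

```python
# 16-bit samples live in [-32768, 32767]; anything louder is clamped flat,
# which is the abrasive digital distortion described above.
INT16_MAX, INT16_MIN = 32767, -32768

def clip(sample):
    return max(INT16_MIN, min(INT16_MAX, sample))

# A signal boosted past full scale: the loud peaks get squared off.
samples = [10_000, 25_000, 40_000, -50_000]
print([clip(s) for s in samples])  # -> [10000, 25000, 32767, -32768]
```

The flattened peaks add harmonics that were never in the original sound, which is why clipping sounds harsh rather than merely loud.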
A Quick Word about Formats before we Finish
These days, most music ends up as an mp3. Convenience is important, so mp3 does have its place. Most higher-end DAWs will allow you to make mp3 files upon export. My advice to any of you learning sound engineers out there is to just play around with formats. However, a basic outline of some common formats may be useful…
24-bit, 96 kHz: This is the best format most systems can record to. Because of its large file sizes, audio in this format rarely leaves the DAW. Audio of this quality is best for editing, mixing, and converting to analog formats like tape or vinyl.
16-bit, 44.1 kHz: This is the format used for CDs. It maintains about half of the information that you can record on most systems, but it is optimized for playback by CD players and other similar devices. Its file size also allows about 80 minutes of audio to fit on a typical CD. Herein lies the balance between excellent sound quality and file size.
mp3, 256 kb/s: Looks a bit different, right? The quality of an mp3 is measured in kb/s (kilobits per second). The higher this number, the less compressed the file is and the more space it will occupy. iTunes sells music at 256 kb/s (though in the related AAC format), while Spotify probably uses something closer to 128 kb/s to better support streaming. You can go as high as 320 kb/s with mp3. Either way, mp3 compression is always lossy, so you will never get an mp3 to sound quite as good as an uncompressed audio file.
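To make the trade-offs concrete, here is the back-of-the-envelope math for one minute of stereo audio in each of the three formats above. Uncompressed size is just sample rate × bytes per sample × channels × seconds; mp3 size is just the bitrate times the duration.

```python
# Rough file-size math for one minute of stereo audio in each format.

def pcm_bytes(sample_rate, bit_depth, channels, seconds):
    """Uncompressed (PCM) audio size in bytes."""
    return sample_rate * (bit_depth // 8) * channels * seconds

def mp3_bytes(kbps, seconds):
    """mp3 size in bytes; the bitrate counts bits per second."""
    return kbps * 1000 // 8 * seconds

minute = 60
studio = pcm_bytes(96_000, 24, 2, minute)   # 24-bit / 96 kHz master
cd     = pcm_bytes(44_100, 16, 2, minute)   # 16-bit / 44.1 kHz CD audio
mp3    = mp3_bytes(256, minute)             # 256 kb/s mp3

print(f"24/96 master: {studio / 1e6:.1f} MB per minute")  # 34.6 MB
print(f"CD audio:     {cd / 1e6:.1f} MB per minute")      # 10.6 MB
print(f"mp3 256kb/s:  {mp3 / 1e6:.1f} MB per minute")     # 1.9 MB
```

Roughly 35 MB per minute is why 24/96 audio rarely leaves the DAW, and roughly 2 MB per minute is why mp3 won the internet.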
Recording audio is one of the most fun hobbies one can adopt. Like all new things, recording can be difficult when you first start out, but it becomes more and more fulfilling over time. One can create their own orchestras at home now, a feat which would have been near impossible 20 years ago. The world has many amazing sounds, and it is up to people messing around with microphones in bedrooms and closets to create more.
Last time we covered the basics of Google’s official IDE for Android app development: Android Studio. You can find that article here. Now we will learn about how an Android app is structured and organized, what files interact with each other, and what they do.
Android is a great platform for a beginner developer to make his or her first smartphone app on. Android apps are written in Java, and the graphics are generally written in XML. Android apps are developed in many well-known IDEs (integrated development environments – programs that typically package together a code editor, compiler, debugger, interpreter, build system, version control system, and deployment system, as well as other tools) such as Eclipse, IntelliJ IDEA, and Android Studio. In this article we will cover the basics of Android Studio.
2016 has given us a lot of exciting new technologies to experiment with and look forward to. As time goes by, technology is becoming more and more integrated into our everyday lives, and it does not seem like we will be stopping anytime soon. Here are some highlights from the past year and some amazing things we can expect to get our hands on in the years to come.
That’s right, we’re adding electronic capabilities to contact lenses, those little circles on your eyes. We’ve seen Google Glass, but this goes to a whole other level. Developers are already working on lenses that can measure your blood sugar, improve your vision, and even display images directly on your eye! Imagine watching a movie that only you can see, because it’s inside your face!
Kokoon started out as a Kickstarter that raised over 2 million dollars to fund its sleep-sensing headphones. It is the first of its kind, able to help you sleep and monitor when you have fallen asleep so it can adjust your audio in real time. It’s the insomniac’s dream! You can find more information on the Kokoon here: http://kokoon.io/
Nuzzle is a pet collar with built-in GPS tracking to keep your pet safe in case it gets lost. But it does more than that. Using the collar’s companion app, you can monitor your dog’s activity and view wellness statistics. Check it out: http://hellonuzzle.com/
Your ears are the perfect place to measure all sorts of important stuff about your body such as your temperature and heart rate. Many companies are working on earbuds that can sit in your ear and keep statistics on these things in real time. This type of technology could save lives, as it could possibly alert you about a heart attack before your heart even knows it.
Thought it couldn’t get crazier than electronic contacts? Think again. Companies like Chaotic Moon and New Deal Design are working on temporary tattoos that can use the electric currents on the surface of your skin to power themselves and do all kinds of weird things, including opening doors. Whether or not these will be as painful as normal tattoos is still a mystery, but we hope not!
Virtual Reality headsets have been around for a while now, but they represent the ultimate form of wearable technology. These headsets are not mainstream yet and are definitely not perfected, but we can expect to get access to them within the next couple of years.
Other impressive types of wearable tech have been greatly improved on this year such as smart watches and athletic clothing. We’re even seeing research done on Smart Houses, which can be controlled completely with your Smart Phone, and holographic image displays that don’t require a screen. The future of wearable technology is more exciting than ever, so get your hands on whatever you can and dress to impress!
Over the past 5 years the term ‘cloud’ has been thrown around left and right. If you are wondering what the cloud is, I can assure you it is not an actual cloud in the sky, but a term meaning that your data is kept for you in a far-off place. The term was coined to make it easy for consumers to conceptualize where their data is, without too much misunderstanding.
Understanding the cloud conceptually:
What the cloud really is, is remote computing and storage, usually provided by corporate servers. The best way to understand this is with a simple example. Let’s say you have a photo gallery on your computer and you want to place it in “the cloud”. I tell you I have a cloud service, so that you can always have your files available without keeping them on your computer. You agree and send the files to me over the internet. “You are now backed up in the cloud!” I tell you, since your files are now on my computer. You then delete all the files on your device, but that’s okay, since there is a copy on my computer. Later, you want to view that old photo of yourself at last week’s Thanksgiving get-together, but it is no longer on your computer. You simply ask me (the cloud) for that file, and I send the photo back for you to view. When you are done with it you can delete it again, or make changes to it and send me back the changes. Simple as that. I, being the cloud, am essentially a remote flash drive or external hard drive that sends you data when you need it.
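The story above can be sketched in a few lines of Python. The `Cloud` class and its method names are invented for illustration; they are not any real provider’s API.

```python
# A toy model of the example above: the 'cloud' is just someone else's
# computer holding copies of your files in a key-value store.

class Cloud:
    def __init__(self):
        self._files = {}                  # the provider's storage

    def upload(self, name, data):
        self._files[name] = data          # "you are now backed up!"

    def download(self, name):
        return self._files[name]          # a copy comes back; mine stays

    def delete(self, name):
        del self._files[name]

cloud = Cloud()
local = {"thanksgiving.jpg": b"<image bytes>"}

cloud.upload("thanksgiving.jpg", local["thanksgiving.jpg"])  # back it up
del local["thanksgiving.jpg"]           # safe to free local space now
photo = cloud.download("thanksgiving.jpg")                   # fetch later
```

Every real cloud service adds networking, encryption, and redundancy on top, but the upload/download/delete loop is the whole idea.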
Now obviously this is not exactly how the cloud works, but it is close. Instead of my computer, picture Facebook’s or Google’s computers in a data center far away from you, with your data encrypted for security and served over high-speed enterprise internet connections to get you your files as quickly as possible. Now we are using the cloud the way it really works in the real world! Cloud services make it easy to view your files even when they are not actually on your computer. Take Google Drive: when you install Google Drive, you can see which files are available as if they were on your computer. That is Google’s servers telling you what files are stored on them. If you open a file, Google’s data center sends it to your computer to be held in RAM rather than in storage; when you click ‘Save’, you simply re-upload that file back to Google’s servers.
WEB APPS! More than just storage:
Most people think of the cloud only as a place to store their files, but there is much more available to them. As we already discussed, cloud storage is a way to send data back and forth between computers. That means the cloud can do more than just store your files; it can also perform tasks on those files and send you back the results through web-based applications!
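The round trip can be sketched in miniature. Both function names here are hypothetical stand-ins: the point is that the work happens on the provider’s machines and only the result travels back to you.

```python
# Sketch of the web-app round trip: your file lives on the service's
# machines, the processing happens there, and only the result comes back.

def server_word_count(document: str) -> int:
    """Runs on the provider's servers, next to the stored file."""
    return len(document.split())

def client_request(document: str) -> int:
    """Runs on your machine; stands in for an HTTP request/response."""
    return server_word_count(document)

result = client_request("the quick brown fox")  # only the count comes back
```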
Google Docs is a great example of this. When you open your Google Drive file in Docs, a website displays the file stored on Google’s servers. You make changes to it in your browser, and in real time Google makes those changes to the file on its end.
How to adapt to the future:
Cloud computing is starting to become the next big thing. We’re starting to see that our computers no longer need high-end processors and large storage drives, as long as we have a good internet connection. This means our computers can now be minimal: thinner, sleeker, and most importantly much cheaper.
Google has embraced this minimalist ideology with its line of Chromebooks. Anyone trying to get into the web-app lifestyle would love the idea of the device. They’re cheap at ~$250, have fast storage for quick boot times, and have great network cards to maintain a solid internet connection. They only have 32 GB of storage, but that’s okay, since the entire operating system is based on the Chrome browser. How can you manage using just the Chrome browser? Again, web apps!
Here is a list of common programs and tasks you can replace with web-apps:
Storage: Box (recommended if you are a UMass affiliate), Google Drive, iCloud, or Mega
Gaming: Game-streaming services such as OnLive or PS Now, or you can stream remotely from your own high-end system.
Photoshop: Pixlr! A great website where you can have most of the features of Photoshop available to you for free all online!
Video-Editing: Use WeVideo, a website to upload videos and edit them all online.
Programming: There are several cloud based programming IDEs available, such as Cloud9 or CodeAnywhere!
Office: Google Docs has everything you need, from Word to PowerPoint to even Excel. You and other collaborators can update your documents in the Google cloud, and even download them to your computer as a Word document, PDF, or image file.
Music: You can use Spotify as a web-app, Google Play Music, or Amazon Prime Music as online subscription-based streaming services!
Movies/Shows: Most of us don’t even save movies anymore. Services like Netflix, Hulu, and Amazon Prime Video let you stream thousands of movies and shows instantly.
Other: If you are in desperate need of, say, a Windows PC or Mac and you have a desktop at home, you can stream your computer’s session to your device. Services like TeamViewer, RDP, and Chrome Remote Desktop make this incredibly easy.
As you can see, most of these services can be provided by Google, which is my recommendation for living in the cloud. A simple subscription to Google Play services can get you all the apps you would need to perform most, if not all, computer-related tasks today. Chrome extensions and apps are also nearly limitless; you can download thousands of them online.
My advice is to try to future-proof yourself and use the cloud wherever you can. It’s a great way to keep your data safe and backed up. It’s also a way to spend less on potentially unnecessary computing power. With the world turning to web-based applications, a simple Chromebook could last you years and save you thousands compared to buying the latest Apple or PC hardware.
Microsoft Office is a useful suite of productivity applications that includes Word, Excel, PowerPoint, Outlook, Access, and OneNote. Microsoft provides a no-cost subscription for college students, faculty, and staff to install these programs on up to 5 devices. Here’s a step-by-step guide on how to get your free access to Microsoft Office 365:
Once on the landing page for Office 365, fill in your UMass email address and click Get started.
A. If you are a student, click on I’m a Student. B. If you are a faculty or staff member, click on I’m a Teacher; this option works for both faculty and staff.
Check your UMass email for the confirmation email and click the Yes, that’s me link.
Create your account using your personal information.
Click Skip on the invitation page.
Download your software by clicking the Install now button! If you don’t want anything in your web browser changed, make sure to uncheck the two boxes above the Install now button.
A. If you’re on Windows, this will download the installer for Word, Excel, PowerPoint, Outlook, Access, Publisher, Skype for Business, and OneDrive for Business.
B. If you’re on OS X, it will download the installer for Word, Excel, PowerPoint, Outlook, and OneNote.
With the Office 365 subscription, you will also have access to the Office Online suite of productivity software, all of which is listed below the install button.
Once the installer is downloaded, run the installer.
When the software is installed, you will be able to open any Office Suite program and use it as normal.
Note: it may prompt you to sign in. If it does, be sure to use the same email address and password that you used when you signed up for Office 365 at the beginning of this walkthrough.
You’re done! Enjoy Office 365 for the duration of your time at UMass Amherst!