There have been two major rumors in the past month about the future of the Mac. Over the past several years, it’s become clear that much of Apple’s development effort has gone toward its mobile operating system, iOS, which powers iPhones and iPads. Apple has also been introducing new platforms, such as Apple Watch and HomePod. Through all of this, the Mac has been gaining features at a snail’s pace. It seems like Apple only adds features to the Mac when it must, in order to match something it introduced first on iOS. But these recent rumors point to a Mac platform that could be revitalized.
The first major rumor is a shared development library between iOS and the Mac. What does this mean for non-developers? It means we could very well see iOS apps such as Snapchat or Instagram on the Mac. macOS uses a development framework called AppKit, which stems back many years to when Apple bought a company called NeXT. NeXT’s operating system is what eventually became the foundation of the modern Mac, and the underlying framework has stayed largely the same since then. There have been changes and many additions, of course, but it is still different from what developers use to make iOS apps for iPhones and iPads. iOS uses a framework called UIKit, which differs in key areas. In practice, this means developing an app for both the iPhone and the Mac takes twice the development effort. Supposedly, Apple is working on a framework for the Mac that is virtually identical to UIKit, which would let developers port their apps to the Mac with very little extra work. In theory, the number of apps on the Mac would increase as developers port over their iOS apps, and communication apps such as Snapchat and Instagram could become usable desktop apps.
What Apple’s future macOS framework could look like.
The second major rumor is that Apple is expected to switch from Intel-provided CPUs to its own ARM-based architecture. Apple switched to Intel CPUs in 2006 after using PowerPC chips for many years; that transition brought an almost 2x increase in performance over the PowerPC chips it replaced. In the last few years, Intel hasn’t seen the year-over-year performance increases it used to deliver. Additionally, Intel has been delaying new architectures as manufacturing smaller chips gets harder and harder, which leaves Apple dependent on Intel’s schedule to introduce new features. On the other hand, Apple has been producing industry-leading ARM chips for its iPhones and iPads. These chips are starting to benchmark at or above some of the Intel chips Apple uses in its Mac line. Rumors say the low-power Macs could see these new ARM-based chips as soon as 2020. The major caveat with this transition is that developers may have to rewrite parts of their applications for the new architecture. This means it might take some time for applications to become compatible, and some older applications might never get updated.
It’s clear that Apple’s focus in the past several years has been on its mobile platforms and not on its original platform, the Mac. But these two rumors show that Apple is still putting serious engineering work into its desktop operating system. These new features could lead to a thriving Mac ecosystem in the years to come.
The Views and Opinions Expressed in This Article Are Those of Parker Louison and Do Not Necessarily Reflect the Official Policy or Position of UMass Amherst IT
A Note of Intention
I want to start off this article by explaining that I’m not making this in an effort to gloat or brag, and I certainly hope it doesn’t come across that way. I put all of the creative energy I had left this semester into the project I’m about to dissect and discuss, so sadly I won’t be publishing a video this semester (as I’ve done for the past two semesters). One of the reasons I’m making this is because a lot of the reaction towards what I made included people asking how I made it and how long it took me, and trust me, we’ll go in depth on that.
My First Taste
My first experience with high-grade virtual reality was a few weeks before the start of my sophomore year at UMass, when my friend Kyle drove down to visit me, bringing along his HTC Vive after finding out that the only experience I’d had with VR was a cheap $20 adapter for my phone. There’s a consensus online that virtual reality as a concept is better pitched through firsthand experience than by word of mouth or marketing. The whole appeal of VR relies on subjective perception and organic optical illusions, so I can understand why a lot of people think the whole “you feel like you’re in the game” spiel sounds like nothing but a load of shallow marketing. Remember when Batman: Arkham Asylum came out and nearly every review of it mentioned that it made you feel like Batman? Yeah, well now there’s actually a Batman: Arkham VR game, and I don’t doubt it probably does make you actually feel like you’re Batman. The experience I had with VR that night hit me hard, and I came to understand why so many people online were making it out to be such a big deal. Despite my skeptical mindset going in, I found that it’s just as immersive as many have made it out to be.
This wasn’t Microsoft’s Kinect, where the action of taking away the remote actually limited player expression. This was a genuinely deep and fascinating technological breakthrough that opens the door for design innovations while also requiring programmers to master a whole new creative craft. The rulebook for what does and doesn’t work in VR is still being written, and despite the technology still being in its early stages, I wanted in. I wanted in so badly that I decided to try and save up my earnings over the next semester in an effort to buy one. That went about as well as you’d expect; not just because I was working within a college student’s budget, but also because I’m awful with my money. My Art-Major friend Jillian would tell you it’s because I’m a Taurus, but I think it has more to do with me being a giant man-child who impulse-purchases stupid stuff because the process of waiting for something to arrive via Amazon feels like something meaningful in my life. It’s no wonder I got addicted to Animal Crossing over Spring Break…
Anyway, I was sitting in my Comp-Lit discussion class when I got the email about the Digital Media Lab’s new Ready Player One contest, with the first place winner taking home an HTC Vive Headset. I’m not usually one for contests, and I couldn’t picture myself actually winning the thing, but something about the challenge piqued my interest. The task involved creating a pitch video, less than one minute in length, in which I’d have to describe how I would implement Virtual Reality on campus in a meaningful way.
With Virtual Reality, there are a lot of possible implementations relating to different departments. In the Journalism department, we’ve talked at length in some of my classes about the potential applications of VR, but all of those applications were either for the benefit of journalists covering stories or the public consuming them. The task seemed to indicate that the idea I needed to pitch had to be centered more on benefiting the average college student, rather than benefiting a specific major (at least, that’s how I interpreted it).
One of my original ideas was a virtual stress-relief dog, but then I realized that people with anxiety would likely only get even more stressed out with having to put on some weird giant headset… and real-life dogs can give hecking good nuzzles that can’t really be simulated. You can’t substitute soft fur with hard plastic.
I came to college as a journalism major, and a day rarely goes by when I don’t have some doubts about my choice. In high school I decided on journalism because I won this debate at a CT Youth Forum thing and loved writing and multimedia, so I figured it seemed like a safe bet. Still, it was a safe bet that was never pitched to me. I had no idea what being a journalist would actually be like; my whole image of what being a reporter entailed came from movies and television. I thought about it for a while, about how stupid and hormonal I was and still am, and realized that I’m kind of stuck. If I hypothetically wanted to switch to chemistry or computer science, I’d be starting from scratch with even more debt to bear. Two whole years of progress would be flushed down the toilet, and I’d have nothing to show for it. College is a place for discovery; where your comfortable environment is flipped on its head and you’re forced to take care of yourself and make your own friends. It’s a place where you work four years for a piece of paper to make your resume look nicer when you put it on an employer’s desk, and you’re expected to have the whole rest of your life figured out when you’re a hormonal teenager who spent his savings on a skateboard he never learned how to ride.
And so I decided that, in this neo-cyberpunk dystopia we’re steadily developing into, it would make sense for simulations to come before rigorous training. Why not create simulated experiences where people could test the waters for free? Put themselves in the shoes of whatever career path they want to explore to see if the shoes fit right, you know?
I mentioned “cyberpunk” there earlier because I have this weird obsession with cyberpunk stuff at the moment and I really wanted to give my pitch video some sort of tongue-in-cheek retrograde 80s hacker aesthetic to mask my cynicism as campy fun, but that had to be cut once I realized I had to make this thing under a minute long.
Gathering My Party and Gear
Anyway, I wrote up a rough script and rented out one of the booths in the Digital Media Lab. With some help from Becky Wandel (the News Editor at WMUA) I was able to cut down my audio to just barely under the limit. With the audio complete, it came time to add visual flair. I originally wanted to do a stop-motion animated thing with flash-cards akin to the intros I’ve made for my Techbytes videos, but I’m slow at drawing and realized that it’d take too much time and effort, which is hilarious because the idea I settled on was arguably even more time-consuming and draining.
I’m the proud owner of a Nikon D80, a hand-me-down DSLR from my mom, which I bring with me everywhere I go, mostly because I like taking pictures, but also because I think it makes me seem more interesting. A while back I got a speck of dust on the sensor, which requires special equipment to clean (basically a glorified turkey baster). I went on a journey to the Best Buy at the Holyoke Mall with two friends to buy said cleaning equipment while documenting the entire thing using my camera. Later, I made a geeky stop-motion video out of all those photos, which I thought ended up looking great, so I figured doing something similar for the pitch video would be kind of cool. I messaged a bunch of my friends, and in a single day I managed to shoot the first 60% of the photos I needed. I then rented out the Vive in the DML and did some photoshoots there.
At one point while I was photographing my friend Jillian playing theBlu, she half-jokingly mentioned that the simulation made her want to study Marine Biology. That kind of validated my idea and pushed me to make sure I made this video perfect. The opposite effect happened when talking to my friend Rachael, who said she was going to pitch something for disability services, to which I immediately thought “damn, she might win with that.”
I then knew what I had to do. It was too late to change my idea or start over, so I instead decided that my best shot at winning was to make my video so stylistically pleasing and attention-grabbing that it couldn’t be ignored. If I wasn’t going to have the best idea, then gosh darn it (I can’t cuss because this is an article for my job) I was going to have the prettiest graphics I could muster.
The Boss Fight
I decided to use a combination of iMovie and Photoshop, programs I’m already familiar with, because teaching myself how to use more efficient software would ironically be less efficient given the short time frame I had to get this thing out the door. Using a drawing tablet I borrowed from my friend Julia, I set out to create the most complicated and ambitious video project I’ve ever attempted to make.
A few things to understand about me: when it comes to passion projects, I’m a bit of a perfectionist and extremely harsh on myself. I can’t even watch my Freshman Year IT video because I accidentally made it sound like a $100 investment in some less than amazing open back headphones was a reasonable decision on my part, and my other IT video makes me cringe because I thought, at the time, it’d be funny to zoom in on the weird hand motions I make while I talk every five seconds.
So in this case, I didn’t hold back and frequently deleted whole sections of my video just because I didn’t like how a single brush stroke animated (with the exception of the way my name is lopsided in the credits, which will haunt me for the rest of my life). For two weeks, I rigorously animated each individual frame in Photoshop, exported it, and imported it into iMovie.
(Above) A visual representation of all the files it took to create the video
(Above) Frame by frame, I lined up my slides in iMovie
The most demanding section was, without a doubt, the one involving my friend Matthew, which I spent one of the two weeks entirely focused on. I needed that section to animate faster than one frame every 0.04 seconds, which is impossible; 0.04 seconds is the shortest frame duration iMovie’s streamlined interface allows. So I created a whole new project file, slowed my audio to half speed, edited that section’s frames against the slowed-down audio, exported it, put it into the original project file, and doubled its speed just to get it to animate smoothly.
(Above) Some sections required me to find loopholes in the software to get them to animate faster than iMovie would allow
(Above) Some of the scrap paper I scribbled notes on while editing the video together
Each individual border was drawn multiple times with slight variations and all the on-screen text (with the exception of the works cited) was handwritten by me multiple times over so that I could alternate between the frames of animation to make sure everything was constantly moving.
(Above) Borders were individually drawn and cycled through in order to maintain visual momentum
This was one of my major design philosophies during the development of this project: I didn’t want there to be a single moment in the 59 seconds where nothing was moving. I wanted my video to grab the viewer’s attention, and I feared that losing momentum in the visual movement would cause me to lose the viewer’s interest. The song LACool by DJ Grumble came on my Spotify radio coincidentally right when I was listening over the audio for the section I was editing, and I thought it fit so well I bought it from iTunes on the spot and edited it in.
I finished my video on Monday, March 26th, turned it in to the Digital Media Lab, stumbled back to my dorm, and went to bed at 6:00 PM by accident.
(Above) The final video submission
The winner wouldn’t be announced until Wednesday, so for two days I nervously waited until 6:00 PM on March 28th, when I sat on my bed in my dorm room refreshing the Digital Media Lab website every 7 seconds like a stalker on an ex’s Facebook page waiting for the winner to finally be posted. At 6:29 PM I got a call from an unrecognized number from Tallahassee, Florida, and almost didn’t answer because I thought it was a sales call. Turns out it was Steve Acquah, the coordinator of the Digital Media Lab, who informed me that my video won. Soon after, the Digital Media Lab Website was also updated with the announcement.
(Above) A screenshot taken of the announcement on the Digital Media Lab Website
Along with the raw joy and excitement came a sort of surreal disbelief. Looking back on those stressful weeks of work, it all felt like it flew by faster than I could’ve realized once I got that phone call. I’m so grateful for not only the reward but the experience. Making that video was a stressful nightmare, but it also forced me to push myself to my creative limits and challenge myself in so many ways. On a night where I would’ve probably just gone home and watched Netflix by myself, I sprinted around campus to meet up with and take photos of my friends. This project got me to get all my friends together and rent out the Vive in the DML, basically forcing me to play video games and have fun with the people I love. While the process of editing it all together drove me crazy, the journey is definitely going to be a highlight of my time at UMass.
I’m grateful to all of my friends who modeled for me, loaned me equipment, got dinner with me while I was stressing out over editing, played Super Hot VR with me, gave me advice on my audio, pushed me to not give up, and were there to celebrate with me when I won. I’m also immensely grateful to the staff and managers of the DML for providing me with this opportunity, as well as for their compliments and praise for the work I did. This was an experience that means a lot to me and it’s one I won’t soon forget. Thank you.
I picked up my prize the other day at the DML (see photo above the title of this article)! Unfortunately, I have a lot of work going on, so it’s going to be locked up in a safe place until that’s done. Still, it’s not like I could use it right now if I wanted to. My gaming PC hasn’t been touched in ages (since I don’t bring it with me to college) so I’m going to need to upgrade the GPU before I can actually set up the Vive with it. It’s a good thing there isn’t a spike in demand for high-end GPUs at the moment for cryptocurrency mining, right?
(Above) A visual representation of what Bitcoin has done to the GPU market (and my life)
Regardless of when I can actually use the prize I won, this experience was one I’m grateful to have had. The video I made is one I’m extremely proud of, and the journey I went on to create it is one I’ll think about for years to come.
Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have created a Soft Robotic Fish (nicknamed SoFi) which is able to swim and blend in with real fish while observing and gathering data from them. This remarkable bot is not only cool and adorable, but it also paves the way for the future of lifelike artificial intelligence.
Think about it: We have already reached the point where we can create a robotic fish which is capable of fooling real fish into thinking that it’s a real fish. Granted, fish aren’t the smartest of the creatures on this planet, but they can usually tell when something is out of the ordinary and quickly swim away. SoFi, however, seems to be accepted as one of their own. How long will it take for us to create a robot that can fool more intelligent species? Specifically, how long will it be until Soft Robotic Humans are roaming the streets as if they weren’t born yesterday? Perhaps more importantly, is this something that we actually want?
The benefits of a robotic animal like SoFi are obvious: It allows us to get up close and personal with these foreign species and learn more about them. This benefit of course translates to other wild animals like birds, bees, lions, etc. We humans can’t swim with the fishes, roost with the birds, visit the hive with the bees, or roar with the lions, but a robot like SoFi sure can. So it makes sense to invest in this type of technology for research purposes. But when it comes to replicating humanity, things get a bit trickier. I’m pretty confident in saying that most humans in this world would not appreciate being secretly observed in their daily lives “for science.” Of course, it’s still hard to say whether or not this would even be possible, but the existence of SoFi and the technology behind it leads me to believe we may be closer than most of us think.
Regardless of its possible concerning implications, SoFi is a truly amazing feat of engineering. If nothing else, these Soft Robots will bring an epic evolution to the Nature Documentary genre. For more information about the tech behind SoFi, check out the video at the top from MITCSAIL.
Like most other fans of college basketball, I spent an unhealthy amount of time dedicated to the sport the week after Selection Sunday (March 11th). I started by spending hours filling out brackets, researching rosters, injuries, and FiveThirtyEight’s statistical predictions to fine-tune my perfect bracket, and then watched around 30 games over the course of four days. I made it a full six hours into the tournament before my whole bracket busted. The three-punch combo of Buffalo (13) over Arizona (4), Loyola Chicago (11) beating Miami (6), and, most amazingly, the UMBC Retrievers (16) crushing the overall one-seed and tournament favorite, UVA, spelled the end for my predictions. After these three upsets, everyone’s brackets were shattered. The ESPN leaderboards looked like a post-war battlefield. No one was safe.
The UMBC good boys became the only 16th seed to beat a 1st seed in NCAA tournament history
The odds against picking a perfect bracket are astronomical: estimates range from 1 in 9.2 quintillion (for picking every game at random) down to 1 in 128 billion (for a knowledgeable fan). Warren Buffett has offered $1 million a year for life to any Berkshire Hathaway employee who correctly picks a bracket. Needless to say, no one has been able to cash in on the prize. Picking a perfect bracket is nearly impossible, and is (in)famous for being one of the most unlikely statistical feats in gambling.
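Both figures come from simple probability, and a short Python sketch shows where they originate. The two-thirds per-game accuracy below is my own illustrative assumption, not an official number:

```python
# A bracket requires picking 63 games (the play-in games are usually
# excluded), each with two possible outcomes. Guessing blindly:
naive_outcomes = 2 ** 63
print(f"{naive_outcomes:,}")  # 9,223,372,036,854,775,808 -- about 9.2 quintillion

# The friendlier 1-in-128-billion estimate corresponds to a fan who
# picks each individual game correctly about two-thirds of the time:
p_correct = 0.666  # illustrative assumption
informed_odds = 1 / (p_correct ** 63)
print(f"about 1 in {informed_odds:,.0f}")  # on the order of 100 billion
```

Even the "informed" odds are so long that no documented perfect bracket has ever been recorded.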
The Yin and Yang of March Madness
To make the chances of a perfect bracket somewhat feasible, a competition has been set up to see who can beat the odds with machine learning. Hosted by Kaggle, an online competition platform for modeling and analytics owned by Google’s parent company, Alphabet, the competition has participants build models that predict which team will win each game based on prior data. A model that predicts a winner correctly with 99% confidence scores better than one that does so with 95% confidence, and so on. The prize is $100,000, split among the teams that make the top 3 brackets. Competitors are provided with the results of every men’s and women’s tournament game since 1985, the year the tournament expanded to 64 teams, as well as every play in the tournament since 2009. Despite all this data, prediction is still very hard: in the five years the competition has run, the best bracket got 39 games correct. Unquantifiable factors, such as hot streaks and team chemistry, make the games hard to call, so it looks like we’re still years away from our computers picking the perfect bracket.
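The confidence-weighted scoring described above is a log-loss metric (log loss has been Kaggle's standard metric for this competition, though the exact formula here is a sketch, not pulled from the contest rules). It rewards confident correct predictions and punishes confident wrong ones severely:

```python
import math

def log_loss(predictions):
    """Average log loss over (predicted_win_probability, actual_outcome)
    pairs, where outcome is 1 for a win and 0 for a loss.
    Lower scores are better."""
    total = 0.0
    for p, outcome in predictions:
        total += outcome * math.log(p) + (1 - outcome) * math.log(1 - p)
    return -total / len(predictions)

# A correct pick at 99% confidence scores better (lower) than the
# same correct pick at 95% confidence:
print(log_loss([(0.99, 1)]))  # ~0.010
print(log_loss([(0.95, 1)]))  # ~0.051
# ...but a *wrong* pick at 99% confidence is catastrophic:
print(log_loss([(0.99, 0)]))  # ~4.605
```

This is why upset-heavy years like 2018 wreck the leaderboard: one confident miss on a UMBC-style result outweighs dozens of correct picks.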
Congratulations! You’ve made a virtual box of your favorite linux distro. But now you want to download a picture of your cat and find out that you’ve run out of disk space. Image: habrahabr.ru
Rather than free up space by deleting the other pics of Snuffles, you decide you’d rather just give the virtual machine more disk space. But you’ll quickly find that Oracle has not made this super-easy to do. The process isn’t obvious, but it becomes manageable if you follow these steps:
Open the Command Line on your Windows machine. (Open Start and type cmd)
You can then navigate to your VirtualBox installation folder. Its default location is C:\Program Files\Oracle\VirtualBox\
Once there, type this command to resize the .vdi file:
VBoxManage modifyhd LOCATION --resize SIZE
Replace LOCATION with the absolute file path to your .vdi image (just drag the .vdi file from File Explorer into your cmd window) and replace SIZE with the new size you want, measured in MB (1 GB = 1000 MB).
Now your .vdi is resized, but the new space is unallocated inside the virtual machine, so you’ll need to resize the partition too. To do this, download the GParted Live ISO and boot your virtual machine from it (attach the ISO as the VM’s optical disc). GParted simulates a live CD boot from which you can modify your virtual partition.
If your filesystem is ext4, like mine was when I did this, you’ll need to delete the linux-swap partition located in between your main partition and the unallocated space. Make sure you leave at least 4 GB of unallocated space so that you can add the linux-swap partition back later.
After you’ve resized your partition, you’ll be done. Boot into the virtual machine as normal and you’ll notice you have more space for Snuffles.
If you are a fan of Marvel Comics or the Marvel Cinematic Universe, you are likely aware of J.A.R.V.I.S., Tony Stark’s personal artificial intelligence (AI) program. J.A.R.V.I.S. helps Tony Stark reach his full potential as Iron Man by helping run operations and diagnostics on the Iron Man suit, as well as gathering information and running simulations. J.A.R.V.I.S. also has a distinct personality, sometimes displaying sarcasm and wit, no doubt programmed in by Stark. With artificial intelligence and machine learning developing at a breakneck pace, it’s worth asking if an AI like J.A.R.V.I.S. is even possible.
One of the most prominent AI programs in use right now is IBM Watson. Watson made its debut in 2011 as a contestant on Jeopardy in a special broadcast against two of the show’s best contestants and won. Commercial use of Watson began in 2013. Watson is now being used for a variety of functions from tracking elevator use in support of maintenance efforts, to planning irrigation systems for farms. (For more stories about Watson’s many jobs, look here.)
As far as hardware is concerned, Watson relies on a cluster of 90 IBM Power 750 servers, each with a 3.5 GHz POWER7 processor, sharing 16 terabytes of RAM across the cluster. This allows Watson to process the equivalent of one million books per second. The estimated cost of Watson’s hardware was three million dollars.
When Watson competed on Jeopardy, all of the information Watson had access to had to be stored on the machine’s RAM because it would not have been able to access it within a competitive time frame if it was stored on the machine’s hard drive. Since Watson’s bout on Jeopardy, solid state drives have started to emerge, which would allow information that is used more often to be accessed at a faster rate than if the same information was stored on a standard hard drive. With further advances in memory storage technology, information could be accessed at faster rates.
IBM’s Watson appears to be a step in the direction toward AI similar to J.A.R.V.I.S. With quantum computing as an expanding frontier, processing speeds could become even faster, making something like J.A.R.V.I.S. a more realizable reality. Personally, I believe such a feat is possible, and could even be achieved in our lifetime.
People always ask me, “Are Macs better than PCs?” or “What kind of computer should I buy?” so I’m here to clear some confusion and misconceptions about computers and hopefully help you find the computer best suited to your purposes.
Computers can generally be separated into two large operating system groups: MacOS and Windows. There are also Linux users (Ubuntu being one popular distribution), but the majority of consumers will never use these operating systems, so I’ll focus on the big two for this article. Computers can also be separated into two physical categories: desktops and laptops.
Desktops, as the name suggests, sit on top of (or under) your desk, and are great for a number of reasons. Firstly, they are generally the most cost-efficient. With the ability to custom-build a desktop, you’re able to get the best bang for your buck, and even if you choose to buy a prebuilt, the cost differences nowadays between prebuilts and custom builds are small. Desktops are also very powerful machines, with the best performance, as they aren’t constrained by physical size like laptops are. Many laptop parts have to be altered to fit the limited space, but desktops have as much space as the case has to offer. More space within the case means bigger, more powerful parts, better ventilation for cooling, etc. Additionally, desktops are generally more future-proof. If a hard drive runs out of space, you can buy and install another. If your graphics card can’t support modern games anymore, you can order one that fits your budget and just replace the old one. Overall, desktops are ideal… as long as you don’t want to move them around a lot. A full setup consisting of a tower, monitor, and peripherals can be very heavy and inconvenient to move, not to mention the many cables required to connect everything together. If you are looking for a good machine that will last for years, and don’t need to move it around often, then you might be looking for a desktop. I will go over the details of operating systems further down.
If you’re looking for a portable machine, then you’re looking for a laptop. But here too there’s a lot of variety: You have Chromebooks, which are incredibly fast, light, and (importantly) cheap machines that use ChromeOS for very basic functionalities. Unlike other OSes, this one is designed to be used while connected to the internet, with documents and files stored in the cloud. The applications are limited to what’s available in the Chrome Web Store. If all you need a laptop to do is use the internet and edit things on Google Drive, then a Chromebook might be perfect for you.
Next are your middle-of-the-line to high-end laptops, which make up the majority of laptops. This is where you’ll find your MacBooks and your ultrabooks, the all-around laptops for most functionalities. This is what most people will prefer, as they can do the most while retaining portability. There is also a ton of variety within this group: touch screens, super-bendable hinges, I/O ports, etc. Here, what it’s going to come down to is personal preference. There are too many options to write about, but I encourage everyone to try out a number of different computers before deciding which they like best.
Lastly, I’d like to discuss operating systems, primarily MacOS and Windows. I did briefly mention ChromeOS, but that’s only really for Chromebooks and it’s a very basic system. With MacOS, what people like is the convenience. Apple has created an “ecosystem” of devices where, if you are a part of it, everything works in harmony. MacOS is very user-friendly and easy to pick up, and if you own an iPhone, an Apple Watch, an iPad, any iOS device, you can connect it to your computer and use them all in sync. iMessage, Photos, and iCloud are all there to keep your devices connected and make it super easy to swap between them. Windows doesn’t have an “ecosystem,” but what it lacks in user-friendliness it makes up for in versatility and user power. Windows is good at being customizable; you have a lot more freedom when it comes to making changes. This comes back to the device it’s on. Mac devices have top-of-the-line build quality. They’re constructed beautifully and are extremely good at what they do, but they come with a high price tag, and they’re built in a way that discourages user modification like adding storage or memory. Windows laptops range from $150 well into the thousands for gaming machines, whereas the common MacBooks start near $1,000. If you’re looking for gaming, Windows is also the way to go. If you aren’t choosing a desktop, there are many gaming laptops for sale; although you won’t find the same performance per dollar, they are laptops and portable.
With this, hopefully you have everything you need to buy the perfect laptop for you the next time you need one.
Digital audio again? Ah yes… only in this article, I will set out to examine a simple yet complicated question: how does the sampling rate of digital audio affect its quality? If you have no clue what the sampling rate is, stay tuned and I will explain. If you know what sampling rate is and want to know more about it, also stay tuned; this article will go over more than just the basics. If you own a recording studio and insist on recording every second of audio at the highest possible sampling rate to get the best quality, read on, and I hope to inform you of the mathematical benefits of doing so…
What is the Sampling Rate?
In order for your computer to be able to process, store, and play back audio, the audio must be in a discrete-time form. What does this mean? It means that, rather than the audio being stored as a continuous sound-wave (as we hear it), the sound-wave is measured at a series of discrete points in time. This way, the discrete-time audio can be represented as a list of numerical values in the computer’s memory. This is all well and good, but some work needs to be done to turn a continuous-time (CT) sound-wave into a discrete-time (DT) audio file; that work is called sampling.
Sampling is the process of observing and recording the value of a complex signal during uniform intervals of time. Figure 1(a) shows ‘analog’ sampling, where the recorded value is not modified by the sampling process, and figure 1(b) shows digital sampling, where the recorded value is quantized so it can be represented with a binary word.
During sampling, the amplitude (loudness) of the CT wave is measured and recorded at regular intervals to create the list of values that make up the DT audio file. The inverse of this sampling interval is known as the sample rate and has a unit of Hertz (Hz). By far, the most common sample rate for digital audio is 44100 Hz; this means that the CT sound-wave is sampled 44100 times every second.
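As a sketch of this process (using a pure-Python function and a made-up 440 Hz test tone rather than real recorded audio), sampling just means evaluating the continuous wave at uniform intervals:

```python
import math

def sample(signal, sample_rate, duration):
    """Sample a continuous-time signal (a function of time in seconds)
    at uniform intervals, returning the list of discrete-time values."""
    n_samples = int(sample_rate * duration)
    return [signal(n / sample_rate) for n in range(n_samples)]

# A 440 Hz sine wave (concert A) standing in for a continuous-time sound-wave.
def tone(t):
    return math.sin(2 * math.pi * 440 * t)

samples = sample(tone, 44100, 1.0)
print(len(samples))  # 44100 values for one second of audio
```

The inverse relationship is easy to see here: the sampling interval is `1 / sample_rate` seconds, so a higher sample rate means more closely spaced measurements.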
This is a staggering number of data points! On an audio CD, each sample is represented by two bytes per stereo channel; that means that one second of audio takes up over 170 KB of space! Why is all this necessary? you may ask…
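The arithmetic behind that figure, assuming the standard CD format of two bytes per sample and two stereo channels:

```python
# CD audio: 44100 samples/s, 2 bytes per sample, 2 (stereo) channels.
sample_rate = 44100
bytes_per_sample = 2
channels = 2

bytes_per_second = sample_rate * bytes_per_sample * channels
print(bytes_per_second)          # 176400 bytes
print(bytes_per_second / 1024)   # just over 172 KB per second of audio
```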
The Nyquist-Shannon Sampling Theorem
Some of you more interested readers may have heard already of the Nyquist-Shannon Sampling Theorem (some of you may also know this theorem simply as the Nyquist Theorem). The Nyquist-Shannon Theorem asserts that any CT signal can be sampled, turned into a DT file, and then converted back into a CT signal with no loss in information so long as one condition is met: the CT signal is band-limited at the Nyquist Frequency. Let’s unpack this…
Firstly, what does it mean for a signal to be band-limited? Every complex sound-wave is made up of a whole myriad of different frequencies. To illustrate this point, below is the frequency spectrum (the graph of all the frequencies in a signal) of All Star by Smash Mouth:
Smash Mouth is band-limited! How do we know? Because the plot of frequencies ends. This is what it means for a signal to be band-limited: it does not contain any frequencies beyond a certain point. Human hearing is band-limited too; most humans cannot hear any frequencies above 20,000 Hz!
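Band-limitedness is easy to check numerically. Here is an illustrative sketch (using a naive discrete Fourier transform on a small synthetic two-tone signal, not an actual recording): energy shows up only in the frequency bins the signal actually contains, and nothing beyond them.

```python
import math

def dft_magnitudes(samples):
    """Naive DFT: magnitude of each frequency bin (0 .. N-1)."""
    n = len(samples)
    mags = []
    for k in range(n):
        re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(samples))
        im = -sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(samples))
        mags.append(math.hypot(re, im))
    return mags

# 64 samples of a signal containing only 3 Hz and 7 Hz components
# (sampled at 64 Hz, so the bins line up exactly with whole frequencies).
n = 64
samples = [math.sin(2 * math.pi * 3 * i / n) + math.sin(2 * math.pi * 7 * i / n)
           for i in range(n)]
mags = dft_magnitudes(samples)

# Energy appears only in bins 3 and 7 (and their mirror images 57 and 61):
peaks = [k for k, m in enumerate(mags) if m > 1.0]
print(peaks)  # [3, 7, 57, 61]
```

The spectrum “ends” past bin 7, which is exactly what the All Star plot shows at a larger scale.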
So, I suppose then we can take this to mean that, if the Nyquist frequency is just right, any audible sound can be represented in digital form with no loss in information? By this theorem, yes! Now, you may ask, what does the Nyquist frequency have to be for this to happen?
For the Shannon-Nyquist Sampling Theorem to hold, the sample rate must be greater than twice the highest frequency being sampled (this minimum rate is known as the Nyquist rate; half the sample rate is the Nyquist frequency). For sound, the highest audible frequency is 20 kHz, and thus the minimum sample rate required to capture sound with no loss in information is… 40 kHz. What was that sample rate I mentioned earlier? You know, the one that is so common that basically all digital audio uses it? It was 44.1 kHz. Huzzah! Basically all digital audio is a perfect representation of the original sound it is representing! Well…
Aliasing: the Nyquist Theorem’s Complicated Side-Effect
Just because we cannot hear sound above 20 kHz does not mean it does not exist; there are plenty of sound-waves at frequencies higher than humans can hear.
So what happens to these higher sound-waves when they are sampled? Do they just not get recorded? Unfortunately no…
A visual illustration of how under-sampling a frequency results in some unusual side-effects. This unique kind of error is known as ‘aliasing’
So if these higher frequencies do get recorded, but frequencies above the Nyquist frequency cannot be sampled correctly, what happens to them? They are falsely interpreted as lower frequencies and superimposed over the correctly sampled frequencies. The distance between the high frequency and the Nyquist frequency governs what lower frequency these high-frequency signals will be interpreted as. To illustrate this point, here is an extreme example…
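That reflection rule can be written as a tiny helper (a sketch of the standard frequency-folding arithmetic, not any particular library’s API):

```python
def alias_frequency(f, sample_rate):
    """Frequency at which a tone of frequency f is heard after sampling:
    fold f into the range [0, sample_rate/2] by reflecting it about the
    Nyquist frequency (half the sample rate)."""
    nyquist = sample_rate / 2
    f = f % sample_rate              # aliasing repeats every sample_rate Hz
    return sample_rate - f if f > nyquist else f

print(alias_frequency(3, 4))          # 1 -> a 3 Hz wave sampled at 4 Hz aliases to 1 Hz
print(alias_frequency(25000, 44100))  # 19100 -> ultrasound folded into the audible range
```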
Say we are trying to sample a signal that contains two frequencies: 1 Hz and 3 Hz. Due to poor planning, the Nyquist frequency is selected to be 2 Hz (meaning we are sampling at a rate of 4 Hz). Further complicating things, the 3 Hz cosine-wave is offset by 180° (meaning the waveform is essentially multiplied by -1). So we have the following two waveforms….
1 Hz cosine waveform
3 Hz cosine waveform with 180° phase offset
When the two waves are superimposed to create one complicated waveform, it looks like this…
Superimposed waveform constructed from the 1 Hz and 3 Hz waves
Pretty, right? Well unfortunately, if we try to sample this complicated waveform at 4 Hz, do you know what we get? Nothing! Zero! Zilch! Why is this? Because when the 3 Hz cosine wave is sampled and reconstructed, it is falsely interpreted as a 1 Hz wave! Its frequency is reflected about the Nyquist frequency of 2 Hz. Since the original 1 Hz wave is below the Nyquist frequency, it is interpreted with the correct frequency. So we have two 1 Hz waves but one of them starts at 1 and the other at -1; when they are added together, they create zero!
Another way we can see this phenomenon is by looking at the graph. Since we are sampling at 4 Hz, that means we are observing and recording four evenly-spaced points in each second: between zero and one, one and two, two and three, etc… Take a look at the above graph and try to find four evenly-spaced points between zero and one (but not including one). You will find that every single one of these points corresponds with a value of zero! Wow!
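The whole example can be checked numerically; here is a quick sketch of the two cosines above, sampled at 4 Hz:

```python
import math

def combined(t):
    """1 Hz cosine plus a 3 Hz cosine with a 180-degree phase offset."""
    return math.cos(2 * math.pi * 1 * t) - math.cos(2 * math.pi * 3 * t)

# Sampling at 4 Hz means evaluating at t = 0, 0.25, 0.5, 0.75, 1.0, ...
samples = [combined(n / 4) for n in range(8)]
print(samples)  # every sample is (numerically) zero
```

The 3 Hz component reflects about the 2 Hz Nyquist frequency into a second 1 Hz wave that exactly cancels the first, so the sampler genuinely sees nothing.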
So aliasing can be a big issue! However, designers of digital audio recording and processing systems are aware of this and actually provision special filters (called anti-aliasing filters) to get rid of these unwanted effects.
So is That It?
Nope! These filters are good, but they’re not perfect. Analog filters cannot just chop off all frequencies above a certain point; they have to, more or less, gradually attenuate them. So designers have a choice: either leave some high frequencies and risk distortion from aliasing, or roll off audible frequencies before they’re even recorded.
And then there’s noise… Noise is everywhere, all the time, and it never goes away. Modern electronics are rather good at reducing the amount of noise in a signal, but they are far from perfect. Furthermore, noise tends to be mostly present at higher frequencies: exactly the frequencies that end up getting aliased…
What effect would this have on the recorded signal? Well if we believe that random signal noise is present at all frequencies (above and below the Nyquist frequency), then our original signal would be masked with a layer of infinitely-loud aliased noise. Fortunately for digitally recorded music, the noise does stop at very high frequencies due to transmission-line effects (a much more complicated topic).
What can be Learned from All of This?
The end result of this analysis on sample rate is that the sample rate alone does not tell the whole story about what’s being recorded. Although 44.1 kHz (the standard sample rate for CDs and MP3 files) may be able to record frequencies up to 22 kHz, in practice a signal being sampled at 44.1 kHz will have distortion in the higher frequencies due to high frequency noise beyond the Nyquist frequency.
So then, what can be said about recording at higher sample rates? Some new analog-to-digital converters for musical recording sample at 192 kHz. Most, if not all, of the audio recording I do is done at a sample rate of 96 kHz. The benefit of recording at higher sample rates is that you can record high-frequency noise without it causing aliasing and distortion in the audible range. With 96 kHz, you get a full 28 kHz of bandwidth beyond the audible range where noise can exist without causing problems. Since signals with frequencies up to around 9.8 MHz can exist in a 10-foot cable before transmission-line effects kick in, this is extremely important!
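The headroom figures above fall straight out of the Nyquist-frequency arithmetic; here is a quick check at a few common sample rates:

```python
# Bandwidth available above the audible range at common sample rates.
audible_limit = 20_000  # Hz, approximate upper limit of human hearing

for rate in (44_100, 96_000, 192_000):
    nyquist = rate // 2
    headroom = nyquist - audible_limit
    print(f"{rate} Hz -> Nyquist {nyquist} Hz, headroom above hearing: {headroom} Hz")
```

At 44.1 kHz there is barely 2 kHz of room for inaudible noise before it folds back into the audible band; at 96 kHz there is the full 28 kHz mentioned above.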
And with that, a final correlation can be predicted: the greater the sample rate, the less noise will result in aliasing in the audible spectrum. To those of you out there who have insisted that the higher sample rates sound better, maybe now you’ll have some heavy-duty math to back up your claims!
With the advent of smart technology, the relative ease with which we access information is changing. The smart watch puts much of what a person does on their phone on their wrist, and on the internet. While we make these technological advances, some things remain constant, like professional sports. With the exception of some minor rule changes here and there, many of the most-watched games in the U.S. have remained the same. Recently, the Red Sox allegedly used smart watches to steal signs from the Yankees, which raises an important question: should smart watches be allowed in professional sports?
Most smart watches have the common ability to monitor the wearer’s heart rate. This data could be useful in monitoring players’ condition so the coach knows when to make substitutions, but it could also be used for medical research. If every professional athlete wore a smart device while they played in games and did workouts, the amount of data that could be made available to medical professionals in one year would be astounding. This data could lead to a better understanding than we have now of the human body at work.
While wearing smart watches in professional sports holds potential societal gain, the reality of the situation is not as optimistic. Many sports involve physical contact, which leads to a risk of either the smart watch breaking, or increased injury due to contact with a smart watch on a player’s wrist. There is also an increased risk of cheating if players and coaches can view text messages on their wrists.
In my opinion, sports would be better off without smart technology becoming part of any game. The beauty of sporting matches is that they are meant to display the raw athletic abilities of players in competition. Adding smart technology to the game could lead to records that have asterisks by them, similar to home run records set by players who used steroids.
In this day and age, it’s safe to assume that most of you know a thing or two about how to use a computer, one of those things being keyboard shortcuts. Keyboard shortcuts, for the uninitiated, are really handy combinations of buttons, usually two or three, that perform certain functions that would otherwise take somewhat longer to do manually with just the mouse. For example, highlighting a piece of text and pressing Control (CTRL) + C copies the text to your clipboard, and subsequently pressing CTRL + V pastes that copied text wherever you’re entering text.
Most people tend to know copy and paste, as well as a handful of other shortcuts, but beyond them are an abundance of shortcuts that can potentially save time and make your computer-using experience that much more convenient. In this article, I’ll go over some commonly known keyboard shortcuts and several most likely not very well known ones as well.
Most of these keyboard shortcuts will be primarily on Windows, although some can also apply on Mac as well, usually substituting CTRL with the Command button.
CTRL + C – As mentioned above copies any highlighted text to the clipboard.
CTRL + V – Also mentioned above, pastes any copied text into any active text field.
CTRL + X – Cuts any highlighted text; as the wording suggests, instead of just copying the text, it will “cut” it and remove it from the text field. Essentially rather than copying, the text will be moved to the clipboard instead.
CTRL + Z – Undo an action. An action can be just about anything; since this is a fairly universal shortcut, an action can be what you last typed in Microsoft Word, a line/shape drawn in Photoshop, or just any “thing” previously done in an application.
CTRL + Y – Redo an action. For example, if you changed your mind about undoing the last action, you can use this shortcut to bring that back.
CTRL + A – Selects all items/text in a document or window, i.e. highlights them.
CTRL + D – Deletes the selected file and moves it to the Recycle Bin.
CTRL + R – Refreshes the active window. Generally you’ll only use this in the context of Internet browsers. Can also be done with F5.
CTRL + Right Arrow – Moves the cursor to the beginning of the next word.
CTRL + Left Arrow – Moves the cursor to the beginning of the previous word.
CTRL + Down Arrow – Moves the cursor to the beginning of the next paragraph.
CTRL + Up Arrow – Moves the cursor to the beginning of the previous paragraph.
Alt + Tab – Displays all open applications; while holding down Alt, pressing Tab cycles through which application to switch to, from left to right.
CTRL + Alt + Tab – Displays all open applications. Using the arrow keys and Enter, you can switch to another application.
CTRL + Esc – Opens the Start Menu, can also be done with Windows Key.
Shift + Any Arrow Key – When editing text, selects text in the direction corresponding to the arrow key, character by character.
CTRL + Shift + Any arrow key – When editing text, selects a block of text, i.e. a word.
CTRL + Shift + Esc – Opens Task Manager directly.
Alt + F4 – Close the active item or exit the active application.
CTRL + F4 – In applications that are full screen and let you have multiple documents open, closes the active document, instead of the entire application.
Alt + Enter – Displays the properties for a selected file.
Alt + Left Arrow – Go back, usually in the context of Internet browsers.
Alt + Right Arrow – Go forward, same as above.
Shift + Delete – Deletes a selected file without moving it to the Recycle Bin first, i.e. deletes it permanently.
Windows Logo Key Shortcuts:
Windows logo key ⊞ + D – Displays and hides the desktop.
Windows logo key ⊞ + E – Opens File Explorer
Windows logo key ⊞ + I – Opens Windows Settings
Windows logo key ⊞ + L – Locks your PC or switches accounts.
Windows logo key ⊞ + M – Minimize all open windows/applications.
Windows logo key ⊞ + Shift + M – Restore minimized windows/applications on the desktop.
Windows logo key ⊞ + P – When connecting your computer to a projector or second monitor, opens up a menu to select how you want Windows to be displayed on the secondary display. You can select from PC screen only (uses only the computer’s screen), Duplicate (shows what is on your computer screen on the secondary display), Extend (Extends the desktop, allowing you to move applications/windows to the secondary display, and keep content on the primary screen off the secondary display), and Second Screen Only (Only the secondary display will be used).
Windows logo key ⊞ + R – Opens the Run Dialog Box. Typing and entering in the file names for applications will open the file/application, useful for troubleshooting scenarios.
Windows logo key ⊞ + T – Cycle through open applications on the taskbar; pressing Enter will switch to the selected application.
Windows logo key ⊞ + Comma (,) – Temporarily peeks at the desktop.
Windows logo key ⊞ + Pause Break – Displays System Properties window in Control Panel. You can find useful information here about your computer such as the version of Windows you are running, general info about the hardware of the computer, etc.
Windows logo key ⊞ + Tab – Opens Task view, which is similar to CTRL + Alt + Tab.
Windows logo key ⊞ + Up/Down – Maximizes or minimizes a window/application respectively.
Windows logo key ⊞ + Left/Right – Snaps a window to the left or right half of the screen.
Windows logo key ⊞ + Shift + Left/Right – When you have more than one monitor, moves a window/application from one monitor to another.
Windows logo key ⊞ + Space bar – When you have more than one keyboard/input method installed (usually for typing in different languages), switches between installed input methods.
That just about covers most common keyboard shortcuts you can use on a Windows computer. The list goes on however, as there are so many more keyboard shortcuts and functions you can perform, which is even further expanded when taking into account that certain applications have their own keyboard shortcuts when those are in use.
You might end up never using half of the keyboard shortcuts on this list, much less all keyboard shortcuts in general, favoring the good old-fashioned way of using the mouse and clicking, and that’s fine. The amount of time you save using a keyboard shortcut versus clicking your way through things is arguably negligible, and most of the time it is just a quality-of-life preference at the end of the day. But depending on how you use your computer and what kind of work you do on it, chances are picking up some of these keyboard shortcuts could save you a lot of frustration down the line.
While it may seem like a strange question to ask, there is an interesting history behind the largest storefront for video games, online or brick-and-mortar. The control Steam exerts over its market has wide-ranging implications for both consumers and developers. The availability of indie games is a relatively recent development in Steam’s history; so are the current trends pushing the near-exponential growth of the Steam library.
Back when Steam launched, the library selection was very limited, relying on the IP (intellectual property) that Valve (Steam’s parent company) had built up over the previous half-decade. For the first two years of Steam’s life you could only find games created and published by Valve (Half-Life and Counter-Strike 1.6 being the most notable), but in late 2005 that changed as Steam inked a deal with Strategy First, a small Canadian publisher, and games started flowing onto the service. For the next five years the Steam library remained limited, as generally only large or influential publishers were able to get their games on Steam. This created tension in the Steam community, as many people wanted indie games to be featured and make their way onto the storefront. The tension broke when Steam agreed to allow indie games on the platform.
By 2010, the issues were obvious: Steam had no way to discern which indie games people wanted and which were not suitable for the platform. Two years later, in response to these concerns, Steam implemented the Greenlight system, designed to get quality indie games on Steam. Initially Greenlight was received positively. Black Mesa (a popular mod that ported Valve’s original Half-Life to the Half-Life 2 engine) and other quality releases inspired confidence. All seemed good. Fast forward to late 2015: several disturbing trends had begun to emerge.
An enterprising “developer” realized that you could buy assets from the Unity Asset Store and, with very minimal effort, create a “game” that you could get through Greenlight. These “games” were often just the Unity assets with AI zombies that would slowly follow you around, providing little to no engaging content, and could hardly be considered games. They should never have made it through Greenlight, but the developers got creative in getting people to vote for their games. Some would give “review” keys away pending a vote or good review on their page, while others promised actual monetary profit through Steam’s trading-card economy.
Asset flips are just one example of how Greenlight was exploited (not to mention the cartel-like behavior behind some of the asset flippers). By 2016 Steam was in full damage control as the effects of Greenlight became apparent: the curated garden that once was Steam became overgrown and flooded with sub-par games. So overabundant was the flow of content that nearly 40% of Steam’s whole library was released in 2016 alone. Thirteen years of content control and managing customers’ expectations were nullified in the span of a year. (The uptick began in 2014, but 2016 was the real breaking point.)
Steam, now in damage-control mode, decided to abandon content control in favor of an open marketplace that uses algorithms to recommend games to consumers. This “fix” has only hidden the mass of sub-par games that now make up most of the Steam library. And while an algorithm can recommend games, it will often end up recommending the same types of games, creating an echo-chamber effect: you are only recommended the games you express interest in, not those that would appeal to you the most.
In 2017, Steam abandoned Greenlight in favor of Steam Direct, an updated method of allowing developers to publish games, this time without community interaction. Steam re-assumed the mantle of gatekeeper, taking back responsibility for quality control, albeit with standards so low one can hardly call it vetting. (Some approved games don’t even include an .exe in the download.)
If you’re anything like me, you will someday accidentally wipe your MacBook’s SSD (or already have). It may seem like you just bricked your MacBook, but luckily there is a remedy.
The way forward is to use the built-in “internet recovery” mode, which can be triggered on startup by pressing “cmd + R”.
There is a bit of a catch: if you do this straight away, there is a good chance that the Mac will get stuck and throw up an error – error -3001F in my personal experience. This tends to happen because the Mac assumes it is already connected to Wi-Fi (when it’s not) and errors out after it fails to connect to Apple’s servers. If instead your MacBook lets you select a Wi-Fi network during this process, you’re in the clear and can skip the next paragraph.
Luckily there is another way to connect, via Apple’s boot menu. To get there, press the power button and, very soon after, hold the Option key. Eventually you will see a screen where you can pick a Wi-Fi network.
Unfortunately if you’re at UMass, eduroam (or UMASS) won’t work, however you can easily connect to any typical home Wi-Fi or a mobile hotspot (although you should make sure you have unlimited data first).
Once you’re connected, you want to hit “cmd + R” from that boot screen. Do not restart the computer. If you had been able to connect without the boot menu, you should already be in internet recovery and do not need to press anything.
Now that the Wi-Fi is connected, you need to wait. Eventually you will see the MacBook’s recovery tools. The first thing you need to do is select Disk Utility, select your MacBook’s hard drive, and hit Erase – this may seem redundant, but I’ll explain in a moment. Now go back to the main recovery menu by closing Disk Utility.
Unless you created a Time Machine backup, you’ll want to pick the “Reinstall Mac OS X” option. After clicking through for a bit, you will see a page asking you to select a drive. If you properly erased the hard drive a few moments before, you will be able to select it and continue on. If you hadn’t erased the drive again, there is a good chance no drive will appear in the drive selection. To fix that, all you have to do is erase the drive again with the Disk Utility mentioned earlier – the one catch is that you can only get back to the recovery tools if you restart the computer and start internet recovery again, which, as you may have noticed, is a slow process.
Depending on the age of your MacBook, there is a solid chance that you will end up with an old version of Mac OS. If you have two-step verification enabled, you may have issues updating to the latest Mac OS version.
In my own experience, OS X Mavericks will not allow you to log in to the App Store if you have two-step verification enabled – but I would recommend trying; your luck could be better than mine. The reason we need the App Store is that it is required to upgrade to High Sierra (or whatever the current version of Mac OS is).
If you were unable to log in, there is a workaround: OS X Mavericks will let you make a new Apple ID, which is luckily free. Since you will be creating this account purely for the sake of updating the MacBook, I wouldn’t recommend using your primary email or adding any form of payment to the account.
Once you’re logged in, you should be free to update, and after some more loading screens, you will have a fully up-to-date MacBook. The last thing remaining (if you had to create a new Apple ID) is to log out of the App Store and log in to your personal Apple ID.
The next generation of smartphone security is here! Mostly transparent fingerprint sensors can now be embedded behind or under the screen. There has been a huge push in phones this year to make the bezels as tiny as possible, which of course means finding a new place for the fingerprint scanner. Many phones have moved it to the back. LG was the first to do it, and it was relatively well executed. Samsung followed suit, and many complain theirs is too hard to tell apart from the camera bump. The Pixel and Pixel 2 have one on the back that works well and supports gestures! To minimize the bezel, the iPhone X removed the scanner altogether, and instead hid a plethora of sensors inside its iconic notch to usher in the era of Face ID.
The iPhone 5s debuted Touch ID
The Pixel 2 has a sensor on the back and supports swipe gestures
The S8 removed the iconic home button in favor of curved edges. Many complain its sensor is too hard to find.
The iPhone X removes the fingerprint sensor altogether
But now two Android phones are being released that place the fingerprint scanner, almost completely invisibly, under the screen. The first, the Vivo X20 Plus UD, won a Best in Show award at CES 2018. The sensor is a small pad where a traditional scanner would be. Any time that area of the phone is touched, it flashes brightly, and the sensor looks for the light reflected off of your finger. Check it out here:
Vivo’s concept phone takes this a step further, with the fingerprint scanner occupying a larger pad, allowing you to touch anywhere on roughly a third of the screen. This concept phone also pushes the bezel-less idea to another level by moving the selfie-cam to a piece of plastic that extends in and out from the top of the phone. Is this the future?
It’s a bit “slow” right now (it takes about a second), but the cool animation should be enough to hold you over. And keep in mind it’s the first generation of a product; it will only get quicker with time.
The phone needs to have an OLED screen. While not uncommon, many phones, iPhones included, have LCD displays. OLED screens allow individual pixels to turn on and off, rather than backlighting the whole screen at once, as LCD displays require.
And finally, yes, at very specific lighting conditions and viewing angles, you can see the sensor through the screen.
If you’ve paid attention in the news this week, you may have heard the name “Cambridge Analytica” tossed around or something about a “Facebook data breach.” At a glance, it may be hard to tell what these events are all about and how they relate to you. The purpose of this article is to clarify those points and to elucidate what personal information one puts on the internet when using Facebook. As well, we will look at what you can do as a user to protect your data.
The company at the heart of this Facebook data scandal is Cambridge Analytica: a private data-analytics firm based in Cambridge, UK, specializing in strategic advertising for elections. They have worked on LEAVE.EU (a pro-Brexit election campaign), as well as Ted Cruz’s and Donald Trump’s 2016 presidential election campaigns. Cambridge Analytica uses “psychographic analysis” to predict and target the kind of people who are most likely to respond to their advertisements. “Psychographic analysis”, simply put, is gathering data on individuals’ psychological profiles and using it to develop and target ads. They get their psychological data from online surveys that determine the personality traits of individuals. They compare this personality data with data from survey-takers’ Facebook profiles, and extrapolate the correlations between personality traits and more readily accessible info (likes, friends, age group) onto Facebook users who have not even taken the survey. According to CEO Alexander Nix, “Today in the United States we have somewhere close to four or five thousand data points on every individual […] So we model the personality of every adult across the United States, some 230 million people.” This wealth of data is extremely powerful in their business, because they know exactly what kind of people could be swayed by a political ad. By affecting individuals across the US, they can sway whole elections.
Gathering data on individuals who have not waived away their information may sound shady, and in fact it breaks Facebook’s terms and conditions. Facebook allows its users’ data to be collected for academic purposes, but prohibits the sale of that data to “any ad network, data broker or other advertising or monetization-related service.” Cambridge Analytica bought their data from Global Science Research, a private business-analytics research company. The data in question was collected by a personality survey (a Facebook app called “thisisyourdigitallife”, a quiz that appears similar to the silly quizzes one often sees while browsing Facebook). This app, with its special academic privileges, was able to harvest data not just from the user who took the personality quiz, but from all the quiz-taker’s friends as well. This was entirely legal under Facebook’s terms and conditions, and was not a “breach” at all. Survey-takers consented before taking it, but their friends were never notified about their data being used. Facebook took down thisisyourdigitallife in 2015 and requested that Cambridge Analytica delete the data; however, ex-Cambridge Analytica employee Christopher Wylie says, “literally all I had to do was tick a box and sign it and send it back, and that was it. Facebook made zero effort to get the data back.”
This chain of events makes it clear that data analytics companies (as well as malicious hackers) are not above breaking rules to harvest your personal information, and Facebook alone will not protect it. In order to know how your data is being used, you must be conscious of who has access to it.
What kind of data does Facebook have?
If you go onto your Facebook settings, there will be an option to download a copy of your data. My file is about 600 MB, and contains all my messages, photos, and videos, as well as my friends list, advertisement data, all the events I’ve ever been invited to, phone numbers of contacts, posts, likes, even my facial-recognition data! What is super important in the realm of targeted advertisement (though not the only info people are interested in) are the ad data, friends list, and likes. The “Ad Topics” section, a huge list of topics I may be interested in that determines what kind of ads I see regularly, has my character pinned down. Though some of these are admittedly absurd (Organism? Mason, Ohio? Carrot?), knowing I’m interested in computer science, cooperative businesses, Brian Wilson, UMass, and LGBT issues, plus the knowledge that I’m from Connecticut and friends with mostly young adults, says a lot about my character even without “psychographic analysis”; so imagine what kind of in-depth record they have of me up at Cambridge Analytica! I implore you, if interested, to download this archive yourself and see what kind of person the ad-brokers of Facebook think you are.
Is there a way to protect my data on Facebook?
What’s out there is out there, and from the Cambridge Analytica episode we know third-party companies may not delete data they’ve already harvested, and Facebook isn’t particularly interested in getting it back, so even being on Facebook could be considered a risk by some. However, it is relatively easy to remove applications that have access to your information, and that is a great way to get started protecting your data from shady data harvesters. These applications are anything that requires you to sign in with Facebook. This can mean other social media networks that link with Facebook (like Spotify, Soundcloud, or Tinder), or Facebook hosted applications (things like Truth Game, What You Would Look Like As The Other Gender, or Which Meme Are You?). In Facebook’s settings you can view and remove applications that seem a little shady.
You can do so by visiting this link, or by going into settings, then going into Apps.
After that you will see a screen like this, and you can view and remove apps from there.
However, according to Facebook, “Apps you install may retain your info after you remove them from Facebook.” They recommend to “Contact the app developer to remove this info”. There is a lot to learn from the events surrounding Facebook and Cambridge Analytica this month, and one lesson is to be wary of who you allow to access your personal information.
If you are anything like me, you have numerous passwords that you have to keep track of. I can also safely assume that, unless you are in the vast minority of people, you have autofill/remember passwords turned on for all of your accounts. I’m here to tell you that there is an easy way to remember your passwords so that these convenient insecurities can be avoided.
The practice that I use and advocate for remembering and creating passwords is called The Roman Room. I’ll admit, this concept is not my own; I’ve borrowed it from a TV show called Leverage. I found it to be a neat concept, and I have employed it ever since. The practice works as follows: Imagine a room; it can be real or fictional. Now imagine specific, detailed items that you can either “place” in the room, or that exist in the room in real life. This place could be your bedroom, your family’s RV, really anywhere that you have a vivid memory of and can recall easily. I suggest thinking of items that you know very well, as this will make describing them later easier: a piece of artwork, a unique piece of furniture, or a vacation souvenir. Something that makes a regular appearance in the same spot, or that has a permanence about it.
Now comes the challenging part: creating the password. The difficulty lies in creating a password that fulfills the password requirements at hand. This technique is most useful when you have the option of a longer password (16+ characters), as the extra length adds security and allows for a more memorable, unique password. Let’s say, for example, that I often store my bicycle by hanging it on my bedroom wall. It’s a black and red mountain bike with 7 speeds. I could conjure up the password “Black&RedMountain7Sp33d”.
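To put numbers on why those extra characters matter, here is a rough sketch. It assumes an attacker who must brute-force a 72-character alphabet of letters, digits, and a handful of symbols; real attackers use smarter dictionary methods, so treat this as an upper bound:

```python
# Rough brute-force search space vs. password length.
# Assumes a 72-character alphabet: 26 lower + 26 upper + 10 digits + 10 symbols.
ALPHABET_SIZE = 26 + 26 + 10 + 10

def search_space(length: int) -> int:
    """Number of possible passwords of the given length."""
    return ALPHABET_SIZE ** length

# Every extra character multiplies the attacker's work by 72.
print(f"8 characters:  {search_space(8):.2e} possibilities")
print(f"23 characters: {search_space(23):.2e} possibilities")  # "Black&RedMountain7Sp33d" is 23 long
```

At 23 characters, the search space is astronomically larger than at 8, which is why a long, memorable phrase beats a short “clever” one.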
Alternatively, I could create a password that describes the state of the bike as opposed to its appearance. This example reminds me of how the bike looks when it’s hung on the wall: it looks like it’s floating, which reminds me of that scene from E.T. I could then create the password “PhoneHomeB1cycle”, or something along those lines. This technique is just something I find useful when it comes time to create a new password, and a means to remember passwords easily that also keeps me from lazily reusing the same password again and again. Though this method doesn’t always generate the most secure password (by that I mean a gibberish-looking password), it is a means to help you create better passwords and remember them without having to store them behind yet another password (in a password manager). What good is a password if you can’t remember it or have to write it down?
The maker movement is a growing trend in the DIY world that involves using microcontroller technologies such as Arduino to develop small- and large-scale projects: home automation, gadgets, robotics, and electronic devices. No prior knowledge is needed!
Projects range from home automation to robotics, but the same tools can be applied to pretty much anything; automatic door locks, phone-controlled sprinklers, and even portable chargers are just a few examples of the endless possibilities. With all the information available on the internet, virtually anyone can create simple projects without deep knowledge of electricity or programming. Most products come preconfigured and open source, and all the documentation is available online. The movement brings collaboration to the front line of development and projects the work you do inside a computer out into the physical world.
Unlike in the past, starting your own project is easy, and the parts are widely available. No longer does the mystique of engineering and computer science wizardry prevent you from making your own garage door opener. Growing demand and interest in DIY projects have scaled up manufacturing and brought down prices: cables, resistors, and transistors sell for less than a dollar each, and microcontrollers such as the Arduino Uno can cost as little as $3. Start your own project for less than $5, and make that pocket change your next adventure!
Microcontrollers have been heavily integrated into hackathons in recent years. A hackathon is a design-sprint-like event, usually lasting two to three days, in which people collaborate intensively on software projects within a time limit. These days hackathons also include hardware competition categories such as robotics and home automation. So if you’re looking for a way to win your first hackathon, or hoping to land an internship from one, the microcontroller categories are a relatively approachable place to compete, even against people with years’ worth of knowledge.
Furthermore, as more and more people began working on projects, a community formed to support and help them; in addition to the online community, physical hubs have started to pop up. These are called “makerspaces”: environments that provide individuals with the tools and knowledge to excel at their tasks and complete their goals. Even here at UMass Amherst, there’s a work in progress to build a makerspace where students can come and get introduced to the topic.
In conclusion, the maker movement combined with Arduino technology creates endless possibilities for projects and provides a new, hands-on way for anyone to learn physics, programming, and circuit design. It is a way for people to express their creativity.
Future proofing, at least when it comes to technology, is a philosophy that revolves around buying the optimal piece of tech at the optimal time. The overall goal of future proofing is to save you money in the long run by purchasing devices that take a long time to become obsolete.
But, you might ask, what exactly is the philosophy? Sure, it’s easy to say that it’s best to buy tech that will last you a long time, but how do you actually determine that?
There are four basic factors to consider when trying to plan out a future proof purchase.
Does what you’re buying meet your current needs, as well as needs you might have in the foreseeable future?
Can what you’re buying be meaningfully upgraded down the line?
Is what you’re buying about to be replaced by a newer, better product?
What is your budget?
I’m going to walk you through each of these four ideas, and by the end you should have a pretty good grasp on how to make smart, informed decisions when future-proofing your tech purchases!
Does what you’re buying meet your current needs, as well as needs you might have in the foreseeable future?
This is the most important factor when trying to make a future-proof purchase. The first half is obvious: nobody is going to buy anything that doesn’t do everything they need it to do. It’s really the second half which is the most important aspect.
Let’s say you’re buying a laptop. Also, let’s assume that your goal is to spend the minimum amount of money possible to get the maximum benefit. You don’t want something cheap that you’ll get frustrated with in a few months, but you’re also not about to spend a down payment on a Tesla just so you can have a useful laptop.
Let’s say you find two laptops. They’re mostly identical, albeit for one simple factor: RAM. Laptop A has 4 GB of RAM, while Laptop B has 8 GB. Let’s also say that Laptop A is 250 dollars, while Laptop B is 300 dollars. At a difference of 50 dollars, the question that comes to mind is whether the extra 4 GB of RAM is really worth it.
What RAM actually does is act as short-term storage for your computer, and it is most important in determining how many different things your computer can keep active at once. Every program you run uses up a certain amount of RAM, with things such as tabs in Google Chrome famously taking up quite a bit. So, essentially, for 50 dollars you’re asking yourself whether or not you care about being able to keep a few more things open.
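As a back-of-the-envelope illustration, the numbers here are assumptions for the sake of the example (an OS and background apps reserving about 2 GB, and heavy browser tabs averaging roughly 250 MB each; real usage varies widely):

```python
# Back-of-the-envelope: how many heavy browser tabs fit in each laptop's RAM?
# Assumed figures -- real usage varies by OS, browser, and website.
OS_OVERHEAD_GB = 2.0   # memory the OS and background apps consume
TAB_COST_GB = 0.25     # average memory per heavy browser tab

def tabs_that_fit(total_ram_gb: float) -> int:
    """How many tabs fit in the RAM left over after the OS takes its share."""
    free = total_ram_gb - OS_OVERHEAD_GB
    return int(free / TAB_COST_GB)

print(tabs_that_fit(4))   # Laptop A: 8
print(tabs_that_fit(8))   # Laptop B: 24
```

Under these made-up assumptions, doubling the RAM triples the usable headroom, because the fixed OS overhead eats a much larger share of the smaller machine.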
Having worked retail at a major tech store in my life, I can tell you from experience that probably a little over half of everyone asked this question would opt for the cheaper option. Why? Because they don’t think that more RAM is something that’s worth spending extra money at the cash register. However, lots of people will change their mind on this once you present them with a different way of thinking about it.
Don’t think of Laptop A as being 250 and Laptop B as being 300. Instead, focus only on the difference in price, and whether or not you think you’d be willing to pay that fee as an upgrade.
You see, in half a year, when that initial feeling of spending a few hundred dollars is gone, it’s quite likely that you’ll be willing to drop an extra 50 dollars so you can keep a few more tabs open. While right now it seems like all you’re doing is making an expensive purchase even more expensive, what you’re really doing is making sure that Future_You doesn’t regret not dropping the cash when they had an opportunity.
Don’t just make sure the computer you’re buying fits your current needs. Make sure to look at an upgraded model of that computer, and ask yourself: 6 months down the line, will you be more willing to spend the extra 50 dollars for the upgrade? If the answer is yes, then I’d definitely recommend considering it. Don’t just think about how much money you’re spending right now; think about how the difference in cost will feel when you wish that you’d made the upgrade.
For assistance in this decision, check the requirements for applications and organizations you make use of. Minimum requirements are just that, and should not be used as a guide for purchasing a new machine. Suggested requirements, such as the ones offered at UMass IT’s website, offer a much more robust basis from which to future-proof your machine.
Can what you’re buying be meaningfully upgraded down the line?
This is another important factor, though not always applicable to all devices. Most smartphones, for example, don’t even have the option to upgrade their available storage, let alone meaningful hardware like the RAM or CPU.
However, if you’re building your own PC or making a laptop/desktop purchase, upgradeability is a serious thing to consider. The purpose of making sure a computer is upgradeable is to ensure that you can add additional functionality to the device while having to replace the fewest possible components.
Custom PCs are the best example of this. When building a PC, one of the most important components that’s often overlooked is the power supply. You want to buy a power supply with a high enough wattage to run all your components, but you don’t want to overspend on something with way more juice than you need, as you could have funneled that extra cash into a more meaningful part.
Let’s say you bought a power supply with just enough juice to keep your computer running. While that’s all fine right now, you’ll run into problems once you try to make an upgrade. Let’s say your computer is using Graphics Card A, and you want to upgrade to Graphics Card B. While Graphics Card A works perfectly fine in your computer, Graphics Card B requires more power to actually run. And, because you chose a lower-wattage power supply, you’re going to need to replace it to actually upgrade to the new card.
In summary, what you planned to be a simple GPU swap turned out to require not only paying the higher price for Graphics Card B, but buying a more expensive power supply as well. And sure, you can technically sell your old power supply, but you would have saved much more money (and effort) in the long run by just buying a stronger power supply to start. By buying the absolute minimum that you could to make your computer work, you didn’t leave yourself enough headroom to allow the computer to be upgraded.
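That headroom logic can be sketched with made-up wattage figures (the numbers below are illustrative, not real product specs, and the 1.3 multiplier is a common rule of thumb rather than an official standard):

```python
# Estimate the PSU wattage a build needs, with headroom for future upgrades.
# Component wattages below are illustrative, not real product specs.
components = {
    "CPU": 95,
    "Graphics Card A": 120,
    "motherboard/RAM/drives/fans": 85,
}

HEADROOM = 1.3  # rule of thumb: ~30% above peak draw

def recommended_psu_watts(parts: dict) -> int:
    """Peak draw of all parts, plus headroom, rounded up to the next 50 W."""
    peak = sum(parts.values())
    target = peak * HEADROOM
    return int(-(-target // 50) * 50)  # ceiling to a 50 W increment

print(recommended_psu_watts(components))  # 400

# Swapping in a hungrier GPU shows why the headroom matters:
components["Graphics Card A"] = 180  # a hypothetical "Graphics Card B"
print(recommended_psu_watts(components))  # 500
```

A 500 W unit bought on day one covers both builds; a bare-minimum 400 W unit has to be replaced along with the card.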
This is an important concept when it comes to computers. Can your RAM be upgraded by the user? How about the CPU? Do you need to replace the whole motherboard just to allow for more RAM slots? Does your CPU socket allow for processors more advanced than the one you’re currently using, so you can buy cheap upgrades once newer models come out?
All of these ideas are important when designing a future-proof purchase. By ensuring that your device is as upgradeable as possible, you’re increasing its lifespan by allowing hardware advancements in the future to positively increase your device’s longevity.
Is what you’re buying about to be replaced by a newer, better product?
This is one of the most frustrating, and often one of the hardest-to-determine aspects of future proofing.
We all hate the feeling of buying the newest iPhone just a month before they reveal the next generation. Even if you’re not the type of person that cares about having the newest stuff, it’s to your benefit to make sure you aren’t making purchases too close to the release of the ‘next gen’ of that product. Oftentimes, since older generations become discounted upon the release of a replacement, you’d even save money buying the exact same thing by just waiting for the newer product to be released.
I made a mistake like this once, and it’s probably the main reason I’m including this in the article. I needed a laptop for my freshman year at UMass, so I invested in a Lenovo y700. It was a fine laptop — a little big but still fine — with one glaring issue: the graphics card.
I had bought my y700 with the laptop version of a GTX 960 inside of it, NVIDIA’s last-generation hardware. This was a poor decision for a simple reason: the GTX 1060 had already been released. The desktop version, that is.
My impatient self, eager for a new laptop for college, refused to wait for the laptop version of the GTX 1060, so I made a full-price purchase on a laptop with tech that I knew would be out of date in a few months. And, lo and behold, that was one of the main reasons I ended up selling my y700 in favor of a GTX 1060-bearing laptop the following summer.
Release dates on things like phones, computer hardware and laptops can often be tracked on a yearly release clock. Did Apple reveal the current iPhone in November of last year? Maybe don’t pay full price on one this coming October, just in case they make that reveal in a similar time.
Patience is a virtue, especially when it comes to future proofing.
What is your budget?
This one is pretty obvious, which is why I put it last. However, I’m including it in the article because of the nuanced nature of pricing when buying electronics.
Technically, I could throw a 3-grand budget at a Best Buy employee’s face and ask them to grab me the best laptop they’ve got. It’ll almost definitely fulfill my needs, will probably not be obsolete for quite a while, and might even come with some nice upgradeability that I may not get with a cheaper laptop.
However, what if I’m overshooting? Sure, spending 3 grand on a laptop gets me a top-of-the-line graphics card, but am I really going to utilize the full capacity of that graphics card? While the device you buy might be powerful enough to do everything you want it to do, a purchase made by following my previously outlined philosophy on future proofing will also do those things, and possibly save you quite a bit of money.
That’s not to say I don’t advocate spending a lot of money on computer hardware. I’m a PC enthusiast, so to say that you shouldn’t buy more than you need would be hypocritical. However, if your goal is to buy a device that will fulfill your needs, allow upgrades, and be functional in whatever you need it to do for the foreseeable future, throwing money at the problem isn’t really the most elegant way of solving it.
Buy smart, but don’t necessarily buy expensive. Unless that’s your thing, of course. And with that said…
…throwing money at a computer does come with some perks.
Raspbian may be the most common OS on Raspberry Pi devices, but it is definitely not alone in the market. Arch Linux is one such competitor, offering a minimalist disk image that can be customized and specialized for any task, from the ground up – with the help of Arch Linux’s superb package manager, Pacman.
The official website for Arch Linux ARM contains all the necessary files and detailed instructions for the initial setup. After a reasonably straightforward process, plugging in the Raspberry Pi will greet you with a command-line interface (CLI) akin to old Microsoft DOS.
Luckily for those who enjoy a graphical interface, Arch Linux supports a wide variety in its official repository, but for that, we need the internet. Plenty of tutorials detail how to connect to a typical home Wi-Fi network, but Eduroam is a bit more challenging. To save everyone several hours of crawling through wikis and forums, the following will focus on Eduroam.
To begin, we will need root privilege; by default this can be done with the following command:
After entering the password, we need to make the file:
Quick note: The file doesn’t need to be named eduroam.
Now that we’re in the nano text editor, we need to write the configuration for Eduroam. Everything except the identity and password fields needs to be copied exactly. For the purpose of this tutorial I’ll be John Smith, firstname.lastname@example.org, with password Smith12345.
Provided everything is set correctly, you will see “wlan0: link becomes ready” halfway through the last line of the page, hit enter and just one more command.
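Putting those steps together, the sequence looks roughly like the following. This is a sketch that assumes wpa_supplicant, the wlan0 interface, and Eduroam’s usual PEAP/MSCHAPv2 settings; your institution may specify slightly different values, so check its own Eduroam instructions:

```shell
# Become root (the default root password on Arch Linux ARM is "root")
su

# Write the Eduroam configuration (the filename doesn't have to be "eduroam")
cat <<'EOF' > /etc/wpa_supplicant/eduroam.conf
network={
    ssid="eduroam"
    key_mgmt=WPA-EAP
    eap=PEAP
    phase2="auth=MSCHAPV2"
    identity="firstname.lastname@example.org"
    password="Smith12345"
}
EOF

# Connect; watch for "wlan0: link becomes ready", then hit enter
wpa_supplicant -i wlan0 -c /etc/wpa_supplicant/eduroam.conf &

# The one remaining command: request an IP address over DHCP
dhcpcd wlan0
```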
Now, just to check that we’re connected, we’ll ping Google:
ping google.com -c 5
If everything is set, you should see 5 packets transmitted, 5 packets received.
Now that we’re connected, it’s best to do a full system update with Pacman.
At this point, you are free to do what you’d like with Arch. For the sake of brevity I will leave off here; for extra help I highly recommend the official Arch Linux Wiki. For a graphical UI, I highly recommend setting up XFCE4, as well as a network (Wi-Fi) manager.
Example of a customized XFCE4 desktop by Erik Dubois
Disclaimer: UMass IT does not currently offer technical support for Raspberry Pi.
Glitch art is an increasingly popular form of art that uses digital interference or glitches to make interesting art. In this tutorial I will be showing you how to use Audacity to edit photos as if they are sound, which can create some cool effects.
Here’s what you need:
Adobe Photoshop (I use the CC version, so your experience may vary.)
Audacity (a free, open-source audio editor)
An image you’d like to glitch
The first step is to open the image in Photoshop. Go to File > Open > Your_file. After opening, we need to save this file in a format that Audacity can understand. We will use the .tiff format, so go to File > Save As, then choose .tiff next to “Save as type”. See the photo below for an example of how this should look:
Then Photoshop will ask you about the settings for the .tiff file. Leave everything as it is except “Pixel Order”: change it to “Per Channel”. Per Channel splits up where the color data for the photo is stored, allowing us to edit individual parts of the RGB spectrum. See the photo below again:
Once the file is saved as a .tiff file, open up Audacity and click File>Import Raw Data then select your .tiff file. Once this is complete Audacity will ask for some settings to import the raw data. Change “encoding” to “U-Law” and “Byte Order” to “Little-endian” then click import. See photo of how it should look below:
You now have your image in Audacity as a sound file! Here is where the creativity comes in. To glitch up the image, use the Effect tab in Audacity and play around with different effects. Most images have a part at the beginning of the file that is needed to open the image, so if you get an error trying to open the picture, don’t worry; just don’t start the effect so close to the beginning next time. There should also be some noticeable sections in the waveform — these represent the different RGB colors. So if you only select one color, you can make an effect happen to only that color. Once you finish your effects, it’s time to export.
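If you’d rather script the effect, the same idea can be sketched in a few lines of Python: preserve the bytes at the start of the file, then randomly corrupt some of the rest. The 1024-byte header size and the corruption rate here are arbitrary assumptions (real TIFF headers vary), so tweak them to taste:

```python
import random

def glitch(data: bytes, header_size: int = 1024, rate: float = 0.001,
           seed: int = 0) -> bytes:
    """Randomly overwrite a fraction of the bytes after the header."""
    rng = random.Random(seed)
    body = bytearray(data[header_size:])
    for i in range(len(body)):
        if rng.random() < rate:
            body[i] = rng.randrange(256)  # replace with a random byte
    return data[:header_size] + bytes(body)

# Usage sketch (the filenames are hypothetical):
# raw = open("photo.tiff", "rb").read()
# open("glitched.tiff", "wb").write(glitch(raw))
```

Leaving the header untouched plays the same role as not applying effects too close to the start of the waveform in Audacity.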
To export go to File>Export. When prompted set the file type to “Other uncompressed files”. See photo of how it should look below:
Then click “Options” at the bottom right. For “Header” select “RAW (header-less)”, and for “Encoding” select “U-Law” again. Then hit “OK” and save your file. Now you should be able to open the RAW file and see how your work came out. See the photo below for how it should look:
You’ve probably heard of Bitcoin. Maybe you’ve even heard of other cryptocurrencies, like Ethereum. Maybe you’ve heard that these cryptocurrencies are mined, but maybe you don’t understand how exactly a digital coin could be mined. We’re going to discuss what cryptocurrency miners do and why they do it. We will be discussing the Bitcoin blockchain in particular, but keep in mind that Bitcoin has grown several orders of magnitude greater in the 9-10 years it’s been around. Though other cryptocurrencies change some things up a bit, the same general concepts apply to most blockchain-based cryptocurrencies.
What is Bitcoin?
Bitcoin is the first and the most well-known cryptocurrency. Bitcoin came about in 2009 after someone (or someones, nobody really knows) nicknamed Satoshi Nakamoto released a whitepaper describing a concept for a decentralized peer-to-peer digital currency based on a distributed ledger called a blockchain, and created by cryptographic computing. Okay, those are a lot of fancy words, and if you’ve ever asked someone what Bitcoin is then they’ve probably thrown the same word soup at you without much explanation, so let’s break it down a bit:
Decentralized means that the system works without a main central server, such as a bank. Think of a farmer’s market versus a supermarket; a supermarket is a centralized produce vendor whereas a farmer’s market is a decentralized produce vendor.
Peer-to-peer means that the system works by each user communicating directly with other users. It’s like talking to someone face-to-face instead of messaging them through a middleman like Facebook. If you’ve ever used BitTorrent (to download Linux distributions and public-domain copies of the U.S. Constitution, of course), you’ve been a peer on a peer-to-peer BitTorrent network.
Blockchain is a hot topic right now, but it’s one of the harder concepts to describe. A blockchain performs the job of a ledger at a bank, keeping track of what transactions occurred. What makes blockchain a big deal is that it’s decentralized, meaning that you don’t have to trust a central authority with the list of transactions. Blockchains were first described in Nakamoto’s Bitcoin whitepaper, but Bitcoin itself is not equivalent to blockchain. Bitcoin uses a blockchain. A blockchain is made up of a chain of blocks. Each block contains a set of transactions, and the hash of the previous block, thus chaining them together.
Hashing is the one-way (irreversible) process of converting any input into a fixed-length string of bits. Hashing is useful in computer science and cryptography because it’s really easy to get the hash of something, but it’s almost impossible to find out what input originally made a particular hash. Any input will always produce the same output, but any little difference makes a completely different hash. For example, in SHA-256, the hashing algorithm that Bitcoin uses, “UMass” always produces the same 256-bit hash, while even “uMass” hashes to something entirely different.
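You can see this for yourself with a few lines of Python; hashlib is in the standard library, and the strings here are just examples:

```python
import hashlib

def sha256_hex(text: str) -> str:
    """Return the SHA-256 hash of a string as 64 hex characters."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

print(sha256_hex("UMass"))  # always the same 64-character output
print(sha256_hex("uMass"))  # one changed letter, a totally different hash
```

Run it as many times as you like: the first line never changes, and no pattern connects the two outputs, which is exactly the property mining relies on.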
Those are the general details that you need to know to understand cryptocurrency. Miners are just one kind of participant in cryptocurrency.
Who are miners?
Anybody with a Bitcoin wallet address can participate in the blockchain, but not everybody who participates has to mine. Miners are the ones with the big, beefy computers that run the blockchain network. Miners run a mining program on their computer. The program connects to other miners on the network and constantly requests the current state of the blockchain. The miners all race against each other to make a new block to add to the blockchain. When a miner successfully makes a new block, they broadcast it to the other miners in the network. The winning miner gets a reward of 12.5 BTC for successfully adding to the blockchain, and the miners begin the race again.
Okay, so what are the miners doing?
Miners can’t just add blocks to the blockchain whenever they want. This is where the difficulty of cryptocurrency mining comes from. Miners construct candidate blocks and hash them. They compare that hash against a target.
Now get ready for a little bit of math: Remember those 256-bit hashes we talked about? They’re a big deal because there are 2^256 possible hashes (that’s a LOT!), ranging from all 0’s to all 1’s. The Bitcoin network has a difficulty value that changes over time to make finding a valid block easier or harder. Every time a miner hashes a candidate block, they look at the binary value of the hash, and in particular, how many 0’s the hash starts with. If the number of 0’s at the start of the hash is at least the target amount specified by the difficulty, then the block is valid! When a candidate block fails to meet the target, as they usually do, the mining program tries to construct a different block.
Remember that changing the block in any way makes a completely different hash, so a block with a hash one 0 short of the target isn’t any closer to being valid than another block with a hash a hundred 0’s short of the target. The unpredictability of hashes makes mining similar to a lottery. Every candidate block has as good of a chance of having a valid hash as any other block. However, if you have more computer power, you have better odds of finding a valid block. In one 10 minute period, a supercomputer will be able to hash more blocks than a laptop. This is similar to a lottery; any lottery ticket has the same odds of winning as another ticket, but having more tickets increases your odds of winning.
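Here is a toy version of that lottery. It is a sketch only: real Bitcoin hashes an 80-byte block header twice with SHA-256 and compares against a subtler numeric target, but the search loop has the same shape:

```python
import hashlib

def mine(block_data: str, difficulty_bits: int):
    """Try nonces until the block's hash starts with enough zero bits."""
    nonce = 0
    while True:
        candidate = f"{block_data}|nonce={nonce}".encode()
        digest = hashlib.sha256(candidate).digest()
        # View the 256-bit hash as a binary string and count leading zeros.
        bits = bin(int.from_bytes(digest, "big"))[2:].zfill(256)
        if bits.startswith("0" * difficulty_bits):
            return nonce, digest.hex()
        nonce += 1  # any change gives an unrelated hash, so just try again

nonce, digest = mine("prev_hash+transactions", difficulty_bits=16)
print(nonce, digest)  # the digest begins with at least 4 hex zeros
```

Raising `difficulty_bits` by one doubles the expected number of attempts, which is how the network can throttle block production no matter how much hash power joins.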
Can I become a miner?
You probably won’t be able to productively mine Bitcoin alone. It’s like buying 1 lottery ticket when other people are buying millions. Nowadays, most Bitcoin miners pool their mining power together into mining pools. They mine Bitcoin together to increase the chances that one of them finds the next block, and if one of the miners gets the 12.5 BTC reward, they split their earnings with the rest of the pool pro-rata: based on the computing power (number of lottery tickets) contributed.
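The pro-rata payout itself is simple arithmetic; in this sketch the miner names and hash rates are made up:

```python
# Split the 12.5 BTC block reward by each miner's share of the pool's power.
BLOCK_REWARD = 12.5  # BTC, the reward at the time of writing

pool = {"alice": 70, "bob": 20, "carol": 10}  # hash rate shares (made up)

def payouts(hash_rates: dict) -> dict:
    """Each miner's cut, proportional to contributed hash rate."""
    total = sum(hash_rates.values())
    return {name: BLOCK_REWARD * rate / total
            for name, rate in hash_rates.items()}

print(payouts(pool))  # {'alice': 8.75, 'bob': 2.5, 'carol': 1.25}
```

Contributing 70% of the pool’s lottery tickets earns 70% of the prize, win or lose on any individual block.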
The U.S. dollar used to be tied to the supply of gold. A U.S. dollar bill was essentially an I.O.U. from the U.S. Federal Reserve for some amount of gold, and you could exchange paper currency for gold at any time. The gold standard worked because gold is rare and you have to labor in a mine to get it. Instead of laboring by digging in the mines, Bitcoin miners labor by calculating hashes. Nobody can make fraudulent gold out of thin air. Bitcoin employs the same rules, but instead of making the scarce resource gold, it made the scarce resource computing power. It’s possible for a Bitcoin miner to get improbably lucky and find 8 valid blocks in one day and earn 100 BTC, just like it’s possible but improbable to find a massive golden boulder while mining underground one day. These things are effectively impossible, but it is actually impossible for someone to fake a block on the blockchain (the hash would be invalid!) or to fake a golden nugget (you can chemically detect fool’s gold!).
Other cryptocurrencies work in different ways. Some use different hashing algorithms. For example, Zcash is based on a mining algorithm called Equihash that is designed to be best mined by the kinds of graphics cards found in gaming computers. Some blockchains aren’t mined at all. Ripple is a coin whose cryptocurrency “token” XRP is mostly controlled by the company itself. All possible XRP tokens already exist and new ones cannot be “minted” into existence, unlike the 12.5 BTC mining reward in Bitcoin, and most XRP tokens are still owned by the Ripple company. Some coins, such as NEO, are not even made valuable by scarcity of mining power at all. Instead of using “proof of work” like Bitcoin, they use “proof of stake” to validate ownership. You get paid for simply having some NEO, and the more you have, the more you get!
Blockchains and cryptocurrencies have become popular buzzwords in the ever-connected worlds of computer science and finance. Blockchain is a creative new application of cryptography, computer networking, and processing power. It’s so new that people are still figuring out what else blockchains can be applied to. Digital currency seems to be the current trend, but blockchains could one day revolutionize health care record-keeping or digital elections. Research into blockchain technology has highlighted many weaknesses in the concept; papers have been published on double-spend attacks, selfish mining attacks, eclipse attacks, Sybil attacks, etc. Yet the technology still has great potential. Cryptocurrency mining has already brought up concerns over environmental impact (mining uses a lot of electricity!) and hardware costs (graphics card prices have increased dramatically!), but mining is nevertheless an engaging, fun, and potentially profitable way to get involved in the newest technology to change the world.
By now, we’ve all seen or heard stories about a recent scare in Hawai’i, where residents were bombarded (ironically) with an emergency notification warning of a ballistic missile heading towards the isolated island state. Within seconds, the people of Hawai’i panicked, contacting their families, friends, and loved ones, and stopping everything they were doing in what they believed were the final minutes of their lives.
Of course, this warning turned out to be false.
The chaos that ensued in Hawai’i was the result of an accidental warning fired off by a government employee of the Emergency Management Agency. Not only did this employee send off a massive wave of crisis alert notifications to Hawaiians everywhere, but in some cases it took upwards of 30 minutes for a follow-up message to signal that this was a false alarm. With the rising tensions between the United States and the trigger-happy North Korea, you can imagine that this could be problematic, to put it simply.
The recent mishap in Hawai’i opens up a conversation about phone notifications in crisis situations. While Hawaiians, and more broadly Americans, aren’t used to seeing this type of notification appear on their lock screens, it is a common and very effective tool in the Middle East, where Israel uses push notifications to warn of nearby short-range missiles coming in from Syria and the Gaza Strip/West Bank.
In a region full of hostilities and tense situations, with possible threats from all angles, Israel keeps its land and citizens safe using the very effective Red Alert system, an element of Israel’s Iron Dome. According to Raytheon, a partner in developing this system, the Iron Dome “works to detect, assess and intercept incoming rockets, artillery and mortars. Raytheon teams with Rafael on the production of Iron Dome’s Tamir interceptor missiles, which strike down incoming threats launched from ranges of 4-70 km.” With this system comes Red Alert, which notifies Israelis in highly populated areas of incoming attacks, in case the system can’t stop a missile in time. Since implementation in 2011, and with more people receiving warnings due to growing cell phone use, Israelis have been kept safe and notified promptly; the system boasts a 90% success rate and has kept civilian injuries and casualties at very low levels.
If the Hawaiian missile alert had been real, it could have saved many lives. In an instant, everyone was notified, and people took their own precautions to be aware of the situation at hand. This crucial failure in the alert system can be worked on in the future, leading to faster, more effective approaches to missile detection, protection, and warnings, saving lives in the process.
In an era of constant complaint about the ubiquity of cell phone use, some of the most positive implications of our connected world have been obscured. Think back to 1940: London’s bombing raids came almost without warning, with alarms so late that they contributed to widespread destruction and many casualties. As weapons grow more advanced, agencies are designing even more advanced defense notification systems, making sure to reach every possible victim as fast as possible. In an age where just about everyone has a cell phone, saving lives has never been easier.
For more reading, check out these articles from the Washington Post and Raytheon:
By now it’s likely you’ve heard of Solid State Drives, or SSDs: blazing-fast storage drives that speed up old computers and offer better reliability than their predecessors, Hard Disk Drives (HDDs). But there are countless options available, so what is the best drive?
There are several connector types that SSDs use to interface with a computer, including SATA, PCIe, M.2, U.2, mSATA, SATA Express, and even none at all, as some SSDs now come soldered to the board. For a consumer, the most common options are SATA and M.2. SATA is the older two-cable system that hard drives use, consisting of a SATA power cable and a SATA data cable. SATA-based SSDs are best for older computers that lack newer SSD connector types and have only SATA connections. A great way to boost the speed of an older computer with a spinning hard drive is to clone the drive to an SSD and swap it in, increasing the computer’s read/write performance, potentially tenfold. However, these SATA drives are capped at a theoretical maximum transfer speed of 600 MB/s, whereas un-bottlenecked SSDs have recently exceeded 3 GB/s, five times the SATA maximum. This means SATA-based SSDs cannot take advantage of the speed and efficiency of newer host interfaces such as NVMe.
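To put those numbers in perspective, here is a rough back-of-the-envelope sketch. The 600 MB/s SATA cap comes from the section above; the 3 GB/s NVMe figure is an illustrative assumption, and real-world throughput will be lower and workload-dependent.

```python
# Back-of-the-envelope transfer-time comparison between a SATA SSD
# and a fast PCIe NVMe SSD. Speeds are theoretical caps, not
# real-world numbers.

SATA_MBPS = 600    # SATA III theoretical maximum, in MB/s
NVME_MBPS = 3000   # assumed figure for a fast PCIe NVMe drive, in MB/s

def transfer_seconds(size_gb: float, speed_mbps: float) -> float:
    """Time to move `size_gb` gigabytes at `speed_mbps` megabytes/second."""
    return size_gb * 1000 / speed_mbps

for size_gb in (100, 500):
    sata = transfer_seconds(size_gb, SATA_MBPS)
    nvme = transfer_seconds(size_gb, NVME_MBPS)
    print(f"{size_gb} GB: SATA ~{sata:.0f}s, NVMe ~{nvme:.0f}s")
```

Under these assumptions, moving 500 GB takes roughly 14 minutes at the SATA cap versus under 3 minutes over PCIe, which is the five-fold gap described above.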
NVMe, or Non-Volatile Memory Express, is a new host controller interface designed to replace AHCI, or Advanced Host Controller Interface. AHCI is the interface that hard drives traditionally use to communicate over the SATA bus with the computer they are connected to. Just as the SATA bus imposes a bandwidth bottleneck on an SSD, the AHCI interface imposes a latency bottleneck: AHCI was never intended for use with SSDs, whereas NVMe was built specifically with SSDs in mind. NVMe promises lower latency by operating more efficiently and by exploiting solid state storage’s ability to handle operations in parallel, queuing more than two thousand times as many commands per queue as a drive on AHCI. To get optimal performance out of an NVMe drive, make sure it uses PCIe (Peripheral Component Interconnect Express) as its bus, which avoids the bottlenecks that come with using SATA.
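The “two thousand times” figure comes from queue depth: AHCI allows a single command queue 32 commands deep, while the NVMe specification allows up to 65,535 I/O queues, each up to 65,536 commands deep. A quick sketch of the arithmetic:

```python
# Command-queue comparison between AHCI and NVMe, using the
# commonly cited limits from each specification.

AHCI_QUEUES, AHCI_DEPTH = 1, 32            # one queue, 32 commands deep
NVME_QUEUES, NVME_DEPTH = 65_535, 65_536   # up to 64K queues, 64K deep

per_queue_ratio = NVME_DEPTH // AHCI_DEPTH
print(f"Commands per queue: {per_queue_ratio}x more on NVMe")

total_ratio = (NVME_QUEUES * NVME_DEPTH) // (AHCI_QUEUES * AHCI_DEPTH)
print(f"Total outstanding commands: {total_ratio:,}x more on NVMe")
```

Per queue, that works out to 2,048 times as many commands, which is where the “more than two thousand times” claim comes from; across all queues the gap is far larger still.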
If the latest and greatest speeds and efficiencies that come with an NVMe SSD are a must-have, there are a few things to keep in mind. First, make sure the computer receiving the drive has an M.2 connector that accepts that type of drive. Most consumer NVMe drives use the M.2 “M” key (5 pins), a notch position on the M.2 edge connector; SATA-based M.2 SSDs use the “B” key (6 pins), and some drives are keyed “B + M” so they fit either socket. Second, the computer needs to support booting from an NVMe drive: many older computers and operating systems may not boot from, or even recognize, an NVMe drive because the standard is so new. Third, expect to pay a premium. PCIe NVMe drives are the newest and greatest of the consumer SSD market, and cutting edge commands a top price. And finally, make sure an NVMe drive fits the use case. The performance improvement only shows up with large reads and writes, or large numbers of small ones: the computer will boot faster, files will transfer and search faster, and programs will launch faster, but it won’t make a Facebook page load any faster.
In conclusion, SSDs are quickly becoming ubiquitous in the computing world, and for good reason. Their prices are plummeting, their speeds are unmatched, they’re smaller, fitting into thinner systems, and they’re far less likely to fail, especially after a drop or shake of the device. If you have an old computer with slow loading times in need of a performance boost, a great speed-augmenting solution is to buy a SATA SSD. But if cutting-edge speed is what you’re looking for, nothing beats a PCIe NVMe M.2 drive.
When I listen to a podcast, there is often an ad for ZipRecruiter. ZipRecruiter “is the fastest way to find great people,” or so it says on the homepage of their website. Essentially, employers post a job to ZipRecruiter and the posting gets pushed to all sorts of job-searching websites like Glassdoor, job.com, geebo, and a bunch of others I have never heard of before. You fill out the information once and your job gets posted to 200 different sites. That’s kind of cool. But there is a big problem with it: HR now has to deal with hundreds of applications, and if you are applying to a company that uses ZipRecruiter, a robot is probably going through your resume and cover letter looking for words like “manage,” “teamwork,” or “synergize.”
But I don’t want my resume looked at by a bot. I want my resume to be looked at by a real human being. I have applied through these websites before and I don’t even get a rejection letter from the company in question, let alone any sign that someone printed out my carefully crafted resume and cover letter and actually read them. This is where you reach a hurdle on the path to post-graduation-job-nirvana. I want to find jobs, so I look on Glassdoor, job.com, and geebo, but then I want to stand out from the pack. How do I do that? I have no idea. Instead, I am offering a solution to avoid those websites.
1. The other day I was sitting, looking at a magazine, when I realized something great about the thing in my hand: everyone in the industry takes part in it. Let’s say you are a psychology major looking for an internship. Why not pick up the latest issue of Psychology Today, go through the pages, and check out the companies that advertise? My point is that your favorite magazines already reflect your passions; why not flip through them to find the company you didn’t think to apply to?
2. Now that you’ve identified where you want to apply, keep a list. There are some tutorials out there on the internet on how to keep a proper list of applications. I don’t really like those. They include things like: application deadline, if you’ve completed the cover letter, other application materials, and people in the company you may know.
I really disagree with this strategy. Most employers announce in advance when postings are going up, and most have already found a match by the end of the deadline. Instead of an “application deadline” field, I prefer a “check during ___ (season)” field. Then, once the application is open, I write the cover letter and send off my resume in one sitting, just to get it out of the way. I don’t need to keep checking in with my checklist.
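The list above can be kept as a simple data structure. Here is a minimal sketch of the idea, with made-up company names and seasons standing in for real entries:

```python
# A minimal application-tracking list that uses a "check during"
# season instead of an application deadline. Company names and
# seasons below are hypothetical placeholders.

applications = [
    {"company": "Example Labs", "check_during": "fall",   "applied": False},
    {"company": "Acme Media",   "check_during": "spring", "applied": True},
]

def to_check(apps, season):
    """Companies whose postings should be checked this season."""
    return [a["company"] for a in apps
            if a["check_during"] == season and not a["applied"]]

print(to_check(applications, "fall"))  # ['Example Labs']
```

A spreadsheet with the same two columns works just as well; the point is tracking when to look, not when an application closes.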
3. Everyone always says that the only sure way to get a job is through people you know. While I can agree that networking is probably the most consistent way to get your foot in the door, it isn’t always possible for everyone. That’s why I’ve been using UMass career fairs as pure networking opportunities. Instead of spamming my resume across the career fair, I talk to a few recruiters that I know are just as passionate as I am about finding a job that’s the right fit.
4. City websites are my other secret weapon to avoid ZipRecruiter. I will search things like “Best Places to Work in Seattle” and then apply to all of those. Or I will search “Businesses with offices in the Prudential Building, Boston” because I dream of one day working there. I am always just looking for more names to put on my list, companies that don’t get hundreds of applicants who all sound exactly like me.
5. I also tend to look at the products around me that I don’t necessarily think about. Odwalla and IMAX are both companies that I see all the time, but I wouldn’t think to apply to them if I didn’t write them down.
There are ways to keep your resume from getting lost in a stack a mile high; it just takes some planning and forethought.
As the consumer drone market becomes increasingly competitive, DJI has emerged as an industry leader in drones and related technologies, in both the consumer market and the professional and industrial markets. Today we’re taking a look at DJI’s three newest drones.
First up is the DJI Spark, DJI’s cheapest consumer drone available at the time of writing. The drone is a very small package, using Wi-Fi and the DJI GO smartphone app for control. It features a 12-megapixel camera capable of 1080p video at 30 fps, and a removable battery with a 16-minute flight time. Starting at $399, this drone is best for amateur backyard flyers just getting into the drone market. User-friendly and ultra-portable, it is limited in advanced functionality and prone to distance and connectivity problems, but it is an essential travel item for the casual drone user looking to take some photos from the sky without the advanced photography and flying skills required by some of DJI’s other offerings.
DJI’s most recent offering is the DJI Mavic Air, DJI’s intermediate option for drone enthusiasts. The drone is a compact, foldable package, using Wi-Fi and the DJI GO smartphone app in conjunction with a controller. It features a 12-megapixel camera capable of 4K video at 30 fps, and a removable battery with a 21-minute flight time. Starting at $799, this drone is a step up from DJI’s lower-priced offerings but bundles features that cater to both the amateur drone photographer and the hobbyist/enthusiast flyer, such as advanced collision avoidance sensors, a panorama mode, and internal storage. While heavier and bigger than its smaller sibling the DJI Spark, the DJI Mavic Air’s foldability creates an unbelievably portable package with user-friendly features and one of DJI’s best camera sensors to ship in its consumer drone lineup. Also subject to Wi-Fi range limitations, the DJI Mavic Air is an excellent travel drone for more serious photographers and videographers, as long as you don’t venture out too far.
One of DJI’s most ambitious and most popular consumer drones is the DJI Mavic Pro, a well-rounded, no-compromise consumer drone with advanced photography and flying abilities. The drone is a compact, foldable package like the DJI Mavic Air, using the DJI GO smartphone app in conjunction with a controller built on OcuSync transmission technology, providing a clear, long-range, live video feed that is usually free of interference. It features a 12-megapixel camera capable of 4K video at 30 fps, and a removable battery with a 30-minute flight time. Starting at $999, this drone is not cheap, but it is an essential tool for the photographer or drone enthusiast who requires the best flying and photo-capture features in DJI’s most portable high-end offering.
Disclaimer: Operation of a drone, regardless of recreational or commercial intent, is subject to rules and regulations outlined by the Federal Aviation Administration (FAA). All drone operators should operate aircraft in compliance with local, state, and federal laws. Compliant and suggested practices include operating aircraft with the presence of a spotter, maintaining line of sight on your aircraft, registering your aircraft with the FAA, sharing airspace with other recreational and commercial aircraft, knowing your aircraft and its impact when operating around people & animals, and not flying your aircraft in FAA restricted zones. For more information, please visit the FAA website on Unmanned Aerial Systems as it pertains to you: https://www.faa.gov/uas/faqs/
If you ever visit a college campus you will notice the plethora of Apple laptops. Apple seems to supply a huge percentage of college students’ laptops, but why?
To start off with, Apple has a brand image that few other companies can match. From my experience in IT, many people think that Apple machines “last longer” and “won’t break as easily” when compared to their PC rivals. And from my experience, that isn’t necessarily false. Certainly in terms of build quality the average Mac will beat the average PC, but it’s not really a fair comparison: Macs cost far more than the average PC, and this higher build quality is priced in. That said, even some higher-end PCs have build quality that seems to degrade over time in a way that Macs don’t. My guess for why this happens is that PC makers are constantly trying new things to differentiate themselves from the pack of similarly specced competitors, which leads to constant experimentation. Trying new things isn’t bad, but with this throw-everything-at-the-wall mentality there will certainly be a few products that weren’t truly tested and may have been pushed out too quickly. Apple, on the other hand, has a handful of laptops which for the most part have been around for years. They have mastered the art of consistently making reliable laptops, and it’s that consistency that is really important. In all likelihood, every major laptop manufacturer has made a very reliable computer, but very few have the track record that Apple has. It’s this track record that makes people trust that their new computer will last all four years.
To add to their reliability, Apple also has the upper hand with its physical locations. Every urban area in the U.S. has an Apple Store, somewhere to take your device if it’s acting up or to check out a new laptop before you buy it. I think this plays a big role in Apple’s success. Being able to try out a product before buying it is a clear advantage: people get to know what the product will be like in person, which might make them more likely to buy it. Secondly, knowing that if anything happens to your device there is a physical location where you can bring it can be very reassuring. If you buy an Apple laptop, no longer will you have to wait on the phone for three hours trying to get ahold of someone helpful. Just walk into the store and you’ll get the assistance you need.
I am not the only one to notice that stores are a big part of Apple’s success, as Microsoft has been building more and more stores to help compete. Microsoft realized that Apple would always have the customer-service advantage unless it opened its own stores, and this has become even more important as Microsoft builds up its hardware business.
Finally, one of the biggest reasons in my mind is that people buy Apple laptops because they have Apple phones. It seems logical that one would buy more products from a company when satisfied with the one they already have, and I think that is what is happening with Apple computers. iPhones are incredibly popular with college-aged kids, so naturally they gravitate toward the laptop manufacturer that makes their phones. Furthermore, iPhones and Apple laptops work together in a way that a PC and an iPhone can’t: Apple devices can send iMessages, they integrate with iCloud seamlessly, and they share similar programs, which makes picking up either one faster.