
The Future of the Mac

There have been two major rumors in the past month about the future of the Mac. It has been clear for the past several years that much of Apple's development effort is geared toward iOS, the mobile operating system that powers iPhones and iPads. Apple has also been introducing new platforms, such as the Apple Watch and the HomePod. Through all of this, the Mac has been gaining features at a snail's pace; it seems Apple only adds features when it must in order to match something introduced first on iOS. But these recent rumors point to a Mac platform that could be revitalized.

The first major rumor is a shared development library between iOS and the Mac. What does this mean for non-developers? It means we could very well see iOS apps such as Snapchat or Instagram on the Mac. macOS uses a development framework called AppKit, whose roots go back many years to Apple's purchase of NeXT, the company whose operating system eventually became the foundation of the modern Mac. The framework has stayed largely the same since then; there have been changes and many additions, but it is still different from UIKit, the framework developers use to make iOS apps for iPhones and iPads, and the two diverge in key areas. In practice, that means developing an app for both the iPhone and the Mac takes roughly twice the effort. Supposedly, Apple is working on a framework for the Mac that is virtually identical to UIKit, which would let developers port their iOS apps to the Mac with very little work. In theory, the number of apps on the Mac would increase as developers port over their iOS apps, and communication apps such as Snapchat and Instagram could become usable desktop apps.

What Apple’s future macOS framework could look like.
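For the curious, here is a minimal, hypothetical Swift sketch of the duplication developers face today. This is not Apple's rumored framework; it is just the same trivial "red box" view written once per platform, showing how even basic building blocks (UIKit's UIView versus AppKit's NSView) diverge:

```swift
#if canImport(UIKit)
import UIKit   // iPhone and iPad

// UIKit: views come with a backgroundColor property built in.
let box = UIView(frame: CGRect(x: 0, y: 0, width: 100, height: 100))
box.backgroundColor = UIColor.red

#elseif canImport(AppKit)
import AppKit  // Mac

// AppKit: NSView has no backgroundColor property. You opt into a
// backing layer and color that instead. Small mismatches like this,
// repeated across thousands of calls, are the "twice the effort."
let box = NSView(frame: NSRect(x: 0, y: 0, width: 100, height: 100))
box.wantsLayer = true
box.layer?.backgroundColor = NSColor.red.cgColor
#endif
```

If the rumor pans out, the second half of that sketch is the part that largely disappears.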

The second major rumor is that Apple is expected to switch from Intel CPUs to its own ARM-based designs. Apple moved to Intel in 2006 after using PowerPC chips for many years, a transition that brought a nearly 2x increase in performance over the PowerPC chips it replaced. In the last few years, though, Intel hasn't delivered the year-over-year performance gains it used to, and it has been delaying new architectures as manufacturing smaller chips gets harder and harder. That leaves Apple dependent on Intel's schedule to introduce new features. On the other hand, Apple has been producing industry-leading ARM chips for its iPhones and iPads, and these chips are starting to benchmark at or above some of the Intel chips Apple uses in its Mac line. Rumors suggest the low-power Macs could see these new ARM-based chips as soon as 2020. The major caveat with this transition is that developers may have to rewrite some of their applications for the new architecture, so it could take time for applications to become compatible, and some older applications might never be updated.
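For most apps written in Swift or Objective-C, that "rewrite" would likely amount to a recompile; the real work concentrates in code that bakes in assumptions about the CPU. As a small illustration (the function name is my own), Swift already ships compile-time architecture conditions that such code can branch on:

```swift
// A minimal sketch using Swift's built-in arch() compilation
// conditions. High-level code rarely needs this; hand-tuned or
// hardware-specific code is where porting effort concentrates.
func architectureName() -> String {
    #if arch(x86_64)
    return "Intel (x86_64)"   // today's Macs
    #elseif arch(arm64)
    return "ARM (arm64)"      // iPhones, iPads, and perhaps future Macs
    #else
    return "something else"
    #endif
}

print("Compiled for \(architectureName())")
```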

It's clear that Apple's focus in the past several years has been on its mobile platforms and not on its original platform, the Mac. But these two rumors show that Apple is still putting serious engineering work into its desktop operating system. These new features could lead to a thriving Mac ecosystem in the years to come.


A Reflection on Winning The Vive

By Parker Louison 

The Views and Opinions Expressed in This Article Are Those of Parker Louison and Do Not Necessarily Reflect the Official Policy or Position of UMass Amherst IT 

A Note of Intention

I want to start this article by explaining that I'm not writing it in an effort to gloat or brag, and I certainly hope it doesn't come across that way. I put all of the creative energy I had left this semester into the project I'm about to dissect and discuss, so sadly I won't be publishing a video this semester (as I've done for the past two semesters). One of the reasons I'm writing this is that a lot of the reaction to what I made included people asking how I made it and how long it took me, and trust me, we'll go in depth on that.

My First Taste

My first experience with high-grade virtual reality was a few weeks before the start of my sophomore year at UMass when my friend Kyle drove down to visit me, bringing along his HTC Vive after finding out that the only experience I’d had with VR was a cheap $20 adapter for my phone. There’s a consensus online that virtual reality as a concept is better pitched through firsthand experience rather than by word of mouth or marketing. The whole appeal of VR relies on subjective perception and organic optical illusions, so I can understand why a lot of people think the whole “you feel like you’re in the game” spiel sounds like nothing but a load of shallow marketing. Remember when Batman: Arkham Asylum came out and nearly every review of it mentioned that it made you feel like Batman? Yeah, well now there’s actually a Batman Arkham VR game, and I don’t doubt it probably does make you actually feel like you’re Batman. The experience I had with VR that night hit me hard, and I came to understand why so many people online were making it out to be such a big deal. Despite my skeptical mindset going in, I found that it’s just as immersive as many have made it out to be. 

This wasn't Microsoft's Kinect, where taking away the controller actually limited player expression. This was a genuinely deep and fascinating technological breakthrough that opens the door for design innovations while also requiring programmers to master a whole new creative craft. The rulebook for what does and doesn't work in VR is still being written, and despite the technology still being in its early stages, I wanted in. I wanted in so badly that I decided to try to save up my earnings over the next semester in an effort to buy one. That went about as well as you'd expect; not just because I was working within a college student's budget, but also because I'm awful with my money. My art-major friend Jillian would tell you it's because I'm a Taurus, but I think it has more to do with me being a giant man-child who impulse-purchases stupid stuff because the process of waiting for something to arrive via Amazon feels like something meaningful in my life. It's no wonder I got addicted to Animal Crossing over Spring Break…

The Task

Anyway, I was sitting in my Comp-Lit discussion class when I got the email about the Digital Media Lab's new Ready Player One contest, with the first-place winner taking home an HTC Vive headset. I'm not usually one for contests, and I couldn't picture myself actually winning the thing, but something about the challenge piqued my interest. The task involved creating a pitch video, less than one minute in length, in which I'd have to describe how I would implement Virtual Reality on campus in a meaningful way.

With Virtual Reality, there are a lot of possible implementations relating to different departments. In the Journalism department, we’ve talked at length in some of my classes about the potential applications of VR, but all of those applications were either for the benefit of journalists covering stories or the public consuming them. The task seemed to indicate that the idea I needed to pitch had to be centered more on benefiting the average college student, rather than benefiting a specific major (at least, that’s how I interpreted it). 

One of my original ideas was a virtual stress-relief dog, but then I realized that people with anxiety would likely only get more stressed out by having to put on some weird giant headset… and real-life dogs can give hecking good nuzzles that can't really be simulated. You can't substitute soft fur with hard plastic.

I came to college as a journalism major, and a day rarely goes by when I don't have some doubts about my choice. In high school I decided on journalism because I won a debate at a CT Youth Forum thing and loved writing and multimedia, so I figured it seemed like a safe bet. Still, it was a safe bet that was never pitched to me. I had no idea what being a journalist would actually be like; my whole image of what being a reporter entailed came from movies and television. I thought about it for a while, about how stupid and hormonal I was and still am, and realized that I'm kind of stuck. If I hypothetically wanted to switch to chemistry or computer science, I'd be starting from scratch with even more debt to bear. Two whole years of progress would be flushed down the toilet, and I'd have nothing to show for it. College is a place for discovery, where your comfortable environment is flipped on its head and you're forced to take care of yourself and make your own friends. It's a place where you work four years for a piece of paper to make your resume look nicer when you put it on an employer's desk, and you're expected to have the whole rest of your life figured out when you're a hormonal teenager who spent his savings on a skateboard he never learned how to ride.

And so I decided that, in this neo-cyberpunk dystopia we’re steadily developing into, it would make sense for simulations to come before rigorous training. Why not create simulated experiences where people could test the waters for free? Put themselves in the shoes of whatever career path they want to explore to see if the shoes fit right, you know?

I mentioned “cyberpunk” there earlier because I have this weird obsession with cyberpunk stuff at the moment and I really wanted to give my pitch video some sort of tongue-in-cheek retrograde 80s hacker aesthetic to mask my cynicism as campy fun, but that had to be cut once I realized I had to make this thing under a minute long.

Gathering My Party and Gear

Anyway, I wrote up a rough script and rented out one of the booths in the Digital Media Lab. With some help from Becky Wandel (the News Editor at WMUA) I was able to cut down my audio to just barely under the limit. With the audio complete, it came time to add visual flair. I originally wanted to do a stop-motion animated thing with flash-cards akin to the intros I’ve made for my Techbytes videos, but I’m slow at drawing and realized that it’d take too much time and effort, which is hilarious because the idea I settled on was arguably even more time-consuming and draining.

I’m the proud owner of a Nikon D80, a hand-me-down DSLR from my mom, which I bring with me everywhere I go, mostly because I like taking pictures, but also because I think it makes me seem more interesting. A while back I got a speck of dust on the sensor, which requires special equipment to clean (basically a glorified turkey baster). I went on a journey to the Best Buy at the Holyoke Mall with two friends to buy said cleaning equipment while documenting the entire thing using my camera. Later, I made a geeky stop-motion video out of all those photos, which I thought ended up looking great, so I figured doing something similar for the pitch video would be kind of cool. I messaged a bunch of my friends, and in a single day I managed to shoot the first 60% of the photos I needed. I then rented out the Vive in the DML and did some photoshoots there. 

At one point while I was photographing my friend Jillian playing theBlu, she half-jokingly mentioned that the simulation made her want to study Marine Biology. That kind of validated my idea and pushed me to make sure I made this video perfect. The opposite effect happened when talking to my friend Rachael, who said she was going to pitch something for disability services, to which I immediately thought “damn, she might win with that.”

I then knew what I had to do. It was too late to change my idea or start over, so I instead decided that my best shot at winning was to make my video so stylistically pleasing and attention-grabbing that it couldn’t be ignored. If I wasn’t going to have the best idea, then gosh darn it (I can’t cuss because this is an article for my job) I was going to have the prettiest graphics I could muster.   

The Boss Fight 

I decided to use a combination of iMovie and Photoshop, programs I’m already familiar with, because teaching myself how to use more efficient software would ironically be less efficient given the short time frame I had to get this thing out the door. Using a drawing tablet I borrowed from my friend Julia, I set out to create the most complicated and ambitious video project I’ve ever attempted to make. 

A few things to understand about me: when it comes to passion projects, I'm a bit of a perfectionist and extremely harsh on myself. I can't even watch my freshman-year IT video because I accidentally made it sound like a $100 investment in some less-than-amazing open-back headphones was a reasonable decision on my part, and my other IT video makes me cringe because I thought, at the time, it'd be funny to zoom in on the weird hand motions I make while I talk every five seconds.

So in this case, I didn’t hold back and frequently deleted whole sections of my video just because I didn’t like how a single brush stroke animated (with the exception of the way my name is lopsided in the credits, which will haunt me for the rest of my life). For two weeks, I rigorously animated each individual frame in Photoshop, exported it, and imported it into iMovie. 

(Above) A visual representation of all the files it took to create the video

(Above) Frame by frame, I lined up my slides in iMovie

The most demanding section was, without a doubt, the one involving my friend Matthew, which I spent one of the two weeks entirely focused on. I needed that section to animate faster than 0.04 seconds per frame, which is impossible in iMovie's streamlined interface, where 0.04 seconds is the shortest you can make a frame. So I created a whole new project file, slowed my audio to half speed, edited the frames of that section against the slowed audio, exported it, put it into the original project file, and doubled its speed just to get it to animate smoothly (editing at the 0.04-second minimum and then doubling the speed works out to an effective 0.02 seconds per frame).

 (Above) Some sections required me to find loopholes in the software to get them to animate faster than iMovie would allow

(Above) Some of the scrap paper I scribbled notes on while editing the video together

Each individual border was drawn multiple times with slight variations, and all the on-screen text (with the exception of the works cited) was handwritten by me multiple times over so that I could alternate between the frames of animation and make sure everything was constantly moving.

(Above) Borders were individually drawn and cycled through in order to maintain visual momentum

This was one of my major design philosophies during the development of this project: I didn't want there to be a single moment in the 59 seconds where nothing was moving. I wanted my video to grab the viewer's attention, and I feared that losing momentum in the visual movement would cause me to lose the viewer's interest. The song LACool by DJ Grumble happened to come on my Spotify radio right as I was reviewing the audio for the section I was editing, and I thought it fit so well that I bought it from iTunes on the spot and edited it in.

I finished my video on Monday, March 26th, turned it in to the Digital Media Lab, stumbled back to my dorm, and went to bed at 6:00 PM by accident.

The Video

(Above) The final video submission 

The winner wouldn't be announced until Wednesday, so for two days I nervously waited, until 6:00 PM on March 28th, when I sat on my bed in my dorm room refreshing the Digital Media Lab website every 7 seconds like a stalker on an ex's Facebook page, waiting for the winner to finally be posted. At 6:29 PM I got a call from an unrecognized number in Tallahassee, Florida, and almost didn't answer because I thought it was a sales call. Turns out it was Steve Acquah, the coordinator of the Digital Media Lab, who informed me that my video had won. Soon after, the Digital Media Lab website was updated with the announcement.

(Above) A screenshot taken of the announcement on the Digital Media Lab Website 

Thank You

Along with the raw joy and excitement came a sort of surreal disbelief. Once I got that phone call, those stressful weeks of work felt like they had flown by faster than I realized. I'm grateful not only for the reward but also for the experience. Making that video was a stressful nightmare, but it also forced me to push myself to my creative limits and challenge myself in so many ways. On a night when I would probably have just gone home and watched Netflix by myself, I sprinted around campus to meet up with and take photos of my friends. This project got me to get all my friends together and rent out the Vive in the DML, basically forcing me to play video games and have fun with the people I love. While the process of editing it all together drove me crazy, the journey is definitely going to be a highlight of my time at UMass.

I’m grateful to all of my friends who modeled for me, loaned me equipment, got dinner with me while I was stressing out over editing, played Super Hot VR with me, gave me advice on my audio, pushed me to not give up, and were there to celebrate with me when I won. I’m also immensely grateful to the staff and managers of the DML for providing me with this opportunity, as well as for their compliments and praise for the work I did. This was an experience that means a lot to me and it’s one I won’t soon forget. Thank you.

Epilogue

I picked up my prize the other day at the DML (see photo above the title of this article)! Unfortunately, I have a lot of work going on, so it’s going to be locked up in a safe place until that’s done. Still, it’s not like I could use it right now if I wanted to. My gaming PC hasn’t been touched in ages (since I don’t bring it with me to college) so I’m going to need to upgrade the GPU before I can actually set up the Vive with it. It’s a good thing there isn’t a spike in demand for high-end GPUs at the moment for cryptocurrency mining, right?

(Above) A visual representation of what Bitcoin has done to the GPU market (and my life)

…Oh.

Regardless of when I can actually use the prize I won, this experience was one I’m grateful to have had. The video I made is one I’m extremely proud of, and the journey I went on to create it is one I’ll think about for years to come.


SoFi the Robotic Fish

Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have created a Soft Robotic Fish (nicknamed SoFi) that is able to swim and blend in with real fish while observing them and gathering data. This remarkable bot is not only cool and adorable; it also paves the way for the future of lifelike artificial intelligence.

Think about it: we have already reached the point where we can create a robotic fish capable of fooling real fish into accepting it. Granted, fish aren't the smartest creatures on this planet, but they can usually tell when something is out of the ordinary and quickly swim away. SoFi, however, seems to be accepted as one of their own. How long will it take for us to create a robot that can fool more intelligent species? Specifically, how long will it be until Soft Robotic Humans are roaming the streets as if they weren't born yesterday? Perhaps more importantly, is this something that we actually want?

The benefits of a robotic animal like SoFi are obvious: it allows us to get up close and personal with these foreign species and learn more about them. This benefit of course translates to other wild animals like birds, bees, lions, etc. We humans can't swim with the fishes, roost with the birds, visit the hive with the bees, or roar with the lions, but a robot like SoFi sure can. So it makes sense to invest in this type of technology for research purposes. But when it comes to replicating humanity, things get a bit trickier. I'm pretty confident in saying that most humans in this world would not appreciate being secretly observed in their daily lives "for science." Of course, it's still hard to say whether or not this would even be possible, but the existence of SoFi and the technology behind it leads me to believe we may be closer than most of us think.

Regardless of its potentially concerning implications, SoFi is a truly amazing feat of engineering. If nothing else, these soft robots will bring an epic evolution to the nature documentary genre. For more information about the tech behind SoFi, check out the video at the top from MITCSAIL.


Building a Better Bracket: Beating the Odds with Machine Learning

Like most other fans of college basketball, I dedicated an unhealthy amount of time to the sport in the week after Selection Sunday (March 11th). It started with hours spent filling out brackets, researching rosters, injuries, and FiveThirtyEight's statistical predictions to fine-tune my perfect bracket, and ended with me watching around 30 games over the course of four days. I made it a full six hours into the tournament before my whole bracket busted. The three-punch combo of Buffalo (13) over Arizona (4), Loyola Chicago (11) beating Miami (6), and, most amazingly, the UMBC Retrievers (16) crushing the overall one seed and tournament favorite, UVA, spelled the end for my predictions. After these three upsets, everyone's brackets were shattered. The ESPN leaderboards looked like a post-war battlefield. No one was safe.

The UMBC good boys became the only 16th seed to beat a 1st seed in NCAA tournament history

The odds against picking a perfect bracket are astronomical, with published estimates ranging from 1 in 9.2 quintillion (if every game were a coin flip) to 1 in 128 billion (if you give the favorites their due). Warren Buffett offers $1 million a year for life to any Berkshire Hathaway employee who picks a perfect bracket. Needless to say, no one has been able to cash in on the prize. Picking a perfect bracket is nearly impossible, and is (in)famous for being one of the most unlikely statistical feats in gambling.
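The headline number is easy to reproduce: a 64-team bracket has 63 games, and treating each as a coin flip gives 2^63 possible brackets. The smaller published estimates come from models that give favorites better-than-even odds; the exact figure depends on the win probability you assume. A quick sketch, with the 70% figure below being an illustrative assumption rather than a real statistic:

```swift
import Foundation

// 64 teams -> 63 games. Each game has two outcomes, so a blind
// guess has one chance in 2^63 of being perfect.
let games = 63.0
let coinFlipBrackets = pow(2.0, games)
print(coinFlipBrackets)        // ≈ 9.22e18 — the "9.2 quintillion"

// Assume (hypothetically) the favorite wins 70% of the time.
// An all-favorites bracket is far more likely to be perfect,
// yet the odds are still in the billions.
let allFavorites = pow(0.7, games)
print(1.0 / allFavorites)      // ≈ 5.7 billion to one
```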

The Yin and Yang of March Madness

To make the chances of a perfect bracket somewhat more approachable, a competition has been set up to see who can beat the odds with machine learning. Hosted by Kaggle, an online platform for modeling and analytics competitions that was purchased by Google's parent company, Alphabet, the competition has entrants build models that predict which team will win each game based on prior data. Scoring rewards confidence: a model that picks a winner correctly with 99% confidence scores better than one that picks the same winner with 95% confidence, and so on, while confident wrong picks are penalized heavily. The prize is $100,000, split among the teams that make the top 3 brackets. Teams are provided with the results of every men's and women's tournament game since 1985, the year the tournament first started with 64 teams, along with every play in the tournament since 2009. Despite all this data, prediction is still very hard: in the five years the competition has been hosted, the best bracket predicted 39 games correctly. Many unquantifiable factors, such as hot streaks and team chemistry, play a large part in the difficulty, so it looks like we're still years away from having our computers pick the perfect bracket.
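The scoring rule behind that "confidence" remark is log loss, the metric these Kaggle tournament competitions have used: you submit a win probability for every possible matchup, and the penalty is the negative log of the probability you assigned to what actually happened. A small sketch, with the function name my own:

```swift
import Foundation

// Log loss for one game: y is 1 if team A won, 0 if it lost, and
// p is the predicted probability that team A wins. Confident and
// right costs almost nothing; confident and wrong costs a lot.
func logLoss(prediction p: Double, outcome y: Double) -> Double {
    let eps = 1e-15                     // avoid log(0) on 0%/100% picks
    let q = min(max(p, eps), 1 - eps)
    return -(y * log(q) + (1 - y) * log(1 - q))
}

print(logLoss(prediction: 0.99, outcome: 1))  // ≈ 0.01  right at 99%
print(logLoss(prediction: 0.95, outcome: 1))  // ≈ 0.05  right at 95%
print(logLoss(prediction: 0.99, outcome: 0))  // ≈ 4.61  wrong at 99%
```

That last line is why models hedge on toss-up games: a single UMBC-style upset called wrong at 99% confidence costs more than dozens of games called right.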