
[Sidebar] How to Resize a VirtualBox .vdi

Congratulations! You've made a virtual machine running your favorite Linux distro. But now you want to download a picture of your cat, only to find that you've run out of disk space.
Image: habrahabr.ru

Rather than free up space by deleting the other pics of Snuffles, you decide you'd rather just give the virtual machine more disk space. You'll quickly find that Oracle has not made this especially easy to do, but the process becomes simple if you follow these steps:

Open the Command Prompt on your Windows machine (open Start and type cmd).

You can then navigate to your VirtualBox installation folder. Its default location is C:\Program Files\Oracle\VirtualBox\

Once there, type this command to resize the .vdi file:

VBoxManage modifyhd LOCATION --resize SIZE

Replace LOCATION with the absolute file path to your .vdi image (just drag the .vdi file from File Explorer into your cmd window) and replace SIZE with the new size you want, measured in MB (1 GB = 1024 MB).
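For example, if your disk lived at the path below (a made-up path; yours will differ) and you wanted to grow it to 20 GB, the command would look something like this:

VBoxManage modifyhd "C:\VMs\MyLinuxBox\MyLinuxBox.vdi" --resize 20480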

Now your .vdi is resized, but the new space is still unallocated inside the virtual machine, so you'll need to resize the partition as well. To do this, download GParted Live and make a new virtual machine that boots from it. This simulates a live-CD boot from which you can modify your virtual partitions.

If your filesystem is ext4, like mine was when I did this, you'll need to delete the linux-swap partition located between your main partition and the unallocated space. Make sure you leave at least 4 GB of unallocated space so that you can add the linux-swap partition back later.

After you’ve resized your partition, you’ll be done. Boot into the virtual machine as normal and you’ll notice you have more space for Snuffles.

Image: wideopenpets.com

Is Artificial Intelligence like J.A.R.V.I.S. Possible?

If you are a fan of Marvel Comics or the Marvel Cinematic Universe, you are likely aware of J.A.R.V.I.S., Tony Stark's personal artificial intelligence (AI) program. J.A.R.V.I.S. helps Tony Stark reach his full potential as Iron Man by running operations and diagnostics on the Iron Man suit, as well as gathering information and running simulations. J.A.R.V.I.S. also has a distinct personality, sometimes displaying sarcasm and wit, no doubt programmed in by Stark. With artificial intelligence and machine learning developing at a breakneck pace, it's worth asking whether an AI like J.A.R.V.I.S. is even possible.

One of the most prominent AI programs in use right now is IBM Watson. Watson made its debut in 2011 as a contestant on Jeopardy! in a special broadcast against two of the show's best contestants, and won. Commercial use of Watson began in 2013. Watson is now being used for a variety of functions, from tracking elevator use in support of maintenance efforts to planning irrigation systems for farms. (For more stories about Watson's many jobs, look here.)

As far as hardware is concerned, Watson relies on a cluster of 90 IBM Power 750 servers built around 3.5 GHz POWER7 processors, with 16 terabytes of RAM across the cluster. This allows Watson to process the equivalent of one million books per second. The estimated cost of Watson's hardware was 300 million dollars.

When Watson competed on Jeopardy!, all of the information Watson had access to had to be stored in the machine's RAM, because it would not have been able to access it within a competitive time frame if it were stored on the machine's hard drives. Since Watson's bout on Jeopardy!, solid-state drives have become widespread; they allow frequently used information to be accessed at a faster rate than if the same information were stored on a standard hard drive. With further advances in memory and storage technology, information could be accessed at even faster rates.

IBM's Watson appears to be a step in the direction of an AI similar to J.A.R.V.I.S. With quantum computing as an expanding frontier, processing speeds could become even faster, making something like J.A.R.V.I.S. far more attainable. Personally, I believe such a feat is possible, and could even be achieved in our lifetime.


Which Computer Is Right for You: A Beginner’s Guide

People always ask me, “Are Macs better than PCs?” or “What kind of computer should I buy?” so I’m here to clear some confusion and misconceptions about computers and hopefully help you find the computer best suited to your purposes.

Computers can generally be separated into two large operating system groups: MacOS and Windows. There are also Linux users (Ubuntu being a popular distribution), but the majority of consumers will never use these operating systems, so I'll focus on the big two for this article. Computers can also be separated into two physical categories: desktops and laptops.

Desktops, as the name suggests, sit on top of (or under) your desk, and are great for a number of reasons. Firstly, they are generally the most cost-efficient. With the ability to custom-build a desktop, you're able to get the best bang for your buck. And even if you choose to buy a prebuilt, the cost differences nowadays between prebuilt and custom builds are small. Desktops can also be very powerful machines with the best performance, as they aren't constrained by physical size the way laptops are. Many laptop parts have to be altered to fit the limited space, but desktops have as much space as the case has to offer. More space within the case means bigger/more powerful parts, better ventilation for cooling, etc. Additionally, desktops are generally more future-proof. If a hard drive runs out of space, you can buy and install another. If your graphics card can't support modern games anymore, you can order one that fits your budget and just replace the old one. Overall, desktops are ideal… as long as you don't want to move them around a lot. A full setup consisting of a tower, monitor, and peripherals can be very heavy and inconvenient to move around, not to mention the many cables required to connect everything together. If you are looking for a good machine that will last for years, and you don't need to move it around often, then you might be looking for a desktop. I will go over the details of operating systems further down.

If you're looking for a portable machine, then you're looking for a laptop. But here too there's a lot of variety. You have Chromebooks, which are incredibly fast, light, and (importantly) cheap machines that use ChromeOS for very basic functionality. Unlike other OSes, ChromeOS is designed to be used while connected to the internet, with documents and files living in the cloud. The applications are limited to what's available in the Chrome Web Store. If all you need a laptop to do is browse the internet and edit things on Google Drive, then a Chromebook might be perfect for you.

Next are your middle-of-the-line to high-end laptops, which make up the majority of laptops. This is where you'll find your MacBooks and your ultrabooks: the all-around laptops suited to most tasks. This is what most people will prefer, as they can do the most while retaining portability. There is also a ton of variety within this group: touch screens, super-bendable hinges, different I/O ports, etc. Here, what it's going to come down to is personal preference. There are too many options to write about, but I encourage everyone to try out a number of different computers before deciding which ones they like best.

Lastly, I'd like to discuss operating systems, primarily MacOS and Windows. I did briefly mention ChromeOS, but that's only really for Chromebooks, and it's a very basic system. With MacOS, what people like is the convenience. Apple has created an "ecosystem" of devices such that, if you are a part of this ecosystem, everything works in harmony. MacOS is very user-friendly and easy to pick up, and if you own an iPhone, an Apple Watch, an iPad, or any other iOS device, you can connect it to your computer and keep everything in sync. iMessage, Photos, and iCloud are all there to keep your devices connected and make it super easy to swap between them. Windows doesn't have an "ecosystem," but what it lacks in user-friendliness it makes up for in versatility and user power. Windows is good at being customizable; you have a lot more freedom when it comes to making changes. Part of this comes down to the hardware it runs on. Mac devices have top-of-the-line build quality. They're constructed beautifully and are extremely good at what they do, but they come with a high price tag, and they're built in a way that discourages user modification like adding storage or memory. Windows laptops range from $150 well into the thousands for gaming machines, whereas the common MacBooks start near $1,000. If you're looking to game, Windows is also the way to go. If you aren't choosing a desktop, there are many gaming laptops for sale; although you won't find the same performance per dollar, they are laptops and therefore portable.

With this, hopefully you have everything you need to pick the perfect computer the next time you're shopping for one.


A [Mathematical] Analysis of Sample Rates and Audio Quality

 

Digital audio again? Ah yes… only in this article, I will set out to examine a simple yet complicated question: how does the sampling rate of digital audio affect its quality? If you have no clue what the sampling rate is, stay tuned and I will explain. If you know what the sampling rate is and want to know more about it, also stay tuned; this article will go over more than just the basics. If you own a recording studio and insist on recording every second of audio at the highest possible sampling rate to get the best quality, read on, and I hope to inform you of the mathematical benefits of doing so…

What is the Sampling Rate?

In order for your computer to be able to process, store, and play back audio, the audio must be in a discrete-time form. What does this mean? It means that, rather than the audio being stored as a continuous sound-wave (as we hear it), the sound-wave is broken up into a long sequence of individual points. This way, the discrete-time audio can be represented as a list of numerical values in the computer's memory. This is all well and good, but some work needs to be done to turn a continuous-time (CT) sound-wave into a discrete-time (DT) audio file; that work is called sampling.

 

Sampling is the process of observing and recording the value of a complex signal at uniform intervals of time. Figure 1(a) shows 'analog' sampling, where the recorded value is not modified by the sampling process, and figure 1(b) shows digital sampling, where the recorded value is quantized so it can be represented with a binary word.

During sampling, the amplitude (loudness) of the CT wave is measured and recorded at regular intervals to create the list of values that make up the DT audio file. The inverse of this sampling interval is known as the sample rate and has a unit of Hertz (Hz). By far, the most common sample rate for digital audio is 44100 Hz; this means that the CT sound-wave is sampled 44100 times every second.

This is a staggering number of data points! On an audio CD, each sample is represented by two bytes per channel; that means that one second of stereo audio takes up over 170 KB of space (44,100 samples × 2 bytes × 2 channels = 176,400 bytes). Why is all this necessary? you may ask…
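To make this concrete, here is a minimal Python sketch of the idea (the 440 Hz test tone and one-second duration are arbitrary choices for illustration, not anything prescribed by the CD format). It samples a sine wave 44,100 times per second and reports how much space the result would occupy as 16-bit stereo samples:

import math

SAMPLE_RATE = 44100   # samples per second (Hz)
DURATION = 1.0        # seconds of audio to generate
FREQ = 440.0          # frequency of the test tone in Hz (arbitrary)

# Measure the sine wave's amplitude at uniform intervals of 1/44100 of a second.
samples = [math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE)
           for n in range(int(SAMPLE_RATE * DURATION))]

print(len(samples))                   # 44100 discrete values for one second
print(len(samples) * 2 * 2, "bytes")  # 2 bytes/sample x 2 channels = 176400 bytes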

The Nyquist-Shannon Sampling Theorem

Some of you more interested readers may have already heard of the Nyquist-Shannon Sampling Theorem (some of you may also know this theorem simply as the Nyquist Theorem). The Nyquist-Shannon Theorem asserts that any CT signal can be sampled, turned into a DT file, and then converted back into a CT signal with no loss of information, so long as one condition is met: the CT signal must be band-limited below the Nyquist frequency. Let's unpack this…

Firstly, what does it mean for a signal to be band-limited? Every complex sound-wave is made up of a whole myriad of different frequencies. To illustrate this point, below is the frequency spectrum (the graph of all the frequencies in a signal) of All Star by Smash Mouth:

Smash Mouth is band-limited! How do we know? Because the plot of frequencies ends. This is what it means for a signal to be band-limited: it does not contain any frequencies beyond a certain point. Human hearing is band-limited too; most humans cannot hear any frequencies above 20,000 Hz!

So, I suppose then we can take this to mean that, if the Nyquist frequency is just right, any audible sound can be represented in digital form with no loss of information? By this theorem, yes! Now, you may ask, what does the Nyquist frequency have to be for this to happen?

For the Nyquist-Shannon Sampling Theorem to hold, the sample rate must be greater than twice the highest frequency being sampled; equivalently, the Nyquist frequency (half the sample rate) must lie above the highest frequency in the signal. For sound, the highest audible frequency is 20 kHz, and thus the minimum sample rate required to capture it with no loss of information is… 40 kHz. What was that sample rate I mentioned earlier? You know, the one so common that basically all digital audio uses it? It was 44.1 kHz. Huzzah! Basically all digital audio is a perfect representation of the original sound it is representing! Well…

Aliasing: the Nyquist Theorem’s Complicated Side-Effect

Just because we cannot hear sound above 20 kHz does not mean it does not exist; there are plenty of sound-waves at frequencies higher than humans can hear.

So what happens to these higher sound-waves when they are sampled? Do they just not get recorded? Unfortunately no…

A visual illustration of how under-sampling a frequency results in some unusual side-effects. This unique kind of error is known as ‘aliasing’

So if these higher frequencies do get recorded, but frequencies above the Nyquist frequency cannot be sampled correctly, then what happens to them? They are falsely interpreted as lower frequencies and superimposed over the correctly sampled frequencies. The distance between the high frequency and the Nyquist frequency governs what lower frequency these high-frequency signals will be interpreted as. To illustrate this point, here is an extreme example…

Say we are trying to sample a signal that contains two frequencies: 1 Hz and 3 Hz. Due to poor planning, the Nyquist frequency is selected to be 2 Hz (meaning we are sampling at a rate of 4 Hz). Further complicating things, the 3 Hz cosine-wave is offset by 180° (meaning the waveform is essentially multiplied by -1). So we have the following two waveforms….

1 Hz cosine waveform
3 Hz cosine waveform with 180° phase offset

When the two waves are superimposed to create one complicated waveform, it looks like this…

Superimposed waveform constructed from the 1 Hz and 3 Hz waves

Pretty, right? Well unfortunately, if we try to sample this complicated waveform at 4 Hz, do you know what we get? Nothing! Zero! Zilch! Why is this? Because when the 3 Hz cosine wave is sampled and reconstructed, it is falsely interpreted as a 1 Hz wave! Its frequency is reflected about the Nyquist frequency of 2 Hz. Since the original 1 Hz wave is below the Nyquist frequency, it is interpreted with the correct frequency. So we have two 1 Hz waves but one of them starts at 1 and the other at -1; when they are added together, they create zero!

Another way we can see this phenomenon is by looking at the graph. Since we are sampling at 4 Hz, we are observing and recording four evenly-spaced points between zero and one, between one and two, and so on… Take a look at the above graph and try to find four evenly-spaced points between zero and one (but not including one). You will find that every single one of these points corresponds with a value of zero! Wow!
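If you'd rather check this numerically than squint at a graph, here is a minimal Python sketch of the same scenario (a 1 Hz cosine plus a phase-inverted 3 Hz cosine, sampled at 4 Hz); it confirms that every sample of the combined waveform comes out as zero:

import math

SAMPLE_RATE = 4     # Hz, so the Nyquist frequency is 2 Hz
NUM_SAMPLES = 8     # two seconds' worth of samples

samples = []
for n in range(NUM_SAMPLES):
    t = n / SAMPLE_RATE
    wave_1hz = math.cos(2 * math.pi * 1 * t)            # below Nyquist: sampled correctly
    wave_3hz = math.cos(2 * math.pi * 3 * t + math.pi)  # above Nyquist: aliases to an inverted 1 Hz wave
    samples.append(wave_1hz + wave_3hz)

# Every sampled value is zero (up to floating-point rounding error).
print(all(abs(s) < 1e-9 for s in samples))   # True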

So aliasing can be a big issue! However, designers of digital audio recording and processing systems are aware of this and actually provision special filters (called anti-aliasing filters) to get rid of these unwanted effects.

So is That It?

Nope! These filters are good, but they're not perfect. Analog filters cannot just chop off all frequencies above a certain point; they have to attenuate them more or less gradually. So designers have a choice: either leave some high frequencies in and risk distortion from aliasing, or roll off audible frequencies before they're even recorded.

And then there's noise… Noise is everywhere, all the time, and it never goes away. Modern electronics are rather good at reducing the amount of noise in a signal, but they are far from perfect. Furthermore, noise tends to be mostly present at higher frequencies: exactly the frequencies that end up getting aliased…

What effect would this have on the recorded signal? Well, if we believe that random signal noise is present at all frequencies (above and below the Nyquist frequency), then our original signal would be masked by a layer of infinitely loud aliased noise. Fortunately for digitally recorded music, the noise does stop at very high frequencies due to transmission-line effects (a much more complicated topic).

What can be Learned from All of This?

The end result of this analysis on sample rate is that the sample rate alone does not tell the whole story about what's being recorded. Although 44.1 kHz (the standard sample rate for CDs and MP3 files) may be able to record frequencies up to 22.05 kHz, in practice a signal being sampled at 44.1 kHz will have distortion in the higher frequencies due to high-frequency noise beyond the Nyquist frequency.

So then, what can be said about recording at higher sample rates? Some new analog-to-digital converters for music recording sample at 192 kHz. Most, if not all, of the audio recording I do is done at a sample rate of 96 kHz. The benefit of recording at the higher sample rates is that you can record high-frequency noise without it causing aliasing and distortion in the audible range. With 96 kHz, you get a full 28 kHz of bandwidth beyond the audible range where noise can exist without causing problems (a 48 kHz Nyquist frequency minus the 20 kHz limit of human hearing). Since signals with frequencies up to around 9.8 MHz can exist in a 10-foot cable before transmission-line effects kick in, this is extremely important!

And with that, a final correlation can be predicted: the greater the sample rate, the less noise will end up aliased into the audible spectrum. To those of you out there who have insisted that the higher sample rates sound better, maybe now you'll have some heavy-duty math to back up your claims!


Should smart watches be allowed in professional sports?

With the advent of smart technology, the ease with which we access information is changing. The smart watch puts much of what a person does on their phone onto their wrist, and onto the internet. While we make these technological advances, some things remain constant, like professional sports. With the exception of some minor rule changes here and there, many of the most-watched games in the U.S. have remained the same. Recently, the Red Sox allegedly used smart watches to steal signs from the Yankees, which raises an important question: should smart watches be allowed in professional sports?

Most smart watches can monitor the wearer's heart rate. This data could be useful for monitoring players' condition so the coach knows when to make substitutions, but it could also be used for medical research. If every professional athlete wore a smart device while they played in games and did workouts, the amount of data that could be made available to medical professionals in one year would be astounding. This data could lead to a better understanding of the human body at work than we have now.

While wearing smart watches in professional sports holds potential societal gain, the reality of the situation is not as optimistic. Many sports involve physical contact, which creates a risk of either the smart watch breaking or increased injury due to contact with a smart watch on a player's wrist. There is also an increased risk of cheating if players and coaches can view text messages on their wrists.

In my opinion, sports would be better off without smart technology becoming part of any game. The beauty of sporting matches is that they are meant to display the raw athletic abilities of players in competition. Adding smart technology to the game could lead to records that have asterisks by them, similar to home run records set by players who used steroids.


An Extensive Guide to Keyboard Shortcuts

In this day and age, it's safe to assume that most of you know a thing or two about how to use a computer, one of those things being keyboard shortcuts. Keyboard shortcuts, for the uninitiated, are really handy combinations of buttons, usually two or three, that perform certain functions which would otherwise take somewhat longer to do manually with just the mouse. For example, highlighting a piece of text and pressing Control (CTRL) + C copies the text to your clipboard, and subsequently pressing CTRL + V pastes that copied text wherever you're entering text.

Most people tend to know copy and paste, as well as a handful of other shortcuts, but beyond those is an abundance of shortcuts that can potentially save time and make your computing experience that much more convenient. In this article, I'll go over some commonly known keyboard shortcuts, as well as several that are most likely less well known.

Most of these keyboard shortcuts are primarily for Windows, although some also apply on a Mac, usually by substituting the Command key for CTRL.

General shortcuts:

CTRL + C – As mentioned above copies any highlighted text to the clipboard.

CTRL + V – Also mentioned above, pastes any copied text into any active text field.

CTRL + X – Cuts any highlighted text; as the wording suggests, instead of just copying the text, it will “cut” it and remove it from the text field. Essentially rather than copying, the text will be moved to the clipboard instead.

CTRL + Z – Undo an action. An action can be just about anything; since this is a fairly universal shortcut, an action can be what you last typed in Microsoft Word, a line/shape drawn in Photoshop, or just any “thing” previously done in an application.

CTRL + Y – Redo an action. For example, if you changed your mind about undoing the last action, you can use this shortcut to bring that back.

CTRL + A – Selects all items/text in a document or window, i.e. highlights them.

CTRL + D – Deletes the selected file and moves it to the Recycle Bin.

CTRL + R – Refreshes the active window. Generally you’ll only use this in the context of Internet browsers. Can also be done with F5.

CTRL + Right Arrow – Moves the cursor to the beginning of the next word.

CTRL + Left Arrow – Moves the cursor to the beginning of the previous word.

CTRL + Down Arrow – Moves the cursor to the beginning of the next paragraph.

CTRL + Up Arrow – Moves the cursor to the beginning of the previous paragraph.

Alt + Tab – Displays all open applications; while holding down Alt, each press of Tab cycles through the open applications from left to right.

CTRL + Alt + Tab – Displays all open applications. Using the arrow keys and Enter, you can switch to another application.

CTRL + Esc – Opens the Start menu; can also be done with the Windows key.

Shift + Any Arrow Key – When editing text, selects text in the direction corresponding to the arrow key, character by character.

CTRL + Shift + Any arrow key – When editing text, selects a block of text, i.e. a word.

CTRL + Shift + Esc – Opens Task Manager directly.

Alt + F4 – Close the active item or exit the active application.

CTRL + F4 – In applications that let you have multiple documents open at once, closes the active document instead of the entire application.

Alt + Enter – Displays the properties for a selected file.

Alt + Left Arrow – Go back, usually in the context of Internet browsers.

Alt + Right Arrow – Go forward, same as above.

Shift + Delete – Deletes a selected file without moving it to the Recycle Bin first, i.e. deletes it permanently.

Windows Logo Key Shortcuts:
Windows logo key + D – Displays and hides the desktop.

Windows logo key + E – Opens File Explorer.

Windows logo key + I – Opens Windows Settings.

Windows logo key + L – Locks your PC or switches accounts.

Windows logo key + M – Minimizes all open windows/applications.

Windows logo key + Shift + M – Restores minimized windows/applications on the desktop.

Windows logo key + P – When connecting your computer to a projector or second monitor, opens a menu to select how you want Windows to be displayed on the secondary display. You can select from PC screen only (uses only the computer's screen), Duplicate (shows what is on your computer screen on the secondary display), Extend (extends the desktop, allowing you to move applications/windows to the secondary display while keeping the primary screen's content off of it), and Second screen only (only the secondary display will be used).

Windows logo key + R – Opens the Run dialog box. Typing the name of an application or file and pressing Enter will open it, which is useful for troubleshooting scenarios.

Windows logo key + T – Cycles through open applications on the taskbar; pressing Enter will switch to the selected application.

Windows logo key + Comma (,) – Temporarily peeks at the desktop.

Windows logo key + Pause Break – Displays the System Properties window in Control Panel. You can find useful information here about your computer, such as the version of Windows you are running, general info about the computer's hardware, etc.

Windows logo key + Tab – Opens Task View, which is similar to CTRL + Alt + Tab.

Windows logo key + Up/Down – Maximizes or minimizes a window/application, respectively.

Windows logo key + Left/Right – Snaps a window to the left or right half of the screen.

Windows logo key + Shift + Left/Right – When you have more than one monitor, moves a window/application from one monitor to another.

Windows logo key + Space bar – When you have more than one keyboard/input method installed (usually for typing in different languages), switches between installed input methods.

That just about covers the most common keyboard shortcuts you can use on a Windows computer. The list goes on, however: there are many more keyboard shortcuts and functions out there, and the list grows even further when you take into account that certain applications have their own shortcuts.

You might end up never using half of the keyboard shortcuts on this list, much less all keyboard shortcuts in general, favoring the good old-fashioned way of using the mouse and clicking, and that's fine. The amount of time you save using a keyboard shortcut versus clicking your way through things is arguably negligible, and most of the time it's just a quality-of-life preference at the end of the day. But depending on how you use your computer and what kind of work you do on it, chances are picking up some of these keyboard shortcuts could save you a lot of frustration down the line.


How Do Games Get on Steam?

While it may seem like a strange question to ask, there is an interesting history behind the largest storefront for video games, online or brick-and-mortar. The control Steam exerts over its marketplace has wide-ranging implications for both consumers and developers. The availability of indie games is a relatively recent development in Steam's history; so are the current trends pushing the near-exponential growth of the Steam library.

Back when Steam launched, the library selection was very limited, relying on the IP (intellectual property) that Valve (Steam's parent company) had built up over the previous half-decade. For the first two years of Steam's life you could only find games created and published by Valve (Half-Life and Counter-Strike 1.6 being the most notable), but in late 2005 that changed as Steam inked a deal with Strategy First, a small Canadian publisher, and games started flowing onto the service. For the next five years the Steam library remained fairly limited, as generally only large or influential publishers were able to get their games on Steam. This created tension in the Steam community, as many people wanted indie games to be featured and make their way onto the storefront. The tension broke when Steam agreed to allow indie games on the platform.

By 2010, the issues were obvious: Steam had no way to discern which indie games people wanted and which were not suitable for the platform. Two years later, in response to these concerns, Steam implemented the Greenlight system, designed to get quality indie games onto Steam. Initially Greenlight was received positively. Black Mesa (a popular mod that ported Valve's original Half-Life to the Half-Life 2 engine) and other quality releases inspired confidence. All seemed good. Fast forward to late 2015: several disturbing trends had begun to emerge.

An enterprising "developer" realized that you could buy assets from the Unity Asset Store and, with very minimal effort, create a "game" that you could get onto Greenlight. These "games" were often just stock Unity assets with AI zombies that would slowly follow you around, providing little to no engaging content and hardly qualifying as games. They should never have made it through Greenlight, but the developers got creative in getting people to vote for them. Some would give away "review" keys pending a vote or good review on their page, while others promised actual monetary profit through Steam's trading card economy.

Asset flips are just one example of how Greenlight was exploited (not to mention the cartel-like behavior behind some of the asset flippers). By 2016 Steam was in full damage control: as the effects of Greenlight became apparent, the curated garden that Steam once was became overgrown and flooded with sub-par games. So overabundant was the flow of content that nearly 40% of Steam's whole library was released in 2016 alone. Thirteen years of content control and managing customers' expectations were nullified in the span of a year. (The uptick began in 2014, but 2016 was the real breaking point.)

Steam, now in damage-control mode, decided to abandon content control in favor of an open marketplace that uses algorithms to recommend games to consumers. This "fix" has only hidden the sheer number of sub-par games that now make up most of the Steam library. And while an algorithm can recommend games, it will often end up recommending the same types of games, creating an echo-chamber effect: you are only recommended the games you express interest in, not necessarily those that would appeal to you the most.

In 2017, Steam abandoned Greenlight in favor of Steam Direct, an updated method of allowing developers to publish games, this time without community interaction. Steam re-assumed the mantle of gatekeeper, taking back responsibility for quality control, albeit with standards so low that one can hardly call it vetting. (Some approved games don't even include an .exe in the download.)

 


Restoring a MacBook with an Erased Hard Drive


If you're anything like me, you will accidentally wipe your MacBook's SSD one day (or already have). It may seem like you just bricked your MacBook, but luckily there is a remedy.

The way forward is to use the built-in "internet recovery" mode, which can be triggered by holding "cmd + R" on startup.

There is a bit of a catch: if you do this straight away, there is a good chance that the Mac will get stuck here and throw up an error – error -3001F, in my personal experience. This tends to happen because the Mac assumes it is already connected to Wi-Fi (when it's not) and gives an error after it fails to reach Apple's servers. If instead your MacBook lets you select a Wi-Fi network during this process, you're in the clear and can skip the next paragraph.

Luckily, there is another way to connect: via Apple's boot menu. To get there, power the computer on and, very soon after pressing the power button, hold the Option key. Eventually you will see a screen where you can pick a Wi-Fi network.

Unfortunately, if you're at UMass, eduroam (or UMASS) won't work; however, you can easily connect to any typical home Wi-Fi network or a mobile hotspot (although you should make sure you have unlimited data first).

Once you're connected, you want to hit "cmd + R" from that boot screen. Do not restart the computer. If you had been able to connect without the boot menu, you should already be in internet recovery and do not need to press anything.

Now that the Wi-Fi is connected, you need to wait. Eventually you will see the MacBook's recovery tools. The first thing you need to do is open Disk Utility, select your MacBook's hard drive, and hit Erase – this may seem redundant, but I'll explain in a moment. Then go back to the main recovery menu by closing Disk Utility.

Unless you created a Time Machine backup, you'll want to pick the "Reinstall Mac OS X" option. After clicking through for a bit, you will see a page asking you to select a drive. If you properly erased the hard drive a few moments before, you will be able to select it and continue on. If you hadn't erased the drive again, there is a good chance no drive will appear in the drive selection. To fix that, all you have to do is erase the drive again with the Disk Utility mentioned earlier – the one catch is that you can only get back to the recovery tools if you restart the computer and start internet recovery again, which, as you may have noticed, is a slow process.

Depending on the age of your MacBook, there is a solid chance that you will end up with an old version of Mac OS X. If you have two-step verification enabled, you may have issues updating to the latest version.

In my own experience, OS X Mavericks will not allow you to log in to the App Store if you have two-step verification enabled – but I would recommend trying; your luck could be better than mine. The reason we need the App Store is that it is required to upgrade to High Sierra, the current version of the operating system.

If you were unable to log in, there is a workaround: OS X Mavericks will let you make a new Apple ID, which luckily is free. Since you will be creating this account purely for the sake of updating the MacBook, I wouldn't recommend using your primary email or adding any form of payment to the account.

Once you're logged in, you should be free to update, and after some more loading screens you will have a fully up-to-date MacBook. The last thing remaining (if you had to create a new Apple ID) is to log out of the App Store and log in with your personal Apple ID.


Smartphone Fingerprint Scanners

The next generation of smartphone security is here! Mostly transparent fingerprint sensors can now be embedded behind or under the screen. There has been a huge push in phones this year to make the bezels as tiny as possible, which of course means finding a new place for the fingerprint scanner. Many phones have moved it to the back. LG was the first to do it, and it was relatively well executed. Samsung followed suit, and many complain that theirs is too hard to tell apart from the camera bump. The Pixel and Pixel 2 have one on the back that works well and supports gestures! To minimize the bezel, the iPhone X removed the scanner altogether and instead hid a plethora of sensors inside its iconic notch to usher in the era of Face ID.

 

But now two Android phones are being released that place the fingerprint scanner, almost completely invisibly, under the screen. The first, the Vivo X20 Plus UD, won an award for best in show at CES 2018. The sensor is a small pad where a traditional scanner would be. Any time that area of the phone is touched, it flashes brightly, and the sensor reads the light reflected off of your finger. Check it out here:

Vivo's concept phone takes the idea a bit further, with the fingerprint scanner occupying a larger pad, allowing you to touch anywhere on roughly a third of the screen. This concept phone also pushes the bezel-less design to another level by moving the selfie cam onto a piece of plastic that extends in and out of the top of the phone. Is this the future?

 

Limitations:

It's a bit "slow" right now (it takes about a second). The cool animation should be enough to hold you over. But keep in mind this is also the first generation of the product; it will only get quicker with time.

The phone needs to have an OLED screen. While not uncommon, many phones, iPhones included, have LCD displays. OLED screens allow individual pixels to turn on and off, rather than lighting the whole screen or none of it, as an LCD's backlight requires.

And finally, yes, under very specific lighting conditions and viewing angles, you can see the sensor through the screen.