Categories
Software

5G – Can it change the world?

The world has seen several generations of wireless technology, and now comes the fifth generation – 5G. Each generation has improved on the last, and 5G is one of the fastest and most powerful wireless technologies the world has seen.

 

Most phones today support 4G, so the high speeds you currently enjoy are delivered over a 4G network. 5G will provide an even greater speed boost. Statistically, the average 4G speed is about 16.9 megabits per second (Mbps), according to OpenSignal, while 5G promises to deliver gigabit speeds (>1 Gbps) – a leap of roughly sixty-fold. That jump will likely allow users to stream not only HD but 4K HDR content and much more with ease, all thanks to its speed.

Even though speed is a great part of 5G, 5G is not all about speed. The new technology also changes the number of cell sites required for coverage and the number of devices that can connect to a single cell site. As technology advances, the number of devices each person owns increases; furthermore, with radical technological changes in cars (like self-driving cars), vehicles too need connectivity to the network. As a result, more devices need connectivity in smaller regions. A 5G cell site will be able to handle the connectivity of more devices in a small area; however, to provide high speed to that many devices, more 5G cell sites are needed.

A current issue with today’s networks is latency. Latency is defined as “the delay before a transfer of data begins following an instruction for its transfer”. One of the goals of 5G is to reduce latency. Reduced latency can provide an improved experience for gamers and in virtual reality. Furthermore, latency becomes a very important factor in the automotive industry. In the future, cars may communicate with each other over the 5G network, and these conversations can prevent crashes when incorporated into crash-avoidance systems. Reduced latency could be very effective in preventing accidents, especially with the upcoming technology of self-driving cars.

Categories
Hardware iOS Mac OSX

What’s New With AirPods 2?

Apple’s AirPods have quickly become the best-selling wireless headphones and are now the second-best-selling Apple product. The small white buds have quickly become ubiquitous across the U.S. and are many people’s go-to wireless earbud option. This week, Apple refreshed the AirPods with a newer model, giving them additional features. These new second generation AirPods look identical to the first generation on the outside, but on the inside much has changed. Utilizing Apple’s new H1 chip (as opposed to the W1 chip inside the first generation), the new AirPods are able to pair with your iPhone more quickly than ever, and are now able to switch between devices in a much shorter time frame (addressing a common complaint with the first generation AirPods). Additionally, the new AirPods offer lower latency, which means audio will be more in sync with videos and games. Battery life has also seen an improvement, with talk time now up to 3 hours on a single charge.

Perhaps the biggest feature of these new AirPods does not have anything to do with the earbuds themselves. The case that the new AirPods ship with is now wireless-charging enabled. This means that AirPods can now be charged wirelessly using any Qi-enabled wireless charging pad. Additionally, the new AirPods with Wireless Charging Case will be compatible with Apple’s upcoming AirPower Mat, which will charge an iPhone, Apple Watch, and AirPods, all wirelessly. For those of you with first generation AirPods, don’t fret! Apple is looking to share the wireless charging features with all AirPods owners. The Wireless Charging Case is cross-compatible with both generations of AirPods, and is available for separate purchase for a reduced price. This means that if you already own a pair of AirPods, you are able to purchase the new Wireless Charging Case individually and use it with your first generation AirPods.

With the continued success of AirPods and the continued removal of analog headphone ports from mobile devices, the wireless headphone market will continue to evolve rapidly for the foreseeable future. It will be interesting to see what features Apple adds to future AirPods to entice customers to keep purchasing them, and how competitors in the space improve their own products to compete.

Categories
Hardware Software Web

Cryptocurrency – Why decentralization is a big deal.


Cryptocurrencies have taken a seemingly permanent foothold in the world of technology and banking; more and more people are reaching out and investing or making transactions with Bitcoin and similar online coins. The potential impact that these decentralized coins have on our society is enormous for laypeople and tech enthusiasts alike.

Why is decentralization a big deal?

Throughout history, from the great Roman Empire to the modern-day United States, money has been backed, affiliated, printed, and controlled by a governing body of the state. Artificial inflation rates, adjustable interest rates, and rapid economic collapses and booms are all side effects of a governing body with an agenda controlling the money and its supply.

Bitcoin, for example, is one of many online cryptocurrencies, and has no official governing entity. This is completely uncharted territory: not only is it not being manipulated artificially, but it is not associated with any governing body or any regulations and laws that may come with one. The price is determined solely by the open market – supply and demand.

No other currency has ever been free of a governing body and state like cryptocurrencies are today. The major effect of this is what it will do to the banking industry. Banks rely on governments to control interest rates, and they rely on there being a demand for money, specifically a demand for money to be spent and saved. Banks are intertwined with our identity: it is assumed that everyone has a checking account and is a member of a large bank, and with creating a bank account comes the forfeiture of much of our privacy and personal information. The opportunity to choose whether or not to be part of a bank – and further, to be your own bank and hold your own cryptocurrencies in your own locked vault – is a privilege none of our ancestors were ever granted.

The implications of a mass of people deciding to be their own bank are catastrophic for banking entities. Purchasing and transacting will become more secure and more private. People will not be able to be tracked by where they swiped their credit card, as Bitcoin addresses are not, by default, tied to real-world identities. The demand for banks will go down and change the entire workings of the very foundation of our government – if enough people choose to take this route.

What’s the catch?

A heated discussion is currently underway about the usability of cryptocurrency in today’s world. This topic is under heavy scrutiny, as it will ultimately determine whether cryptocurrencies can become a major player in today’s economy.

The cons of cryptocurrency currently lie in its usability for small and/or quick transactions in today’s society. In order for Bitcoin to be used, it must be supported by both the buyer and the seller. That means that business owners must have a certain threshold of tech-savviness to even entertain the thought of accepting bitcoin as a payment.

Bitcoin transaction visualization

In conjunction with needing to be supported on both ends, the fees for transacting are determined by how quickly the transaction needs to “go through” the network – see this article on how bitcoin transactions work on the tech side – and how much data the transaction takes up. For example, a $100 transaction that needs to reach the other person within 20 minutes will likely be significantly more expensive than a $100 transaction that can arrive within a 24-hour window. This spells trouble for small transactions, like those at your local coffee shop. If a coffee shop wants to accept bitcoin, it has two options: take the gamble and allow a longer period of time for transactions to process – running the risk that someone never actually sends a transaction and skims a free service – or require a quick 20-minute transaction with higher fees for the buyer, and in turn a possible drop in sales via bitcoin.

The last point is crucial to understanding and predicting the future of cryptocurrencies in our world. If transaction fees and confirmation times are lowered and made more efficient, Bitcoin will almost inevitably take a permanent place in our society, and perhaps become the most used currency – changing the game and freeing money from regulation, agendas, and politics.

Categories
Operating System

My Phone’s Battery Drains Too Fast! Let’s Fix That.

It seems to me that my phone’s battery drains way too fast sometimes. I use it semi-regularly throughout the day, but still, by evening I’m at 15% when I think there’s no good reason for me to be. Fortunately, there’s an explainable reason this happens. Let’s first take a look at why the phone needs power:


Everything your phone does requires what’s called a process. A process is all the calculations and tasks the phone has to do in the background so that you can enjoy it the way it was meant to be used. Processes can build up quickly, especially if, like me, you have a lot of apps on your phone that you switch between.

For instance, your phone is making sure you can receive calls; there’s a process for that. It is checking that the screen is at the correct brightness; there’s a process for that. It is looking out for new text messages or SnapChats or Facebook notifications or Instagram updates. They all require processes and they’re all running even when you lock the screen.

I have some tips that will allow your battery to remain as charged as possible:

1. Disable the fancy settings.

This is one of the easiest ways to increase battery life. Your phone came to you with all sorts of features that, on the surface, are fun to use and make your experience better. However, they all require processes that will eat away at your battery life. Fancy settings include, but are not limited to: Bluetooth, location services, auto-rotate, auto-brightness, NFC, Hey Siri/OK Google, and gestures.

2. Lower the brightness.

I know, I know, you want to be able to see your screen in its most amazing clarity. But that requires power, unfortunately. Setting the screen to a low brightness when your surroundings are dark will help you conserve power. The screen is one of the most power-draining parts of the phone because of the energy required to light it up. If you can handle a dimly lit display, you’ll really reduce battery consumption.

3. Close apps not in use.

The apps you open throughout the day have an impact on battery life after you’re done using them. Try to remember to close all the apps frequently.

4. Uninstall apps you don’t use.

Some apps have 24/7 processes to check for notifications. SnapChat and Facebook are examples of these. If you have other apps like them that you simply don’t use anymore, uninstall them to make sure they aren’t draining power unnecessarily.

5. Keep a battery bank with you.

If all else fails, having a battery bank with you will allow you to charge your device on the go.

Categories
Apps Hardware Security

Data Backups

Broken laptops happen to anyone and everyone, and they generally choose the least convenient time to break down. Whether it’s right at the beginning of an online test, as soon as you finish a long and important paper, or just when you’ve finished all your work and really want to watch Netflix, your laptop seems to know exactly when you least want it to break. However, while a ruined Netflix session might be unfortunate, there’s not much worse than losing all of your files.

Nowadays computers are used to store everything from irreplaceable home movies to 100-page-long thesis papers, and backing up data is more important than ever. If your computer crashes, there’s no guarantee that your data will still be there if it turns on again. The best way to save yourself some heartbreak and frustration is to keep a regular backup of your data, or even two (or three if it’s something as important as your thesis!). For someone who barely uses their laptop, backing up once a month might be plenty. However, anyone who regularly uses their laptop to write or edit documents (which is the case for most students) should be backing up their machine at least once a week, if not more frequently.

So how and where can you back up your data? Well, there are a few popular options, namely an external drive or the cloud.

External

For external drives, 1TB is a standard size, although you might want to get a bigger one if you have a really large amount of files that you want to back up (or a million photos and videos). Some popular brands are Seagate, Western Digital, and Toshiba and they run about $50 for 1TB drives. Also be sure to get one that has USB 3.0, as that will increase the speed of the data transfer.


Cloud

UMass provides unlimited secure online storage through Box. With Box you are able to securely store and share your files online, so that they can be accessible through multiple devices and so that you won’t lose them if your laptop decides suddenly to stop working. To read more about Box or get started with backing up your files you can go to https://www.umass.edu/it/box.


Categories
Windows

How to Add Languages to Your Windows 10 Keyboard

Are you beginning to type in a foreign language? Do you often find yourself copy-and-pasting special characters like é and wish there was an easy shortcut? Thankfully, Windows 10 allows users to easily add and switch between different languages without having to buy a separate physical keyboard.

Personally, I often use the French and Japanese keyboards on my laptop. The French keyboard allows me to quickly enter letters with diacritics (à, ê, ï, etc.). The Japanese keyboard automatically translates Latin characters into hiragana (ひらがな), katakana (カタカナ), or kanji (漢字).

The following instructions will help you add new languages to Windows 10.

  1. Navigate to Windows Settings by clicking on the gear on the left side of the Start Menu.
  2. Click on “Time & Language”, then click on “Region & language” in the left sidebar.
  3. Under “Languages”, click “Add a language”.
  4. Find the language that you would like to add. After clicking on it, you may be asked to specify a regional dialect. You will be returned to the “Region & language” page.

Once you have followed these steps, a new icon will appear next to the date and time on the bottom-right of your screen. Most likely, it will say “ENG” for English, the current keyboard language. Click on this icon to open a window listing the currently added languages. From here, you can select a language to change your keyboard’s settings. You may also hold down the Windows logo key and press Space to quickly change languages.

By default, some languages use a different keyboard layout than the QWERTY layout used for US English keyboards. Once you have switched to the new language, test it out by typing in Word, Notepad, or any other program that allows you to enter text. If the keys you type do not match the letters on the screen, the following instructions can help you fix this issue.

  1. On the “Region & language” page, under “Languages”, click the language you just added, then click “Options”.
  2. Scroll down to “Keyboards”, then click “Add a keyboard”.
  3. Scroll down to “United States-International” and click on it. This keyboard follows the QWERTY layout, but also supports some special characters in other languages.
  4. Under “Keyboards”, click the other keyboard, then click “Remove”.

Congratulations! You have now added another language’s keyboard to your computer. Feel free to add as many additional languages as you would like.

Here are a few diacritics you can type using the United States-International keyboard:

  • Acute accent (é) – Type an apostrophe ('), followed by a letter.
  • Grave accent (à) – Type a grave accent (`), followed by a letter.
  • Diaeresis (ü) – Type a double quote (") by pressing Shift + ', followed by a letter.
  • Circumflex (î) – Type a circumflex/caret (^) by pressing Shift + 6, followed by a letter.
  • Tilde (ñ) – Type a tilde (~) by pressing Shift + `, followed by a letter.
Categories
Operating System

3D Printing: A Multitude of Machines & Materials-SLA/DLP Printing

3D printing comes in more forms than you may realize. In a previous article we focused on FDM (Fused Deposition Modeling) 3D printing, the most common and popular form of 3D printing. I’d like to introduce you to a more complex and precise method of 3D printing which is also available to consumers. Let’s talk about Stereolithography (SLA) and Digital Light Processing (DLP) 3D printing.

The basic premise of both processes is that a photosensitive resin is selectively hardened and adhered to a gradually moving platform. Let’s break that down a bit, shall we? Like FDM printing, SLA and DLP printing work on the premise of building up layer after layer of material in order to create an object. Unlike FDM printing, which takes solid plastic, melts it into a liquid, then cools it back into a solid, SLA and DLP printing turn a liquid resin into a solid using light. Both SLA and DLP printing use some form of light to harden their photosensitive resin. SLA uses a laser to draw out each layer in a sort of winding path, while DLP exposes an entire layer of a model to the light at one time using a specialized projector. If you are interested in the intricacies of the two processes, I suggest looking at this article from Formlabs.

Seen above: The Form 2, an example of an SLA printer. https://formlabs.com/3d-printers/form-2/

Let’s talk materials. Whereas FDM printing can print in a variety of plastics and hybrid filaments, SLA and DLP printers are far more limited. The resins used in SLA and DLP can be had in many generic colors, and in a few different transparencies, but “exotic” resins akin to metal/wood hybrid FDM filaments have yet to become available.

How about print area? Most consumer-available SLA/DLP printers have print areas noticeably smaller than their FDM counterparts. In general, hobbyist FDM printers (sub-$1000 range) have print areas from 4”x4”x4” to 8”x8”x8”, while most consumer resin printers are in the ballpark of 4”x4”x4” to 6”x6”x6”. Note that these measurements are by no means exact. Resin printers also often have the interesting quality of rectangular print areas (as opposed to the more common square print areas). If you want to print anything massive, stick to FDM; your sanity and wallet will thank you later. You don’t need the amount of detail that resin printing offers on something larger than a softball, which is why resin printing is used mostly for very intricate work.

Seen above: The AnyCubic Photon, an example of a DLP resin printer. http://www.anycubic3d.com/products/show/1359.html

Another consideration with resin printing is its higher cost compared to FDM. Though comparing FDM and resin printing is already like comparing apples to oranges, let’s do our best not to throw any bananas into the mix. For this comparison, let’s focus on the costs associated with using generic resins and generic PLA filament.

A 1kg spool of generic PLA plastic for FDM printing can be had for ~$20. SLA/DLP resins commonly come in 500g bottles; prices vary a bit, but you can expect to spend ~$50 per 500g bottle. In both cases, buying in bulk can save money, whereas fancy colors/effects bump up the price (these numbers are derived from a quick search of Amazon for both products; a broader study of the costs of different printing types can be found at this link by All3DP). But how far does this material get you? This question is hard to answer, as changing the smallest print setting can drastically affect the amount of material used for a print. Infill percentage, infill type, types of external support structure, wall thickness – these are just a few settings which can affect the amount of material used. The point being: resin printing is generally slower, prints smaller things, and is more expensive compared to FDM printing.

So why would you ever use a printer which is slower, less versatile, and more expensive to own and use? The most significant pro for resin printing is the resolution at which it can print. As mentioned in my previous article, FDM printers are generally capable of .1mm, or 100-micron, printing – meaning they can produce layers which are 100 microns thick. The thinner the layers, the more layers are required, which means more time but also more detail. Where an average FDM printer can print 100-micron layers and an expensive FDM printer ~50-micron layers, resin printers can print ~25-micron layers. This means that you can get more detail into your print where it counts. Why might you need this extra level of detail, you ask?

There are several applications/use cases where you might want or need this high level of detail. One of these applications is tabletop game figurines/pieces. If you find yourself engaging in a game of DnD, for example, you can design your character and have an accurate physical representation of them for playing the game. Though you can print these models with an FDM printer, their details may not be accurately recreated due to the inaccuracies and limitations of FDM printing, and due to the small scale of the figures desired.

Another high detail application is the creation of jewelry. When a high level of dimensional accuracy is key, especially on a small scale, resin printing is appropriate. Whether you are printing a piece which will be used in the casting of jewelry (in which case metal will replace the plastic and the form will be an exact copy), or as an example of the final product, you want that piece to be an accurate representation of the final product. This same mentality can be applied to the prototyping of small mechanical devices where the dimensions of parts must be exact.

A third example for high-detail resin printing is medical applications. The most common application for this type of 3D printing in the medical field is making dental aligners, those plastic retainer devices. Each patient’s mouth is different, meaning that their teeth are in different positions and in need of different levels of correction. A scan or mold (which can then be scanned) can be made of the patient’s mouth, which can then be turned into an alignment device custom printed for the client. This article by CNN details how a college student did just this, saving himself tons of money.

So, resin printing is more expensive and has a more limited niche of uses, and there is yet another significant factor to consider. Where FDM printing requires that you remove the scaffolding (support material which allows overhangs to be printed), resin printing requires this step and more to finalize a print. Most resins require that you clean the print gently with isopropyl alcohol. Once you’ve done this, you still have another step: most resins also require that you cure them with UV light before they are ready to use or display. Hobbyists have done this by setting their prints outside or by a window on a sunny day. Others have used UV lamp devices (commonly used to set manicure products) to accomplish the same thing. High-end products do exist which are effectively a large version of one of those UV nail polish curing stations, but they allow for the speedy curing of larger prints.

So, is resin printing for you? That I can’t really say, but hopefully this information has helped you decide if ponying up the extra cash for a resin printer and its accompanying tools is worth it for you. If high levels of detail are your goal, and you don’t mind the smelly resins and cleaning solutions and the accompanying price tag, maybe pick one up and give it a try.

Categories
Operating System

What is Decentralization On The Web and Why Does It Matter?

These days, there are a few large technology companies that handle most of the web’s information. Amazon, Google, Facebook, and others have ownership over the lion’s share of our data. Many of these companies have been in hot water recently over data privacy violations for misusing the vast amounts of data they have on their customers. Furthermore, these companies’ business models depend on gathering as much data as possible to sell ads against.

Many years ago, the web was much less centralized around these huge companies. For instance, before Gmail it was much more common to host your own email or use a much smaller service, and you had much more control over it. Today, your data isn’t in your hands; it’s in Google’s or Facebook’s. Furthermore, they can kick you off their platform for any number of reasons without warning. Additionally, there are political reasons for not wanting all of your information in these centralized silos. Being a part of these platforms means that you must conform to their rules and guidelines, no matter how much you dislike them.

Decentralized systems fix this by giving you control over your information. Instead of one centralized company running the only copy of the service, decentralized services work a lot like email. Anyone can run their own email server and have control over their own information; this has been true of email since its creation. But for social networks and other sites, this kind of distributed model is now becoming an option as well.

Mastodon is a Twitter-like social network based on federation and decentralization. Federation means that individual instances of the service run by different people can talk to one another. This means that I can follow someone with an account on Mastodon.com from my account at Mastodon.xyz. It works very much like email: you can email anyone from your Gmail account, not just other Gmail users. Federation also means that I can run my own server with my own rules if I want to. I can choose to allow certain content or people and know that my data is in my control.

Many people are starting to call decentralized technology “Web 3.0”. While Web 2.0 saw people using the internet for more and more things, this came at the cost of consolidation and large companies taking over much of the control of the web. With decentralization and federation, the web can once again be for the people, and not only for large companies.

 

Categories
Linux

Quick Guide to Patching Linux Icon Packs

One of the nice advantages of using Linux is the wealth of customization options available. A key area of that customization is app icons. For those unfamiliar, it is similar to how icons change across Android versions, even though the apps themselves are often the same.

Now, I could go over how to install icon packs, but there are countless sites that already explain the process very well, such as Tips on Ubuntu. As for obtaining icon packs, a great site is OpenDesktop.org, which among other things hosts a wide variety of free-to-use icon packs.

So, now onto something less commonly covered: patching icon sets (no coding or art skills required). First, a little background. If you use one of the more popular icon packs, you’ll likely have no issues. However, many of the smaller icon packs either only support certain distros or lack icons for lesser-known programs.

Take, for example, my all-time favorite icon pack, Oranchelo; it is geared mainly towards Ubuntu. Lately I’ve been using Fedora 28, and while some icons work like they’re supposed to, others, which work in Ubuntu, do not. So why is that?

Each program has an icon name, and while many programs keep a consistent icon name across distros, this is not always the case, such as with some of the default Gnome apps in Fedora vs Ubuntu. To fix this we need to either change the app’s icon name or create symbolic links to the current icon name, the latter generally being the better option.

First we need to find the current icon name. For all common (and probably all of the uncommon) Linux desktop environments, the way to do this is:

cd /usr/share/applications/

Here you will see (using the ls command) a bunch of .desktop files. These are config files that set how a program will be shown on the desktop, such as what it will be called in English or in Polish, as well as what its icon should be.
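For reference, here is a sketch of the relevant entries in a typical .desktop file (the values are illustrative, not copied from any particular distro’s file):

[Desktop Entry]
Type=Application
Name=IceCat
Name[pl]=IceCat
Icon=icecat
Exec=icecat %u

The Icon= line holds the icon name we are after, and the Name[pl]= line is an example of a localized name.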

I’ve already patched Oranchelo for most of my applications, but I’ve yet to do it for the IceCat browser (located towards the bottom of the screenshot), so I’ll use it as an example. Since we already have our terminal located in the right folder, let’s just search for the right .desktop file:

ls | grep -i icecat

From that command we now know that the full name of the file is “icecat.desktop”. Now that we have the file, we just need to find what its icon name is:

cat icecat.desktop | grep -i icon

Now that we know what the icon is called (“icecat” – icon names aren’t always this simple), we need to open up our installed icon pack. If you’re not sure where that is, look back at the install instructions you used for your pack and find where you placed it.

I generally prefer to look at the icons in a graphical file manager, so that I can make sure I pick the right one.

So now that we’re in the icon pack’s folder, we want to go into apps, then scalable. Here we have all our icons; once we find one that we like, we need to create a symbolic link.

Since Oranchelo doesn’t currently have an IceCat logo, I will use the Firefox Nightly icon.

Now we need to open a terminal at this location and run the following command (depending on where you installed your icons, you may or may not need to use sudo):

ln -s icon.svg desiredApp.svg

Or in my case:

ln -s firefox-nightly-icon.svg icecat.svg

Now all we need to do is toggle the icons: simply switch to the system default icon set and then switch back, and the correct icons will show.
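If you’re on GNOME, the same toggle can be done from the terminal (this assumes a GNOME-based desktop and an icon pack named Oranchelo; substitute your own pack’s name):

gsettings set org.gnome.desktop.interface icon-theme 'Adwaita'
gsettings set org.gnome.desktop.interface icon-theme 'Oranchelo'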

As you can see, the desired icon is now set for Icecat

However, this is not the only way to patch icon sets. Some icon sets are designed to replace only a small number of icons, such as folders. These often use inheritance to fill the void; however, you may not always like the set from which they inherit the remainder, or you may simply prefer that they inherit from a different set.

One of my favorite Gnome themes, Canta, comes with its own icon pack that replaces folders but inherits the rest from the Numix icon pack. Since I haven’t installed Numix, it falls back to the system default. However, I do have the Flat Remix pack installed, so I’ll make Canta inherit from that instead.

As before, we need to go into the icon pack folder. Once we’re in the Canta icon pack, we need to open the index.theme file with a text editor (as before, depending on where it is installed, you may or may not need sudo).

A few lines from the bottom you will see an “Inherits=” line. For Canta it is:

"Inherits=Numix-Circle,Adwaita,gnome,hicolor"

So if we want it to inherit from Flat Remix (provided Flat Remix is installed correctly), all we need to do is add it in, changing the line to:

"Inherits=Flat-Remix,Numix-Circle,Adwaita,gnome,hicolor"

Once you save the file, all the missing icons should be automatically inherited from Flat Remix.

Best of luck with all your Linux customization.

Categories
Operating System

Handling Media Files in MatLab

You might be wondering: does anyone love anything as much as I love MatLab?  I get it, another MatLab article… Well, this one is pretty cool.  Handling media files in MatLab is not only extremely useful but also rewarding.  To the programming enthusiast, it can be hard to learn about data structures and search algorithms and have only the facilities to apply this knowledge to text documents and large arrays of numbers.  Learning how to handle media files allows you to see how computation affects pictures, and hear how it affects music.  Paired with some of the knowledge from my last two articles, one can begin to see how a variety of media-processing tools can be created using MatLab.

 

Audio

Audio is, perhaps, the simplest place to start.  MathWorks provides two built-in functions for handling audio: audioread() & audiowrite().  As the names may suggest, audioread can read in an audio file from your machine and turn it into a matrix; audiowrite can take a matrix and write it to your computer as a new audio file.  Both functions can tolerate most conventional audio file formats (WAV, FLAC, M4A, etc…); however, there is an asymmetry between the two functions in that, while audioread can read in MP3 files, audiowrite cannot write MP3 files.  Still, there are a number of good, free MP3 encoders out there that can turn your WAV or FLAC file into an MP3 after you’ve created it.

So let’s get into some details… audioread has only one required input argument (it can be used with more than one, but for our purposes, you only have to use one): the filename.  Please note, filename here means the directory too (C:\TheDirectory\TheFile.wav).  If you want to select the file off your computer interactively, you can use uigetfile for this.

The audioread function has two output arguments: the matrix of samples from the audio file & the sample rate.  I would encourage the reader to save both since the sample rate will prove to be important in basically every useful process you could perform on the audio.  Sample values in the audio matrix are represented by doubles and are normalized (the maximum value is 1).
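As a minimal sketch of a read-in (the file path here is just an example):

[y, Fs] = audioread('C:\TheDirectory\TheFile.wav');  % y = normalized sample matrix, Fs = sample rate in Hz
sound(y, Fs)  % play the audio back to confirm it loaded correctly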

Once you have the audio file read into MatLab, you can do a whole host of things to it.  MatLab has built-in filtering and other digital signal processing tools that you can use to modify the audio.  You can also make plots of the audio magnitude as well as its frequency content using the fft() function.  The plot shown below is of the frequency content of All Star by Smash Mouth.

Once you’re finished processing the audio, you can write it back to a file on your computer.  This is done using the audiowrite() function.  The input arguments to audiowrite are the filename, the audio matrix in MatLab, and the sample rate.  Once again, the filename should also include the directory you want to save in.  This time, the filename should also include the file extension (.wav, .ogg, .flac, .m4a, .mp4).  With only this information, MatLab will produce a usable audio file that can then be played through any of your standard media players.
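A hedged sketch of that round trip, from plotting the magnitude spectrum to writing a new file (the filenames are illustrative):

N = length(y);                          % number of samples
f = (0:N-1)*(Fs/N);                     % frequency axis in Hz
Y = abs(fft(y(:,1)));                   % magnitude spectrum of the first channel
plot(f(1:floor(N/2)), Y(1:floor(N/2)))  % plot up to the Nyquist frequency
audiowrite('C:\TheDirectory\Output.flac', y, Fs)  % write the audio back out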

The audiowrite function also allows some additional parameters to be specified when creating your audio file.  Name-value pairs can be passed as arguments to the function (after the filename, matrix, and sample rate) and can be used to set a number of different parameters.  For example, ‘BitsPerSample’ allows you to specify the bit depth of the output file (the default is 16 bits, the standard for audio CDs).  ‘BitRate’ allows you to specify the amount of compression if you’re creating an .m4a or .mp4 file.  You can also use these arguments to put in song titles and artist names for use with software like iTunes.
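For instance, a couple of example calls (the song metadata here is made up, and format support for each name-value pair follows the MATLAB documentation):

audiowrite('Output.flac', y, Fs, 'BitsPerSample', 24, 'Title', 'All Star', 'Artist', 'Smash Mouth')
audiowrite('Output.m4a', y, Fs, 'BitRate', 256)  % higher BitRate = less compression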

 

Images

Yes, MatLab can also do pictures.  There are two functions associated with handling images: imread() and imwrite().  I think you can surmise from the names of these two functions which one reads in images and which one writes them out.  With images, samples exist in space rather than in time, so there is no sample rate to worry about.  Images still have a bit depth and, in my own experience, it tends to differ a lot more from image to image than it does between audio files.

When you import an image into MatLab, the image is represented by a three-dimensional matrix.  For each color channel (red, green, and blue), there is a two-dimensional matrix with the same vertical and horizontal resolution as your photo.  When you display the image, the three channels are combined to produce a full-color image.

By the way, if you want to display an image in MatLab, use the image() function.
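As a minimal sketch (the filename is hypothetical):

img = imread('C:\TheDirectory\Photo.jpg');  % H-by-W-by-3 matrix, usually uint8
redChannel = img(:,:,1);                    % the two-dimensional red channel
image(img)                                  % display the full-color image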

MathWorks provides a good deal of image-processing features built-into MatLab so if you are interested in doing some crazy stuff to your pictures, you’re covered!

Categories
Hardware

The Future Of Wireless Charging

The idea of powering devices wirelessly has been around for centuries, ever since Nikola Tesla built the Tesla tower that could light up lamps about 2 km away using electromagnetic induction. Wireless charging devices can be traced back to electric toothbrushes that used a relatively primitive form of inductive charging, decades before Nokia announced integrated inductive charging in its breakthrough Lumia 920 model in 2012. This marked the birth of the Qi standard, which at that time was still contending for the much-coveted universal/international standard spot. Now it seems like wireless charging is right around the corner, and with Apple and Google launching Qi-compatible phones, the message is clear and simple: ‘Wireless is the future and the future is here.’ Or is it?

Qi (Mandarin for ‘material energy’ or ‘inner strength’) is a near-field energy transfer technology that works on the principle of electromagnetic induction. Simply put, the base station (charging mat, pad, or dock) has a transmitting coil which, when connected to an active power source, induces a current in the receiver coil in the phone, which in turn charges the battery. In its early stages, Qi used ‘guided positioning’, which required the device to be placed in a certain alignment on the base station. With some rapid developments over time, this has been effectively replaced by ‘free positioning’, which is standard in almost all recent Qi charging devices. There’s a catch here: the devices must have a back surface the field can pass through. Glass is currently the most viable option, and most Qi-compatible smartphones have glass backs. This certainly has its implications, though, the obvious one being significantly reduced durability.

Come to think of it, the fact that in order to charge, the device has to be within at most an inch of the base station sounds counterproductive. Besides, if the base needs to be connected to a power source, that’s still one cable. So… what’s the point? Well, currently the mobility part is more of a grey area, since this technology is still in its transitional phase. The majority of Qi-compatible smartphones still come with a traditional adapter by default, and the wireless dock needs to be purchased separately. There are several other issues with near-field charging that need to be addressed, such as:

  •  Longer charging times
  •  Reduced efficiency (~60-70%)
  •  High manufacturing costs
  •  Higher energy consumption, which could lead to increased electricity production costs
  •  Residual electromagnetic waves, a potential health risk
  •  Devices heating up faster than with traditional adapters, wasting energy as heat
  •  Higher probability of software updates causing bugs

Over the past decade, people have come up with interesting solutions for this, including a charging phone case and even a battery-less phone powered by ambient radio waves and Wi-Fi signals. But the most promising option is the Pi charging startup, which hopes to fix the range issue by allowing devices to pair with a charging pad within the range of a foot in any direction. The concept is still in its experimental stages, and it’s going to be a while before mid-to-long-range wireless charging technology becomes a pervasive standard for smartphones and other IoT devices. Assuming further progress is made down that road, wireless charging hotspots could be a possibility in the not-very-distant future.

The Qi standard, despite all its shortcomings, has had considerable success in the market, and it looks like it’s here to stay for the next few years. A green light from both Apple and Google has given it the necessary boost towards being profitable, and wireless pads are gradually finding their way into cafes, libraries, restaurants, airports, etc. Furniture retailers such as Ikea have even started manufacturing wireless charging desks and tables with inductive pads/surfaces built in. However, switching completely to, and relying solely on, inductive wireless charging wouldn’t be the most practical option as of now unless upgrades are made that address all the major concerns surrounding it. Going fully wireless would mean remodelling the very foundations of conventional means of transmitting electricity. In short, the current Qi standard is not the endgame; it can be seen as more of a stepping stone towards mid-to-long-range charging hotspots.

Categories
Hardware

How Tesla is Revolutionizing Solar Energy

Unless you live under a rock, the chances are that you’ve come across Tesla technology in your daily life. Between their very successful car line, consisting of some of the sleekest, fastest, most efficient electric cars on the market, and sister company SpaceX’s ventures in reusable rocketry, Elon Musk’s companies are making a name for themselves in revolutionizing technology for the next era. But one of Tesla’s ventures has for the most part flown under the radar, despite its huge advantages. That’s right, I’m talking about Tesla Solar.

Formerly a separate entity under the title SolarCity, the company was purchased by Tesla in 2016 and made a premier part of Tesla Energy. The mission of Tesla Energy is to bring the power of solar energy into the control of the consumer, whether residential or commercial. This is a more cost- and space-effective means of solar energy that doesn’t transform massive amounts of space or forests into giant solar farms. It not only gives people control over their energy production and costs, but can also benefit consumers through energy grid buyback, where the grid pays you for excess energy.

But enough about the company; let’s talk about the technology behind it. Tesla Energy’s solar panels are thin and sleek, allowing them to seamlessly fit onto any roofing style or shape. None of the mounting hardware is visible, allowing the panels to blend into the roof almost as if they were never there. In fact, Tesla Energy takes it a step further with their Solar Roof. This is a complete roofing unit that builds the technology of solar panels into interlocking shingles, allowing your entire roof to capture solar energy and power your home. If you thought the slim, sleek design of their panels was impressive, Tesla Energy’s Solar Roof takes it to another level. While most would worry about these energy shingles getting damaged, Tesla claims they are much stronger than most other roof tile alternatives.

 

You might be thinking, “Capturing all of this energy is cool and all, but how does it all get stored?” Well, Tesla has that covered. Their own technology, Powerwall, seamlessly connects to your solar inputs and hooks up to the electrical system in your home or business. Not only does this guarantee that you can use the power you produce, but it also allows you to have uninterrupted power even when there’s a grid outage, leaving your life uninterrupted and those Christmas lights in your front yard up and running. According to Tesla, Powerwall can give you up to 7+ days of power during an outage. If you happen to own a Walmart or a large retail store, the capabilities of Powerwall can be expanded with Tesla Energy’s commercial units, where micro-grids and Powerwalls can be built for your commercial needs.

The future of residential and off-the-grid living is here. Through Tesla Energy, people can independently and reliably power their homes and businesses completely grid-free. While costs may be high right now, increased competition in the sector can lead to a bigger market, which will lower costs as Tesla pushes this out to more consumers. Even though Tesla is known widely as the electric car company, it is making strides in the renewable energy sector, stemming from its work in revolutionizing the electric car battery. The future is bright for renewable energy, and the future is even brighter for Tesla.

For more information on Tesla Energy, visit their website, at https://www.tesla.com/energy

All images used in this blog were obtained from tesla.com, all rights reserved.

Categories
Software

Is AI journalism the future?

Artificial intelligence in news media is being used in many new ways, from speeding up research to accumulating and cross-referencing data and beyond.
You might be wondering: how does AI do something as complex as writing the news?

AI writes the news by sifting through huge amounts of data and finding the useful data by categorizing it. The AI tool then uses this data to train itself to imitate human writers. In addition, it helps human reporters avoid grunt work such as tracking scores or updating breaking news stories.
Automated journalism is everywhere, from Google News to Facebook’s fake-news checker. In addition, there are AI tools used by major publications, such as Narrative Science’s Quill, which puts together reports and stories based purely on raw data. The kicker is that in one study, most people couldn’t tell the difference between articles written by software and those written by real journalists.

The news industry predicts that 90% of articles will be written by AI within the next 10-15 years. The industry is pushing hard toward automated news generation because of the huge amounts of data we are amassing.

While some of us might be scared of machines taking over and influencing our minds, the reality couldn’t be further from it. These AI tools can only write fact-based articles, which are much closer to a computer reading a list of facts to you than a qualitative article written by a real journalist. These tools don’t have the power to sway most people, and checks are being made to ensure they aren’t used to spread “fake news”.

 

Categories
Linux Security Software

Hiding in Plain Sight with Steganography

Steganography is the process of hiding one file inside another, most popularly, hiding a file within a picture. If you’re a fan of Mr. Robot you are likely already somewhat familiar with this.

Although hiding files inside pictures may seem hard, it is actually rather easy. All files at their core are just sequences of bytes, so to hide one file inside another it is just a case of inserting the bytes of one file into the other.

Even though this is possible on all platforms, it is easiest to accomplish on Linux (although the following commands will probably work on macOS as well).

There are many different ways to hide different types of files, however the easiest and most versatile method is to use zip archives.

First you’ll need a zip archive containing your secret file(s); we can then append it to the end of an image file, such as a PNG.
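If you don’t already have one, the zip command will create it (the filenames here are just examples):

zip deathstarplans.zip deathstarplans.pdf

With the archive in hand, appending it to the image looks like this: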

cat deathstarplans.zip >> r2d2.png

If you’re wondering what just happened, let me explain. cat prints out a file’s contents (deathstarplans.zip in this instance). Instead of printing to the terminal, >> tells your shell to append that output to the end of the specified file – r2d2.png.

We could have also used a single >, however that would replace the contents of the specified file – r2d2.png in this instance – rather than appending to them. The result would just be a zip archive with a .png extension: it would no longer open as an image and would be easily recognized as containing a zip file, defeating the entire purpose.

Getting the file(s) out is also easy: simply run unzip r2d2.png. Unzip will throw a warning that “x extra bytes” precede the zip file, which you can ignore – it basically just restates that we hid the zip at the end of the png file. And so the files pop out.

So why zip? Tar tends to be more popular on Linux… however, tar has a problem with this method. Tar does not scan through the file to find the actual start of the archive, whereas zip does so automatically. That isn’t to say it’s impossible to get tar to work; it simply requires some extra work (aka scripting). However, there is another, more advanced way: steghide.

Unlike zip, steghide does not come preinstalled on most Linux distros, but it is in most default repositories, including those for Arch and Ubuntu/Linux Mint.

sudo pacman -S steghide – Arch

sudo apt install steghide – Ubuntu/Linux Mint

Steghide does have its ups and downs. One upside is that it is a lot better at hiding and can easily hide any file type. It does so by using an advanced algorithm to embed the file within the image (or audio) file without changing the look (or sound) of it. This also means that without steghide (or at least the same mathematical approach as steghide) it is very difficult to extract the hidden files from the image.

However, there is a big drawback: steghide only supports a limited set of ‘cover’ file types – JPEG, BMP, WAV, and AU. But since JPEG files are a common image type, it isn’t a large drawback, and a JPEG will not look out of place.

To hide the file the command would be steghide embed -cf clones.jpg -ef order66.pdf

At which point steghide will prompt you to enter a password. Keep in mind that if you lose the password you will likely never recover the embedded file.

To extract the file, we can run steghide extract -sf clones.jpg; assuming we use the correct password, the hidden file is revealed.

All that being said, both methods leave the ‘secret’ file untouched and only hide a copy. Assuming the goal is to hide the file, the copies in the open need to be securely removed. shred is a good command for this: it overwrites the file multiple times to make it as difficult to recover as possible.

shred -z order66.pdf

or to delete it automatically

shred -zu order66.pdf

Categories
Google

How to Google!

Google, the world’s most popular search engine, usually does a great job finding what we need with very little information from us. But what about when Google isn’t giving us the hits we need?
This article will go over commonly overlooked tips that will help refine your search and tell Google exactly what you’re searching for. It will also cover some fun, newer features of Google.

 

 

1. Filter Results by Time
Users can now browse only the most recent results. After you search, a “Tools” button will appear on the right below the search bar. If you click on “Tools”, “Any time” and “All Results” options will appear under the search bar. Under “Any time” there are options to show results ranging from the past hour to the past year.

 

2. Search Websites for Specific Words
If you are searching through a specific website, you can search it for keywords. For example, to see how many times Forbes mentioned Kylie Jenner, you would simply type “Kylie Jenner site:Forbes.com”.

 

3. Search Exact Phrases and Quotes
A more commonly used trick is typing quotation marks around words or phrases to tell Google to only show results that contain the exact words in quotes.

 

4. Omit Certain Words Using the Minus Sign
In contrast to the last tip, putting a minus sign directly before a word will omit results containing that word. For example, typing “Apple -iPhone” will return results about Apple while getting rid of all results containing the word iPhone.

 

5. Use Google as a timer
Google now has stopwatch and timer features that show up just by searching “set timer”. No need to mess around with apps when you can just pull it up on the internet!

 

6. Search Newspaper Archives from the 1800s
Search “google news archive search” and the first link will bring you to a page with the names of hundreds of newspapers. You can browse issues of newspapers by date and name.

 

7.  Use Google to Flip a Coin
Need help making a decision? Simply search “flip a coin” and Google will flip a virtually generated coin and give you an answer of heads or tails.

 

8. Search Through Google’s Other Sites
Google has other search engines for specific types of results. For example, if you’re searching for a blog use “Google Blog Search” or if you want to search for a patent use “Google Patent Search”, etc.

 

Now with these Google tips you can search Google like a pro!
Categories
Operating System

October Apple Event Preview

Today Apple sent out invitations for an event on October 30th in New York City. The event, titled “There’s more in the Making”, hints at a creative- and pro-focused event, which is further suggested by its venue, the Howard Gilman Opera House. There are several rumored devices that may be launched at this event.

The headline product rumored to be announced is an update to the iPad Pro line. The line, which is made up of two models, is rumored to gain many of the features from the iPhone X line of phones. This includes smaller bezels and FaceID to replace the fingerprint reader. The devices are also said to switch from their proprietary Lightning connector to the more standard USB-C. This will allow the iPad to connect to external displays and other accessories much more easily. The iPad and the iPhone are some of the only devices in the industry that haven’t switched over to USB-C, and this transition will help the industry converge on a single port type.

There are also rumored to be new Macs at this event. The Mac mini hasn’t been updated in over 4 years and is overdue for a refresh. The new minis are rumored to be smaller and more aimed at the pro market, which makes sense given the overall theme of the event. Apple is also rumored to be introducing a new low-end Mac laptop at around the $1000 price point. This would replace the aging MacBook Air that Apple is still selling. This is by far Apple’s highest-volume price range, so it’s important to have a modern, compelling option.

Is there anything else that Apple will announce next week? What are your predictions?

Categories
Operating System

Are Self-Driving Cars Safe?

Self-driving cars promise to revolutionize driving by removing human error from the equation altogether. No more drunk or tired driving, great reductions in traffic, and even the possibility of being productive on the commute to work. But what are the consequences of relying on algorithms and hardware to accomplish this vision? Software can be hacked or tricked, electrical components can be damaged. Can we really argue that it is safer to relinquish control to a computer than to operate a motor vehicle ourselves? Ultimately, this question cannot be answered with confidence until we conduct far more testing. Data analysis is key to understanding how these vehicles will perform and specifically how they will anticipate and react to the kind of human error which they exist to eliminate. But “the verdict isn’t out yet” is hardly a satisfying answer, and for this reason I would argue that despite concerns about ‘fooling’ self-driving cars, this technology is safer than human drivers.

The article “Slight Street Sign Modifications Can Completely Fool Machine Learning Algorithms” details how researchers have tricked a computer vision algorithm into misinterpreting street signs. Researchers were able to achieve these results by training their classifier program with public road sign data, and then adding new entries of a modified street sign with their own classifiers. Essentially, the computer is “taught” how to analyze a specific image and, after numerous trial runs, will eventually be able to recognize recurring elements in specific street signs and match them with a specific designation/classifier. The article mainly serves to explore how these machines can be manipulated, but only briefly touches upon a key safety feature which would prevent real-world trickery. Notably, redundancy is key in any self-driving car: using GPS locations of signs and data from past users could ensure that signs are not incorrectly classified by the computer vision algorithm.

The article “The Long, Winding Road for Driverless Cars” focuses less on the safety ramifications of self-driving vehicles, and instead on how practical it is that we will see fully autonomous cars in the near future. The author touches upon the idea that selling current vehicles (such as Tesla) with self-driving abilities as “autopilot” might be misleading, as these current solutions still require a human to be attentive behind the wheel. She presents the hurdle that in order to replace human drivers, self-driving vehicles cannot just be “better” than human drivers but near perfect. While these are all valid concerns, they will only result in benefits for consumers. Mistrust in new tech means that companies and regulatory authorities will go through rigorous trials to ensure that these vehicles are ready for the road and maintain consumer confidence. We have already accepted many aspects of car automation (stopping when an object is detected, hands-free parallel parking, and lane-detection) to make our lives easier, and perhaps some time in the near future self-driving cars will be fully tested and ready for mass deployment.

Categories
Software

A Brief Introduction to Creating Functions in MATLAB

Hey wow, look at this!  I’ve finally rallied myself to write a blog article about something that is not digital audio!  Don’t get too excited though; this is still going to be a MATLAB article and, although I am not going to get too deep into any DSP, the fundamental techniques outlined in this article can be applied to a wide range of problems.

Now, let me go on record here and say I am not much of a computer programmer.  Thus, if you are looking for a guide to programming with functions in general, this is not the place for you!  However, if you are perhaps an engineering student who’s learned MATLAB for school and is maybe interested in learning what this language is capable of, this is a good place to start.  Alternatively, if you are familiar with other languages that lean heavily on functions (*cough cough* Python), then this article may help you start transposing your knowledge to a new language.

So What are Functions?

I am sure that, depending on who you ask, there are a lot of definitions for what a function actually is.  Functions in MATLAB more or less follow the standard signals-and-systems model of a system; this is to say they have a set of inputs and a corresponding set of outputs.  There we go, article finished, we did it!

Joking aside, there is not much more to be said about how functions are used in MATLAB; they are excellently simple.  Functions in MATLAB do provide great flexibility, though: they can have as many inputs and outputs as you choose (and the number of inputs does not have to match the number of outputs), and the relationship between the inputs and outputs can be whatever you want it to be.  Thus, while you can make a function that is a single-input-single-output linear-time-invariant system, you can also make literally anything else.

How to Create and Use Functions

Before you can think about functions, you’ll need a MATLAB script in which to call your function(s).  If you are familiar with an object-oriented language (*cough cough* Java), the script is similar to your main method.  Below, I have included a simple script where we create two numbers and send them to a function called noahFactorial.

Simple Script Example
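If you’d rather copy code than squint at a screenshot, here’s a minimal sketch of that script in plain text. The specific values and the print format are placeholders of mine; what matters is the structure, with the function call on line 4 and the print statement on line 6, matching the description below:

X = 5;
Y = 3;

Z = noahFactorial(X, Y);

fprintf('noahFactorial of %d and %d is %d\n', X, Y, Z);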

It doesn’t really matter what noahFactorial does; the only thing that matters here is that the function has two inputs (here X and Y) and one output (Z).

Our actual call to the noahFactorial function happens on line 4.  On the same line, we also assign the output of noahFactorial to the variable Z.  Line 6 has a print statement that will print the inputs and outputs to the console along with some text.

Now looking at noahFactorial, we can see how we define and write a function.  We start by writing ‘function’ and then defining the function output.  Here, the output is just a single variable, but if we were to change ‘output’ to ‘[output1, output2]’, our function would return two separate output values, which the caller would capture with matching bracket syntax (e.g. ‘[A, B] = noahFactorial(X, Y)’).

Simple Function Example
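And again in plain text, a sketch of what the function file (noahFactorial.m) might contain. The body here is made up, since, as noted above, what the function actually computes doesn’t matter:

function output = noahFactorial(X, Y)
% Hypothetical body; any relationship between
% the inputs and the output would work.
output = factorial(X) + factorial(Y);
end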

Some of you more seasoned programmers might notice that ‘output’ is not given a datatype.  This will undoubtedly make some of you feel uncomfortable but I promise it’s okay; MATLAB is pretty good at knowing what datatype something should be.  One benefit of this more laissez-faire syntax is that ‘output’ itself doesn’t even have to be a single variable.  If you can keep track of it, you can make ‘output’ a 2×1 array and treat the two values like two separate outputs.

Once we write our output, we put an equals sign down (as you might expect), write the name of our function, and put (in parentheses) the input(s) to our function.  Once again, the typing on the inputs is pretty soft, so those too can be arrays or single values.

In all, a function declaration should look like:

function output = functionName(input)

or…

function [output1, output2, …, outputN] = functionName(input1, input2, …, inputM)

And just to reiterate, N and M here do not have to be the same.

Once inside our function, we can do whatever MATLAB is capable of.  Unlike in Java, return statements are not used to send anything to the output; rather, they are used to stop the function in its tracks.  Usually, I will dedicate an output to error messages; if something goes wrong, I will assign a value to the error output and follow that with ‘return’.  Doing this sends back the error message and stops the function at the return statement.

So, if we don’t use return statements, then how do we send values to the output?  We make sure that our function contains variables with the same names as the outputs, and we assign those variables values in the function.  When the function reaches its last line and ends, whatever values are sitting in the output variables are what get sent back.

For example, if we define an output called X and somewhere in our function we write ‘X=5;’ and we don’t change the value of X before the function ends, the output X will have the value: 5.  If we do the same thing but make another line of code later in the function that says ‘X=6;’, then the value of X returned will be: 6.  Nice and easy.
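To tie these last two ideas together, here’s a little sketch of the error-output pattern I described above (the function name and error convention here are just my own example):

function [result, errMsg] = safeDivide(a, b)
% Outputs are sent back by assigning to the variables
% named 'result' and 'errMsg' inside the function.
errMsg = '';
result = 0;
if b == 0
    errMsg = 'Error: cannot divide by zero';
    return % stops here; the current values of result and errMsg are sent back
end
result = a / b;
end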

 

…And it’s that simple.  The thing I really love about functions is that they do not have to be associated with a script or an object; you can just whip one up and use it.  Furthermore, if you find you need to perform some mathematical operation often, write one function and use it with as many different scripts as you want!  This insane flexibility allows for some serious problem-solving capability.

Once you get the hang of this, you can do all sorts of things.  Usually, when I write a program in MATLAB, I have my main script (sometimes a .fig file if I’m writing a GUI) in one folder, maybe with some assorted text and .csv files, and a whole other folder full of functions for all sorts of different things.  The ability to create functions and some good programming methodology can allow even the most novice of computer programmers to create incredibly useful programs in MATLAB.

 

NOTE: For this article, I used Sublime Text to write out the examples.  If you have never used MATLAB before and you open it for the first time and it looks completely different, don’t be alarmed!  MATLAB comes pre-packaged with its own editor, which is quite good, but you can also write MATLAB code in another editor, save it as a .m file, and then open it in the MATLAB editor or run it through the MATLAB kernel later.

Categories
Operating System Software

What is Docker and How Does it Work?

Docker is a very popular tool in the world of enterprise software development. However, it can be difficult to understand what it’s really for. Here we will take a brief look at why software engineers, and everyday users, choose Docker to quickly and efficiently manage their computer software.

Categories
Operating System Security

Password Security on GitHub

“The password you provided has been reported as compromised due to re-use of that password on another service by you or someone else. GitHub has not been compromised directly. To increase your security, please change your password as soon as possible.”

I thought this was funny when I first saw this message from GitHub, a website that has over 28 million users and 57 million repositories. I knew I was receiving this message because I used a very similar password for my IBM intern account and my personal account.

So I was telling my coworkers in IT about it, and they pointed out to me in horror – “That means they’re storing passwords in plaintext…”

Well, it turns out this isn’t true. GitHub actually hashes passwords with bcrypt, a fairly secure key-derivation function (KDF). (It can still detect compromised passwords by checking them against public lists of breached credentials when you log in or change your password, the one moment it briefly handles the plaintext, without ever storing it.)

For obvious reasons, though, the thought of a company storing passwords insecurely is scary. Responsible password storage is, well, complicated. It’s a combination of hashing, or the more secure key-derivation function, both of which basically scramble up the user’s password so that not just anyone can decode it, and a careful implementation of where and how that scrambled data is stored. If a company isn’t using proper security for user data, there’s an increased risk of getting hacked. And realistically, if someone managed to snag the password to your GitHub account, they’d likely be able to get into at least a few of your other accounts as well.
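(For the curious, that scrambling has a recognizable shape: a bcrypt hash is stored as a single string along the lines of $2b$<cost>$<22-character salt><31-character hash>, so the salt and the work factor travel with the hash itself. The exact $2a$/$2b$/$2y$ prefix varies by implementation.)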

If you want to learn about this more in depth, you can read this interesting thread.

Categories
Operating System

The Future of the Mac

There have been two major rumors in the past month about the future of the Mac. It’s clear in the past several years that much of Apple’s development effort has been geared towards Apple’s mobile operating system, iOS, which powers iPhones and iPads. Apple has also been introducing new platforms, such as Apple Watch and HomePod. Through all of this, the Mac has been gaining features at a snail’s pace. It seems like Apple will only add features when it feels it must in order to match something it introduces first on iOS. But these recent rumors point to a Mac platform that could be revitalized.

The first major rumor is a shared development library between iOS and the Mac. What does this mean for non-developers? It means we could very well see iOS apps such as Snapchat or Instagram on the Mac. macOS uses a development framework called AppKit, which stems back many years to when Apple bought a company called NeXT. NeXT’s computers and software are what eventually became the Mac, and the underlying framework has stayed largely the same since then. Obviously, there have been changes and many additions, but it is still different from what developers use to make iOS apps for iPhones and iPads. iOS uses a framework called UIKit, which is very different in key areas. Basically, developing an app for both the iPhone and the Mac takes twice the development effort. Supposedly, Apple is working on a framework for the Mac that is virtually identical to UIKit, meaning developers could port their apps to the Mac with very little work. In theory, the number of apps on the Mac would increase as developers port over their iOS apps, and communication apps such as Snapchat and Instagram could become usable desktop apps.

What Apple’s future macOS framework could look like.

The second major rumor is that Apple is expected to switch from Intel-supplied CPUs to its own ARM-based architecture. Apple switched to Intel CPUs in 2006 after using PowerPC chips for many years, a transition that brought along an almost 2x increase in performance over the PowerPC chips being replaced. In the last few years, Intel hasn’t seen the year-over-year performance increases it used to deliver. Additionally, Intel has been delaying new architectures as manufacturing smaller chips gets harder and harder, which leaves Apple dependent on Intel’s schedule to introduce new features. On the other hand, Apple has been producing industry-leading ARM chips for use in its iPhones and iPads. These chips are starting to benchmark at or above some of the Intel chips Apple uses in its Mac line. Rumors suggest that low-power Macs could see these new ARM-based chips as soon as 2020. The major caveat with this transition is that developers may have to re-write some of their applications for the new architecture, so it might take some time for applications to become compatible, and some older applications might never get updated.

It’s clear that Apple’s focus in the past several years has been on its mobile platforms and not on its original platform, the Mac. But these two rumors show that Apple is still putting serious engineering work into its desktop operating system. These new features could lead to a thriving Mac ecosystem in the years to come.

Categories
Hardware Library Mac OSX Software Windows

A Reflection on Winning The Vive

By Parker Louison 

The Views and Opinions Expressed in This Article Are Those of Parker Louison and Do Not Necessarily Reflect the Official Policy or Position of UMass Amherst IT 

A Note of Intention

I want to start off this article by explaining that I’m not making this in an effort to gloat or brag, and I certainly hope it doesn’t come across that way. I put all of the creative energy I had left this semester into the project I’m about to dissect and discuss, so sadly I won’t be publishing a video this semester (as I’ve done for the past two semesters). One of the reasons I’m making this is because a lot of the reaction towards what I made included people asking how I made it and how long it took me, and trust me, we’ll go in depth on that.

My First Taste

My first experience with high-grade virtual reality was a few weeks before the start of my sophomore year at UMass when my friend Kyle drove down to visit me, bringing along his HTC Vive after finding out that the only experience I’d had with VR was a cheap $20 adapter for my phone. There’s a consensus online that virtual reality as a concept is better pitched through firsthand experience rather than by word of mouth or marketing. The whole appeal of VR relies on subjective perception and organic optical illusions, so I can understand why a lot of people think the whole “you feel like you’re in the game” spiel sounds like nothing but a load of shallow marketing. Remember when Batman: Arkham Asylum came out and nearly every review of it mentioned that it made you feel like Batman? Yeah, well now there’s actually a Batman Arkham VR game, and I don’t doubt it probably does make you actually feel like you’re Batman. The experience I had with VR that night hit me hard, and I came to understand why so many people online were making it out to be such a big deal. Despite my skeptical mindset going in, I found that it’s just as immersive as many have made it out to be. 

This wasn’t Microsoft’s Kinect, where the action of taking away the remote actually limited player expression. This was a genuinely deep and fascinating technological breakthrough that opens the door for design innovations while also requiring programmers to master a whole new creative craft. The rulebook for what does and doesn’t work in VR is still being written, and despite the technology still being in its early stages, I wanted in. I wanted in so badly that I decided to try and save up my earnings over the next semester in an effort to buy one. That went about as well as you’d expect; not just because I was working within a college student’s budget, but also because I’m awful with my money. My Art-Major friend Jillian would tell you it’s because I’m a Taurus, but I think it has more to do with me being a giant man-child who impulse-purchases stupid stuff because the process of waiting for something to arrive via Amazon feels like something meaningful in my life. It’s no wonder I got addicted to Animal Crossing over Spring Break… 

The Task

Anyway, I was sitting in my Comp-Lit discussion class when I got the email about the Digital Media Lab’s new Ready Player One contest, with the first place winner taking home an HTC Vive Headset. I’m not usually one for contests, and I couldn’t picture myself actually winning the thing, but something about the challenge piqued my interest. The task involved creating a pitch video, less than one minute in length, in which I’d have to describe how I would implement Virtual Reality on campus in a meaningful way. 

With Virtual Reality, there are a lot of possible implementations relating to different departments. In the Journalism department, we’ve talked at length in some of my classes about the potential applications of VR, but all of those applications were either for the benefit of journalists covering stories or the public consuming them. The task seemed to indicate that the idea I needed to pitch had to be centered more on benefiting the average college student, rather than benefiting a specific major (at least, that’s how I interpreted it). 

One of my original ideas was a virtual stress-relief dog, but then I realized that people with anxiety would likely only get even more stressed out by having to put on some weird giant headset… and real-life dogs can give hecking good nuzzles that can’t really be simulated. You can’t replace soft fur with hard plastic. 

I came to college as a journalism major, and a day rarely goes by when I don’t have some doubts about my choice. In high school I decided on journalism because I won a debate at a CT Youth Forum thing and loved writing and multimedia, so I figured it seemed like a safe bet. Still, it was a safe bet that was never pitched to me. I had no idea what being a journalist would actually be like; my whole image of what being a reporter entailed came from movies and television. I thought about it for a while, about how stupid and hormonal I was and still am, and realized that I’m kind of stuck. If I hypothetically wanted to switch to chemistry or computer science, I’d be starting from scratch with even more debt to bear. Two whole years of progress would be flushed down the toilet, and I’d have nothing to show for it. College is a place for discovery, where your comfortable environment is flipped on its head and you’re forced to take care of yourself and make your own friends. It’s a place where you work four years for a piece of paper to make your resume look nicer when you put it on an employer’s desk, and you’re expected to have the whole rest of your life figured out when you’re a hormonal teenager who spent his savings on a skateboard he never learned how to ride.

And so I decided that, in this neo-cyberpunk dystopia we’re steadily developing into, it would make sense for simulations to come before rigorous training. Why not create simulated experiences where people could test the waters for free? Put themselves in the shoes of whatever career path they want to explore to see if the shoes fit right, you know?

I mentioned “cyberpunk” there earlier because I have this weird obsession with cyberpunk stuff at the moment and I really wanted to give my pitch video some sort of tongue-in-cheek retrograde 80s hacker aesthetic to mask my cynicism as campy fun, but that had to be cut once I realized I had to make this thing under a minute long.

Gathering My Party and Gear

Anyway, I wrote up a rough script and rented out one of the booths in the Digital Media Lab. With some help from Becky Wandel (the News Editor at WMUA) I was able to cut down my audio to just barely under the limit. With the audio complete, it came time to add visual flair. I originally wanted to do a stop-motion animated thing with flash-cards akin to the intros I’ve made for my Techbytes videos, but I’m slow at drawing and realized that it’d take too much time and effort, which is hilarious because the idea I settled on was arguably even more time-consuming and draining.

I’m the proud owner of a Nikon D80, a hand-me-down DSLR from my mom, which I bring with me everywhere I go, mostly because I like taking pictures, but also because I think it makes me seem more interesting. A while back I got a speck of dust on the sensor, which requires special equipment to clean (basically a glorified turkey baster). I went on a journey to the Best Buy at the Holyoke Mall with two friends to buy said cleaning equipment while documenting the entire thing using my camera. Later, I made a geeky stop-motion video out of all those photos, which I thought ended up looking great, so I figured doing something similar for the pitch video would be kind of cool. I messaged a bunch of my friends, and in a single day I managed to shoot the first 60% of the photos I needed. I then rented out the Vive in the DML and did some photoshoots there. 

At one point while I was photographing my friend Jillian playing theBlu, she half-jokingly mentioned that the simulation made her want to study Marine Biology. That kind of validated my idea and pushed me to make sure I made this video perfect. The opposite effect happened when talking to my friend Rachael, who said she was going to pitch something for disability services, to which I immediately thought “damn, she might win with that.”

I then knew what I had to do. It was too late to change my idea or start over, so I instead decided that my best shot at winning was to make my video so stylistically pleasing and attention-grabbing that it couldn’t be ignored. If I wasn’t going to have the best idea, then gosh darn it (I can’t cuss because this is an article for my job) I was going to have the prettiest graphics I could muster.   

The Boss Fight 

I decided to use a combination of iMovie and Photoshop, programs I’m already familiar with, because teaching myself how to use more efficient software would ironically be less efficient given the short time frame I had to get this thing out the door. Using a drawing tablet I borrowed from my friend Julia, I set out to create the most complicated and ambitious video project I’ve ever attempted to make. 

A few things to understand about me: when it comes to passion projects, I’m a bit of a perfectionist and extremely harsh on myself. I can’t even watch my Freshman Year IT video because I accidentally made it sound like a $100 investment in some less than amazing open back headphones was a reasonable decision on my part, and my other IT video makes me cringe because I thought, at the time, it’d be funny to zoom in on the weird hand motions I make while I talk every five seconds.

So in this case, I didn’t hold back and frequently deleted whole sections of my video just because I didn’t like how a single brush stroke animated (with the exception of the way my name is lopsided in the credits, which will haunt me for the rest of my life). For two weeks, I rigorously animated each individual frame in Photoshop, exported it, and imported it into iMovie. 

(Above) A visual representation of all the files it took to create the video

(Above) Frame by frame, I lined up my slides in iMovie

The most demanding section was, without a doubt, the one involving my friend Matthew, which I spent one of the two weeks entirely focused on. For that section, I needed frames shorter than 0.04 seconds, which is impossible in iMovie’s streamlined interface (0.04 seconds is the shortest you can make a frame), so I ended up creating a whole new project file, slowing my audio down to half speed, editing the frames of that section relative to the slowed-down audio, exporting it, putting it into the original project file, and doubling its speed just to get it to animate smoothly. 

 (Above) Some sections required me to find loopholes in the software to get them to animate faster than iMovie would allow

(Above) Some of the scrap paper I scribbled notes on while editing the video together

Each individual border was drawn multiple times with slight variations and all the on-screen text (with the exception of the works cited) was handwritten by me multiple times over so that I could alternate between the frames of animation to make sure everything was constantly moving. 

(Above) Borders were individually drawn and cycled through in order to maintain visual momentum

This was one of my major design philosophies during the development of this project: I didn’t want there to be a single moment in the 59 seconds where nothing was moving. I wanted my video to grab the viewer’s attention, and I feared that losing momentum in the visual movement would cause me to lose the viewer’s interest. The song LACool by DJ Grumble came on my Spotify radio coincidentally right when I was listening over the audio for the section I was editing, and I thought it fit so well I bought it from iTunes on the spot and edited it in.

I finished my video on Monday, March 26th, turned it in to the Digital Media Lab, stumbled back to my dorm, and went to bed at 6:00 PM by accident. 

The Video

(Above) The final video submission 

The winner wouldn’t be announced until Wednesday, so for two days I nervously waited until 6:00 PM on March 28th, when I sat on my bed in my dorm room refreshing the Digital Media Lab website every 7 seconds like a stalker on an ex’s Facebook page waiting for the winner to finally be posted. At 6:29 PM I got a call from an unrecognized number from Tallahassee, Florida, and almost didn’t answer because I thought it was a sales call. Turns out it was Steve Acquah, the coordinator of the Digital Media Lab, who informed me that my video won. Soon after, the Digital Media Lab Website was also updated with the announcement.

(Above) A screenshot taken of the announcement on the Digital Media Lab Website 

Thank You

Along with the raw joy and excitement came a sort of surreal disbelief. Looking back on those stressful weeks of work, it all felt like it flew by faster than I could’ve realized once I got that phone call. I’m so grateful for not only the reward but the experience. Making that video was a stressful nightmare, but it also forced me to push myself to my creative limits and challenge myself in so many ways. On a night where I would’ve probably just gone home and watched Netflix by myself, I sprinted around campus to meet up with and take photos of my friends. This project got me to get all my friends together and rent out the Vive in the DML, basically forcing me to play video games and have fun with the people I love. While the process of editing it all together drove me crazy, the journey is definitely going to be a highlight of my time at UMass. 

I’m grateful to all of my friends who modeled for me, loaned me equipment, got dinner with me while I was stressing out over editing, played Super Hot VR with me, gave me advice on my audio, pushed me to not give up, and were there to celebrate with me when I won. I’m also immensely grateful to the staff and managers of the DML for providing me with this opportunity, as well as for their compliments and praise for the work I did. This was an experience that means a lot to me and it’s one I won’t soon forget. Thank you.

Epilogue

I picked up my prize the other day at the DML (see photo above the title of this article)! Unfortunately, I have a lot of work going on, so it’s going to be locked up in a safe place until that’s done. Still, it’s not like I could use it right now if I wanted to. My gaming PC hasn’t been touched in ages (since I don’t bring it with me to college) so I’m going to need to upgrade the GPU before I can actually set up the Vive with it. It’s a good thing there isn’t a spike in demand for high-end GPUs at the moment for cryptocurrency mining, right?

(Above) A visual representation of what Bitcoin has done to the GPU market (and my life)

…Oh.

Regardless of when I can actually use the prize I won, this experience was one I’m grateful to have had. The video I made is one I’m extremely proud of, and the journey I went on to create it is one I’ll think about for years to come.

Categories
Operating System

SoFi the Robotic Fish

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have created a Soft Robotic Fish (nicknamed SoFi) that is able to swim and blend in with real fish while observing them and gathering data. This remarkable bot is not only cool and adorable; it also paves the way for the future of lifelike artificial intelligence.

Think about it: We have already reached the point where we can create a robotic fish which is capable of fooling real fish into thinking that it’s a real fish. Granted, fish aren’t the smartest of the creatures on this planet, but they can usually tell when something is out of the ordinary and quickly swim away. SoFi, however, seems to be accepted as one of their own. How long will it take for us to create a robot that can fool more intelligent species? Specifically, how long will it be until Soft Robotic Humans are roaming the streets as if they weren’t born yesterday? Perhaps more importantly, is this something that we actually want?

The benefits of a robotic animal like SoFi are obvious: it allows us to get up close and personal with these foreign species and learn more about them. This benefit of course translates to other wild animals like birds, bees, lions, etc. We humans can’t swim with the fishes, roost with the birds, visit the hive with the bees, or roar with the lions, but a robot like SoFi sure can. So it makes sense to invest in this type of technology for research purposes. But when it comes to replicating humanity, things get a bit trickier. I’m pretty confident in saying that most humans in this world would not appreciate being secretly observed in their daily lives “for science.” Of course, it’s still hard to say whether or not this would even be possible, but the existence of SoFi and the technology behind it leads me to believe we may be closer than most of us think.

Regardless of its possible concerning implications, SoFi is a truly amazing feat of engineering. If nothing else, these Soft Robots will bring an epic evolution to the Nature Documentary genre. For more information about the tech behind SoFi, check out the video at the top from MITCSAIL.

Categories
Operating System

Building a Better Bracket: Beating the Odds with Machine Learning

Like most other fans of college basketball, I spent an unhealthy amount of time dedicated to the sport the week after Selection Sunday (March 11th): hours spent filling out brackets and researching rosters, injuries, and FiveThirtyEight’s statistical predictions to fine-tune my perfect bracket, followed by around 30 games watched over the course of four days. I made it a full six hours into the tournament before my whole bracket busted. The three-punch combo of Buffalo (13) over Arizona (4), Loyola Chicago (11) beating Miami (6), and, most amazingly, the UMBC Retrievers (16) crushing the overall one-seed and tournament favorite, UVA, spelled the end for my predictions. After these three upsets, everyone’s brackets were shattered. The ESPN leaderboards looked like a post-war battlefield. No one was safe.

The UMBC good boys became the only 16th seed to beat a 1st seed in NCAA tournament history

The odds against picking a perfect bracket are astronomical: estimates of the probability range from 1 in 9.2 quintillion to 1 in 128 billion, depending on the assumptions. Warren Buffett offers $1 million a year for life to any Berkshire Hathaway employee who picks a bracket correctly. Needless to say, no one has been able to cash in on the prize. Picking a perfect bracket is nearly impossible and is (in)famous for being one of the most unlikely statistical feats in gambling.

The Yin and Yang of March Madness

To make the chances of building a perfect bracket somewhat feasible, a competition has been set up to see who can beat the odds with machine learning. Hosted by Kaggle, an online competition platform for modeling and analytics that was purchased by Google’s parent company, Alphabet, the competition has people building models to predict which team will win each game based on prior data. A model that predicts a winner correctly with 99% confidence scores better than one that was only 95% confident, and so on; confidently wrong predictions are penalized accordingly. The prize is $100,000, split among the teams that made the top three brackets. Entrants are provided with the results of every men’s and women’s tournament game since 1985, the year the tournament first expanded to 64 teams, as well as every play in the tournament since 2009. Despite all this data, prediction remains very hard: over the five years the competition has been hosted, the best bracket predicted 39 games correctly. Many unquantifiable factors, such as hot streaks and team chemistry, play a large role in the difficulty, so it looks like we’re still years away from having our computers pick the perfect bracket.
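For the statistically curious, that confidence-based scoring is, as far as I know, a log loss metric. For n games with predicted win probabilities p_i and actual outcomes y_i (1 if the predicted team won, 0 otherwise):

LogLoss = -(1/n) * Σ [ y_i * log(p_i) + (1 - y_i) * log(1 - p_i) ]

Being confidently wrong (say, 99% sure about a team that then loses) costs far more than hedging at 50%, which is what forces entrants to quantify their certainty honestly.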

Categories
Operating System

[Sidebar] How to Resize a VirtualBox .vdi

Congratulations! You’ve made a VirtualBox VM of your favorite Linux distro. But now you want to download a picture of your cat and find out that you’ve run out of disk space. 
Image: habrahabr.ru

Rather than free up space by deleting the other pics of Snuffles, you decide you’d rather just give the virtual machine more disk space. But you’ll quickly find that Oracle has not made this super easy to do. The process isn’t obvious, but it’s manageable if you follow these steps:

Open the Command Prompt on your Windows machine. (Open Start and type cmd.)

You can then navigate to your VirtualBox installation folder. Its default location is C:\Program Files\Oracle\VirtualBox\

Once there, type this command to resize the .vdi file:

VBoxManage modifyhd LOCATION --resize SIZE

Replace LOCATION with the absolute file path to your .vdi image (just drag the .vdi file from File Explorer into your cmd window) and replace SIZE with the new size you want, measured in MB (1 GB = 1000 MB).
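For example (the path here is made up), growing a disk to 20 GB would look like:

VBoxManage modifyhd "C:\Users\snuffles\VirtualBox VMs\MyDistro\MyDistro.vdi" --resize 20000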

Now your .vdi is resized, but the extra disk space is still unallocated inside the virtual machine, so you’ll need to resize the partition as well. To do this, download the GParted Live ISO and make a new virtual machine that uses your resized .vdi as its disk and boots from the ISO. This simulates a live-CD boot from which you can modify your virtual partitions.

If your filesystem is ext4, like mine was when I did this, you’ll need to delete the linux-swap partition located in between your main partition and the unallocated space. Make sure you leave at least 4 GB of unallocated space so that you can add the linux-swap partition back later.

After you’ve resized your partition (and re-created the linux-swap partition in the space you left for it), you’ll be done. Boot into the virtual machine as normal and you’ll notice you have more space for Snuffles.

Image: wideopenpets.com