Cross-Platform Learning: Opinion

Last semester, my Moodle looked a little barren: only two of my classes actually had Moodle pages. This would be okay if only two of my classes had websites. But all of them did. In fact, most of the classes I took had multiple websites that I was expected to check, memorize, and be a part of throughout the semester. This is the story of how I kept up with:

  1. courses.umass.edu
  2. people.umass.edu
  3. moodle.umass.edu
  4. owl.oit.umass.edu
  5. piazza.com
  6. Flat World Learn On
  7. SimNet
  8. TopHat
  9. Investopedia
  10. Class Capture


The Beginning

At the beginning of the semester, it was impossible to make a calendar. My syllabi (which weren’t given out in class) were difficult to find. Because I didn’t have a syllabus from which I could grab the link to each teacher’s page, I had to remember the individual links to every professor’s class site. This was a total waste of my time. I couldn’t just give up, either, because the syllabus is where the class textbook was listed. I felt trapped by the learning curve of all the new URLs being slung at me. I had moments where I questioned my ability to use computers. Was I so bad that I couldn’t handle a few new websites? Had technology already left me behind?


The Semester

One of the classes I am taking is about integrating technology into various parts of your life; it’s an introductory business class with a tech focus. This class is the biggest culprit of too many websites: for homework we need website A, for class we use website B, for lab we use website C, the tests are based on the information from website D, and everything is poorly managed by website E.

Another class is entirely a pen-on-paper note-taking class. In the middle of lecture, my professor will reference something on the website and then quickly go back to dictating notes. Reflecting on it, this professor had a method of using online resources that I enjoyed: everything I needed to learn for the tests was given to me in class, and if I didn’t understand a concept, there was in-depth help on the website.

One class posts updates on Moodle that just direct me toward the online OWL course. This wasn’t terrible; I am ok with classes that give me a Moodle dashboard, so I have one place to start my search for homework and textbooks. The OWL course also included the textbook, which was really nice. One-stop shopping for one class.

My last class (I know, I am a slacker who only took 4 classes this semester) never used its online resource, which meant I never got practice with it. That became a problem when I finally did need it.


The End

I got over the learning curve of the 10 websites for the 4 classes I was taking. But next semester I will just have to go through the same thing again. I wish that all professors at UMass had a Moodle page with at least the syllabus and a link to their preferred website. But they don’t do that.

Automation with IFTTT


“If This, Then That”, or IFTTT, is a powerful, easy-to-use automation tool that can make your life easier by automating tasks that would otherwise be repetitive or inconvenient. It operates on the fundamental idea of if statements from programming. Users create “applets”, which are essentially small scripts that trigger when an event occurs. These applets can be as simple as “If I take a picture on my phone, upload it to Facebook”, or they can be much more complex. IFTTT is integrated with over 300 different channels, including major services such as Facebook, Twitter, and Dropbox, which makes automating your digital life incredibly easy.
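To make the if-statement idea concrete, here is a minimal sketch in Python of what an applet boils down to. The trigger and action functions are hypothetical stand-ins for real services, not actual IFTTT, phone, or Facebook APIs.

```python
# A toy "applet": if a new photo appears (this), upload it (that).
# Both functions are hypothetical stand-ins for real services.

def new_photo_taken():
    """Trigger: pretend a new photo just appeared on the phone."""
    return "IMG_0042.jpg"  # hypothetical filename; None would mean "no event"

def upload_to_facebook(photo):
    """Action: stand-in for a real upload call."""
    print(f"Uploading {photo} to Facebook...")

photo = new_photo_taken()
if photo is not None:          # "this" happened...
    upload_to_facebook(photo)  # ...so do "that"
```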

Getting Started with IFTTT and Your First Applet

Getting started with IFTTT is very easy. Simply head over to the IFTTT website and sign up. After signing up, you’ll be ready to start automating by creating your first applet. In this article, we will build a simple example applet that sends a text message of today’s weather report every morning.

In order to create an applet, click on “My Applets” at the top of the page, and select “New Applet”.

Now you need to select a trigger service by clicking the “this” keyword. In our example, we want to send a text message of the weather every morning, so the trigger will come from a weather service like Weather Underground. Hundreds of services are connected through IFTTT, so the possibilities are almost limitless; you can create applets based on something happening on Facebook, or even on your Android/iOS device.

Next, you need to select a trigger. Again, our sample applet sends a text message of the weather report to your phone in the morning, so this trigger is simply “Today’s weather report”. Triggers often have additional fields that need to be filled out; in this case, the time of the report.

Next, an action service must be selected. This is the “that” part of IFTTT. Our example applet is going to send a text message, so the action service is going to fall under the SMS category.

Like triggers, there are hundreds of action services that can be used in your applets. In this particular action, you can customize the text message using variables called “ingredients”.

Ingredients are simply variables provided by the trigger service. In this example, since we chose Weather Underground as the trigger service, we can customize our text message using the weather-related variables it provides, such as temperature or condition.
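A rough sketch of how ingredients slot into an action: the trigger service hands over a set of named values, and the action’s message is a template that references them. The ingredient names below are made up for illustration, not Weather Underground’s real ones.

```python
# The trigger service supplies named values ("ingredients")...
trigger_ingredients = {
    "condition": "Partly Cloudy",   # hypothetical ingredient names
    "high_temp_f": 68,
}

# ...and the SMS action's message is a template with placeholders.
message_template = "Good morning! Today: {condition}, high of {high_temp_f}F."

print(message_template.format(**trigger_ingredients))
# -> Good morning! Today: Partly Cloudy, high of 68F.
```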

After creating an action, you simply need to review your applet. In this case, we’ve just created an applet that will send a text message about the weather every day. If you’re satisfied with what it does, you can hit finish and IFTTT will trigger your applet whenever the trigger event occurs. Even from this simple applet, it is easy to see that the possibilities of automation are limitless!

Water Damage: How to prevent it, and what to do if it happens

Getting your tech wet is one of the most common things people worry about when it comes to their devices. Rightfully so; water damage is often excluded from manufacturer warranties, can permanently ruin technology under the right circumstances, and is one of the easiest things to do to a device without realizing it.

What if I told you that water, in general, is one of the least likely things to ruin your device, if you react to it properly?

Don’t get me wrong; water damage is no laughing matter. It’s the second most common reason that tech ends up kicking the bucket, the most common being drops (but not for the reason you might think). While water can quite easily ruin a device within minutes, most, if not all, of its harm can be prevented if one follows the proper steps when a device does end up getting wet.

My goal with this article is to highlight why water damage isn’t as bad as it sounds, and most importantly, how to react properly when your shiny new device ends up the victim of either a spill… or an unfortunate swan dive into a toilet.

_________________

Water, in its purest form, is pretty awful at conducting electricity. However, because most of the water we encounter on a daily basis is chock-full of dissolved ions, it’s conductive enough to cause serious damage to technology if not addressed properly.

If left alone, the conductive ions in the water will bridge together several points on your device, potentially sending harmful bursts of electricity to places that would result in the death of your device.

While that does sound bad, here’s one thing about water damage that you need to understand: you can effectively submerge a turned-off device in water, and as long as you fully dry the whole thing before turning it on again, there’s almost no chance that the water will cause any serious harm.


You need to react fast, but right. The worst thing you can do to your device once it gets wet is try to turn it on or ‘see if it still works’. The very moment that a significant amount of water gets on your device, your first instinct should be to fully power off the device, and once it’s off, disconnect the battery if it features a removable one.

As long as the device is off, it’s very unlikely that the water will be able to do anything significant, even less so if you unplug the battery. The amount of time you have to turn off your device before the water does any real damage is, honestly, complete luck. It depends on where the water seeps in, how conductive it is, and how the electricity short-circuits if a short does occur. Remember, short circuits are not innately harmful; it’s just a matter of what ends up getting shocked.

Once your device is off, your best chance for success is to be as thorough as you possibly can when drying it. Dry any visible water off the device, and try to let it sit out in front of a fan or something similar for at least 24 hours (though please don’t put it near a heater).

Rice is also great at drying your devices, especially smaller ones. Simply submerge the device in (unseasoned!) rice, and leave it again for at least 24 hours before attempting to power it on. Since rice is so great at absorbing liquids, it helps to pull out as much water as possible.


If the device in question is a laptop or desktop computer, bringing it down to us at the IT User Services Help Center in Lederle A109 is an important option to consider. We can take the computer back into the repair center and take it apart, making sure that everything is as dry as possible so we can see if it’s still functional. If the water did end up killing something in the device, we can also hopefully replace whatever component ended up getting fried.

Overall, there are three main points to be taken from this article:

Number one, spills are not death sentences for technology. As long as you follow the right procedures, making sure to immediately power off the device and not attempt to turn it back on until it’s thoroughly dried, it’s highly likely that a spill won’t result in any damage at all.

Number two is that, when it comes to water damage, speed is your best friend. The single biggest thing to keep in mind is that the faster you get the device turned off and the battery disconnected, the sooner it will be safe from short-circuiting.

Lastly, and a step that many of us forget about when it comes to stuff like this: take your time. A powered-off device that was submerged in water has a really good chance of being usable again, but that chance goes out the window if you try to turn it on too early. I’d suggest that smartphones and tablets, at the very least, should get a thorough air drying followed by at least 24 hours in rice. For laptops and desktops, however, your best bet is to either open it up yourself, or bring it down to the Help Center so we can open it up and make sure it’s thoroughly dry. You have all the time in the world to dry it off, so don’t ruin your shot at fixing it by testing it too early.

I hope this article has helped you understand why not to be afraid of spills, and what to do if one happens. By following the procedures I outlined above, and with a little bit of luck, it’s very likely that any waterlogged device you end up with could survive its unfortunate dip.

Good luck!

Tips for Gaming Better on a Budget Laptop

Whether you came to college with an old laptop, or want to buy a new one without breaking the bank, making our basic computers faster is something we’ve all thought about at some point. This article will show you some software tips and tricks to improve your gaming experience without losing your shirt, and at the end I’ll mention some budget hardware changes you can make to your laptop. First off, we’re going to talk about in-game settings.


In-Game Settings:

All games have built in settings to alter the individual user experience from controls to graphics to audio. We’ll be talking about graphics settings in this section, primarily the hardware intensive ones that don’t compromise the look of the game as much as others. This can also depend on the game and your individual GPU, so it can be helpful to research specific settings from other users in similar positions.

V-Sync:

V-Sync, or Vertical Synchronization, allows a game to synchronize its framerate with that of your monitor. Enabling this setting will increase the smoothness of the game. However, on a lower-end computer, you may be happy just to run the game at a stable FPS below your monitor’s refresh rate (note: most monitors have a 60 Hz refresh rate, i.e. 60 FPS). For that reason, you may want to disable it to allow for more stable low-FPS performance.

Anti-Aliasing:

Anti-Aliasing, or AA for short, is a rendering option which reduces the jaggedness of lines in-game. Unfortunately, the additional smoothness heavily impacts hardware usage, and disabling this while keeping other things like texture quality or draw distance higher can make big performance improvements without hurting a game’s appearance too much. Additionally, there are many different kinds of AA that games might have settings for. MSAA (Multisample AA) and the even more intensive TXAA (Temporal AA) are both better smoothing processes that have an even bigger impact on performance, so turning these off on lower-end machines is almost always a must. FXAA (Fast Approximate AA) uses the least processing power, and can therefore be a nice setting to leave on if your computer can handle it.

Depth of Field:

This setting makes things further away from your character blurrier. Making things blurrier might seem like it would make the game run faster; however, it actually puts a greater strain on your system, as it needs to make additional calculations to create the effect. Shutting this off can yield improvements in performance, and some players even prefer it off, as it allows them to see distant objects more clearly.

Other Settings:

While the aforementioned are the heaviest hitters in terms of performance, changing some other settings can help increase stability and performance too (beyond simple texture-quality and draw-distance tweaks). Shadows and reflections often go unnoticed compared to other effects, so while you may not need to turn them off, turning them down can definitely make an impact. Motion blur should be turned off completely, as it can make quick movements result in heavy lag spikes.

Individual Tweaks:

The guide above is a good starting point for graphics settings, but because there are so many different models of computer, there is an equally large number of possible combinations of settings. From this point, you can start to increase settings slowly to find the sweet spot between performance and quality.

Software:

Before we talk about some more advanced tips: it’s good practice to close applications that you are not using, to free up CPU, memory, and disk resources. This alone will help immensely in allowing games to run better on your system.

Task Manager Basics:

Assuming you’ve tried to game on a slower computer, you’ll know how annoying it is when the game is running fine and suddenly everything slows down to slideshow speed and you fall off a cliff. Chances are that this kind of lag spike is caused by other “tasks” running in the background, preventing the game from getting the power it needs to keep going. Or perhaps your computer has been on for a while, so when you start the game, it runs slower than its maximum speed. Even though you hit the “X” button on a window, what’s called the “process tree” may not have been completely terminated. (Think of this like cutting down a weed but leaving the roots.) This can result in resources being taken up by idle programs that you aren’t using right now. It’s at this point that Task Manager becomes your best friend.

To open Task Manager, simply press CTRL + SHIFT + ESC, or press CTRL + ALT + DEL and select Task Manager from the menu. When it first appears, you’ll notice that only the programs you have open are listed; click the “More Details” button at the bottom of the window to expand Task Manager. Now you’ll see a series of tabs, the first one being “Processes”, which gives you an excellent overview of everything your CPU, memory, disk, and network are crunching on. Clicking any of these column headers will bring the processes using the highest amount of that resource to the top. Now you can see what’s really using your computer’s processing power.

It is important to realize that many of these processes are part of your operating system, and therefore cannot be terminated without causing system instability. However, things like Google Chrome and other applications can be closed by right-clicking and hitting “End Task”. If you’re ever unsure whether you can safely end a process, a quick Google of the process in question will most likely point you in the right direction.
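If you’re curious what Task Manager is doing under the hood, here is a small sketch using the third-party psutil library (installed with pip install psutil) that lists the five processes using the most memory, much like sorting the Memory column.

```python
# List the five processes using the most memory, Task Manager style.
# Requires the third-party psutil library: pip install psutil

import psutil

procs = []
for p in psutil.process_iter(attrs=["pid", "name", "memory_info"]):
    mem = p.info["memory_info"]
    if mem is None:
        continue  # some system processes deny access, just like in Task Manager
    procs.append((mem.rss, p.info["pid"], p.info["name"] or "?"))

for rss, pid, name in sorted(procs, reverse=True)[:5]:
    print(f"{name:<30} pid={pid:<7} memory={rss / 1024**2:,.0f} MB")
```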

Startup Processes:

Here is where you can really make a difference to your computer’s overall performance, not just for gaming. From Task Manager, if you select the “Startup” tab, you will see a list of all programs and services that can start when your computer is turned on. Task Manager gives an impact rating of how much each task slows down your computer’s boot time. The gaming app Steam, for example, can noticeably slow down a computer on startup. A good rule of thumb is to allow virus protection to start with Windows; everything else is up to individual preference. Disabling these startup processes prevents unnecessary tasks from ever being opened, leaving more hardware resources available for gaming.

Power Usage:

You probably know that, unlike desktops, laptops contain a battery. What you may not know is that you can alter your battery’s behavior to increase performance, as long as you don’t mind it draining a little faster. On the taskbar, which is by default located at the bottom of your screen, you will notice a collection of small icons next to the date and time on the right, one of which looks like a battery. Left-clicking it brings up a quick battery menu; right-clicking it brings up a menu with a “Power Options” entry.


Clicking this will bring up a settings window which allows you to change and customize your power plan for your needs. By default it is set to “Balanced”, but changing to “High Performance” can increase your computer’s gaming potential significantly. Be warned that battery duration will decrease on the High Performance setting, although it is possible to configure the plan’s behavior separately for when your computer is on battery or plugged in.

Hardware:

Unlike desktops, laptops do not have many upgrade paths. However, one option exists for almost every computer that can have a massive effect on performance if you’re willing to spend a little extra.

Hard Disk (HDD) to Solid State (SSD) Drive Upgrade:

Chances are that if you have a budget computer, it came with a traditional spinning hard drive. For manufacturers this makes sense, as HDDs are cheaper than solid state drives and work perfectly well for light use. Games, however, demand that the drive recall and store data very quickly, and laptop HDDs sometimes fall behind. Additionally, many laptops have motion sensors built in which restrict read/write operations while the computer is in motion, to prevent damage to the spinning disk inside the HDD. An upgrade to an SSD not only eliminates this restriction, but also offers much faster read/write times thanks to the lack of moving parts. Although SSDs can get quite expensive depending on the size you want, companies such as Crucial or Kingston offer comparatively cheap alternatives to Samsung or Intel while still giving you the core benefits of an SSD. There are plenty of tutorials online demonstrating how to install a new drive in your laptop, but make sure you’re comfortable with the risks before attempting it, or simply take your laptop to a repair store and have them do it for you. It’s worth mentioning that when you install a new drive, you will need to reinstall Windows and all of the applications from your old drive.
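If you want to see how your current drive performs before spending money, here is a rough Python sketch that times a large sequential read. The file path is a placeholder you would point at any big file on the drive; note that the operating system caches recently read files, so the first run is the honest one. As a loose benchmark, typical laptop HDDs manage on the order of 100 MB/s sequentially, while SATA SSDs are several times faster.

```python
# Time a large sequential read to estimate drive throughput.
# TEST_FILE is a hypothetical placeholder path; use any big file.

import time

TEST_FILE = r"C:\path\to\some_large_file.bin"  # placeholder
CHUNK = 1024 * 1024  # read 1 MB at a time

total = 0
start = time.perf_counter()
with open(TEST_FILE, "rb") as f:
    while chunk := f.read(CHUNK):
        total += len(chunk)
elapsed = time.perf_counter() - start

mb = total / 1024**2
print(f"Read {mb:.0f} MB in {elapsed:.2f} s ({mb / elapsed:.0f} MB/s)")
```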

Memory Upgrade (RAM):

Some laptops have an extra memory slot, or just ship with a lower capacity than what they are capable of holding. Most budget laptops will ship with 4GB of memory, which is often not enough to support both the system, and a game.

Upgrading or adding memory can give your computer more headroom to process and store data without bogging down your entire system. Unlike with SSD upgrades, memory is very specific: it is easy to buy a new stick that fits in your computer but does not function with its other components. It is therefore critical to do your research before buying any more memory for your computer; that includes finding out your model’s maximum capacity, supported speed, and generation. The online technology store Newegg has a service here that can help you find compatible memory types for your machine.

Disclaimer: 

While these tips and tricks can help your computer to run games faster, there is a limit to what hardware is capable of. Budget laptops are great for the price point, and these user tricks will help squeeze out all their potential, but some games will simply not run on your machine. Make sure to check a game’s minimum and recommended specs before purchasing/downloading. If your computer falls short of minimum requirements, it might be time to find a different game or upgrade your setup.

PCIe Solid State Drives: What They Are and Why You Should Care

Consumer computers are largely moving away from hard disk drives, mostly because solid state drives have gotten so cheap. Upgrading to a solid state drive is one of the best things you can do for your computer: unlike a RAM or CPU upgrade, you will notice a dramatic difference in day-to-day usage coming from a hard drive. The only real benefit of a traditional hard drive over a solid state drive is capacity per dollar; if you want anything over 1 TB, you’re basically going to have to settle for a hard drive.

Solid-state drive with a SATA connector

While SSD prices have come down, SSD technology has also improved dramatically. The latest trend for solid state drives is a move away from SATA to PCIe. Serial ATA, or SATA, is the bus interface that normally connects drives to computers. The latest version, SATA 3, has a bandwidth limit of 750 Megabytes per second. This used to be plenty for hard drives and even early SSDs; however, modern SSDs can easily saturate that bus. This is why many SSDs have started to move to PCIe. Depending on the implementation, PCIe can do up to 32 Gigabytes per second. (That’s nearly 43 times as fast!) This means that SSDs have plenty of room to grow in the future. There are a couple of different technologies and terms related to PCIe SSDs that you may want to make yourself familiar with:

M.2

M.2 is a new interface for connecting SSDs to motherboards. This connector is much smaller than the SATA connector, and allows SSDs to be much smaller and to attach physically to the board instead of connecting via a cable. The confusing thing about M.2 is that it can operate drives over either SATA or PCIe; most newer drives and motherboards only support the PCIe version. M.2 drives come in a few standard lengths, ranging from 16 to 110 millimeters, and there are a few different connector keyings with varying pin layouts. M.2 connectors also support other PCIe devices, such as wireless cards.

NVMe

NVM Express is a host controller interface that allows the CPU to talk to the SSD. It is meant to replace AHCI, which was designed in the era of spinning hard drives and is too slow for managing solid state drives; NVMe was designed specifically for that purpose, letting the CPU communicate with the drive at much lower latency. NVMe is largely the reason that current PCIe SSDs can reach speeds over 3 Gigabytes per second.

Solid State is soon to become a universal standard as older machines are phased out and consumer expectations rise. Don’t get left in the dust.

How to Fund Your Project or Organization with Online Crowdfunding!


Most of us remember being in high school, and having people try to sell us candy bars at outrageous prices in order to fund their mission trips, charity organizations, abroad experiences, and other such things. I always remember being impressed at the commitment of people, and confused as to how they managed to raise enough money selling candy bars! Of course, in many of these cases, parents and family members were providing much of the funding.

In this new era of interconnection through social media, it is easier than ever to raise money from your social circle using the internet. This kind of fundraising is called crowdfunding, and most of us know it best through Kickstarter.

Kickstarter is a crowdfunding platform which allows people to generate funds for various projects. These projects range from the mundane, such as this (for anyone who doesn’t feel like clicking on the link, that is a man trying to raise $15 to make a french toast pancake waffle), to the brilliant (the Pebble smartwatch), to the truly disappointing and scandalous (the Yogventures video game).

One cool aspect of Kickstarter is that only successful campaigns get to keep the money. For instance, if your project requires $100, you must actually raise $100 in order to be granted the money; if the campaign falls short, the money is returned to the donors. This makes people more likely to fund projects, as they know that if the project is not fully funded, the creators will not abscond with their cash. In addition, many Kickstarters offer rewards based on how much people donate. For example, a video game development project might give a cool exclusive skin to people who donate $5, a signed copy to people who donate $30, and a studio tour to people who donate over $1,000.

Now you may be asking, “this is cool and all, but how does this apply to me? I have no intention of creating a video game or developing some huge project.”

Crowdfunding does not need to be limited to projects and startups. For instance, if you are a member of a Registered Student Organization here at UMass, you may (ok, you almost definitely do) find yourself thinking that you do not have enough money! Maybe you have a trip to go on, an event you want to host, or equipment you need to buy. Crowdfunding is a great way to raise some funds! The UMass Minute Fund is a website which allows student groups on campus to crowdsource money. For RSOs, the Minute Fund is a better platform than places such as Kickstarter, because it does not take any cut of the money raised (as Kickstarter and other for-profit companies do). This really works, too! Here is a trip that I went on, funded by the Minute Fund. Here is the HackUMass Minute Fund (which was also fully funded).

In short, when your organization is running out of cash, your social circle might be able to sponsor you. Create these pages, share them on Facebook, Twitter, etc, and watch the money for your organization roll in!

How to merge Windows Mail and Calendar with iCloud

If you are using a Windows PC and an iPhone, you might want to merge your calendar and mail with iCloud instead of registering a new Microsoft account. Apple’s services recently became compatible with Windows 10, and the integration is very easy to set up and use.

STEP 1:

Click the Start button, or search for Settings with Cortana

STEP 2:

Go to Settings and click Accounts

STEP 3:

Click Add an account

STEP 4:

Select iCloud

STEP 5:

Enter your iCloud email and password. Note: this is not the regular password for your Apple ID. You need to generate an app-specific password through two-factor authentication on the Apple ID website. Here’s how:

  • Go to appleid.apple.com and sign in with your regular email and password
  • Verify your identity with two-factor authentication
  • Click ‘Generate Passwords’
  • You are all set: use the newly generated password to log in to the Windows 10 account service


How to Import your Academic Moodle Calendar into your Personal Google Calendar

How to Export your Moodle Calendar for calendar subscription
1. Navigate to https://moodle.umass.edu/ and log in with NetID and password
2. Click under your name in the upper right hand corner and click on Dashboard
3. Scroll to the bottom of the page and click on Go to calendar… in the bottom right hand corner
4. Switch the drop-down menu to specify whether you want a specific class or all of your classes bundled under one calendar (this is important later, at step 6)
5. Click on the Export calendar button in the middle of the page
6. Some settings will show up regarding exporting your UMass Moodle Calendar
a. Under the Export* menu, I would recommend choosing All events if you decided earlier to bundle your classes into one export; otherwise, if you’re exporting classes individually, I would recommend selecting Events related to courses
b. Under the for* menu, I would recommend choosing Custom range, because it guarantees all the events will be included
7. Click on Get calendar URL and *triple click* on the generated Calendar URL (as it may overlap with the Monthly view column)
8. You can now import this calendar into any calendar client that allows for import by URL

Note: this export may have to be regenerated in the future, because it won’t pick up events added after the export was created.
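For the curious, the exported calendar is just an iCalendar (.ics) text file, and “import by URL” amounts to fetching and parsing it. Here is a rough Python sketch using the third-party requests library; the URL is a placeholder for the one Moodle generates for you, and real calendar clients handle details (like folded long lines) that this naive parser skips.

```python
# Fetch a calendar feed and print each event's start time and name.
# ICS_URL is a placeholder; paste in the URL Moodle generated for you.
# Requires: pip install requests

import requests

ICS_URL = "https://example.com/your-moodle-export.ics"  # placeholder

text = requests.get(ICS_URL).text

summary = start = None
for line in text.splitlines():
    if line.startswith("SUMMARY:"):
        summary = line[len("SUMMARY:"):]
    elif line.startswith("DTSTART"):          # e.g. "DTSTART:20170905T130000Z"
        start = line.split(":", 1)[1]
    elif line.startswith("END:VEVENT"):
        print(start, "-", summary)
```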


How to Import this Moodle Calendar into Google Calendar
1. Navigate to https://www.google.com/calendar and log in with your credentials
2. On the left-hand side, next to Other calendars, click the downward-facing caret symbol and click on Add by URL
3. Paste the copied URL; this step may take 20 or so seconds to load the new calendar
a. This step will fail if the generated calendar URL was not copied in its entirety.
4. You can rename this calendar by clicking on the downward-facing caret symbol to the right of it and clicking on Calendar settings, then changing the Calendar Name: field
Happy Google Calendaring!

Quantum Computers: How Google & NASA are pushing Artificial Intelligence to its limit


“If you think you understand quantum physics, you don’t understand quantum physics.” That remark, often attributed to Richard Feynman, captures the fact that we simply do not yet fully understand the mechanics of the quantum world. NASA, Google, and D-Wave are trying to figure this out by revolutionizing our understanding of physics and computing with the first commercial quantum computer, one that, on certain problems, runs up to 100 million times faster than traditional computers.

Quantum Computers: How they work

To understand how quantum computers work, you must first recognize how traditional computers work. For several decades, a processor’s base component has been the transistor. A transistor either allows or blocks the flow of electrons (i.e. electricity) with a gate, so it can be in one of two possible states: on or off, flowing or not flowing. The value of a transistor is therefore binary, and digital information is represented in binary digits, or bits for short. A single bit is very basic, but every bit added to a group doubles the number of values it can represent. More transistors means faster data processing, and to fit more transistors on a silicon chip we must keep shrinking them. Transistors nowadays have gotten so small that they measure only 14 nm, roughly 8 times smaller than an HIV virus and 500 times smaller than a red blood cell.

As transistors approach the size of only a few atoms, electrons can simply pass through a blocked gate, a phenomenon called quantum tunneling. In the quantum realm, physics works differently from what we are used to, and conventional computer designs start making less and less sense at this scale. We are starting to see a physical barrier to further progress down this path, but scientists are now using these unusual quantum properties to their advantage to develop quantum computers.

Introducing the Qubit!

Where ordinary computers use bits as their smallest unit of information, quantum computers use qubits. Like bits, qubits can represent the values 0 or 1. One way to realize a qubit is with a photon, where its polarization represents the value; what separates qubits from bits is that they can also be in any proportion of both states at once, a property called superposition. You can test the value of a photon by passing it through a filter: it will collapse to be either vertically or horizontally polarized (0 or 1). Unobserved, the qubit is in superposition, with probabilities for either state, but the instant you measure it, it collapses to one of the definite states. This is the game-changer for computing.


When normal bits are lined up they can represent one of many possible values. For example, 4 bits can represent one of 16 (2^4) possible values depending on their orientation. 4 qubits on the other hand can represent all of these 16 combinations at once, with each added qubit growing the number of possible outcomes exponentially!
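As a toy illustration of that exponential growth, here is what tracking a 4-qubit state looks like numerically with NumPy: one amplitude per classical outcome, 2^n of them in total.

```python
# n qubits are described by 2**n amplitudes, one per classical outcome.

import numpy as np

n = 4
state = np.ones(2**n) / np.sqrt(2**n)  # equal superposition over all 16 outcomes

print(len(state))                    # 16 amplitudes tracked at once
print(np.sum(np.abs(state) ** 2))    # outcome probabilities sum to 1.0
```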

Qubits also exhibit another property, which we call entanglement: a close connection that makes entangled qubits react to a change in the other’s state instantaneously, regardless of the distance between them. This means that when you measure the value of one qubit, you can deduce the value of another without even having to look at it!

Traditional vs Quantum: Calculations Compared

Performing logic on traditional computers is pretty simple. Computers perform logic with logic gates, which take a simple set of inputs and produce a single output (based on AND, OR, XOR, and NAND). For example, the bits 0 (false) and 1 (true) passed through an AND gate produce 0, since both bits aren’t true; 0 and 1 passed through an OR gate produce 1, since only one of the two needs to be true for the output to be true. Quantum gates work on a much more complex level: they take an input of superpositions (qubits, each with probabilities of being 0 or 1), rotate those probabilities, and produce another superposition as output. Measuring the outcome collapses the superpositions into an actual sequence of 0s and 1s, one final definite answer. What this means is that, in effect, you can carry out the entire lot of calculations possible with a setup all at the same time!
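To contrast the two styles of logic, here is a small NumPy sketch: classical gates map definite bits to a definite bit, while a quantum gate (the standard Hadamard gate here) rotates a qubit’s amplitudes into a superposition.

```python
import numpy as np

# Classical logic: definite inputs, definite output.
print(0 & 1)  # AND -> 0
print(0 | 1)  # OR  -> 1

# Quantum logic: start in the definite state |0>...
qubit = np.array([1.0, 0.0])            # amplitudes for |0> and |1>

# ...apply the Hadamard gate, a rotation of those amplitudes...
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
qubit = H @ qubit

# ...and measuring now yields 0 or 1, each with probability 0.5.
print(np.abs(qubit) ** 2)               # [0.5 0.5]
```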


When you measure the result of a qubit superposition, it will probably give you the answer you want, but only probably. To be sure the outcome is correct, you may need to double-check and run the computation again. Even so, exploiting the properties of superposition and entanglement can be exponentially more efficient than anything possible on a traditional computer.

What Quantum Computers mean for our future

Quantum computers will most likely not replace our home computers, but for certain tasks they are far superior. In applications such as searching corporate databases, a computer may need to examine every entry in a table; a quantum computer can do this task in roughly the square root of that time, and for tables with billions of entries this can save a tremendous amount of time and resources. The most famous potential use of quantum computers is IT security. Tasks such as online banking and browsing your email are kept secure by encryption, where a public key is published so that anyone can encode messages only you can decode. The problem is that a public key can, in principle, be used to calculate the owner’s secret private key. Doing that math on a normal computer would literally take years of trial and error, but a quantum computer could do it in a breeze, with an exponential decrease in calculation time. Simulating the quantum world is also intense on resources; regular computers run out of power for bigger structures such as molecules. So why not simulate quantum physics with actual quantum physics? Quantum simulations, for instance, could lead to insights on proteins that revolutionize medicine as we know it.


What’s going on now in Quantum Computing? How NASA & Google are using AI to reveal nature’s biggest secrets.

We’re unsure whether quantum computers will remain a specialized tool or become a big revolution for humanity. We do not know the limits of the technology, but there is only one way to find out. One of the first commercial quantum computers, developed by D-Wave, is housed at Google and NASA’s research center in California. They operate the chip at an incredible temperature, nearly 200 times colder than interstellar space. They are currently focused on using it to solve optimization problems, finding the best outcome given a set of data; for example, finding the best flight path to visit a set of places you’d like to see. Google and NASA are also using artificial intelligence on this computer to further our understanding of the natural world. Since it operates on quantum-level mechanics beyond our current knowledge, we can ask it questions that we may never otherwise be able to answer. Questions such as “are we alone?” and “where did we come from?” can be explored. We have evolved into creatures able to ask about the nature of physical reality, and being able to probe the unknown is even more awesome as a species. We have the power to do it, and we must do it, because that is what it means to be human.

Bonus Bit: Surviving the Steam Summer Sale

Ahh yes, the Steam Summer Sale: the glorious and magical two weeks of wallet-crushing sales and bundles. Whether you are new or a grizzled veteran, there is always something to be found at a price you thought was impossible. But wait, it’s dangerous out there! Take a read through this before you head out into the tsunami of sale tags, to make sure you get the most out of your summer sale action.


Quick Details on the Summer Sale

What: Large discounts on hundreds of video games from the largest PC gaming platform
Who: Anyone who owns a computer
When: June 22nd, 1 PM EST until July 5th, 1 PM EST
Where: store.steampowered.com

Changes and Updates to the Summer Sale Format

Veterans of past Summer Sales will remember daily deals and flash sales, which are missing from this year’s sale; instead, Steam will curate a list of games already on sale that they think you should take a look at. This unfortunately limits what Valve can do with the sale. Instead of giving users games to play like in previous years (such as the monster clicker game, or being split into colored teams), they have decided to release limited summer sale stickers. What are stickers, you ask? Stickers act in a similar way to Trading Cards, but instead of dropping from time spent in game, they drop based on certain activities that Valve wants to encourage (checking Steam each day during the sale, etc.), and if you fill up your sticker book, you may get a special surprise. Trading Cards are also back this year, and seem to be dropping in the same manner as in previous sales, based on how much money you have spent during the sale (currently each $10 increase gets you a card), with a special badge that can be crafted if you collect all the cards.


Tips for Newcomers

Your first Summer Sale is almost always the most memorable: seeing hundreds of games that you want for 60-95% off embeds a nostalgic feeling that is hard to shake. Many veterans will complain that the sales aren’t what they used to be, but in reality it is more likely that they’ve already picked up the games they want, so the sale seems to lose a bit of its luster for them. To the newbie, though, it is all brand new and very easy to get lost in the fray. To keep from getting burnt out during the first week of sales, I suggest you check out the r/Steam and r/pcmasterrace subreddits (disclaimer: PCMR is a reddit group by and for PC gaming; there are no political allegiances, and mac heathens and console peasants are welcomed) and the Summer Sale megathreads, to keep up with the special sales and answer any questions that you have.

Even though it is a bit outdated, I suggest keeping this flow chart in mind, as planning your purchases can help keep you from breaking the bank. Another tidbit: Steam has a refund option. As long as you have owned a game for less than 14 days and have less than 2 hours of playtime, you can refund it. But be careful: Steam refunds whole purchases, not single games, so if you buy 5 games in one purchase and want to refund 1, you will have to refund the other 4 as well. Once you get down to playing your new games, don’t forget to include other people; Discord, TeamSpeak, and Mumble are great ways of voice chatting with your friends if the Steam VoIP service doesn’t interest you, and can provide structure if you are playing squad MMOs.

Remember to stay safe out there, it’s a big sale but with a bit of planning and some self control you and your wallet should stay intact.


Content Providers and Net Neutrality: A Double-Edged Sword

Source: http://www.thetelecomblog.com/2016/06/15/fccs-net-neutrality-upheld-in-appeals-court-decision/


Net neutrality is the principle that data should be treated equally by internet service providers (ISPs), without favoring or blocking particular products or websites. Those in favor of net neutrality argue that ISPs should not be able to block access to a website run by a competitor, or offer “fast lanes” that deliver data more efficiently for a hefty fee. Imagine if Verizon could stop customers from researching a switch to Comcast, or block access to negative press about its business practices. For ISPs, network inequality is a pretty sweet deal: broadband providers can charge premiums for customers to access existing network structures, and control the content viewed by subscribers.

Essentially, a lack of network neutrality actively promotes discrimination against competitors and encourages ISPs to deliberately limit high-speed data access. Throttling speeds even though production costs after initial development are negligible is a form of “artificial scarcity”: supply is intentionally restricted, which makes the item, internet access, more valuable.

Without net neutrality, internet providers have free rein over deciding which content reaches their subscribers. In 2014, this issue came to a head when Comcast and other broadband suppliers intentionally restricted the data transmission rates for Netflix services. To appease customers with paid subscriptions who could no longer watch the streaming service, Netflix agreed to pay the broadband companies tens of millions of dollars a year. Evidently, a lack of net neutrality creates a conflict of interest between service providers and content firms like Google, Facebook, and Netflix. These content providers want consumers to have unfettered access to their services, and tolls for network access create barriers for internet-based services which rely on ad revenue and network traffic.

Despite the threat that a lack of network neutrality poses to content-centric services, many tech companies have been hesitant to vehemently oppose restrictions on data access. Facebook is investing in creating its own ecosystem: with Facebook as a central hub where you can connect with friends, view businesses, listen to music, and play games, the company has little incentive to petition for the free and universal flow of information and web traffic. From a corporate perspective, every web interaction would ideally be done through Facebook. In a similar vein, Google has been moving closer and closer to becoming an internet provider itself. Company initiatives like Google Fiber, Project Fi, and Project Loon are stepping-stones to Google dominating both the web-traffic and web-access businesses. This creates a double-edged sword, where unrestricted internet access both helps and harms content providers: while tech companies do not want restricted access to their own sites, they would love to restrict consumer access to that of their rivals. The burden of protecting a free internet and the unrestricted flow of information therefore lies on consumers.

Password Managers and You

Today we’re going to deal with an issue that I’m sure many of us run into on a daily basis: managing passwords. Given that you probably use a bajillion different services, each of which has its own password requirements, and given that UMass makes you change your password once a year, you probably have trouble keeping them all straight. Luckily for you, there are tools you can use to keep your passwords in order!


With these tools, you use one super-strong password to keep all your other passwords safe, easily searchable, and in one place. They can often be used to automatically fill in login info on the web as well.


There are many password managers out there; you can find reviews of them simply by googling “password manager.” The two I am going to mention here are Chrome’s built-in password manager and LastPass.


The first and easiest one, Google Smart Lock, is so ubiquitous that you’ve probably been using it all along! Any time Google Chrome asks you to “save” a password, it gets stored in Google Smart Lock. If you want to see your passwords, or manually add new ones, simply go to passwords.google.com and log in with your (non-UMass) Google account. Voila! You can see all of the passwords you have saved while using Chrome.


What about if you aren’t a Chrome user? Or maybe you don’t like the idea of Google storing your data… What can you do?

You can use a manager like LastPass. This browser extension/mobile app also keeps your passwords safe and encrypted, and you can even set up two-factor authentication (so that you would need two devices on you to see your saved passwords). You can find more information here: https://www.lastpass.com/how-it-works, but it works in essentially the same way as Google Smart Lock: you can save passwords, add new passwords, automatically fill out forms, etc.
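If you’d like to see the core idea in code, here is a tiny sketch using the third-party Python keyring library (pip install keyring), which stores secrets in your operating system’s credential vault. The service and account names below are made up for the example.

```python
import keyring

# Save a password once, under a (made-up) service and account name...
keyring.set_password("example-webmail", "jsmith", "correct horse battery staple")

# ...and retrieve it later instead of memorizing it.
print(keyring.get_password("example-webmail", "jsmith"))
```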


So get one of these managers, and never worry about forgetting your many many passwords again!

The New Face of the FCC

With any incoming president, interest swirls around cabinet nominees and appointees, many of whom set precedent for their departments; perhaps none more so than Ajit Pai, Chairman of the Federal Communications Commission. An advocate for deregulation and free market ideals, Pai has a unique opportunity to shape our world into something vastly new and different.

Born in 1973, Pai graduated from Harvard with a BA in Social Studies in 1994 and earned a J.D. from the University of Chicago in 1997. He then clerked for the US District Court for the Eastern District of Louisiana before working for the Department of Justice’s Antitrust Division, where he specialized in mergers and acquisitions. He later served as an associate general counsel for Verizon, where he dealt with competition matters, regulatory issues, and counseling of business units on broadband initiatives. From there he served on several subcommittees until 2007, when he was appointed to the FCC’s Office of General Counsel, ultimately serving as Deputy General Counsel. In 2011 he was nominated for the Republican seat on the FCC and unanimously confirmed, serving as a commissioner through 2016.

Pai’s controversial stance on net neutrality stems from his view that the rules rest on an overly conservative reading of the laws defining the FCC’s responsibilities, and he has claimed that such regulations may lead to the FCC regulating political speech. He advocates for the marketplace of ideas, stating to the Washington Examiner: “I think it’s dangerous, frankly, that we don’t see more often people espousing the First Amendment view that we should have a robust marketplace of ideas where everybody should be willing and able to participate.” While it will take time for his tenure to have an effect on regulations, he will definitely speed up the pace of work; in a 2012 speech at Carnegie Mellon he said, “we need to start taking our other statutory and internal deadlines more seriously” and that “the FCC should be as nimble as the industry we oversee.” From corporate mergers to changing how radio spectrum is portioned out, changes will be coming. In that speech Pai shared his view of a different FCC, where the free market is utilized to bring about change and regulations are used to increase competition. The next four years will be written by free market ideals and a furious pace of work, hopefully leading to better choice and coverage for consumers.

Pai’s presence as FCC Chairman will leave a lasting mark on the history of the commission. Some changes will be steps in the right direction, others may be missteps, but all of them have the possibility of changing how you interact with the rest of the world.

Today in “Absurd Tech Stories”: Burger King vs Google

“OK, Google: What is the Whopper burger?”

The internet is all over a story today involving burger giant Burger King and tech giant Google, in which Burger King released a new ad that takes advantage of Google Home, the in-home personal assistant created by Google. The device mirrors other in-home assistants like Amazon’s Alexa.

Google Home.

The short commercial, titled “BURGER KING® | Connected Whopper®” (shown below), features a Burger King employee using the phrase “OK, Google” to purposefully trigger in-home devices or mobile phones with Google voice capability to conduct a Google search for the Whopper. On the surface, this comes across as a pretty clever marketing ploy by BK, taking advantage of current tech trends to make the commercial more relatable.

However, in true internet fashion, those who wanted to have a little fun caught wind of the ad pretty quickly and turned this innocent commercial into something a little more ridiculous.

Asking Google Home the question “OK, Google: What is the Whopper burger?” gives the user a description based on the current Wikipedia article; this applies to anything searched for in this fashion. Users who wanted to mess around started editing the first line of the Wikipedia article, making it say things like that the Whopper’s main ingredient was cyanide, or that the Whopper was “cancer-causing”, which would then be read out whenever someone ran the voice command.
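To get a feel for the kind of lookup involved, here is a sketch using Wikipedia’s public REST summary endpoint and the third-party Python requests library. Whether Google Home queries this exact endpoint is an assumption on my part; the point is simply that the answer comes from whatever the live article says at that moment.

```python
# Fetch the live summary of the "Whopper" Wikipedia article.
# Requires: pip install requests

import requests

resp = requests.get(
    "https://en.wikipedia.org/api/rest_v1/page/summary/Whopper",
    headers={"User-Agent": "demo-script/1.0"},  # Wikipedia asks clients to identify themselves
)
print(resp.json()["extract"])  # whatever the article's opening text says right now
```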

Within three hours, Google had modified their voice detection to not interact at all with the Burger King commercial. Users could still normally ask the device the same phrase, but it seemed that Google didn’t take too kindly to the small disturbance that this commercial was causing and shut it down as fast as it started.

Stories of internet trolls taking advantage of AI programs have become more and more prevalent in recent years. In March of 2016, Twitter users were able to manipulate Tay, Microsoft’s Twitter chatbot, into making remarkably inflammatory and inappropriate comments.


The commercial can be viewed here:

App Review: Glitché

Fun fact: you can type the “é” character on Mac OS by holding down the “e” key until an accent menu pops up.


From there, simply select the second option, “é”, with your mouse and you’ll be right as rain. I’m only telling you this because the application I’ll be discussing today is called Glitché, not “Glitche”.


Glitché is an app that provides users with “a full range of tools and options to turn images into masterpieces of digital art.” That description is from the app’s official website; a website which also proudly displays the following quote:

[Screenshot of a pull quote from a Mr. Knight, comparing Glitché to Photoshop and calling it free]

Either this quote is outdated or Mr. Knight is putting more emphasis on the word “compared” than I’m giving him credit for. While, yes, one could argue that contextually a $0.99 application would seem like a free download compared to a nearly $400 post-production suite, I might be more inclined to ask how you define the word “free”.

You see, Glitché is actually $0.99…unless you want the other features. Do you want Hi-Res exports? That’ll be $2.99. Do you want to be able to edit videos? Another $2.99, please. Do you want camera filters? $2.99 it is!


So Glitché is actually more like $9.96, but that doesn’t sound as good as $0.99, does it? You might argue that I’m making a big deal out of this, but I’m just trying to put it all in perspective for you. From here on out, I want you to understand that the program I’m critiquing charges $10 for the full experience, which is fairly expensive for a phone application.

Another issue I have with this quote and the description given by the website is that Glitché isn’t really competing with Adobe Photoshop. Glitché isn’t a replacement for your post-production suite, nor is it your one-stop shop for turning images into masterpieces of digital art; rather, Glitché strives to give you a wide selection of tools to achieve a very specific look. This aesthetic can best be described as a mixture of To Adrian Rodriguez, With Love and a modern take on cyberpunk. Essentially, the app warps and distorts a given image to make it look visually corrupted, glitched, or like VHS footage. It’s a bit hard to describe, so here are a few examples of some of the more interesting filters.


Unedited photo for reference


The “GLITCH” filter. Holding down your finger on the screen causes the flickering and tearing to increase. Tapping once stops the flickering.


The “CHNNLS” filter. Dragging your finger across the screen sends a wave of rainbow colors across it. The color of the distortion can be changed.


The “SCREEN” filter works like the “CHNNLS” filter, only it distorts the entire image.


The “GRID” filter turns your image into a 3D abstract object akin to something one might see in an EDM music video.


The “LCD” filter lets you move the colors with your thumb while the outline of your image remains fixed.


The “VHS” filter applies VHS scan lines and warps more aggressively if you press your thumb down on the image.


The “DATAMOSH” filter. The direction of the distortion depends on the green dot you press in the center reticle. The reticle disappears once the image is saved.


The “EDGES” filter can be adjusted using both the slider below your image and with your thumb.


The “FISHEYE” filter creates a 3D fisheye overlay you can move around on your image with your thumb.


The “TAPE” filter works in a similar fashion to the “VHS” filter, only moving your thumb across it creates a more subtle distortion.

Listing off individual filters admittedly isn’t doing the app justice: while you can use a single filter, the app also allows you to combine and overlay multiple filters to achieve different effects. Here’s something I made using a combination of five filters:

[Image: a piece made by combining five filters]

You can also edit video in a similar fashion (after paying the required $2.99).

The interface itself is simplistic and easy to navigate, though the application lacks certain features one might expect. You can’t save and load projects, you can’t favorite filters, and you can’t perform any complex video editing outside of applying a filter. The app has crashed on me a few times in the past, though this is a rare occurrence. The app is regularly updated with new features and filters.

So, $0.99 gets you 33 filters and limits you to Lo-Res exports and GIF exports. $9.96 gets you 33 filters, the ability to export in Hi-Res, the ability to export to GIF, the ability to edit videos, and the ability to record video in the actual application while using said filters.

I keep bringing this back to the cost of the app because that’s really the only place where opinions may vary. The app does what it sets out to do, but the price for the full package leaves a lot to be desired. There are definitely people out there who would gladly pay $10 for this aesthetic, and there are plenty more who would shake their head at it. If any of the filters or images I’ve shown you seem worth $10, then I think you’ll enjoy Glitché. However, if you think this app is a bit too simplistic and overpriced for what it is, I recommend you spend your money elsewhere. It really all boils down to the cost, as the app itself works fine for what it is. In my opinion, the app would be a great deal at $3 or even $5; however, $10 is a bit much to ask for in return for a few nifty filters.

 

Browsing the Web Anonymously with a VPN

You may have heard someone say that they use a VPN to protect themselves on the internet. What is a VPN? What does it do? How can you use it to protect yourself?

VPN stands for virtual private network. A VPN is essentially a simulated connection (hence the ‘virtual’ part) to a private network, one that you can’t normally reach from outside or over the internet. VPNs allow users to connect to a local private (e.g. corporate) network remotely from, say, their home or a coffee shop, and to interact with that network as if they were connected to it directly. For example, say a developer at a tech startup wants to work on her project at her local Starbucks instead of commuting into the office, but to protect its intellectual property the startup doesn’t allow anyone to look at its code without being connected to its local onsite network (sometimes referred to as an intranet). The developers at the startup aren’t big fans of cubicle life, and like to roam around and do their work at the library with a book, or at home with their dogs. Fortunately, the startup has a VPN set up so that the developers can log into the intranet and look at their projects remotely. Each developer’s computer appears as if it were physically located in the office and has almost all of the access that it would have if it were literally in the office.

But how does the VPN make sure that only the right people have access to the network? This is where the magic of the VPN lies. When you log into your VPN client with your username and password and the server authenticates you, your computer creates a point-to-point encrypted tunnel between you and the VPN server. Think of it as a really long tube that runs between your computer and the server in the office that nobody in between can look inside of. That means if you’re sitting at Starbucks and your company uses Comcast as its internet service provider, nobody in your Starbucks can peek into your Wi-Fi signal (intercepting traffic this way is referred to as a man-in-the-middle attack), and Comcast can’t snoop on the data that your company is sending to you before it delivers it.
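To make the ‘tunnel’ idea a little more concrete, here is a minimal Python sketch of an encrypted point-to-point channel using TLS, the same basic building block many VPNs rely on. This is not a full VPN (a VPN encrypts all of your traffic, not just one connection), and example.com is just a stand-in endpoint.

```python
import socket
import ssl

# A minimal sketch of an encrypted point-to-point channel (TLS, not a full VPN).
# example.com is only a stand-in endpoint; a VPN protects ALL traffic this way.
context = ssl.create_default_context()

with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
        # From here on, anyone between the two endpoints sees only ciphertext.
        print("negotiated protocol:", tls_sock.version())  # e.g. 'TLSv1.3'
```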

Computer Privacy Hood

Just like nobody can see what’s going on here between the computer display and the man’s eyes, nobody over the internet can see what’s going on between the endpoints of a VPN point-to-point encrypted tunnel.

Having a reliable, trustworthy connection to a server over the internet can be a very valuable tool. In a world of big data, hacking, online banking, password leaks, and government surveillance, being able to communicate with anyone securely is very important.

In addition to providing secure connections to remote servers, VPNs provide another incredibly useful ability as a sort of side effect: a VPN can act as an ‘online mask,’ so that you can browse a website without the website knowing exactly who you are. Generally speaking, your identity on the web is your IP address, which can be used to determine your location down to the city or town. When you access a website, you send your IP address to the website’s server (so that the website knows where to send information back to), and your internet service provider (e.g. Comcast) knows that you are communicating with this website (if your connection is unencrypted, Comcast can also see the content of your communications with it). When you access the same website through a VPN server, your request first goes through the encrypted tunnel to the VPN server, and the VPN server then bounces the request along to the website itself (over an unencrypted connection). When the website responds to the VPN server, the server bounces the response back to you over your encrypted tunnel. The website believes it is just communicating with the VPN server, without any clue that its response is being passed on to anyone else. Comcast may be able to read the communications between the website and the VPN server, but it has no way of knowing that the communication is connected to you.

VPN Server Setup

This diagram shows the path that information travels through between your computer and the internet when you are connected to a VPN server. The encryption between your computer and the VPN server prevents anyone from snooping in on the communications between you and the server.

There are other ways to hide your identity on the internet. You can use a proxy, which appears similar to a VPN on the surface: you connect to a website through the proxy to hide your IP address from the website, so the proxy acts as a middleman much like a VPN does. The difference is that your computer’s connection to the proxy is not encrypted, so from a large enough scope, your communication with the website could be traced back to you. If an internet service provider such as Comcast happened to service both the connection from you to the proxy server AND from the proxy server to the website, it could piece together that it was you who connected to the website over the proxy, and since the communications aren’t encrypted, it could also see exactly what you were communicating about. Proxies also don’t mask your IP address for the entire computer; you have to configure each application individually to send its internet traffic through the proxy server. VPNs are OS-wide, meaning they protect your entire computer no matter which application is sending data out.

Proxy Server Setup

The layout of a connection to a proxy server. Only individual applications can connect to a proxy server, not the entire computer. Communications are also not encrypted and open to being intercepted.
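To make the per-application point concrete, here is a minimal Python sketch using the requests library. The proxy address is hypothetical; the point is that only this one request is routed through the proxy, while every other program on the machine connects normally.

```python
import requests

# Hypothetical proxy address; a real one would come from your proxy provider.
proxies = {
    "http": "http://proxy.example.com:8080",
    "https": "http://proxy.example.com:8080",
}

# Only this single request travels through the proxy; nothing else on the
# computer is affected. httpbin.org/ip echoes back the IP address the website
# saw, so you can verify it reports the proxy's address instead of your own.
response = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=10)
print(response.json())
```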

Thanks to this ability to provide anonymity over the internet, some companies have emerged that make a business out of providing access to their VPN servers. Their business model is that, for a fee, you can connect to their VPN servers to use as an ‘online mask’ however you like, and whatever you do won’t be traced back to you. The catch is whether a particular company is trustworthy or not; some VPN service providers log your activity and hand it over to the authorities or sell it to the highest bidder, essentially nullifying the anonymity that a VPN provides. You should always be skeptical and selective when choosing a VPN service provider, and remember, you get what you pay for. There are many VPN service providers out there that let you use their servers for free up to a certain bandwidth. As a general rule of thumb, whether it’s free VPN providers or free social networks, as long as someone is making a profit, if you’re not paying for the product, YOU are the product!

In conclusion, there are many ways to protect yourself over the internet, and selecting the best tool for your needs is the way to go. If you’re abroad and you want to watch a show on Netflix but it’s not available in the country you’re in, you can use a proxy to connect to a US server and stream it over your proxy connection, since encryption isn’t mandatory in that case. If you’re at Dunkin’ Donuts working on a top-secret project for your startup and you don’t want any tech-savvy thieves stealing your code over the free Wi-Fi connection, you can use a VPN to encrypt the connection between you and your company server. If you want to check your bank account online, but the bank doesn’t have good online business practices and doesn’t encrypt its web communications by default, you may want to use a VPN when logging into the bank’s website to make sure that nobody successfully phishes your username and password. And if you’re working on an absolutely, positively, unconditionally classified, top-secret, sensitive, need-to-know-basis document, but you really, really, really want a frappuccino, perhaps you should consider getting yourself one of those sweatshirts with the oversized privacy hoods that you can wrap around your computer display, as seen above.

The red iPhone 7, and Why There Should Be More Product Red Products

I recently purchased an iPhone 7 with the Product Red branding. It took a little convincing, but my wallet and I eventually came to an agreement about this. It had been a while since I last upgraded my phone, and the iPhone is the industry standard. And it’s red!

Product Red is an initiative that started 11 years ago with the goal of engaging companies that sell consumer goods to raise funds to fight AIDS in Africa. Product Red products have a distinctive red branding, and a share of the proceeds goes toward the Global Fund.

When Apple announced that it would ship iPhones with the Product Red casing, the overall sentiment was that the phone looked good. Real good. Almost makes you wanna trade in your Android good. And if you were already an iPhone owner and were looking to switch to a newer phone, it’s hard to look away and consider anything else.

Apple has a rich history with the Product Red initiative, having branded various iPods with Product Red beginning in 2006. The new iPhone, however, is the biggest slab of red Apple has released so far, and it really brings up the question: why aren’t there more Product Red phones elsewhere on the market? The only other phone ever shipped with Product Red branding was the Motorola RAZR (remember those things?), a decade ago.

Sure, Product Red has its fair share of criticisms. It is, in the end, a marketing ploy, and Apple smartly released this phone a few months before the announcement and release of the next iPhone to drive sales and push soon-to-be-obsolete hardware out of its supply chain. But try to think of the last major product that pledged to donate a portion of its proceeds to any charity of any kind. Unfortunately, they’re few and far between.

Understand that in today’s world, where the internet should be considered (and is, in some places) a utility, and where our phones and laptops are our main gateways to the internet, it only makes sense that we should demand more products that give back, even if it’s just a little bit, even if it’s just a marketing ploy. Considering the already questionable ethics of how these devices are produced to begin with, it’s the least that we, as conscientious consumers, can do.

NES Mini

Nintendo recently released the NES Classic, but good luck finding it.

The NES Classic is a small, $60, HDMI-compatible replica of Nintendo’s iconic first console, the NES, which hit the US market in 1985.  The Classic comes with 30 games preinstalled, with the potential for more to be added later.  It includes all of the classics many of us can still remember playing as kids, albeit on our parents’ childhood consoles.  Now you can play Pac-Man, Super Mario Bros, The Legend of Zelda, Kirby’s Adventure, and more, all in a cute little NES with two controllers (which are compatible with the Wii U) that can fit in the palm of your hand!  Or you could, if it weren’t completely sold out.

Nostalgia took its toll and Nintendo proved that its games are timeless.  Some stores sold out within 10 minutes of putting them on sale, and the preorder lists at stores like Target, Best Buy, Walmart, GameStop, and even Amazon are long, with no date or shipment size for when they will get their hands on more.

Such a clamor has been made about the new consoles that a site with the sole purpose of tracking mass shipments of them has gotten a nice bump in traffic: http://www.nowinstock.net/videogaming/consoles/nesclassicmini/

Some who are ultra-desperate to get their hands on the gadget have been shelling out as much as 5 times the original cost (sometimes as much as $300-500) on eBay and Craigslist to own the otherwise sold-out NES Classic.

What’s The Deal With External Graphics Docks?

What is an External Graphics Dock?

Not everyone who likes to play video games has the time, money, or know-how to build their own gaming PC. These people will more often than not opt for a gaming laptop instead, which, with its high cost and TDP/wattage-limited graphics, often proves unsatisfactory for high-intensity gaming. If not a gaming laptop, then they do what they can with a thin-and-light notebook with integrated graphics that, while great for portability, cannot run games very well at all. Using an external graphics dock, you can get the best of both worlds! There is minimal assembly required, and you can have your thin and light laptop to bring to class or to work; then, when you get home, plug into your external graphics dock and have all the gaming horsepower and display outputs you need.

Sounds Great! How Do These External Graphics Docks Work, Then?

egpu
The most basic eGPU dock

The basic concept of an external graphics dock is this: take a regular desktop graphics card, plug it into a PCIe slot in a dock, get power to the dock and the graphics card, then plug that dock into your laptop. After installing the right drivers and performing two or three restarts, hark! High frame rates at high settings are coming your way. The internal GPU is completely bypassed and data is sent from the laptop to the GPU to an external display, and in some cases back to the laptop to power its own internal display. The graphics card has to be purchased separately, and to see a sizable difference in performance over a dedicated laptop GPU you will be looking at around $200 for that card on top of the cost of the dock. Each commercially available dock has its own benefits and drawbacks, but all of them share some basic properties. They can all accept any single or dual-slot GPU from AMD or Nvidia (cooler size permitting), and have at least two 6+2-pin power connectors to power the graphics card. Along with the GPU support, docks usually also add at least four USB ports to connect peripherals, similar to the laptop docks of olde.

So What Are The Performance Numbers Really Like?

In general, the performance loss compared to using the same GPU in a real desktop is 10-15%. This can be due to reduced bandwidth over the connection to the laptop, or to bottlenecking from less powerful laptop CPUs. Even so, an external GPU still offers roughly double the performance of a dedicated laptop GPU: a card that manages 100 fps in a desktop might deliver 85-90 fps in a dock, still far ahead of what the laptop could do on its own. Here are a few benchmarks of recent AAA titles, courtesy of TechSpot. Listed from bottom to top, each graph shows the performance of the internal GPU, the Graphics Amplifier with a desktop GPU, and that same GPU in a regular desktop PC.

Benchmark graphs: internal GPU vs. Graphics Amplifier vs. desktop PC (TechSpot)

 

Let’s Take A Look At What is Available Today:

Alienware Graphics Amplifier (MSRP $199):

aga
Pros: Relatively inexpensive, high-bandwidth interface, good airflow, user-upgradeable PSU
Cons: Only works with Alienware machines (R2 & up), uses a proprietary cable, requires a shutdown to connect / disconnect

Razer Core (MSRP $499):
razercore
Pros: Universal Thunderbolt 3 interface, adds an ethernet jack, sturdy aluminum construction, small size
Cons: High cost, short compatibility list with non-Razer computers

MSI GS30 Shadow:
gs30shadow

Pros: User upgradeable PSU, Includes support for internal 3.5″ drive, Has integrated speakers
Cons: Only works for one machine, Huge footprint, Dock cannot be purchased separately

Final Thoughts

After seeing all the facts, does using an eGPU sound like the solution for you? If none of the options available sound perfect right now, don’t fret. As the popularity of eGPUs grows, more companies will inevitably put their hats into the ring and make their own solutions. Prices, form factors, and supported laptops will continue changing and improving as time goes on.

Intel, ARM, and the Future of the Mac

For years, there have been rumors that Apple wants to move away from Intel and x86 processors to something it designs in house. This desire comes from a combination of Intel’s slowing pace and the rapid improvement of Apple’s own A-series chips that the company uses in the iPhone and iPad. Moving to a new CPU architecture is not without its challenges, and it would not be the first one that Apple has undertaken. The last major change was from PowerPC to Intel in 2005. That transition was made due to the lack of innovation from IBM: Intel’s roadmap had much more powerful chips than what IBM was offering, IBM was slowly moving its product line to be more server oriented, and it was already having issues meeting the power demands that Apple was trying to achieve.

Much the same situation is happening now with Intel and ARM processors. For the last several generations, Intel’s improvements have been aimed at power efficiency. Many PC owners haven’t had a reason to upgrade their Sandy Bridge CPUs to the latest generation. Intel’s latest chip generation, Kaby Lake, is based on the same architecture as two generations ago; it is the second “iterative” step on the same process architecture. This is mostly due to Intel’s problems producing 10nm chips (its current chips are based on a 14nm process). Intel has not delivered the increased power that many Mac users have been craving, especially for their pro desktops.

On the other hand, Apple has been one of the leading innovators in ARM processor design. ARM Holdings designs the basic architecture, then licenses those designs to companies such as Apple, Samsung, and Qualcomm to build their own systems on a chip (SoCs). While these chips are not x86, they are much more power efficient and require fewer transistors. ARM chips are getting to the point where they are almost as powerful as some Intel chips; for example, the iPad Pro benchmarks higher than the 12” Macbook in both single-core and multi-core tests. It would be entirely possible to produce a high-power ARM processor to replace the Intel chips that Apple uses. With the slow progress Intel has made, it’s not a matter of if, but rather when.

Rumors say that Apple has already ported macOS from x86 to ARM internally, and that the ARM version of macOS has many similarities to iOS. While the pros and cons of this are up for debate, it’s easy to predict from past macOS updates that this is where the platform is going. A switch to ARM would mean that app developers would have to do some work to update their apps, as x86 applications will not natively run on ARM chips. But Apple has managed a similar transition before, from PowerPC to Intel; in that case, the pros and cons were very similar to what they are now, and overall the market was very happy with the switch. Would you be happy with a switch to ARM chips if it meant a faster and lighter machine for you?

JLab Audio Epic2


After the purchase of my iPhone 7 and the tragic loss of my ability to use my Audio-Technica ATH-M50x headphones, I decided that it was time to go wireless. The 3.5 mm-to-Lightning adapter was not something I wanted to use; it would be too easy to lose and just looks silly. I wanted good all-around earbuds that I could use while studying, biking and walking around campus, working out at the Rec, and going on runs. Some of the potential candidates were the Beats Powerbeats3 Wireless, JLab Audio’s Epic2, the Bose SoundSport Wireless, and Apple AirPods.

The Powerbeats3 is currently going for $149.99 on Amazon and $199.99 from Apple. The Powerbeats have a cable that connects the two earbuds, and I didn’t want such a lengthy cable, but they’re very good for exercise and have a long battery life of 12 hours. They also have a remote and microphone support for taking calls.

epic-2-blue-with-earbuds

JLab Audio’s Epic2 had a more modest price tag at $99.99. They have a cable that connects the two wrap-around in-ear earbuds and also boast a 12-hour battery life. I’ve enjoyed using the Epic2s over the several weeks since my purchase. The wireless earbuds come with seven different sizes and styles of plastic in-ear tips so they fit comfortably, and the wires wrap around the ear for a light, well-secured fit.

My one complaint with the ear pieces is that they insulate from outside noise almost too well, so they have to be pulled out whenever you want to have a conversation or buy a coffee on your commute to class.

The JLab Audio Epic2s also perform admirably as wireless fitness earbuds. They’re loud and rarely need to be turned up to the max, even in a noisy gym setting. They also feel light and well secured and don’t shift with movement, which makes them an ideal choice for music on a run or at the gym. They’re also protected against damage from sweat and splashes, so you don’t have to worry about short-circuiting them, which was a concern in some of the reviews I read on Amazon.

All in all, I would say these are a solid purchase for wireless earbuds. They come at a low price compared to the competition, and although they don’t look as flashy as Powerbeats, they perform just as well with the same battery life. I would strongly recommend them to anyone looking to go wireless or who has made the decision to purchase an iPhone 7.

Technology is Taking Our Jobs

Today, April 13, 2017, Elon Musk sent out a tweet stating that Tesla plans to reveal its semi-truck line in September. Tesla is the same company that produces electric, automated cars. The fact that Tesla is making semi-trucks is not, in itself, the important news; it is more about the repercussions that come with it. When I first saw the tweet, it made me think of the semi-trucks from the Logan film that was recently in theaters: they were automated, with nobody driving. That is basically what Elon Musk and company are striving to achieve; the movie took place in 2029, and it could be a reality by then as well. The problem that not just the U.S. but the rest of the world will face is another industry taken over by machines and a loss of millions of jobs. According to Alltrucking.com, the U.S. has 3.5 million truckers, and is actually looking for more. This touches on a larger issue in our society today: more industries are becoming mechanized. With industries no longer employing the same number of humans, a labor crisis emerges. It’s part of why Donald Trump was elected; he promised to bring jobs to America, but the real problem is not jobs leaving for other countries so much as our accelerating technological advancement. This isn’t just a Trump issue, but a problem facing every leader in the world. How do we create jobs, via the government or private business, in industries that can’t be taken over by computer systems?

If it is not possible to create the jobs required, then we must come up with subsidies and an allowance for people who cannot acquire a job. The trucking industry may be the next to go, but it won’t be the last, and it might not even have the most impact. The oil industry, which supplies 8.5 million jobs, won’t last either, and whether the economy can handle such a massive hit will depend on what the governments of the world replace it with.

Why Making the Jump to Linux May be for you

Image result for linux

Do you feel that Windows no longer respects your privacy? Or do you feel that Macs are too expensive? Linux might be just right for you then! Linux is an open source operating system. Although it has been around for some time now, it is slowly gaining more popularity. While Linux is often seen as the geeky computer nerd operating system, it can be perfect for average users too. Linux is all about allowing user customization and giving fine system control to the user.

Linux is Completely Free!

One of the greatest things about Linux is that it is completely free. Unlike with Windows or macOS, you don’t need to pay anything in order to use it. As the latest version of Windows or macOS grows old, you will eventually need to upgrade, and sometimes that means purchasing a new license, which can be an unneeded financial hit. If you have the hardware, you can simply find a distribution you like and install it. Whether for one machine or 1,000 machines, Linux will never bother you for a license key.

A Tweaker’s Dream

Image result for linux tweaks

Linux is the dream operating system for someone who enjoys playing around with settings to fine-tune their machine. Linux offers multiple desktop environments, which completely change how the desktop behaves. Each of these has hundreds, possibly thousands, of settings, so a user can make their experience exactly how they envision it. This is contrary to Windows and macOS, which each offer one desktop with fairly limited customization options. Almost everything in Linux has a right-click menu that allows for further customization. For extremely motivated tweakers, there are also configuration files that let you modify almost anything on your system. A personal favorite tweak of mine is universal keyboard shortcuts: as an avid terminal user, I’m able to launch a terminal from anywhere with a single touch of a button.

Gaining a Better Knowledge of Computers

Image result for linux terminal

Linux features a terminal similar to the one in macOS. Mastering the terminal allows you to tell a computer what you really want it to do; you no longer have to rely on menus and clicking. Linux is an excellent place to learn terminal commands because you will end up using them constantly, whether to fix something or simply because the terminal is often the fastest way to work.

By using Linux, every user becomes aware of file permissions and how they work. Users also become adept at using commands like top and ps aux to understand how processes work, and often learn to use commands like rsync to create backups (see the sketch below). Finally, many users who delve a little deeper into Linux also learn about computer architecture, such as how operating systems work and how storage devices are mounted.
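As a small illustration of the kind of scripting this knowledge leads to, here is a minimal Python sketch that wraps rsync to mirror a folder to a backup drive. The paths are hypothetical, and rsync itself ships with most Linux distributions.

```python
import subprocess

def backup(source: str, destination: str) -> None:
    """Mirror `source` into `destination` using rsync."""
    # -a preserves permissions and timestamps, -v prints progress,
    # --delete removes files from the backup that no longer exist in the source.
    subprocess.run(
        ["rsync", "-av", "--delete", source, destination],
        check=True,  # raise an error if rsync reports a failure
    )

if __name__ == "__main__":
    # Hypothetical paths; point these at your own folders.
    backup("/home/alice/Documents/", "/mnt/backup/Documents/")
```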

Linux Has Some Amazing Software

Image result for linux beautiful software

While Linux has a reputation for being incompatible with certain software, it also offers an enormous repository of software for its users. Many major programs, such as the Google Chrome and Firefox web browsers, are also available for Linux. Additionally, many programs have Linux alternatives that work just as well, or even better. Better yet, software on Linux is completely free too. You can get incredibly good productivity software like LibreOffice for creating documents and Okular for viewing PDF files.

Linux is Efficient

Linux fits on small systems and large systems. It works on slow computers and fast ones too. Linux is built by efficiency-obsessed engineers who want to get every ounce of computing power out of their machines. Most flavors of Linux are designed to be lighter weight than their Windows or macOS counterparts. Linux also offers excellent utilization of computer hardware, as the operating system is built to handle resource management efficiently.

The storage architecture of Linux is built in a way where a dependency for a program never needs to be installed twice: all programs have access to any dependency that is already installed. In Windows, by contrast, every program you install typically packages all of its dependencies with it. This often leads to programs shipping the exact same software side by side and thus taking up more space on the hard drive.

Hardware Just Works

Perhaps you have an older laptop, or maybe a new cutting-edge PC. A common problem for both types of hardware is a lack of drivers. Older computers often have hardware that is no longer supported by new operating systems, and new hardware is occasionally plagued by buggy driver support. On popular distributions such as Ubuntu or Linux Mint, driver support is provided for almost all hardware. This is because the Linux kernel (the core of the operating system) is designed to ship with these drivers, whereas Windows often requires them as a separate install. Additionally, Linux drivers are much more generic than Windows drivers, which allows Linux to reach a broader spectrum of hardware, even hardware the driver was not specifically designed for. Finally, Linux’s amazing hardware support is a product of its users: if you ever dig around in the Linux kernel, you will find an enormous number of very specific hardware drivers contributed by Linux users over time. Unlike on Linux, there is no practical way for an average Windows user to create a driver for their hardware. Linux’s software and distribution model empowers users to write their own drivers when hardware is not supported.

 

Overall, Linux is a finely tuned operating system that deserves a look. With its many features, it is able to offer an experience tailor-made for any user. You can reclaim control of your computer and make it exactly the way you want!

 

A Basic Guide to Digital Audio Recording

The Digital Domain


Since the dawn of time, humans have been attempting to record music.  For the vast majority of human history, this has been really, really difficult.  Early cracks at getting music out of the hands of the musician involved mechanically triggered pianos whose instructions for what to play were imprinted onto long scrolls of paper.  These player pianos were difficult to manufacture and not really viable for casual music listening.  There was also the all-important phonograph, which recorded sound itself mechanically onto the surface of a wax cylinder.

If it sounds like the aforementioned techniques were difficult to use, they were!  Hardly anyone owned a phonograph since they were expensive, recordings were hard to come by, and they really didn’t sound all that great.  Without microphones or any kind of amplification, bits of dust and debris which ended up on these phonograph records could completely obscure the original recording behind a wall of noise.

Humanity had a short stint with recording sound as electromagnetic impulses on magnetic tape.  This proved to be one of the best ways to reproduce sound (and do some other cool and important things too).  Tape was easy to manufacture, came in all different shapes and sizes, and offered a whole universe of flexibility for how sound could be recorded onto it.  Since tape recorded an electrical signal, carefully crafted microphones could be used to capture sounds with impeccable detail and loudspeakers could be used to play back the recorded sound at considerable volumes.  Also at play were some techniques engineers developed to reduce the amount of noise recorded onto tape, allowing the music to be front and center atop a thin floor of noise humming away in the background.  Finally, tape offered the ability to record multiple different sounds side-by-side and play them back at the same time.  These side-by-side sounds came to be known as ‘tracks’ and allowed for stereophonic sound reproduction.

Tape was not without its problems though.  Cheap tape would distort and sound poor.  Additionally, tape would deteriorate over time and fall apart, leaving many original recordings completely unlistenable.  Shining bright on the horizon in the late 1970s was digital recording.  This new format allowed for low-noise, low-cost, and long-lasting recordings.  The first pop music record to be recorded digitally was Ry Cooder’s Bop Till You Drop in 1979.  Digital had a crisp and clean sound that was rivaled only by the best of tape recording.  Digital also allowed for near-zero degradation of sound quality once something was recorded.

Fast-forward to today.  After 38 years of Moore’s law, digital recording has become cheap and simple.  Small audio recorders are available at low cost with hours and hours of storage for recording.  Also available are more hefty audio interfaces which offer studio-quality sound recording and reproduction to any home recording enthusiast.

 

Basic Components: What you Need

Depending on what you are trying to record, your needs may vary from the standard recording setup.  Most users interested in laying down some tracks will need the following.

Audio Interface (and Preamplifier): this component is arguably the most important, as it connects everything together.  The audio interface contains both analog-to-digital converters and a digital-to-analog converter; these allow it to turn sound into the language of your computer for recording, and to turn the language of your computer back into sound for playback.  These magical little boxes come in many shapes and sizes; I will discuss them in a later section, just be patient.

Digital Audio Workstation (DAW) Software: this software allows your computer to communicate with the audio interface.  Depending on what operating system your computer runs, there may be hundreds of DAW software packages available.  DAWs vary greatly in complexity, usability, and special features; all of them offer the basic function of recording digital audio from an audio interface.

Microphone: perhaps the most obvious element of a recording setup, the microphone is one of the most exciting choices you can make when setting up a recording rig.  Microphones, like interfaces and DAWs, come in all shapes and sizes.  Depending on what sound you are looking for, some microphones may be more useful than others.  We will delve into this momentarily.

Monitors (and Amplifier): once you have set everything up, you will need a way to hear what you are recording.  Monitors allow you to do this.  In theory, you can use any speaker or headphone as a monitor.  However, some speakers and headphones offer more faithful reproduction of sound without excessive bass and can be better for hearing the detail in your sound.

 

Audio Interface: the Art of Conversion

Two channel USB audio interface.

The audio interface can be one of the most intimidating elements of recording.  The interface contains the circuitry to amplify the signal from a microphone or instrument, convert that signal into digital information, and then convert that information back to an analog sound signal for listening on headphones or monitors.

Interfaces come in many shapes and sizes but all do similar work.  These days, most interfaces offer multiple channels of recording at one time and can record in uncompressed CD-audio quality or better.

Once you step into the realm of digital audio recording, you may be surprised to find a lack of mp3 files.  Turns out, mp3 is a very special kind of digital audio format and cannot be recorded to directly; mp3 can only be created from existing audio files in non-compressed formats.

You may be asking yourself: what does it mean for audio to be compressed?  As an electrical engineer, it may be hard for me to explain this in a way that humans can understand, but I will try my best.  Audio takes up a lot of space.  Your average iPhone or Android device may only have 32 GB of space, but most people can keep thousands of songs on their device.  This is done using compression.  Compression is the computer’s way of listening to a piece of music and removing all the bits and pieces that most people won’t notice.  Soft and infrequent noises, like the sound of a guitarist’s fingers scraping a string, are removed, while louder sounds, like the sound of the guitar itself, are left in.  This is done using the Fourier Transform and a bunch of complicated mathematical algorithms that I don’t expect anyone reading this to care about.
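Real encoders use psychoacoustic models far more sophisticated than this, but a toy Python/NumPy sketch can show the basic move: transform the signal, throw away the quietest frequency components, and transform back. The signal and the 5% threshold here are invented purely for illustration.

```python
import numpy as np

rate = 44100
t = np.arange(rate) / rate
# A loud 440 Hz tone plus a very quiet 7040 Hz overtone (the "finger scrape").
signal = np.sin(2 * np.pi * 440 * t) + 0.01 * np.sin(2 * np.pi * 7040 * t)

spectrum = np.fft.rfft(signal)
# Discard any frequency component quieter than 5% of the loudest one.
quiet = np.abs(spectrum) < 0.05 * np.abs(spectrum).max()
spectrum[quiet] = 0
approx = np.fft.irfft(spectrum, n=len(signal))

print("components kept:", np.count_nonzero(spectrum), "of", len(spectrum))
print("largest error:", np.abs(signal - approx).max())  # tiny, barely audible
```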

When audio is uncompressed, a few things are true: it takes up a lot of space, it is easy to manipulate with digital effects, and it often sounds very, very good.  Examples of such formats are .wav on Windows, .aif and .aiff on Macintosh, and .flac (technically compressed, but losslessly) for all the free people of the Internet.  Uncompressed audio comes in many different forms, but all have two numbers which describe their sound quality: ‘word length’ (or ‘bit depth’) and ‘sample rate.’

The information for digital audio is contained in a long list of numbers which indicate the loudness or volume of the sound at specific times.  The sample rate tells you how many times per second the loudness value is captured.  This number needs to be at least two times higher than the highest audible frequency; otherwise, the computer will perceive high frequencies as being lower than they actually are.  This is because of the Shannon-Nyquist sampling theorem, which I, again, don’t expect most of you to want to read about.  Most audio is captured at 44.1 kHz, making the highest frequency it can capture 22.05 kHz, comfortably above the limits of human hearing.

The word length tells you how many numbers can be used to represent different volumes of loudness.  The number of different values for loudness is 2^word length.  CDs represent audio with a word length of 16 bits, allowing for 65,536 different values of loudness.  Most audio interfaces are capable of recording audio with a 24-bit word length, allowing for exquisite detail.  There are some newer systems which allow for recording with a 32-bit word length, but these are, for the most part, not available at low cost to consumers.
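The arithmetic behind these numbers is easy to check yourself. Here is a quick Python sanity check of the loudness-value counts and Nyquist limits quoted above.

```python
# Number of representable loudness values is 2 ** word_length.
for bits in (16, 24, 32):
    print(f"{bits}-bit audio: {2 ** bits:,} loudness values")

# The highest capturable frequency is half the sample rate (the Nyquist limit).
for rate_hz in (44_100, 96_000):
    print(f"{rate_hz} Hz sampling: frequencies up to {rate_hz / 2:,.0f} Hz")
```

Running this prints 65,536 values for 16-bit audio and a 22,050 Hz ceiling for 44.1 kHz sampling, matching the figures above.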

I would like to add a quick word about USB.  There is a stigma, in the business, against USB audio interfaces.  Many interfaces employ connectors with higher bandwidth, like FireWire and Thunderbolt, and charge a premium for it.  It may seem logical: faster connection, better-quality audio.  Hear this now: no audio interface will ever be sold with a connector that is too slow for the quality of audio it can record.  This is to say, USB can handle 24-bit audio at a 96 kHz sample rate, no problem.  If you notice latency in your system, it comes from the digital-to-analog and analog-to-digital converters as well as the speed of your computer; latency in your recording setup has nothing to do with what connector your interface uses.  It may seem like I am beating a dead horse here, but many people believe this and it’s completely false.

One last thing before we move on to the DAW. I mentioned earlier that frequencies above half the recording sample rate will be perceived, by your computer, as lower frequencies.  These lower frequencies can show up in your recording and can cause distortion.  This phenomenon has a name: aliasing.  Aliasing doesn’t just happen with audible frequencies; it can happen with supersonic sound too.  For this reason, it is often advantageous to record at higher sample rates to avoid having these higher frequencies perceived within the audible range.  Most audio interfaces allow for recording 24-bit audio at a 96 kHz sample rate.  Unless you’re worried about taking up too much space, this format sounds excellent and offers the most flexibility and sonic detail.

 

Digital Audio Workstation: all Out on the Table

Apple’s pro DAW software: Logic Pro X

The digital audio workstation, or DAW for short, is perhaps the most flexible element of your home studio.  There are many, many DAW software packages out there, ranging in price and features.  For those of you looking to just get into audio recording, Audacity is a great DAW to start with.  This software is free and simple.  It offers many built-in effects and can handle the full recording capability of any audio interface, which is to say: if you record something well on this simple, free software, it will sound mighty good.

Here’s the catch with many free or lower-level DAWs like Audacity or Apple’s GarageBand: they do not allow for non-destructive editing of your audio.  This is a fancy way of saying that once you make a change to your recorded audio, you might not be able to un-make it.  Higher-end DAWs like Logic Pro and Pro Tools will allow you to make all the changes you want without permanently altering your audio.  This allows you to play around a lot more with your sound after it’s recorded.  More expensive DAWs also tend to come with a better-sounding set of built-in effects.  This is most noticeable with more subtle effects like reverb.

There are so many DAWs out there that it is hard to pick out a best one.  Personally, I like Logic Pro, but that’s just preference; many of the effects I use are compatible with different DAWs so I suppose I’m mostly just used to the user-interface.  My recommendation is to shop around until something catches your eye.

 

The Microphone: the Perfect Listener

Studio condenser and ribbon microphones.

The microphone, for many people, is the most fun part of recording!  Microphones come in many shapes and sizes and color your sound more than any other component in your setup.  Two different microphones can occupy polar opposites of the sonic spectrum.

There are two common types of microphones out there: condenser and dynamic microphones.  I can get carried away with physics sometimes so I will try not to write too much about this particular topic.

Condenser microphones are a more recent invention and offer the best sound quality of any microphone.  They employ a charged parallel-plate capacitor to measure vibrations in the air.  This is a fancy way of saying that the element in the microphone which ‘hears’ the sound is extremely light and can move freely even when motivated by extremely quiet sounds.

Because of the nature of their design, condenser microphones require a small amplifier circuit built into the microphone.  Most new condenser microphones use a transistor-based circuit in their internal amplifier, but older condenser mics employed internal vacuum-tube amplifiers; these tube microphones are among the clearest and most detailed-sounding microphones ever made.

Dynamic microphones, like condenser microphones, also come in two varieties, both emerging from different eras.  The ribbon microphone is the earlier of the two and observes sound with a thin metal ribbon suspended in a magnetic field.  These ribbon microphones are fragile but offer a warm yet detailed quality-of-sound.

The more common vibrating-coil dynamic microphone is the most durable and is used most often for live performance.  The prevalence of the vibrating-coil microphone means that ‘vibrating-coil’ is often dropped from the name (sometimes ‘dynamic’ is dropped too); when you use the term dynamic mic, most people will assume you are referring to the vibrating-coil microphone.

With the wonders of globalization, all types of microphones can be purchased at similar costs.  Though there is usually a small premium for condenser microphones over dynamic mics, costs can remain comfortably around $100-150 for studio-quality recording mics.  This means you can use many brushes to paint your sonic picture.  Oftentimes, dynamic microphones are used for louder instruments like snare and bass drums, guitar amplifiers, and louder vocalists.  Condenser microphones are more often used for detailed sounds like stringed instruments, cymbals, and breathier vocals.

Monitors: can You Hear It?

Studio monitors at Electrical Audio Studios, Chicago

When recording, it is important to be able to hear the sound that your system is hearing.  Most people don’t think about it, but there are many kinds of monitors out there, from the screens on our phones and computers which let us see what the computer is doing, to the viewfinder on a camera which lets us see what the camera sees.  Sound monitors are just as important.

Good monitors will reproduce sound as neutrally as possible and will only distort at very, very high volumes.  These two characteristics are important for monitoring as you record and for hearing things carefully as you mix.  Mix?

Once you have recorded your sound, you may want to change it in your DAW.  Unfortunately, the computer can’t always guess what you want your effects to sound like, so you’ll need to make changes to settings and listen.  This could be as simple as changing the volume of one recorded track or it could be as complicated as correcting an offset in phase of two recorded tracks.  The art of changing the sound of your recorded tracks is called mixing.

If you are using speakers as monitors, make sure they don’t have ridiculously loud bass, like most consumer speakers do.  Mixing should be done without the extra bass; otherwise, someone playing back your track on ‘normal’ speakers will be underwhelmed by a thinner sound.  Sonically neutral speakers make it very easy to hear what your finished product will sound like on any system.

It’s a bit harder to do this with headphones as their proximity to your ears makes the bass more intense.  I personally like mixing on headphones because the closeness to my ear allows me to hear detail better.  If you are to mix with headphones, your headphones must have open-back speakers in them.  This means that there is no plastic shell around the back of the headphone.  With no set volume of air behind the speaker, open-back headphones can effortlessly reproduce detail, even at lower volumes.

Closed-back vs. open-back headphones

Monitors aren’t just necessary for mixing, they also help to hear what you’re recording as you record it.  Remember when I was talking about the number of different loudnesses you can have for 16-bit and 24-bit audio?  Well, when you make a sound louder than the loudest volume you can record, you get digital distortion.  Digital distortion does not sound like Jimi Hendrix, it does not sound like Metallica, it sounds abrasive and harsh.  Digital distortion, unless you are creating some post-modern masterpiece, should be avoided at all costs.  Monitors, as well as the volume meters in your DAW, allow you to avoid this.  A good rule of thumb is: if it sounds like it’s distorting, it’s distorting.  Sometimes you won’t hear the distortion in your monitors, this is where the little loudness bars on your DAW software come in; those bad boys should never hit the top.
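If you would like to see digital clipping for yourself, here is a tiny Python/NumPy sketch: it amplifies a sine wave past full scale and hard-clips it, which is essentially what happens when those loudness bars hit the top. The tone and gain values are made up for illustration.

```python
import numpy as np

rate = 48000
t = np.arange(rate) / rate
tone = np.sin(2 * np.pi * 440 * t)    # a clean 440 Hz sine at full scale

too_hot = 2.0 * tone                   # gain pushed past the loudest recordable value
clipped = np.clip(too_hot, -1.0, 1.0)  # the converter flattens everything above ±1.0

# The squared-off peaks add harsh harmonics that were never in the original tone.
print(f"samples driven past full scale: {np.mean(np.abs(too_hot) > 1.0):.0%}")
print("peak after clipping:", clipped.max())
```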

 

A Quick Word about Formats before we Finish

These days, most music ends up as an mp3.  Convenience is important, so mp3 does have its place.  Most higher-end DAWs will allow you to export mp3 files.  My advice to any of you learning sound engineers out there is to just play around with formatting. However, a basic outline of some common formats may be useful…

24-bit, 96 kHz: This is the best format most systems can record to.  Because of large file sizes, audio in this format rarely leaves the DAW.  Audio of this quality is best for editing, mixing, and converting to analog formats like tape or vinyl.

16-bit, 44.1 kHz: This is the format used for CDs.  It carries only a fraction of the information you can record on most systems, but it is optimized for playback by CD players and other similar devices.  Its file size also allows about 80 minutes of audio to fit on a typical CD.  Herein lies the balance between excellent sound quality and file size.

mp3, 256 kb/s: Looks a bit different, right?  The quality of mp3 is measured in kb/s.  The higher this number, the less compressed the file is and the more space it will occupy.  The iTunes Store sells audio at 256 kb/s (in AAC, a similar lossy format), while Spotify probably streams at rates closer to 128 kb/s to better support streaming.  You can go as high as 320 kb/s with mp3.  Either way, mp3 compression is always lossy, so an mp3 will never sound quite as good as an uncompressed audio file.
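To put these trade-offs in numbers, here is a quick back-of-the-envelope Python calculation of how much space one minute of stereo audio occupies in each of the formats above.

```python
def pcm_mb_per_minute(bits: int, rate_hz: int, channels: int = 2) -> float:
    """Uncompressed PCM size in megabytes for one minute of audio."""
    return bits / 8 * rate_hz * channels * 60 / 1e6

print(f"24-bit / 96 kHz  : {pcm_mb_per_minute(24, 96_000):.1f} MB/min")
print(f"16-bit / 44.1 kHz: {pcm_mb_per_minute(16, 44_100):.1f} MB/min")
# mp3 size is set directly by its bitrate: 256,000 bits per second.
print(f"mp3 at 256 kb/s  : {256_000 / 8 * 60 / 1e6:.1f} MB/min")
```

That works out to roughly 34.6 MB, 10.6 MB, and 1.9 MB per minute respectively, which is why the studio format rarely leaves the DAW and mp3 rules our phones.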

 

In Conclusion

Recording audio is one of the most fun hobbies one can adopt.  Like all new things, recording can be difficult when you first start out, but it becomes more and more fulfilling over time.  One can create their own orchestras at home now, a feat which would have been near impossible 20 years ago.  The world has many amazing sounds, and it is up to people messing around with microphones in bedrooms and closets to create more.

IOT: Connecting all our stuff to the network of networks

What is the IOT?

The Internet of Things (IOT for short) is the common term for devices that have been integrated with “smart,” internet-connectable technologies, using the global infrastructure of the Internet to bring both accessibility and greatly improved product experiences to millions of users of common electronics. In this article I’ll discuss some implications that IOT has for the landscape of the Internet, as well as some IOT devices that have become commonplace in many homes across the nation.

Some things to note about IOT

Many IOT devices offer very promising integrations with online services that make their usefulness indispensable; however, this usefulness can come at the cost of security, so it’s always good to understand the implications of adding an IOT device to a network. The most notable event underscoring the importance of securing these connected devices was the Mirai botnet attack carried out against the DNS provider Dyn on Friday, October 21, 2016.

Some of the Things:

Amazon Echo

A smart-home hub created by Amazon with the ability to integrate with various devices and services to command and control your smart home and allow for easier access to informational resources. The Alexa service provides an easy-to-use speech interface: a query to Alexa can perform web searches, interact with online services, and control some of the devices in this article.

Google Home

Google’s equivalent to Amazon’s Echo, released in November 2016. The Google Home integrates with about the same number of services as the Echo, and ties in more directly with the Google smart-home ecosystem. The ability to stream directly to a Google Chromecast device connected to the same network as the Home is one of its notable features.

 

Nest Product Line: Cam, Thermostat, Protect

These smart products aim to keep your home automated yet safe. The Cam is a webcam that is accessible via the internet and can perform speakerphone functions; the Thermostat is a remotely controllable thermostat that adjusts based on user presence in the home; and the Protect is a smoke detector with internet connectivity that can send remote alerts and announce the location of the source of the smoke.

Smart Lighting Products: Philips Hue, GE Link, LIFX

Smart lighting affords users the ability to customize lighting based on their location data, as well as by time of day. Being able to remotely turn lighting on and off also affords users some peace of mind, letting them check whether they forgot to turn off the lights before leaving the house. These products typically connect to a Zigbee-based hub, which can be used with all Zigbee-compatible devices.

Smart Appliances: Coffeemaker, Dishwasher, Clothes Washer and Dryer

Various smart appliances allow for remotely starting, stopping, and manually adjusting individualized settings.

 

Smart plugs: TP-Link Smart Plug

The smart plug allows for remotely turning a device connected to the socket on and off. This type of smart device extends remote capabilities to anything that uses a standard power socket.

Smart wearables: Apple Watch, Android Wear, Tizen and Pebble

These devices allow data to be gathered from our person: heart rate and fitness information, location-based data, and remote notifications are some of what can be gathered on these devices for display to the user.

Be sure to secure your things, as the data they collect and create becomes increasingly critical the more integrated into our lives they become.