Categories
Operating System

Forget About It!: How to forget a network in Windows 10

Sometimes, it’s better to just forget!

One of the most common tropes in the tech support world is the tried-and-true “have you tried turning it off and on again?”. Today, we’ll be examining how we can apply this thinking to solving common internet connectivity issues.

While it’s one of the best things to do before trying other troubleshooting steps, “forgetting” your wireless network is not a step most people think to do right away. Forgetting a network removes any of its configuration settings from your computer and causes it to no longer try to automatically connect to it. This is one way to clear out configuration settings that just didn’t get it right the first time.

Today, we’ll be examining how to “forget” a network on Windows 10 in four quick, easy steps!

  1. Navigate to the Settings app and select “Network & Internet”.
  2. Select “Wi-Fi” from the left menu, then select “Manage known networks”.
  3. Find your network, click on it, then select the “Forget” button.
  4. Open up your available networks, and try to reconnect to the network you usually connect to.


And that’s it!
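If you’re comfortable with the command line, Windows can do the same thing through its built-in netsh utility. The first command lists every saved Wi-Fi profile; the second deletes one (the network name below is just a placeholder):

    netsh wlan show profiles
    netsh wlan delete profile name="YourNetworkName"

Either way, the next time you connect, Windows treats it as a brand new network.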

While this may not solve every connectivity issue, it is a good place to start, and hopefully this quick tutorial helps you troubleshoot whatever wireless problems you run into. If issues persist, you should next look into potential service outages, your network card, or, in the case of home networks, your modem/router.

 

Categories
Hardware

A Fundamental Problem I See with the Nintendo Switch

Nintendo’s shiny new console will launch on March 3rd…or wait, no…Nintendo’s shiny new handheld will launch on March 3rd…Wait…hold on a second…what exactly do you call it?

The Nintendo Switch is something new and fresh that is really just an iteration on something we’ve already seen before.

In 2012, the Wii U, widely regarded as a commercial flop, operated on the concept that you could play video games at home with two screens rather than one. The controller was a glorified tablet that you couldn’t use as a portable system. At most, if your grandparents wanted to use the television to watch Deal or No Deal, you could take the tablet into the other room and stream the gameplay to its display.

Two months later, Nvidia took this concept further with the Nvidia Shield Portable. The system was essentially a bulky Xbox 360 controller with a screen you could stream your games to from your gaming PC. The system also allowed you to download light games from the Google Play store, so while it wasn’t meant to be treated as a handheld, it could be used as one if you really wanted to.

Then, a full year after the release of the Wii U, Sony came out with the PlayStation 4. Now, if you owned a PlayStation Vita from 2011, you could stream your games from your console to your Vita. Not only would this work locally, but you could also do it over the internet. So, what you had was a handheld that could also play your PS4 library from anywhere that had a strong internet connection. This became an ultimately unused feature as Sony gave up trying to compete with the 3DS. As of right now, Sony is trying to bring this ability to stream to other devices, such as phones and tablets.


And now we have the Nintendo Switch. Rather than make a system that can stream to a handheld, Nintendo decided to just create a system that can be both. Being both a handheld and a console might seem like a new direction when in reality I’d like to think it’s more akin to moving in two directions at once. The Wii U was a dedicated console with an optional function to allow family to take the TV from you, the Nvidia Shield Portable was an accessory that allowed you to play your PC around the house, and the PlayStation Vita was a handheld that had the ability to connect to a console to let you play games anywhere you want. None of these devices were both a console and a handheld at once, and by trying to be both, I think Nintendo might be setting themselves up for problems down the road.


Remember the Wii? In 2006, the Wii was that hot new item that every family needed to have. I still remember playing Wii bowling with my sisters and parents every day for a solid month after we got it for Christmas. It was a family entertainment system, and while you could buy some single-player games for it, the only time I ever see the Wii getting used anymore is with the latest Just Dance at my aunt’s house during family get-togethers. Nobody really played single-player games on it, and while that might have a lot to do with the lack of stellar “hardcore” titles, I think it has more to do with Nintendo’s mindset at the time. Nintendo is a family-friendly company, and gearing their system towards inclusive party games makes sense.


Nintendo also has their line of 3DS portable systems. The 3DS isn’t a family system; everyone is meant to have their own individual devices. It’s very personal in this sense; rather than having everyone gather around a single 3DS to play party games on, everyone brings their own. Are you starting to see what I’m getting at here?

 

Nintendo is trying both to appeal to the whole family and to create a portable experience for a single member of it. I remember unboxing the Wii for Christmas with my sisters. The Wii wasn’t a gift from my parents to me; it was a gift for the whole family. I also remember getting my 3DS for Christmas, and that gift had my name on it and my name alone. Now, imagine playing Monster Hunter on your 3DS when suddenly your sisters ask you to hand it over so they can play Just Dance. Imagine having a long, loud fight with your brother over who gets to bring the 3DS to school today because you both have friends you want to play with at lunch. Just substitute 3DS with Nintendo Switch, and you’ll understand why I think the Switch has some trouble on the horizon.

You might argue that if you’re a college student who doesn’t have your family around to steal the Switch away, this shouldn’t be a problem. While that might be true, remember that Nintendo’s target demographic is and has always been the family. Unless they suddenly decide to target the hardcore demographic, which it doesn’t look like they’re planning on doing, Nintendo’s shiny new console/handheld will probably tear the family apart more than it will bring them together. When you’re moving in two directions at once, you’re bound to split in half.

 

Categories
Android Apps iOS

Fitbit, Machine Learning, and Sleep Optimization


Photo: Fitbit Blog

My big present for Christmas this year was a Fitbit Charge 2. I’d wanted one for a while, but not for anything fitness-related. While I do like to keep track of my active lifestyle choices, I didn’t have fitness in mind at all. My Fitbit’s key feature (the reason I ditched my reliable $10 Casio watch for it) is its heart rate monitor. The monitor on my Charge 2 takes the form of two rapidly flashing green LEDs. Visually and technically, it’s similar to the light you may be familiar with seeing underneath an optical mouse. Instead of tracking motion, though, this light’s reflection keeps track of the subtle changes in my skin’s color as blood pumps into and drains from my capillaries. It sends the timing data between color changes to my phone, which runs the information through a proprietary algorithm to determine my heart rate. Other algorithms take into account my average heart rate and my lowest heart rate to calculate my resting heart rate (55 bpm).
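To make that last step concrete, here is a minimal sketch of the idea in Python. Fitbit’s actual algorithm is proprietary, so the function and numbers below are purely illustrative:

    # Hypothetical sketch: turning the timing of detected "pulses" of
    # blood flow into a heart-rate estimate. Not Fitbit's actual code.
    def estimate_bpm(beat_times):
        """beat_times: seconds at which a color change (heartbeat) was seen."""
        intervals = [b - a for a, b in zip(beat_times, beat_times[1:])]
        return 60.0 / (sum(intervals) / len(intervals))

    # Beats about 1.1 seconds apart work out to roughly 55 bpm.
    print(round(estimate_bpm([0.0, 1.1, 2.2, 3.3, 4.4])))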

But in the end, these are all just numbers. Some people (like me) just like having this data, but what can you actually do with it? Well, the Fitbit has another interesting feature. It uses your heart rate and motion information to determine when you’ve fallen asleep, when you’ve woken up, and whether you’re sleeping deeply or restlessly. I can check my phone every morning for a graphical representation of my sleep from the previous night, and determine how well I slept, how long I slept, and how my sleep fits in with my desired regular schedule (11:45 to 7:45). Kind of cool, right?

With a new market emphasis on machine learning, and sleep researchers making strides in answering fundamental questions, things are about to get a lot cooler.

Everybody has experienced miraculous three-hour slumbers that leave them feeling like they slept a full night, and heartbreaking ten-hour naps that make them question whether they slept at all. Although most of us consider those simple anomalies, scientists have caught on, and are actively studying this phenomenon. From what I’ve gleaned online, scientists who study sleep find that allowing a sleeping subject to complete REM cycles (lasting about 90 minutes, with variation) results in fuller and more restorative sleep. In other words, 7 hours and 30 minutes can result in a better sleep than a full 8 hours. It sounds like quackery, but the evidence is widely available, peer-reviewed, and convincing to the layperson.
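The arithmetic is simple: five 90-minute cycles is 450 minutes, exactly 7 hours and 30 minutes. Here’s a minimal sketch of the alarm math, where the bedtime and the 15-minute dozing-off buffer are my own assumptions:

    from datetime import datetime, timedelta

    # Illustrative only: wake-up times that land on 90-minute
    # sleep-cycle boundaries for someone in bed at 11:45 PM.
    CYCLE = timedelta(minutes=90)
    FALL_ASLEEP = timedelta(minutes=15)  # assumed time to doze off

    bedtime = datetime(2017, 2, 20, 23, 45)
    for cycles in (4, 5, 6):
        wake = bedtime + FALL_ASLEEP + cycles * CYCLE
        print(cycles, "cycles ->", wake.strftime("%H:%M"))  # 06:00, 07:30, 09:00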

Machine learning has been a buzzword for at least the past year. The concept itself is worthy of an entire post, but to summarize it for my purposes, it’s a broad term that refers to programming algorithms that adjust their behavior based on data input. For example, programs that predict what a customer wants to buy will show ads to that customer on a variety of platforms and decide where to show those ads more often, based on how much time the customer spends on each platform. Machine learning is essentially automating programs to use big data to improve their predictive or deductive capabilities.
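As a toy version of that ad scenario (platform names and numbers are all made up), a few lines of Python show the feedback loop:

    import random

    # Running engagement scores per platform; ads are shown where the
    # customer actually spends time, and every observation updates the
    # weights used for the next decision.
    engagement = {"web": 1.0, "mobile": 1.0, "social": 1.0}

    def pick_platform():
        total = sum(engagement.values())
        weights = [v / total for v in engagement.values()]
        return random.choices(list(engagement), weights)[0]

    def record_time_spent(platform, minutes):
        engagement[platform] += minutes  # more time -> more ads there

    record_time_spent("mobile", 30)
    print(pick_platform())  # now most likely "mobile"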

Let’s bring this all together for a look into the future: If my Fitbit can keep track of my heartbeat to a precise enough degree to determine when I am in REM sleep — or can use an intelligent, learning-capable algorithm to set alarms that give me an optimal amount of sleep — I can have a personalized, automatic alarm that adapts to my habits and improves my quality of rest. Would that convince you to buy one?

Categories
Operating System

What is Data Forensics?

Short History of Data Forensics

The concept of data forensics emerged in the 1970s, with the first acknowledged data crime occurring in Florida in 1978, where deleting files to hide evidence was made illegal. The field gained traction through the late 20th century, with the FBI creating the Computer Analysis and Response Team, quickly followed by the creation of the British Fraud Squad. The small initial size of these organizations created a unique situation where civilians were brought in to assist with investigations. In fact, it’s fair to say that computer hobbyists in the 1980s and 1990s gave the profession traction, as they assisted government agencies in developing software tools for investigating data-related crime. The first conference on digital evidence took place in 1993 at the FBI Academy in Virginia. It was a huge success, with over 25 countries attending, and it concluded with an agreement that digital evidence was legitimate and that laws regarding investigative procedure should be drafted. Until this point, no federal laws had been put in place regarding data forensics, somewhat detracting from its legitimacy. The last chapter of this history takes place in the 2000s, which marked the field’s explosion in size. The advances in home computing during this time allowed the internet to start playing a larger part in illegal behavior, and brought more powerful software both to aid and to counteract illegal activity. At this point, government agencies were still aided greatly by grassroots computer hobbyists, who continued to help design software for the field.

Why is it so Important?

The first personal computers, while incredible for their time, were not capable of many operations, especially compared to today’s machines. These limitations were bittersweet, as they also limited the illegal behavior available. With hardware and software developing at an exponential rate, coupled with the invention of the internet, it wasn’t long before crimes grew in both number and severity. For example, prior to the internet, someone could be caught in possession of child pornography (a crime commonly associated with data forensics) and that would be the end of it; they would be prosecuted and their data confiscated. Post-internet, someone in possession of the same materials could also be guilty of distribution across the web, greatly increasing the severity of the crime, as well as the number of others who might be involved. 9/11 sparked a realization of the need for further development in data investigation. Though no computer hacking or software manipulation aided in the physical act of terror, it was discovered later that there were traces of data leading around the globe that pieced together a plan for the attack. Had forensics investigations been more advanced at the time, the plan might have been discovered and the entire disaster avoided. A more common use for data forensics is discovering fraud in companies, and contradictions in their servers’ files. Such investigations tend to take a year or longer to complete, given the sheer amount of data that has to be looked through. Bernie Madoff, for example, used computer algorithms to change the apparent origin of the money being deposited into his investors’ accounts so that his own accounts did not drop at all. In that case, more than 36 billion dollars were stolen from clients; magnitudes like that are not uncommon for fraud of this degree. Additionally, if a company declares bankruptcy, it can be required to submit data for analysis to make sure no one is benefiting from the company’s collapse.

How Does Data Forensics Work?

The base procedure for collecting evidence is not complicated. Judd Robbins, a renowned computer forensics expert, describes the sequence of events as follows:

The computer is first collected, and all visible data – meaning data that does not require any algorithms or special software to recover – is copied exactly to another file system or computer. It’s important that the actual forensics process not take place on the accused’s computer, in order to ensure no contamination of the original data.

Hidden data is then searched for, including deleted files and files that have been purposefully hidden from plain view, which sometimes require extensive effort to recover.

Beyond simply deleting files or making them invisible to the system, data can also be hidden in places on the hard drive where it would not logically be. A file could be disguised as a registry file in the operating system to avoid suspicion. Sorting through the unorthodox parts of the hard drive in this way can be incredibly time-consuming.

While all of this is happening, a detailed report must be kept up to date, tracking not only the contents of the files but whether any of them were encrypted or disguised. In the world of data forensics, merely hiding certain files can help establish probable cause.

Tools

Knowing the workflow of investigations is useful for a basic understanding, but the tools created to assist investigators do the core work of discovering data, leaving the investigators to interpret the results. While the details of these tools are often kept under wraps to prevent anti-forensics tools from being developed, their basic workings are public knowledge.

Data recovery tools are algorithms that detect residual traces on the sectors of a disk to essentially guess what might have been there before (consumer data-recovery tools work the same way). Reconstruction tools do not have a 100% success rate, as some data may simply be too scattered to recover. Deleted data can be compared to an unsolved puzzle with multiple solutions, or perhaps a half-burnt piece of paper. It’s also possible to recover only some of the data, so chance comes into play again as to whether that data will be useful or not.

We’ve previously mentioned the process of copying the disk in order to protect the original. A software or hardware write blocker is used while copying the disk, ensuring that none of the metadata is altered in the process. The point of this tool is to be untraceable, so that an investigator does not leave a signature on the disk. You could think of accidentally updating the metadata as putting your digital fingerprints on the crime scene.

Hashing tools are used to compare one disk to another. If an investigator were to compare two servers with thousands of gigabytes of data by hand, it would take years to look for something that may not even exist. Hashing is a type of algorithm that simply runs through one disk piece by piece and tries to identify an identical file on a different one. The nature of hashing makes it excellent for fraud investigations, as it allows the analyst to check for anomalies that would indicate tampering.
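As a small illustration of the idea, here is how a file hash works in practice in Python; the paths are made up, but identical files always produce identical digests:

    import hashlib

    # Hash a file in chunks so even huge evidence files fit in memory.
    def sha256_of(path, chunk_size=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    # Matching digests mean bit-for-bit identical contents.
    if sha256_of("evidence/ledger.xls") == sha256_of("backup/ledger.xls"):
        print("Files are identical; no tampering between copies.")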

Though many other tools exist, and many are developed as open source for operating systems such as Linux, these are the fundamental types of tools used. As computers continue to advance, more tools will inevitably be invented to keep up with them.

Difficulties During Investigations

The outline of the process makes the job seem somewhat simple, if a little tedious. What excites experts in the field is the challenge of defeating the countermeasures a culprit may have put in place. These countermeasures are referred to as ‘anti-forensics’ tools and can range as far in complexity as the creator’s knowledge of software and computer operations. For example, every time a file is opened its ‘metadata’ is changed – metadata refers to the information about the file, not what’s inside it, such as the last time it was opened, the date it was created, and its size – which can be an investigator’s friend or foe. Forensic experts are incredibly cautious not to contaminate metadata while searching through files, as doing so can compromise the integrity of the investigation; it could be crucial to know the last time a program was used or a file opened. Culprits with sufficient experience can edit metadata to throw off investigators. Additionally, files can be masked as different kinds of files to confuse investigators. For example, a text file containing a list of illegal transactions could be saved as a .jpeg file and its metadata edited so that the investigator would either pass over it, thinking a picture irrelevant, or perhaps open the picture to find nothing more than a blank page or even an actual picture of something. They would only find the real contents of the file if they thought to open it with a word processor, as it was originally intended.
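To make both tricks concrete, here is a hypothetical sketch of two checks an examiner might script; the path is invented, but the metadata fields and the JPEG signature are real:

    import os, time

    # 1. File-system metadata: the timestamps a culprit may have edited.
    info = os.stat("suspect_files/vacation.jpeg")
    print("size (bytes): ", info.st_size)
    print("last modified:", time.ctime(info.st_mtime))
    print("last accessed:", time.ctime(info.st_atime))

    # 2. File signature: real JPEGs start with the bytes FF D8 FF, so a
    # text file renamed to .jpeg is exposed by its first three bytes.
    with open("suspect_files/vacation.jpeg", "rb") as f:
        header = f.read(3)
    print("looks like a real JPEG" if header == b"\xff\xd8\xff"
          else "extension does not match contents")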

Another reason data is carefully copied off the original host is to avoid any risk of triggering a programmed ‘tripwire,’ so to speak. Trying to open a specific file could activate a program that scrambles the hard drive to prevent any other evidence from being found. While deleted data can often be recovered, data destroyed by ‘scrambling’ – rewriting random bits across the entire drive – cannot. Overwritten data is impossible to restore in this case, and can therefore protect incriminating evidence. That being said, if such a process is triggered, it offers compelling reason to continue the investigation: someone has gone to great lengths to keep the data out of the hands of the police.

Additionally, remote access via the internet can be used to alter data on a local computer. For this reason, it is common practice for those investigating to sever any external connections the computer may have.

Further, data forensics experts are forced to be meticulous, as small errors can result in corrupted data that can no longer be used as evidence. Beyond fighting the defendant’s attempts to hide their data, analysts wrestle with the law to keep their evidence relevant and legal. Accidentally violating someone’s rights to data security can result in evidence being thrown out. Just as with any legal search, a warrant is needed, and not having one will void any evidence found. Beyond national legal barriers, the nature of the internet allows users to freely send files between countries with ease. If information is stored in another country, continuing the investigation requires international cooperation. While many countries inside NATO and the UN are working on legislation that would make international data investigations easier, storing data around the globe remains a common tool of hackers and other computer criminals for maintaining anonymity.

Looking Forward

Data security is a serious concern in our world, and it will only grow in importance given our everyday reliance on digital storage and communication. As computer technology continues to advance at its current pace, both forensics and anti-forensics tools will keep improving as more capable software is developed. With AI research being done at universities across the world, it is quite possible that future forensics tools will be adaptive and learn to find patterns by themselves. We already have learning security tools, such as Norton or McAfee virus protection for home computers, which remember which programs you tell them are safe and make educated guesses in the future based on your preferences. This only scratches the surface of what such software is capable of, leaving much to be discovered in the future. With the advancement in software comes the negative, too: more powerful resources for cyber criminals to carry out their operations undetected. Data forensics, and information security as a whole, can then be seen as a never-ending race to stay in front of computer criminals. As a result, the industry continues to flourish, as new analysts are always needed and software advances take place every day.

Categories
Operating System

CPU Overclocking: Benefits, Requirements and Risks

The Benefits of Overclocking

Overclocking is, essentially, using the settings present on the motherboard to have the CPU run at higher speeds than it runs by default. This comes at the cost of increased heat production, as well as a potential reduction in lifespan, though for many people the benefits far outweigh the risks.

Overclocking allows you to get ‘free’ value from your hardware, potentially letting the CPU last longer before it needs an upgrade, as well as generally increasing performance in high-demand applications like gaming and video editing. A good, successful overclock can grant a 20% performance increase or more, as long as you’re willing to put in the effort.

Requirements 

Overclocking is pretty simple nowadays; however, there are some required supplies and specifications to consider before you’ll be able to do it. In most cases, only computers that you put together yourself will be able to overclock, as pre-built ones rarely have the necessary hardware, unless you’re buying from a custom PC builder.

The most important thing to consider is whether your CPU and motherboard even support overclocking. For Intel computers, any CPU with a “K” on the end of its name, such as the recently released i7-7700k, will be able to overclock. AMD has slightly different rules, with many more of their CPUs unlocked for overclockers to tinker with. Always check the specific SKU you’re looking at on the manufacturer’s website, so you can be sure it’s unlocked!

Motherboards are a bit more complicated. For Intel chips, you’ll need to pick up a motherboard that has a “Z” in the chipset name, such as the Z170 and Z270 motherboards, which are both compatible with the previously mentioned i7-7700k. AMD, once again, is a bit different: most of their motherboards are overclock-enabled, but you’re still going to want to check the manufacturer’s website for whatever board you’re considering.

Another thing to consider is the actual overclocking-related features of the motherboard you get. Any motherboard that has the ability to overclock will be able to overclock to the same level (though this was not always the case), but some motherboards have built-in tools to make the process a bit easier. For instance, some Asus and MSI motherboards have what is essentially an automated overclock feature. You simply click a button in the BIOS (the software that controls your motherboard), and it will automatically load up a fairly stable overclock!

Of course, the automatic system isn’t perfect. Usually the automated overclocks are a bit conservative, which guarantees a higher level of stability, at the cost of not fully utilizing the potential of your chip. If you’re a tinkerer like me who wants to get every drop of performance out of your system, a manual overclock is much more effective.

The next thing to consider is your cooling system. One of the major byproducts of overclocking is increased heat production, as you usually have to turn up the stock voltage of the CPU in order to get it to run stably at higher speeds. The stock coolers that come in the box with some CPUs are almost definitely not going to be enough, so much so that Intel doesn’t even include them in the box for their overclockable chips anymore!

You’re definitely going to want to buy a third-party cooler, which will run you between $30 and $100 for an entry-level model, depending on what you’re looking for. Generally speaking, I would stick with liquid cooling when it comes to overclocks, with good entry-level coolers like the Corsair H80i and H100i being my recommendations. Liquid cooling may sound complicated, but it’s fairly simple as long as you’re buying all-in-one units like the Corsair models I mentioned above. Custom liquid cooling is a whole different story, however, and is WAY out of the scope of this article.

If you don’t want to fork over the money for a liquid cooling setup, air cooling is still effective on modern CPUs. The Cooler Master Hyper 212 EVO is a common choice for a budget air cooler, running just below 40 bucks. However, air cooling isn’t going to get you the same low temperatures as liquid cooling, so you won’t be able to push as high an overclock unless you want to compromise the longevity of your system.

The rest of the requirements are pretty mundane. You’re going to want a power supply that can handle the higher power requirement of your CPU, though to be honest this isn’t really an issue anymore. As long as you buy a highly rated power supply from a reputable company of around 550 watts or higher, you should be good for most builds. There are plenty of online “tier-lists” for power supplies; stick to tier one or two for optimal reliability.

The only other thing you’ll need to pick up is some decent-quality thermal compound. Thermal compound, also called thermal paste, is basically just a grey paste that you put between the CPU cooler and the CPU itself, allowing for more efficient heat transfers. Most CPU coolers come with thermal paste pre-applied, but the quality can be dubious depending on what brand the cooler is. If you want to buy your own, I recommend IC Diamond or Arctic Silver as good brands for thermal compound.

Risks

Overclocking is great, but it does come with a few risks. They aren’t nearly as high as they used to be, given the relative ease of modern overclocking, but they’re risks to be considered nonetheless.

When overclocking, what we’re doing is increasing the multiplier on the CPU, allowing it to run faster. The higher we clock the CPU, the more voltage it will require, which in turn produces more heat.
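To put numbers on that: a CPU’s clock speed is its base clock multiplied by its multiplier. The i7-7700k, for example, pairs a 100MHz base clock with a stock multiplier of 42, so 100MHz × 42 = 4.2GHz; raising the multiplier to 50 gives 100MHz × 50 = 5.0GHz, the kind of overclock discussed below.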

Heat is the main concern when overclocking, and too much heat can lead to a shorter lifespan for the chip. Generally speaking, once your CPU is consistently running above 86 degrees Celsius, you’re starting to get into the danger zone. Temperatures like that certainly won’t kill your CPU immediately, but they could lower its functional lifespan overall.

For most people, this won’t really be an issue. Not many people nowadays plan on having their computer last for 10 years or more, but it could be something to worry about if you do want to hold onto the computer for a while. However, as long as you keep your temperatures down, this isn’t something you need to worry about. Heat will only outright kill a CPU when it exceeds around 105 degrees Celsius, though your CPU should automatically shut off at that point.

The other main risk is voltage. As previously mentioned, in order to achieve higher overclocks you also need to increase the voltage provided to the CPU. The resulting heat is one problem, but the voltage itself can be another: too high a voltage can actually fry the chip, killing it.

For absolute safety, many people recommend not going above 1.25v, and just settling for what you can get at that voltage. However, most motherboards will allow you to set anything up to 1.4v before notifying you of the danger.

My personal PC runs at 1.3v, and some people go as high as 1.4v without frying the chip. There really isn’t a hard and fast rule; just make sure to check what kind of voltages people are using for the hardware you bought, and try to stick around that area.

Essentially, as long as you keep the CPU cool (hence my recommendation for liquid cooling), and keep the voltages within safe levels (I’d say 1.4v is the absolute max, but I don’t recommend even getting close to it), you should be fine. Be wary, however, as overclocking will void some warranties depending on who you’re buying the CPU from, especially if the CPU ends up dying due to voltage.

Afterthoughts – The Silicon Lottery

Now that you understand the benefits of overclocking, as well as the risks and requirements, there’s one more small concept: the silicon lottery.

The silicon lottery is the commonly used term for the variance in CPU overclocks from chip to chip. Basically, just because you bought the same model of CPU as someone else doesn’t mean it will run at the same temperatures and overclock to the same point.

I have an i7-7700k that I’m cooling with a Corsair H100i v2. I am able to hold a stable 5GHz overclock at 1.3v, the stock settings being 4.2GHz at around 1.2v. However, not everyone is going to achieve results like this. Some chips might be able to hit 5GHz at slightly below 1.3v; some might only be able to achieve 4.8GHz at 1.3v. It really is just luck, and that’s the main reason overclocking takes time to do. You can’t just set your CPU to the same settings as someone else and expect it to work. It’s going to require some tinkering.

Hopefully, this article has helped you understand overclocks more. There are some risks, as well as some specific hardware requirements, but from my perspective they’re all worth the benefits.

Always remember to do your research, and check out a multitude of overclocking guides. Everyone has different opinions on what voltages and temperatures are safe, so you’ll need to check out as many resources as possible.
If you do decide that you want to try overclocking, then I wish you luck, and may the silicon lottery be ever in your favor!

Categories
Security Web

Private Data in the Digital Age

Former U.S. spy agency contractor Edward Snowden is wanted by the United States for leaking details of U.S. government intelligence programs

In a scenario where someone has a file of information stored on a private server with the intent to keep it private, is it ever justified for someone else to exploit a security flaw and post the information anonymously on the internet? The easy answer is “it depends.” But that answer does not do the question justice, because there are extenuating circumstances in which this kind of theft and distribution is justifiable.

One such case is whistle-blowing. Edward Snowden is still a man of much controversy. Exiled for leaking sensitive government documents, he is labeled a hero by some and a traitor by others. Snowden was formerly in the Special Forces and later joined the CIA as a technology specialist. He stole top-secret documents detailing how the National Security Agency and FBI were tapping directly into the central servers of leading U.S. Internet companies to extract personal data. Snowden leaked these documents to the Washington Post, exposing the PRISM program, which collected private data on American citizens. The program was born out of a failed warrantless domestic surveillance act and kept under lock and key to circumvent the public eye. Americans were unaware of, and alarmed by, the breadth of unwarranted government surveillance programs collecting, storing, and searching their private data.

Although Snowden illegally distributed classified information, the government was, in effect, doing the same with the personal data of its constituents. I would argue that Snowden is a hero. He educated the American people about the NSA overstepping its bounds and infringing upon American rights. Governments exist to ensure the safety of the populace, but privacy concerns will always be in conflict with government surveillance and threat-prevention. The government should not operate in the shadows; it is beholden to its people, and they are entitled to know what is going on.

The United States government charged Snowden with theft, “unauthorized communication of national defense information,” and “willful communication of classified communications intelligence information to an unauthorized person.” The documents that came to light following Snowden’s leaks pertained only to unlawful practices and did not compromise national security. Therefore, it appears as though the government is trying to cover up its own mistakes. Perhaps this is most telling in one of Edward Snowden’s recent tweets:

“Break classification rules for the public’s benefit, and you could be exiled.
Do it for personal benefit, and you could be President.” – @Snowden

This commentary on Hillary Clinton shows that, in the eyes of the government, who is right and wrong changes on a case-by-case basis. In many ways, Snowden’s case mirrors Daniel Ellsberg’s leak of the Pentagon Papers in 1971. The Pentagon Papers contained evidence that the U.S. government had misled the public regarding the Vietnam War, strengthening anti-war sentiment among the American populace. In both cases, whistle-blowing was a positive force, educating the public about abuses happening behind their backs. While in general stealing private information and distributing it to the public is wrong, in these cases the crime of stealing exposed a larger evil and provided a wake-up call for the general population.

Alternatively, in the vast majority of cases, accessing private files via a security flaw is malicious, and the government should pursue charges. While above I advocated for a limited form of “hacktivism,” it was a special case exposing abuses by a government that fundamentally infringed on rights to privacy. In almost all cultures, religions, and societies, stealing is recognized as wrongdoing and rightfully treated as such. Stealing sensitive information and posting it online should be treated in a similar manner. Publishing incriminating files about someone else online can ruin their life chances. For example, during the infamous iCloud hack, thousands of nude or pornographic pictures of celebrities were released online. This was private information which the leaker took advantage of for personal gain. For many female celebrities it was degrading and humiliating. The leaker responsible for the iCloud leaks was therefore not justified in taking and posting the files. While the definition of leaking sensitive information for the “common good” can itself be a blurred line, a situation like the iCloud leak evidently did not fit in this category. Hacking Apple’s servers to access and leak inappropriate photos can only be labeled a malevolent attack on female celebrities, with potentially devastating repercussions for their careers.

While the iCloud hack was a notorious example of leaking private data in a hateful way, there are more profound ways in which posting private data can destroy someone’s life. Most notably, stealing financial information and identification (such as a Social Security number) can have a huge, detrimental effect on someone’s life. My grandmother was a victim of identity theft: someone she knew and trusted stole her personal information and used it for personal gain. This same scenario plays out online constantly, and it can drain someone’s life savings, reduce their access to credit and loans, and leave them with a tarnished reputation. Again, we draw a line between leaking something in the public’s interest and exploiting a security flaw for the leaker’s benefit. By gaining access to personal files, hackers can wreak havoc and destroy lives. Obviously this type of data breach is unacceptable and cannot be justified.

Overall, taking sensitive material and posting it anonymously online can generally be regarded as wrongdoing; however, there are exceptions, such as whistle-blowing, where the leaker is acting for the common good. Those cases are few and far between, and the bad cases have harmful repercussions that can follow someone throughout their life. Ultimately, to recall Snowden’s case, everyone has a right to privacy. That is why leveraging a security flaw and posting someone’s files online is wrong from the get-go: it overrides personal privacy. In an increasingly digital world it is difficult to keep anything private, but everyone has a fundamental right to privacy which should not be disrespected or infringed upon.

Categories
Operating System

The Touch Bar may seem like a gimmick, but has serious potential

The first iPhones came out in 2007. At the time, people had BlackBerrys and Palm PDAs – phones that came with physical keyboards and a stylus. The iPhone was immediately praised for its aesthetics but criticized for its limited functionality. As development expanding the iPhone’s capabilities took off, so did the phone itself. After wrestling the market away from traditional-style PDAs, iPhones and Androids began leaving their competition in the dust.

Jump forward to today. The new MacBook Pros now come with a touch strip (marketed as the Touch Bar) in place of the function keys that used to occupy the first row. While those functions haven’t gone away, Apple decided that a touch strip would enable a more dynamic style of computing. Of course, Apple detractors look at this as a sign that Apple is running out of ideas and resorting to gimmicks.

I recently got my hands on one of these MacBook Pros, and yes, there are obvious shortcomings. Though the computer is beautifully engineered and designed, it’s questionable that the Touch Bar itself isn’t high definition (or “Retina display,” as Apple would’ve marketed it). Using it feels a little weird at first, since you don’t get the tactile response you do from any other key on the keyboard, but I’ve gotten used to it. There are also some minor design flaws that can be annoying: the volume and brightness adjustment bar isn’t the most intuitive, I’ve managed to press the power button a couple of times when I meant to use the delete key, and some functions that the Touch Bar is heavily advertised for are sometimes buggy, particularly scrubbing through a video – so much for Apple’s reputation for quality control.

But it’s easy to see why Apple might envision the Touch Bar as the next evolution in laptop computing. It’s clear that they don’t believe in a laptop/tablet hybrid à la the Surface Pro – not even Microsoft themselves are buying into that as much anymore. But the dynamism that the Touch Bar offers, or perhaps more importantly has the potential of offering, is far more appealing. And though the Touch Bar may seem limited in functionality and usefulness today, it’s a little like the original iPhone: a lot depends on the software development that follows.