Author Archives: Julien Olsen

Setting Roam Aggression on Windows Computers

What is Wireless Roaming?

Access Points

To understand what roaming is, you first have to know about the hardware that makes it necessary.

If you are only used to household internet setups, the idea of roaming might seem a little strange. In your house you have your router, which you connect to, and that’s all you need to do. You may have the option of choosing between the 2.4GHz and 5GHz bands, but that’s about as complicated as it gets.

Now imagine that your house is very large, say the size of UMass Amherst. From your router in your living room, the DuBois Library, it might be difficult to connect all the way up in your bedroom on Orchard Hill. Obviously in this situation one router will never suffice, and so a new component is needed.

An Access Point (AP for short) provides essentially the same function as a router, except that multiple APs used in conjunction project a Wi-Fi network further than a single router ever could. All APs are tied back to a central hub, which you can think of as a very large, powerful modem, which provides the internet signal via cable from the Internet Service Provider (ISP) out to the APs, and then in turn to your device.

On to Roaming

So now that you have your network set up with your central hub in DuBois (your living room) and an AP in your bedroom (Orchard Hill), what happens when you move between the two? The network is the same, but how is your computer supposed to know that the AP on Orchard Hill is no longer the strongest signal once you’re back in DuBois? This is where roaming comes in. Based on the ‘aggressiveness’ your Wi-Fi card is set to roam at, your computer will test the connection to determine which AP has the strongest signal from your location, and then connect to it. The network is set up so it can tell your computer that all the APs belong to the same network, allowing your computer to transfer its connection without making you enter your credentials every time you move.

What is Roam Aggressiveness?

The ‘aggressiveness’ with which your computer roams determines how frequently, and how readily, it switches APs. If it is set very high, your computer may jump between APs constantly; this can be a problem, as your connection is interrupted each time your computer authenticates to another AP. Having the aggressiveness set very low, or disabled, can cause your computer to ‘stick’ to one AP, making it difficult to move around and maintain a connection. Low roaming aggressiveness is the more frequent problem people run into on large networks like eduroam at UMass. If you are experiencing issues like this, you may want to change the aggressiveness to suit your needs. Here’s how:
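To make the idea concrete, here is a toy sketch of the decision a Wi-Fi card makes. This is not the actual driver logic; the threshold values are invented purely for illustration.

```python
# Toy model of a roaming decision. Signal strength is measured in dBm,
# where a less negative number means a stronger signal. Higher
# aggressiveness means a smaller improvement is needed before the card
# jumps to a new AP. All threshold values here are invented.
ROAM_THRESHOLD_DB = {"low": 15, "medium": 8, "high": 3}

def should_roam(current_dbm, candidate_dbm, aggressiveness):
    """Return True if the card should switch to the candidate AP."""
    improvement = candidate_dbm - current_dbm
    return improvement >= ROAM_THRESHOLD_DB[aggressiveness]
```

Walking away from one AP, the current signal might fade to -75 dBm while a nearer AP reads -68 dBm; a card set to high aggressiveness would roam on that 7 dB improvement, while one set to low would ‘stick’.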

How to Change Roam Aggressiveness on Your Device:

First, navigate to the Control Panel which can be found in your Start menu. Then click on Network and Internet.

From there, click on Network and Sharing Center. 

Then, you want to select Wi-Fi next to Connections. Note: You may not have eduroam listed next to Wi-Fi if you are not connected or connected to a different network.

Now, select Properties and agree to continue when prompted for Administrator permissions.

Next, select Configure for your wireless card (the exact card will differ from device to device).

Finally, navigate to Advanced, and then under Property select Roaming Sensitivity Level. From there you can change the Value based on what issue you are trying to address.

And that’s all there is to it! Now that you know how to navigate to the roaming settings, you can experiment a little to find what works best for you. Depending on your model of computer, you may have more options than just High, Middle, and Low values.

Changing roaming aggressiveness can be helpful for stationary devices, like desktops, too. Perhaps someone near you has violated UMass’ wireless airspace policy and set up a hotspot network or a wireless printer. Their setup may interfere with the AP closest to you, causing packet loss or latency (ping) spikes; you may not even be able to connect for a brief time. Changing roaming settings can help your computer move to the next best AP while the interference is occurring, resulting in a more continuous experience for you.

Tips for Gaming Better on a Budget Laptop

Whether you came to college with an old laptop, or want to buy a new one without breaking the bank, making our basic computers faster is something we’ve all thought about at some point. This article will show you some software tips and tricks to improve your gaming experience without losing your shirt, and at the end I’ll mention some budget hardware changes you can make to your laptop. First off, we’re going to talk about in-game settings.

In-Game Settings:

All games have built in settings to alter the individual user experience from controls to graphics to audio. We’ll be talking about graphics settings in this section, primarily the hardware intensive ones that don’t compromise the look of the game as much as others. This can also depend on the game and your individual GPU, so it can be helpful to research specific settings from other users in similar positions.

V-Sync:

V-Sync, or Vertical Synchronization, allows a game to synchronize its framerate with your monitor’s refresh rate. Enabling this setting will increase the smoothness of the game. However, on lower-end computers you may be happier just running the game at a stable FPS below your monitor’s refresh rate. (Note: most monitors have a 60Hz refresh rate, which corresponds to 60 FPS.) For that reason, you may want to disable it to allow for more stable performance at low FPS.
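V-Sync itself happens in the GPU driver, but the underlying idea of pacing frames against a fixed budget can be sketched in a few lines. This is a simplified illustration only; `render_frame` is a stand-in for the game’s real work.

```python
import time

def render_frame():
    """Stand-in for the game's actual rendering work."""
    pass

def run_capped(target_fps, frames):
    """Render frames, sleeping so we never exceed target_fps.

    At 30 FPS the budget is ~33.3 ms per frame; if rendering finishes
    early, we sleep off the remainder so frame delivery stays even
    instead of bursting and stuttering.
    """
    frame_budget = 1.0 / target_fps
    for _ in range(frames):
        start = time.perf_counter()
        render_frame()
        elapsed = time.perf_counter() - start
        if elapsed < frame_budget:
            time.sleep(frame_budget - elapsed)
```

A stable, evenly paced 30 FPS often feels better than an uneven 40–60 FPS, which is why disabling V-Sync and capping the framerate can be the right call on budget hardware.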

Anti-Aliasing:

Anti-Aliasing, or AA for short, is a rendering option which reduces the jaggedness of lines in-game. Unfortunately, that additional smoothness comes at a heavy cost in hardware usage, and disabling it while keeping things like texture quality or draw distance higher can bring big performance improvements without hurting a game’s appearance too much. Additionally, games may offer several different kinds of AA. MSAA (Multisample AA) and the even more intensive TXAA (Temporal AA) are higher-quality smoothing processes that have an even bigger impact on performance, so turning these off on lower-end machines is almost always a must. FXAA (Fast Approximate AA) uses the least processing power, and can therefore be a nice setting to leave on if your computer can handle it.

Anisotropic Filtering (AF):

Despite what its name might suggest, this setting does not blur anything; anisotropic filtering sharpens textures that are viewed at an oblique angle, such as a road stretching away from your character. That extra clarity requires additional sampling calculations, putting a greater strain on your system. Turning this setting down or off can yield performance improvements, at the cost of distant surfaces appearing blurrier.

Other Settings:

While the settings above are the heaviest hitters in terms of performance, changing some others can help increase stability and performance too (beyond simple texture quality and draw distance tweaks). Shadows and reflections often go unnoticed compared to other effects, so while you may not need to turn them off, turning them down can definitely make an impact. Motion blur should be turned off completely, as it can turn quick movements into heavy lag spikes.

Individual Tweaks:

The guide above is a good starting point for graphics settings; because there are so many different computer models, there is an equally large number of combinations of settings. From this point, you can start to increase settings slowly to find the sweet spot between performance and quality.

Software:

Before we get to some more advanced tips, it’s good practice to close applications that you are not using, to free up CPU, memory, and disk resources. This alone will help immensely in allowing games to run better on your system.

Task Manager Basics:

If you’ve ever tried to game on a slower computer, you’ll know how annoying it is when the game is running fine, then suddenly everything slows to slideshow speed and you fall off a cliff. Chances are that this kind of lag spike is caused by other “tasks” running in the background, preventing the game from using the power it needs to keep going. Or perhaps your computer has been on for a while, so when you start the game, it runs slower than its maximum speed. Even though you hit the “X” button on a window, what’s called the “process tree” may not have been completely terminated. (Think of this like cutting down a weed but leaving the roots.) This can result in resources being taken up by idle programs that you aren’t using right now.

It’s at this point that Task Manager becomes your best friend. To open Task Manager, press CTRL + SHIFT + ESC, or press CTRL + ALT + DEL and select Task Manager from the menu. When it first appears, you’ll notice that only the programs you have open are listed; click the “More Details” button at the bottom of the window to expand Task Manager. Now you’ll see a series of tabs, the first one being “Processes”, which gives you an excellent overview of everything your CPU, memory, disk, and network are crunching on. Clicking any of these column headers will bring the processes using the most of that resource to the top. Now you can see what’s really using your computer’s processing power.

It is important to realize that many of these processes are part of your operating system, and therefore cannot be terminated without causing system instability. However, things like Google Chrome and other applications can be closed by right-clicking and hitting “End Task”. If you’re ever unsure whether you can safely end a process, a quick Google search of the process in question will most likely point you in the right direction.
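Conceptually, clicking a column header in the Processes tab just re-sorts a table by one resource. Here is a toy version of that; the process list is invented sample data, not read from the operating system.

```python
# A made-up snapshot of the kind of table Task Manager shows.
processes = [
    {"name": "game.exe",    "cpu": 45.0, "memory_mb": 2100},
    {"name": "chrome.exe",  "cpu": 20.0, "memory_mb": 1800},
    {"name": "System",      "cpu": 2.0,  "memory_mb": 150},
    {"name": "updater.exe", "cpu": 15.0, "memory_mb": 300},
]

def top_by(resource, procs):
    """Mimic clicking a column header: sort descending by that resource."""
    return sorted(procs, key=lambda p: p[resource], reverse=True)
```

Sorting by "cpu" puts the game itself on top; sorting by "memory_mb" instead reveals which background applications are hoarding RAM and are worth closing.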

Startup Processes:

Here is where you can really make a difference to your computer’s overall performance, not just for gaming. From Task Manager, if you select the “Startup” tab, you will see a list of all programs and services that can start when your computer is turned on. Task Manager gives each one an impact rating of how much it slows down your computer’s boot time. The gaming app Steam, for example, can noticeably slow down a computer on startup. A good rule of thumb is to allow virus protection to start with Windows; everything else is up to individual preference. Disabling these processes on startup prevents unnecessary tasks from ever being opened, leaving more hardware resources available for gaming.

Power Usage:

You probably know that unlike desktops, laptops contain a battery. What you may not know is that you can alter your battery’s behavior to increase performance, as long as you don’t mind it draining a little faster. On the taskbar, which by default is located at the bottom of your screen, you will notice a collection of small icons next to the date and time on the right, one of which looks like a battery. Right-clicking it will bring up a menu with a “Power Options” entry.

Clicking this will bring up a settings window which allows you to change and customize your power plan. By default it is set to “Balanced”, but changing to “High Performance” can increase your computer’s gaming potential significantly. Be warned that battery duration will decrease on the High Performance setting, although it is possible to configure the plan’s behavior separately for when your computer is on battery or plugged in.

Hardware:

Unlike desktops, laptops do not have many upgrade paths. However, one option exists for almost every laptop that can have a massive effect on performance if you’re willing to spend a little extra.

Hard Disk (HDD) to Solid State (SSD) Drive Upgrade:

Chances are that if you have a budget computer, it came with a traditional spinning hard drive. For manufacturers this makes sense, as HDDs are cheaper than solid state drives and work perfectly well for light use. Games, however, demand that the drive store and recall data very quickly, sometimes causing laptop HDDs to fall behind. Additionally, laptops have motion sensors built into them which restrict read/write operations while the computer is in motion, to prevent damage to the spinning disk inside the HDD. An upgrade to an SSD not only eliminates this restriction, but also offers much faster read/write times thanks to the lack of any moving parts. Although SSDs can get quite expensive depending on the size you want, companies such as Crucial or Kingston offer comparatively cheap alternatives to Samsung or Intel while still giving you the core benefits of an SSD. There are plenty of tutorials online demonstrating how to install a new drive in your laptop; make sure you’re comfortable with the risks before attempting it, or simply take your laptop to a repair store and have them do it for you. It’s worth mentioning that when you install a new drive, you will need to reinstall Windows and all your applications from your old drive.

Memory Upgrade (RAM):

Some laptops have an extra memory slot, or just ship with a lower capacity than what they are capable of holding. Most budget laptops will ship with 4GB of memory, which is often not enough to support both the system, and a game.

Upgrading or increasing memory can give your computer more headroom to process and store data without bogging down your entire system. Unlike with SSD upgrades, memory is very specific, and it is easy to buy a new stick that fits in your computer but does not function with its other components. It is therefore critical to do your research before buying more memory; that includes finding out your model’s maximum supported capacity, speed, and generation. The online technology store Newegg has a service that can help you find compatible memory types for your machine.

Disclaimer: 

While these tips and tricks can help your computer to run games faster, there is a limit to what hardware is capable of. Budget laptops are great for the price point, and these user tricks will help squeeze out all their potential, but some games will simply not run on your machine. Make sure to check a game’s minimum and recommended specs before purchasing/downloading. If your computer falls short of minimum requirements, it might be time to find a different game or upgrade your setup.

Engine Management: How Computers Unlocked the Internal Combustion Engine

Introduction

How did engines run before computers?

The internal combustion engine as we know it has always required some level of electrical signal to operate the ignition system. Before the 1980s, when the first engine management computer was produced, the electrical hardware on an engine was fairly rudimentary, boiling down to essentially a series of on/off switches for ignition timing. This is what’s referred to as mechanical ignition.

Mechanical ignition works by sending a charge from a battery to an ignition coil, which stores a high-voltage charge that discharges when provided with a path. This path is determined by a distributor, which is mechanically connected to the crankshaft of the engine. A distributor’s job is just as its name suggests: the rotation of the crankshaft causes the distributor to rotate, connecting the ignition coil to each cylinder’s spark plug in turn, igniting the mixture at the right time in the engine’s cycle to produce power.
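As a rough sketch of that mechanical relationship: a four-stroke, four-cylinder engine fires one cylinder every 180 degrees of crankshaft rotation, and the distributor simply steps through the firing order. The 1-3-4-2 order below is a common inline-four arrangement, used here only for illustration.

```python
FIRING_ORDER = [1, 3, 4, 2]  # a common inline-four firing order

def cylinder_to_fire(crank_angle_deg):
    """Which cylinder receives spark at a given crankshaft angle.

    A four-stroke cycle spans 720 degrees of crank rotation, so with
    four cylinders a spark is needed every 180 degrees. The distributor,
    geared to the crankshaft, routes the coil's charge accordingly.
    """
    step = (crank_angle_deg % 720) // 180
    return FIRING_ORDER[int(step)]
```

Because the routing is purely geometric, there is nothing to adjust at runtime: the spark arrives at a fixed point in the cycle regardless of load or temperature, which is exactly the limitation engine computers would later remove.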

Of course there are more complexities to how an engine produces power, involving vacuum lines and the workings of a carburetor and mechanical fuel pumps, however for this article we’re going to focus on electronics.

The First Computers Designed for Engines:

Electronic Fuel Injection, or EFI, has been around since the 1950s; however, before the mid-1970s it was used primarily in motorsport due to its higher cost compared to a carburetor. Japanese companies such as Nissan were pioneers in early consumer EFI systems. The advantages of EFI over carburetors include better startup in cold conditions, as well as massively increased fuel economy. Then in 1980, Motorola introduced the first engine control unit (ECU), which would begin the computer takeover of the car industry.

An ECU replaces the direct mechanical connections with sensors that each read data from different parts of the engine and feed it back to the ECU, which crunches the numbers and determines how to adjust the various components of the engine to keep it operating within predetermined limits. An oxygen sensor, or O2 sensor, is possibly one of the most important parts of a modern engine: connected to the exhaust, the O2 sensor reads the level of oxygen present after combustion. This is extremely important, as it tells the ECU how efficiently the engine is currently burning fuel. There are numerous other sensors on engines, but their jobs all fall under the same umbrella: to feed information back to the ECU, so that the microprocessor can adjust the timing and how much fuel is going into the engine accordingly.
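The closed loop between the O2 sensor and the fuel adjustment can be sketched as a tiny feedback rule. This is a simplification: real ECUs use lookup tables and proportional corrections, and the step size here is invented.

```python
STOICH_LAMBDA = 1.0  # lambda = 1.0 is the chemically ideal air/fuel mix

def adjust_fuel_trim(lambda_reading, trim, step=0.02):
    """One iteration of a closed-loop fuel correction.

    lambda > 1 means excess oxygen in the exhaust (running lean): add fuel.
    lambda < 1 means unburned fuel in the exhaust (running rich): remove fuel.
    """
    if lambda_reading > STOICH_LAMBDA:
        return trim + step
    if lambda_reading < STOICH_LAMBDA:
        return trim - step
    return trim
```

Run many times per second, small corrections like this keep the mixture hovering around the ideal ratio, which is what a distributor and carburetor could never do on their own.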

Replacing the mechanically driven timing of early engines allows for a wider range of adjustability and control to ensure the engine is running right. This let cars burn fuel much more cleanly and become much more efficient in general. As technology progressed, engine management became even more advanced, allowing for yet more meticulous control, as well as added safety measures. But what else did this computer-powered control do for the automotive industry?

Improvements in Performance

Engine Tuning

With ever-increasing processing power, the computers in cars advanced just as quickly as any others: exponentially. More efficient control of fuel and timing quickly led to tuning for maximum power and response. EFI and direct injection improved throttle response, and further tuning could give a car a wider powerband, a term for the range of revolutions per minute (RPM) where an engine makes usable power. Manufacturers, realizing the extensive power of ECUs, started building mechanical parts around them to utilize their strengths. Below is a list of variable timing technologies used by several different companies:

  • Variable Valves/ Variable Cam Design
    • Honda VTEC (Variable Valve Timing and Lift Electronic Control)
    • Mitsubishi MIVEC (Mitsubishi Innovative Valve timing Electronic Control System)
    • Toyota VVT-i (Variable Valve Timing with intelligence)
    • Nissan VVL/VVT (Variable Valve Lift/ Variable Valve Timing)

While differing in name and application, these systems all boil down to controlling the engine’s valve timing at different engine speeds (RPM). The word ‘variable’ stands out in all of these, and it is possibly the most powerful tool that advanced engine tuning enables. In this case, variable refers to the ability to change the behavior of the engine’s valves and camshafts (a long rod at the top of an engine that tells the valves when to move). As the engine speed increases, what might have been a good design at lower RPM starts to fall short, and this is what causes the powerband to drop off. Being able to alter the engine’s timing allows for better high- and low-end performance, as manufacturers essentially get to design their engine for both, and use the ECU to switch modes at the optimal time.
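That “switch modes at the optimal time” step amounts to a threshold check inside the ECU. The RPM figure below is invented for illustration; each manufacturer tunes its own switchover point.

```python
def cam_profile(rpm, switchover_rpm=4500):
    """Pick a cam behavior the way variable-valve systems do.

    Below the switchover point the engine stays on its mild, efficient
    low-RPM profile; at or above it, the ECU engages the aggressive
    high-lift, long-duration profile for top-end power.
    """
    return "high-lift" if rpm >= switchover_rpm else "low-rpm"
```

Real systems also consider load, temperature, and throttle position before switching, but the core idea is this simple: two valve designs in one engine, with the ECU deciding which is active.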

Looking Forward

Hybrids

Most people think of hybrids as the Toyota Prius, something designed with pure efficiency in mind; however, some supercar companies have taken hybrid technology and adapted it for performance. Supercars such as the McLaren P1 and Porsche 918 use electric motors to complement the power of the conventional combustion engine. Managed by an advanced ECU, the electric motors provide immediate power while the gas engine is still accelerating into its powerband. While the electric motors can be used on their own in place of the gas engine, they mainly serve to fill in the gap that the variable timing technology discussed previously could not. As regular hybrid technology continues to advance, we can expect to see the same with respect to response and performance.

While engine efficiency is still being improved, the means to do so are based on these core engine technologies and their supporting computer systems. Manufacturers have once again started producing supporting components to utilize the ECU’s ability to process data.

Hard Drives: How Do They Work?

What’s a HDD?

A Hard Disk Drive (HDD for short) is a type of storage commonly used as the primary storage system in both laptop and desktop computers. It functions like any other digital storage device, by writing bits of data and recalling them later. It’s worth mentioning that an HDD is “non-volatile”, which simply means that it retains data without a source of power. This feature, coupled with their large storage capacity and relatively low cost, is the reason HDDs are used so frequently in home computers. While HDDs have come a long way since they were first invented, the basic way they operate has stayed the same.

How does a HDD physically store info?

Inside the casing there are a series of disk-like objects referred to as “platters”.

The CPU and motherboard use software to tell the “read/write head” where to move over the platter, where it then magnetizes a “sector”. Each sector is an isolated part of the disk containing thousands of subdivisions, each capable of accepting a magnetic charge. Newer HDDs have a sector size of 4096 bytes, or 32768 bits; each bit’s magnetic charge translates to a binary 1 or 0 of data. Repeat this process and eventually you have a string of bits which, when read back, can give the CPU instructions, whether it be updating your operating system or opening your saved document in Microsoft Word.
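The sector arithmetic above is easy to check with a short sketch, using the 4096-byte sector size just mentioned. Note that a file always claims whole sectors, so even a 1-byte file occupies a full 4096-byte sector on disk.

```python
SECTOR_BYTES = 4096
BITS_PER_BYTE = 8

def bits_per_sector():
    """4096 bytes * 8 bits per byte = 32768 bits per sector."""
    return SECTOR_BYTES * BITS_PER_BYTE

def sectors_needed(file_bytes):
    """Sectors a file occupies: a partial sector still claims a whole one."""
    return -(-file_bytes // SECTOR_BYTES)  # ceiling division
```

This rounding-up is also why a folder full of tiny files can take up noticeably more disk space than the sum of the file sizes suggests.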

As HDDs have developed, one key factor that has changed is the orientation of the magnetic regions on the platter. Hard drives were first designed for “longitudinal recording”, meaning each bit’s magnetization lies parallel to the platter’s surface, and have since moved to a method called “perpendicular recording”, where the magnetic regions are stood on end. This change was made as hard drive manufacturers were hitting a limit on how small they could make each region due to the “superparamagnetic effect”: below a certain size, a region will flip its magnetic charge randomly based on temperature. This phenomenon would result in inaccurate data storage, especially given the heat that an operating hard drive emits.

One downside to Perpendicular Recording is increased sensitivity to magnetic fields and read error, creating a necessity for more accurate Read/Write arms.

How software affects how info is stored on disk:

Now that we’ve discussed the physical operation of a Hard Drive, we can look at the differences in how operating systems such as Windows, MacOS, or Linux utilize the drive. However, beforehand, it’s important we mention a common data storage issue that occurs to some degree in all of the operating systems mentioned above.

Disk Fragmentation

Disk fragmentation occurs after a period of data being stored and updated on a disk. For example, unless an update is stored directly after its base program, there’s a good chance that something else has been stored on the disk in between, so the update will have to be placed in a different sector, farther away from the core program files. Due to the physical time it takes the read/write arm to move around, fragmentation can eventually slow down your system significantly, as the arm needs to reference more and more separate parts of your disk. Most operating systems come with a built-in program designed to “defragment” the disk, which simply rearranges the data so that all the files for one program are in one place. The process takes longer the more fragmented the disk has become. Now we can discuss different storage protocols and how they affect fragmentation.
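First, though, the rearranging step a defragmenter performs can be sketched as a toy model, treating the disk as a flat list of sector labels. Real defragmenters work on actual sectors and must move data safely; this only shows the reordering idea.

```python
def defragment(disk):
    """Pack each file's blocks together, in order of first appearance.

    `disk` is a list where "A" marks a sector used by file A, and None
    marks free space. Fragmented files get their blocks made contiguous,
    with all free space pushed to the end.
    """
    order = []
    for block in disk:
        if block is not None and block not in order:
            order.append(block)
    packed = [b for name in order for b in disk if b == name]
    return packed + [None] * (len(disk) - len(packed))
```

A disk laid out as ["A", "B", "A", None, "B", "A"] comes back as ["A", "A", "A", "B", "B", None], so the read/write arm no longer has to seek between file A’s scattered pieces.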

Windows:

Windows traces its lineage back to MS-DOS (Microsoft Disk Operating System) and uses a file management system called NTFS, or New Technology File System, which has been the company’s standard since 1993. When given a write instruction, an NT file system places the information as close as possible to the beginning of the disk/platter. While this methodology is functional, it leaves only a small buffer zone between different files, eventually causing fragmentation to occur. Due to the small size of this buffer zone, Windows tends to be the most susceptible to fragmentation.

Mac OSX:

OSX and Linux are both Unix-based operating systems; however, their file systems are different. Mac uses the HFS+ (Hierarchical File System Plus) protocol, which replaced the old HFS method. HFS+ differs in that it can handle a larger amount of data at a given time, being 32-bit rather than 16-bit. Mac OSX doesn’t need a dedicated defragmentation tool the way Windows does; OSX avoids the issue by not reusing space on the HDD that has recently been freed up (by deleting a file, for example), instead searching the disk for larger free sectors to store new data. Doing so leaves older files more nearby space for updates. HFS+ also has a built-in tool called HFC, or Hot File adaptive Clustering, which relocates frequently accessed data to special sectors on the disk called the “Hot Zone” in order to speed up performance. This process can only take place if the drive is less than 90% full, otherwise issues in reallocation occur. Together, these processes make fragmentation a non-issue for Mac users.

Linux:

Linux is an open-source operating system, which means there are many different versions of it, called distributions, for different applications. The most common distributions, such as Ubuntu, use the ext4 file system. Linux has the best solution to fragmentation, as it spreads files out across the disk, giving each plenty of room to grow without interfering with the others. In the event that a file needs more space, the operating system will automatically try to move the files around it to give it more room. Especially given the capacity of most modern hard drives, this methodology is not wasteful, and results in essentially no fragmentation on Linux until the disk is above roughly 85% capacity.

What’s an SSD? How is it Different from an HDD?

In recent years, a new technology has become available on the consumer market which replaces HDDs and the problems that come with them. Solid State Drives (SSDs) are another kind of non-volatile memory; they simply store a charge (or no charge) in tiny memory cells. As a result, SSDs are much faster than HDDs: there are no moving parts, and therefore no time spent moving a read/write arm around. Having no moving parts also increases reliability immensely. Solid state drives do have a few downsides, however. Unlike with hard drives, it is difficult to tell when a solid state drive is failing. Hard drives slow down over time, or in extreme cases make an audible clicking, signifying the arm is hitting the platter (in which case your data is most likely gone), while solid states simply fail without any noticeable warning. Therefore, we must rely on software such as “Samsung Magician”, which ships with Samsung’s solid states. The tool works by writing a piece of data to the drive, reading it back, and checking how fast the drive is able to do this. If that speed falls below a certain threshold, the software warns the user that their solid state drive is beginning to fail.
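The health-check idea can be sketched as a simple comparison against a baseline. This is a conceptual sketch only; tools like Samsung Magician use their own proprietary methods, and every number here is invented.

```python
def drive_health_warning(recent_write_times_ms, baseline_ms, tolerance=2.0):
    """Warn if recent test writes are much slower than the drive's baseline.

    Averages the recent write timings and flags the drive when they
    exceed the healthy baseline by more than the given factor.
    """
    average = sum(recent_write_times_ms) / len(recent_write_times_ms)
    return average > baseline_ms * tolerance
```

A drive that historically completed a test write in about half a millisecond but now regularly takes several milliseconds would trip the warning, giving the user time to back up before a sudden failure.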

Do Solid States Fragment Too?

While data still piles up and files for one program still end up in different places, this doesn’t matter much with solid states, as there is no read/write arm to move back and forth between the different sectors. Fragmentation does not decrease performance the way it does with hard drives, but it does affect the life of the drive. Because of the way solid states work, the extra write cycles caused by defragmenting decrease the overall lifespan of the drive, so defragmentation is avoided for the most part, given its small benefit. That being said, a file system can still reach a point on a solid state where defragmentation is necessary. A hard drive might reasonably be defragmented automatically every day or week, while a solid state might require only a few defragmentations, if any, throughout its lifetime.

What is Data Forensics?

Short History of Data Forensics

The concept of data forensics emerged in the 1970s, with the first acknowledged data crime seen in Florida in 1978, where deleting files to hide evidence became considered illegal. The field gained traction through the 20th century, with the FBI creating the Computer Analysis and Response Team, quickly followed by the creation of the British Fraud Squad. The small initial size of these organizations created a unique situation where civilians were brought in to assist with investigations. In fact, it’s fair to say that computer hobbyists in the 1980s and 1990s gave the profession traction, as they assisted government agencies in developing software tools for investigating data-related crime. The first conference on digital evidence took place in 1993 at the FBI Academy in Virginia. It was a huge success: with over 25 countries attending, it concluded in the agreement that digital evidence was legitimate, and that laws regarding investigative procedure should be drafted. Until this point, no federal laws had been put in place regarding data forensics, somewhat detracting from its legitimacy. The last section of history takes place in the 2000s, which marks the field’s explosion in size. The advances in home computing during this time allowed the internet to start playing a larger part in illegal behavior, along with more powerful software both to aid and counteract illegal activity. At this point, government agencies were still aided greatly by grassroots computer hobbyists, who continued to help design software for the field.

Why is it so Important?

The first personal computers, while incredible for their time, were not capable of many operations, especially when compared to today’s machines. These limitations were bittersweet, as they limited the illegal behavior available. With hardware and software continuing to develop at a literally exponential rate, coupled with the invention of the internet, it wasn’t long before crimes increased with parallel severity. For example, prior to the internet, someone could be caught in possession of child pornography (a fairly common crime associated with data forensics) and that would be the end of it; they would be prosecuted and their data confiscated. Post-internet, someone could be in possession of the same materials, however they could now be guilty of distribution across the web, greatly increasing the severity of the crime, as well as how many others might be involved. 9/11 sparked a realization for the necessity for further development in data investigation. Though no computer hacking or software manipulation aided in the physical act of terror, it was discovered later on that there was traces of data leading around the globe that pieced together a plan for the attack. Had forensics investigations been more advanced than they were at the time, a plan might have been discovered and the entire disaster avoided. A more common use for data forensics is to discover fraud in companies, and contradictions in their server system’s files. Investigations as such tend to take a year or longer to complete given the sheer amount of data that has to be looked through. Bernie Madoff, for example, used computer algorithms to change the origin of the money being deposited into his investors’ accounts so that his own accounts did not drop at all. In this case, more than 36 billion dollars were stolen from clients. That magnitude is not uncommon for fraud of such a degree. 
Additionally, if a company declares bankruptcy, it often follows that it must submit its data for analysis to make sure no one is benefiting from the company's collapse.

How Does Data Forensics Work?

The base procedure for collecting evidence is not complicated. Judd Robbins, a renowned computer forensics expert, describes the sequence of events as follows:

The computer is first collected, and all visible data (meaning data that does not require any algorithms or special software to recover) is copied exactly to another file system or computer. It is important that the actual forensic analysis not take place on the accused's computer, in order to ensure the original data is not contaminated.

Hidden data is then searched for, including deleted files and files that have been purposely hidden from plain view, which can require extensive effort to recover.

Beyond simply being deleted or made invisible to the system, data can also be hidden in places on the hard drive where it would not logically be. A file might, for instance, be disguised as an operating-system registry file to avoid suspicion. Sorting through these unorthodox parts of the hard drive can be incredibly time consuming.

While all of this is happening, a detailed report must be kept up to date, tracking not only the contents of the files but also whether any of them were encrypted or disguised. In the world of data forensics, merely hiding certain files can contribute to probable cause.

Tools

Knowing the workflow of an investigation is useful for a basic understanding, but the tools created to assist investigators do the core work of discovering data, leaving the investigators to interpret the results. While the details of these tools are often kept under wraps to prevent anti-forensics tools from being developed against them, their basic workings are public knowledge.

Data recovery tools use algorithms that detect residual traces on the sectors of a disk to essentially guess what might have been there before. These reconstruction tools do not have a 100% success rate, as some data may simply be too scattered to recover. Deleted data can be compared to an unsolved puzzle with multiple solutions, or to a half-burnt piece of paper. It is also possible to recover only part of the data, in which case chance again determines whether what was recovered will be useful.

We've mentioned previously the process of copying the disk in order to protect the original. A software or hardware write blocker is in charge of copying the disk while ensuring that none of the metadata is altered in the process. The point of this tool is to be untraceable, so that an investigator leaves no signature on the disk; you could think of accidentally updating the metadata as putting your digital fingerprints on the crime scene.
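In practice, the fidelity of a working copy is typically confirmed by hashing: if the copy's digest matches the original's, not a single bit changed during imaging. A minimal Python sketch of that check (the function names here are illustrative, not from any particular forensic suite):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Hash a file in chunks so large disk images don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_copy(original, working_copy):
    """True only if the working copy is a bit-for-bit match of the original."""
    return sha256_of(original) == sha256_of(working_copy)
```

Any mismatch, even a single flipped bit, produces a completely different digest, which is what makes this check trustworthy in court.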

Hashing tools are used to compare one disk to another. If an investigator had to compare two servers holding thousands of gigabytes of data by hand, it would take years to look for something that may not even exist. A hashing algorithm runs through one disk piece by piece and tries to identify an identical or similar file on the other. This property makes hashing excellent for fraud investigations, as it allows the analyst to check for anomalies that would indicate tampering.
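As a toy illustration of the idea (not how commercial hashing suites are implemented), hashing every file on two disks reduces a content comparison to a dictionary lookup: identical files produce identical digests no matter what they are named or where they live:

```python
import hashlib
from pathlib import Path

def hash_tree(root):
    """Map SHA-256 digest -> paths of every file found under root."""
    index = {}
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            index.setdefault(digest, []).append(str(path))
    return index

def shared_content(disk_a, disk_b):
    """Files whose contents appear on both disks, regardless of name or location."""
    a, b = hash_tree(disk_a), hash_tree(disk_b)
    return {d: (a[d], b[d]) for d in a.keys() & b.keys()}
```

A renamed or relocated copy of a file still shows up as a match, because only the bytes are compared, never the filenames.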

Though many other tools exist, and many are developed as open source for operating systems such as Linux, these are the fundamental types of tools used. As computers continue to advance, more tools will inevitably be invented to keep up with them.

Difficulties During Investigations

The outline of the process makes the job seem somewhat simple, if a little tedious. What excites experts in the field is the challenge of defeating whatever countermeasures the culprit may have put in place. These countermeasures are referred to as 'anti-forensics' tools, and their complexity is limited only by their creator's knowledge of software and computer operations. For example, every time a file is opened, its metadata changes. Metadata is the information about a file rather than its contents, such as the last time it was opened, its creation date, and its size, and it can be an investigator's friend or foe. Forensic experts are incredibly careful not to contaminate metadata while searching through files, as doing so can compromise the integrity of the investigation; it could be crucial to know the last time a program was run or a file opened. Culprits with sufficient experience can edit metadata to throw off investigators. Additionally, files can be masked as different kinds of files to confuse investigators. For example, a text file containing a list of illegal transactions could be saved with a .jpeg extension and its metadata edited so that the investigator would either pass over it, thinking a picture irrelevant, or open the picture to find nothing more than a blank page. They would only find the real contents of the file if they thought to open it with a word processor, as it was originally intended.
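One way investigators catch this kind of masking is by comparing a file's "magic number" (the signature bytes most formats place at the very start of a file) against its claimed extension. A deliberately simplified sketch; real tools such as the Unix `file` utility ship signature databases with thousands of entries:

```python
# A few well-known file signatures ("magic numbers"). Real forensic
# tools recognize far more formats than this.
MAGIC = {
    b"\xff\xd8\xff": "jpeg",
    b"\x89PNG\r\n\x1a\n": "png",
    b"%PDF-": "pdf",
    b"PK\x03\x04": "zip",
}
ALIASES = {"jpg": "jpeg"}

def sniff_type(path):
    """Identify a file by its header bytes, ignoring its name entirely."""
    with open(path, "rb") as f:
        header = f.read(16)
    for signature, kind in MAGIC.items():
        if header.startswith(signature):
            return kind
    return "unknown"

def extension_mismatch(path):
    """Flag files whose claimed extension disagrees with their actual header."""
    claimed = path.rsplit(".", 1)[-1].lower()
    claimed = ALIASES.get(claimed, claimed)
    actual = sniff_type(path)
    return actual != "unknown" and claimed != actual
```

A file whose header says PNG but whose name ends in `.jpeg` gets flagged for closer inspection, no matter what its metadata claims.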

Another reason data is carefully copied off the original host is to avoid any risk of triggering a programmed 'tripwire,' so to speak. Opening a specific file could activate a program that scrambles the hard drive so that no other evidence can be found. While deleted data can often be recovered, a scrambled disk cannot: scrambling rewrites random bits across the entire drive, and data that has been overwritten is impossible to restore, which can protect incriminating evidence. That said, if such a process does trigger, it offers compelling reason to continue the investigation, since someone has gone to great lengths to keep the data out of the hands of the police.

Additionally, remote access via the internet can be used to alter data on a local computer. For this reason, it is common practice for investigators to sever any external connections the computer may have.

Further, data forensics experts are forced to be meticulous, as small errors can result in corrupted data that can no longer be used as evidence. Beyond fighting the defendant's attempts to hide data, analysts must also work within the law to keep their evidence relevant and admissible. Accidentally violating someone's data-security rights can result in evidence being thrown out; just as with any legal search, a warrant is needed, and not having one will void any evidence found. Beyond national legal barriers, the nature of the internet allows users to freely send files between countries with ease. If information is stored in another country, continuing the investigation requires international cooperation. While many countries inside NATO and the UN are working on legislation that would make international data investigations easier, storing data around the globe remains a common tool for hackers and other computer criminals seeking anonymity.

Looking Forward

Data security is a serious concern in our world, and it will only grow in importance given our everyday reliance on digital storage and communication. As computer technology continues to advance at its current pace, both forensics and anti-forensics tools will keep improving alongside it. With AI research underway at universities across the world, it is quite possible that future forensic tools will be adaptive and learn to find patterns by themselves. We already have learning security tools for home computers, such as Norton or McAfee virus protection, which remember which programs you tell them are safe and make educated guesses in the future based on your preferences. This only scratches the surface of what such software is capable of, leaving much to be discovered. The same advances cut the other way, too, giving cyber criminals more powerful resources to carry out their operations undetected. Data forensics, and information security as a whole, can therefore be seen as a never-ending race to stay ahead of computer criminals. As a result, the industry continues to flourish, as new analysts are always needed while software advances take place every day.

TN or IPS Monitors? What’s the Difference?

Whether you just want to project your laptop screen onto a bigger monitor or you're buying a new display for your desktop, shopping for a monitor, like any other component, is riddled with tech jargon that can be difficult to understand. This article is designed to give buyers a quick guide to the differences between TN and IPS, the two main LCD panel types on the market today.

A Little Background on Monitors

Back in the not-so-distant past, the CRT, or Cathode Ray Tube, was the standard monitor type. CRTs received image information as an analog signal along the cable. The cathode, or electron gun, sits in the monitor's tapered back and fires electrons corresponding to the signal received from the cable. Closer toward the screen is a set of anodes that direct the electrons to the RGB layer of the actual screen, again driven by the signal from the cable. While these monitors were state of the art once upon a time, they don't really have a place in today's world after the invention of LCD screens, which have become the standard for modern monitors.

LCDs, or Liquid Crystal Displays, don't suffer from the same drawbacks as CRTs. For one, they use far less power. CRTs also tend to be harsher to stare at and lack the degree of brightness control that modern monitors offer. Additionally, LCDs are much sharper than CRTs, allowing a more accurate image to be displayed. A modern LCD monitor is a two-layer system of LED lights and an LCD screen. The LED layer is referred to as the "backlight" and illuminates the otherwise fairly dark LCD layer, which is in charge of color production and the actual recreation of the image. LCD monitors now use digital connections such as HDMI or DisplayPort, and can therefore transmit data faster.

Now that we know a little about monitor history, let’s move on to the difference between TN panels and IPS panels.

TN Panels

TN, or Twisted Nematic, panels use a 'nematic' type of liquid crystal that twists to pass light through, corresponding to the signal transmitted. The main advantage of TN panels is speed. TN panels are fast enough to support "active 3D shutter" technology, which alternates frames between each eye and in essence requires the panel to display up to twice as much information as other panel types. Additionally, the response time of TN panels is much quicker than that of IPS, though faster IPS panels do exist. The response time of a typical TN panel is roughly 2 ms (milliseconds), and it can go as low as 1 ms. Another benefit of TN panels is that they are generally cheaper than their IPS equivalents. The fast response time and low price make these monitors quite popular in the gaming community, as gamers experience less delay when an image is rendered, as well as in the general consumer market. TN panels also allow for higher refresh rates, going as high as 144 Hz, though once again it is possible to get IPS monitors with similar specs for more money.
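To put those numbers in perspective, the refresh rate sets how long each frame stays on screen, while the response time is how long a pixel takes to change. A quick back-of-the-envelope comparison (the response-time figures here are illustrative, not measurements of any particular panel):

```python
def frame_time_ms(refresh_hz):
    """How long each frame is displayed at a given refresh rate."""
    return 1000.0 / refresh_hz

# Compare a slow panel at 60 Hz with a fast TN panel at 144 Hz.
for refresh_hz, response_ms in [(60, 5.0), (144, 1.0)]:
    frame = frame_time_ms(refresh_hz)
    print(f"{refresh_hz} Hz: each frame lasts {frame:.2f} ms; "
          f"a {response_ms} ms response uses {100 * response_ms / frame:.0f}% of it")
```

At 144 Hz a frame lasts only about 6.9 ms, so a panel whose pixels take several milliseconds to change spends a large share of every frame mid-transition, which is why fast response times matter most at high refresh rates.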

The major downside of TN panels is that they lack fully accurate color reproduction. If you're browsing Facebook, that hardly matters. However, if you're doing color-sensitive work, such as film or photo editing, then a TN panel may not be the right monitor for you.

IPS Panels

The main differences between IPS, or In-Plane Switching, and TN panels, as touched on above, are price and color reproduction. IPS monitors are generally preferred by professionals in the rendering industry, as they portray the colors of an image more accurately. The downside is that they are more expensive, though IPS monitors span a wide range of prices, from affordable models around $150 all the way up to thousands of dollars.

IPS panels align their liquid crystals parallel to the screen rather than perpendicular to it, which, in addition to allowing better color reproduction, has the benefit of excellent viewing angles, while TN panels often discolor when viewed from a relatively extreme angle. In essence, IPS panels were designed to address the flaws of TN panels, and they are therefore preferred by many, from the average consumer to the professional editor.

Don't let the benefits of IPS panels ruin your opinion of TN panels, though; TN panels are still fantastic in certain situations. If you sit in one place in front of your computer and absolutely perfect color reproduction isn't important to you, then TN is the way to go, especially if you're trying to save a little on your monitor purchase.

Conclusion 

To summarize, TN panels have a better response time, as well as a cheaper price tag, while IPS panels have better viewing angles and color reproduction for a little extra cash. Whatever your choice of type, there are a plethora of excellent monitors for sale across the internet, in an immense variety of sizes and resolutions.