Op Ed Drafts

Op-Ed: America’s Biggest Book of Matches
By Kunrawee Bonita Tangmitpracha

On June 30, 2013, 19 firefighters were killed battling a wildfire near Yarnell, Arizona. But you don’t need to be a firefighter to die that way. You can just sit and watch TV in your living room. The fire will come to you, especially if your house sits near a national forest protected by good old Smokey Bear.

Surprise! There is a tinderbox right next to your door!

An article in the Wall Street Journal reported that a growing number of US communities are under threat from destructive wildfires. The threat stems from extended drought, rising temperatures, and the buildup of dry, highly flammable vegetation, particularly in the American West.

These hazardous fuels have been accumulating for decades. After the catastrophic fires of 1910, the federal government declared war on forest fires. With Smokey Bear as its mascot, this suppression policy has caused an unnatural expansion of forest area, loading the Western wildlands with billions of excess trees. They form dense thickets, much of them chaparral: oily, flammable scrub. This is a tinderbox that burns very hot, and incredibly fast.

Forest expansion may sound green and environmentally friendly; after all, there are more trees (hurray!). The bad news is that wildland growth driven by unnatural fire suppression and afforestation does more harm than good. In a report from the Property and Environment Research Center, Alison Berry notes that this unnatural process not only drives the buildup of hazardous fuels but also undermines the health of the forest. Occasional natural wildfires are needed to maintain the forest’s chemical, physical, and biological balance; when fires are suppressed, the course of nature is interrupted and biodiversity is put at stake. According to Forest Fire and Biological Diversity, published by the Food and Agriculture Organization of the United Nations, fire suppression in North America has contributed to the decline of grizzly bears. Natural, seasonal fires used to promote and maintain a number of important berry-producing shrubs, key food sources for the bears. Poor grizzlies are left starving. The culprit: Uncle Sam’s own Smokey Bear.

The tragedy doesn’t end there. The excess trees, which natural wildfires would have cleared, also consume a valuable water supply. In an article on LiveScience.com, Jamie Workman of the Environmental Defense Fund estimates that the excess trees in the Sierra Nevada conifer forests account for the loss of more than 15 billion gallons of water per day, or about 17 million acre-feet a year: more than enough to meet the needs of every Californian for a year.
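The daily and annual water figures above can be cross-checked with back-of-the-envelope arithmetic; the only outside number needed is the standard conversion of 325,851 gallons per acre-foot:

```python
GALLONS_PER_ACRE_FOOT = 325_851   # standard US conversion factor

daily_loss_gal = 15e9             # gallons per day, per the Workman estimate
annual_loss_af = daily_loss_gal * 365 / GALLONS_PER_ACRE_FOOT

print(round(annual_loss_af / 1e6, 1))  # -> 16.8, i.e. about 17 million acre-feet a year
```

The two figures are consistent with each other, which is reassuring.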

So, how should we clean up this decades-long mess?

There are two approaches to the problem: a direct and an indirect measure. The direct measure focuses on forest management. Conservationists and experts favor occasional controlled burns, forest thinning, and regular brush clearing in the overgrown wildland. Since these methods rely on human custodians, policy-makers must be cautious, keeping in mind the lesson of Smokey Bear and unnatural afforestation. We cannot afford to let history repeat itself.

In addition to the direct measure, there is an indirect one: regulating the area in the vicinity of the forest, which inevitably affects the local community. Nevertheless, indirect control of “the tinderbox next door” should aim primarily for environmental benefits, not human self-interest. Trees don’t have legs; people do. The risks of wildfire are evident, so stay away from the hazard. A study published in the Proceedings of the National Academy of Sciences (PNAS) found that the number of housing units within 50 km of a national forest grew from 9.8 million in 1940 to 38 million in 2000, faster than housing development outside this 50-km band. Even worse, that number is projected to grow another 45% by 2030 compared to 2000.

Residential development in the vicinity of national forests should never have been encouraged. Since we can do little to fix the past, future development should take place on underused land instead. A similar story unfolded in New Orleans after Hurricane Katrina. In “Higher Ground,” published in The Times-Picayune, Leslie Williams cited a study showing that 51% of New Orleans’s terrain was, in fact, above sea level and underutilized. The takeaway: we may well be able to spot a significant number of underutilized parcels on which people can enjoy their home sweet homes without being haunted by nightmares of forest fires, away from the forests.

Time is running short. America can no longer waste a single minute ignoring the crisis at hand. Cooperation among government agencies, research institutes, and the public must begin now. The challenges are ours to overcome; together, we can.

If you were to ask someone, “What element does nuclear energy use as a fuel source?” most people would respond with either uranium or plutonium. The uranium-plutonium cycle is what all current nuclear power plants use: uranium atoms are split apart (fission), releasing the heat used to generate electricity, and some of the uranium is converted along the way into plutonium, which can itself serve as fuel. But why start with uranium? Is there something special about this element that makes it ideal for nuclear power generation? It turns out that finding just the right element for a nuclear reactor is difficult, and after research in the 1950s, uranium was determined to be the best option. But what about the alternatives? There is really only one: thorium, which was considered inferior when nuclear power was first being developed. Now, though, thorium-based reactors are getting far more consideration, and many people say they are the answer to our looming energy problem.

Nuclear power currently supplies about 20% of the United States’ electricity. It sits far behind coal, which holds a whopping majority at 54%. Coal is the most polluting of our current methods of energy production, with its fossil fuel partners, oil and natural gas, following close behind. In fact, coal power releases more radioactivity into the environment than nuclear power does, owing to radioactive elements present in coal seams. As a fossil fuel and a huge greenhouse gas emitter, coal is a power source many people would like to see replaced.

And the only viable replacement for fossil fuels that allows a clean, sustainable future seems to be nuclear power. Experts largely agree that while solar and wind can provide a large portion of the US’s power needs, they cannot be the sole source of electricity. Both are intermittent, and the energy they generate during operation cannot be easily or economically stored for later use. By cutting out carbon-producing systems such as coal, oil, and natural gas, we leave ourselves without any base-load power capability, and in a precarious position when the wind dies down or the days get a little too cloudy. The only viable path to a low- or zero-emission energy economy is nuclear power. And thorium is poised to be a green and friendly solution to our nuclear energy needs.

The possible benefits of thorium are staggering. First, it is cheap and plentiful, much more so than uranium. It is currently believed that there is enough thorium in the US alone to supply all of its energy needs for the next 1,000 years. Thorium is also less radioactive than uranium, and its reactors produce far less waste. Thorium reactor designs are passively safer than uranium reactors, with meltdowns and gas explosions essentially impossible. Much of this inherent safety comes from the most prominent design’s use of thorium in a liquid salt form, meaning the reactor can operate at atmospheric pressure and the fuel can drain by gravity into reserve tanks should any problem occur.

So why hasn’t thorium-based energy been put to use in power plants throughout the United States? The reasons are intimately tied to our greatest perceived threat in the middle of the 20th century: nuclear war, or more specifically, the need to build bombs to keep the Soviets at bay. Uranium reactors produce byproducts needed to build nuclear bombs. Thorium reactors, by contrast, are of little use for producing nuclear weapons. That is appealing to thorium’s advocates today, but during the Cold War it meant thorium reactor research received far less attention, culminating in the termination of the United States’ thorium research program in 1973.

And while the United States of the 21st century may be snoozing on thorium energy, other major world powers certainly aren’t. China has a plan in the works to construct a thorium reactor slated for 2015. India, which holds one of the world’s largest thorium reserves, has gone even further, with a funded project to build 62 thorium reactors by 2025 and an eventual goal of total energy independence. Even Norway, a relatively small nation not known for nuclear power, plans to run thorium fuel in one of its reactors for a research trial period. It’s time for the United States to step up, do its part for the environment, and make thorium energy a priority.

Damn you, AutoCorrect!

Looking through my recent texts, I see phrases like “he salutary to compare hw” (he’s asking to compare hw) and “hes aldobin stochadtic” (she’s also in stochastic). It seems I can’t type anything without AutoCorrect (and AutoComplete) intervening when unnecessary, failing to intervene when necessary, or, on the seemingly rare occasion, correctly interpreting my misspelled thoughts.

What exactly is AutoCorrect? It is a proofreading feature first introduced in Microsoft Word in 1993, and it has since gradually revolutionized the way we communicate in writing, allowing us to be more careless. The basic idea is to aid writers by fixing typos and common misspellings in the interest of saving time. However helpful the feature may be, AutoCorrect often produces unintentionally funny mistakes, and these mistakes have fed a new cultural phenomenon.

Websites have sprung up solely to humor us with proof of AutoCorrect’s failures. One such post on autocorrectfail.org reads:
“- hey bro i hate to ask this but could you spot me some cash?
- Hi, what for and how much?
- i’m like $300 short on my MOTTSAPPLESAUCE payment due the 15th.
- Uh, how much apple sauce did you buy from them? WTF Jason
- I am laughing so hard I can’t breathe. I wrote mortgage payment…”

So how does AutoCorrect work? Companies such as Google, Apple, and Microsoft are reluctant to reveal details of their own AutoCorrect software. The general concept, however, involves an algorithm that compares every word you type against a vast internal dictionary while predicting what you are trying to type before you finish. When it fails to find an exact match, AutoCorrect suggests alternative words. Some autocorrect systems are connected to the web and can automatically expand their collections of words. Others can detect when you re-correct an autocorrected word, so they avoid making the same substitution again.
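The dictionary-matching idea described above can be sketched in a few lines of Python. This is a toy illustration only, not any vendor’s actual algorithm, and the five-word dictionary is invented for the example:

```python
# Toy autocorrect: suggest the dictionary word nearest to the typed word.
# Illustrative only; real systems layer word-frequency data, keyboard-adjacency
# weighting, and per-user learning on top of this basic idea.

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance: minimum single-character edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # delete ca
                           cur[j - 1] + 1,              # insert cb
                           prev[j - 1] + (ca != cb)))   # substitute ca -> cb
        prev = cur
    return prev[-1]

DICTIONARY = ["mortgage", "message", "applesauce", "payment", "stochastic"]

def autocorrect(word: str) -> str:
    if word in DICTIONARY:          # exact match: leave the word alone
        return word
    return min(DICTIONARY, key=lambda w: edit_distance(word, w))

print(autocorrect("mortage"))       # -> mortgage
```

A production system would also refuse to substitute when no candidate is close enough, and that judgment call is exactly where the failures quoted above come from.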

With AutoCorrect becoming all but ubiquitous, many are questioning its indirect effects. Are we, as a society, becoming more and more illiterate? A fairly recent study, commissioned by Mark Goldring, chief executive of a charity for people with learning disabilities, reported that about one in three people in Britain cannot spell words like “definitely” and “necessary.” Goldring states that “with over two-thirds of Britons now having to rely on spell check, we are heading towards an auto-correct generation,” one that has learned to get by without being able to spell simple everyday words. Should we stop using AutoCorrect, then? We certainly haven’t stopped using calculators, even though much of the new generation struggles with mental math. According to a recent BAE poll, about 35% of adults cannot perform addition past 100 without a calculator.

ENGRC 3500
July 19, 2013
The world will not survive forever on burning fossil fuels. With high rates of economic growth come even higher rates of fossil fuel consumption. And burning fossil fuels is not even an efficient way to produce energy: coal and oil plants, the most common kinds, waste roughly 70% of the fuel’s energy, mostly as heat, while pumping pollutants into the atmosphere. Some optimists may say that natural gas is the preferred alternative. Sure, burning gas is better, but only by about a 10% gain in efficiency.

Nuclear energy can be the solution to the world’s energy needs. Modern nuclear power plants run at capacity factors above 90%, far higher than traditional fossil fuel plants, and nuclear energy already supplies roughly a tenth of the world’s electricity with zero emissions at the plant. Granted, nuclear energy is not regarded as perfectly clean because of its radioactive waste; however, nuclear reactors do not release any greenhouse gases. Environmentalists should be applauding that fact.

Past nuclear disasters have left many people with an emotional loathing of nuclear power. Disasters such as Chernobyl and Fukushima spark controversy about the safety of nuclear energy. These disasters did take lives and create problems, but so does fossil fuel production.

A high-profile oil disaster that will be etched into history is the BP oil spill of 2010. An explosion at a deepwater drilling rig released over 200 million gallons of oil into the Gulf of Mexico. The explosion itself killed 11 people and injured 17 others. What is even more appalling is that the spill continues to affect life in the Gulf for animals and humans alike.

All energy production has its risks. Coal mining has explosions and black lung, oil has spills, and nuclear has radiation leaks. As any business analyst will tell you, growth cannot happen without some risk-taking. Although nuclear energy has risks, its benefits greatly outweigh them.

Even though nuclear energy should be the primary source of power in the modern world, it is not the end-all answer to the world’s energy problems. Truly clean energies such as wind, hydro, and solar should be the energies of the future. Yet these clean sources account for less than 3% of the world’s energy production. If they are so much better for the environment, why don’t they make up more of it? Because clean energy, hydroelectric aside, is still inefficient and expensive. These producers are the way of the future, but they desperately need funding to improve their efficiency and lower their cost.

We cannot simply cease burning fossil fuels to produce energy; modern society would not function for a day. What the world needs is a transition period: a switch from dirty, inefficient, non-renewable energy producers to clean, renewable ones. Nuclear energy can be that transition. If we replace fossil fuel plants with nuclear plants, more energy will be produced at a lower cost, and the money saved can immediately go toward researching and improving renewable energy.

To continue our existence on Earth, we must switch away from burning fossil fuels to renewable, clean energy. That transition cannot happen overnight. Slowly, we must reduce our dependency on fossil fuels and put our trust in nuclear power. The switch will buy humanity time to perfect renewable energy before greenhouse gases from fossil fuels make the Earth inhospitable.

3D printing: An Op-Ed

This spring I had the opportunity to use 3D printers for a class project, which involved designing and producing a consumer product within the span of three weeks. Although the product my team came up with was not the most original, it was an exciting moment when our first prototype was printed. And when we realized the parts didn’t fit quite the way we wanted them to, the design was quickly changed in SolidWorks, a 3D CAD program, and printed again within the next couple of days. The iterative cycle was complete, and before we knew it, we had a functioning website to sell the product.

In this way, 3D printing is every designer’s dream. One can conceive an idea, design it, and produce it all in the same day. While some manufacturing restrictions remain, 3D printing could let one print nearly any object one desires. Its advantages have already allowed a five-year-old boy born without fingers on his right hand to pick up coins, a crippled duck to walk, and an eagle to get a new beak. Today we see the benefits of 3D printing for highly customized, individual products, and when we read these heart-warming stories, it is hard to deny the impact 3D printing already has on manufacturing. But will 3D printing ever be advanced enough to truly revolutionize mass production?

Although 3D printing has become popular in recent years, the technology has been around since the 1980s. Since then, it has expanded into many fields, yet it has not been proven an efficient method of mass production. One company now testing its benefits is GE Aviation, which will be producing engine parts that “would have been impossible to mill or forge in yesterday’s aerospace factories, while saving money and, in some cases, making parts lighter” (Seattle Times). The process also eliminates what used to be a time-consuming sequence of joining and heat-treating multiple pieces. While most manufacturing techniques are subtractive, 3D printing works by adding thin layers of material in different shapes. Its main advantage for these industries is the ability to produce designs that were previously impossible to manufacture.

Though 3D printing is still developing and improving, perhaps other companies should follow GE Aviation in exploring a new method of manufacturing that opens more innovative possibilities. Perhaps it is this adoption, and the continuous improvement it drives, that will push 3D printing into a mainstream form of manufacturing. For now, though, it seems best suited to small-quantity custom products.

Is Solar Energy the Answer?

Solar energy seems too good to be true, doesn’t it? Free, clean energy with no emissions; there’s no way there isn’t a catch. But surprisingly, there isn’t much of one. Many people are skeptical, and plenty of assumptions circulate about the downsides of solar energy. Some of the most common myths are that solar power is too expensive, that it is inefficient, and that it can’t work in cloudy weather. In fact, solar power is an extremely viable way to save energy and improve the sustainability of your home. You just can’t believe the myths, and here’s why:

It takes more energy to build a solar device than it will ever produce.

This idea may have been true when solar devices were custom-made for military and specialized scientific markets only, but the energy payback period has dropped dramatically since solar power opened up to the public. Photovoltaic (PV) devices such as solar panels convert solar radiation into electricity using semiconductors, and PV modules last nearly 30 years. Although it takes a current module 4 to 5 years to generate more energy than was used to make it, the next generation will use different materials that allow a 2-year energy payback. For the rest of its anticipated life, the system produces clean, emission-free energy, so over 30 years it returns far more energy than it ever consumed. The finances work out, too: the National Renewable Energy Laboratory says you’ll make back the money you invest in solar panels within 5 years of installing them.

Solar systems are just way too expensive.

Actually, installing a solar energy system is more affordable than ever. Many US states offer incentives for those who go solar, covering between 30% and 85% of the system’s cost. Plus, solar isn’t an all-or-nothing proposition: you can start with just a few panels. Installed prices have dropped from around $5.50 per watt in 2008 to $4.40 as of May 2013, and PV prices have fallen every year since the technology came to market. The more panels are sold, the further the price drops, exactly the opposite of nonrenewable resources, which grow more expensive as they grow scarcer. A fairly common home installation of about 12 panels costs around $9,000 in total and produces enough energy to cover roughly 4 months of a home’s electricity each year, with the remainder still supplied by the grid. Even so, this reduced reliance on the grid pays the system back within 5 years, leaving around two decades of profit thereafter.
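The payback arithmetic implied by those numbers is simple enough to check. This is a rough sketch: the annual-savings figure is an assumption back-derived from the five-year payback claim, not a quoted statistic.

```python
system_cost = 9_000        # typical 12-panel installation, per the figure above
annual_savings = 1_800     # assumed utility-bill savings per year (hypothetical)
lifetime_years = 30        # PV module lifetime cited in the piece

payback_years = system_cost / annual_savings
profit_after_payback = annual_savings * (lifetime_years - payback_years)

print(payback_years)          # 5.0 years to break even
print(profit_after_payback)   # 45000.0 dollars over roughly two decades
```

Even if the assumed savings are off by a factor of two, the system still pays for itself well within its lifetime.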

Solar panels do not work in cold, cloudy regions or at night.

Surprisingly, all solar panels need to produce energy is daylight, and even the cloudiest day in the cloudiest place allows some energy production. In fact, the solar energy capital of the world is Germany, a country known for its lack of sunny days. Colder solar panels actually conduct electricity better, so temperature doesn’t have much of an effect either. What about at night, when there is no sunlight? Most solar companies provide customers with “battery banks” that store energy during the day for use at night. There are even streetlights topped with solar panels, a neat example of panels absorbing and storing energy by day to light the streets after dark.

Solar energy isn’t as efficient as other forms of energy production.

The efficiency of solar panels has actually quadrupled since the ’70s, according to US Department of Energy studies. Solar panels now run at 15 to 19 percent efficiency, comparable to the gasoline engine in your car. An upside solar has over gasoline is that the technology keeps improving, and its efficiency with it. Solar electric systems can also be more reliable than the utility company: they have no moving parts and fewer of the outage risks that customers at the edge of an electric grid often experience. PV systems are a form of clean energy, producing no emissions or greenhouse gases. Compared with conventionally generated electricity, every kilowatt of solar capacity offsets roughly 16 kilograms of nitrogen oxides, 9 kilograms of sulfur oxides, and 2,300 kilograms of carbon dioxide.

This is a booming industry, predicted to grow by 25% per year in the US alone. If that prediction holds, the US will be able to offset ten million metric tons of carbon dioxide per year by 2027 thanks to solar energy.
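For a sense of what 25 percent annual growth compounds to by 2027, here is a back-of-the-envelope sketch; the 2013 baseline year is an assumption, since the article does not state one:

```python
growth_rate = 0.25           # predicted annual growth of the US solar industry
years = 2027 - 2013          # assumed 2013 baseline

multiple = (1 + growth_rate) ** years
print(round(multiple, 1))    # -> 22.7: more than a twentyfold expansion
```

Compounding is why a seemingly modest annual rate produces such a dramatic figure fourteen years out.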

Diet Coke is a Big Fat Lie

Many of us have experienced the struggle of deciding between the normal sugary soda and the diet soda: after a series of mental breakdowns, we choose the diet one. We often consume a variety of dietary products: low-fat yogurt, low-fat cheese, reduced-fat milk, and so on. As advertised, these products have substantially fewer calories than the natural versions. Everyone knows that eating too much fat can be bad for you, so when you see words like “low fat,” “zero calorie,” or “sugar free” on a product, it must be the healthy choice, right? Taste aside, is Diet Coke actually better? Is low-fat food healthier? “Not necessarily,” says Professor Kerin O’Dea, director of the Sansom Institute for Health Research at the University of South Australia. Not only do such products fail to contribute to weight loss in the long run, they may also cause heart problems and cancer.

With only 80 calories, Fiber One 80 Calories cereal is tempting for the consumer. On the surface it seems healthy: zero sugar and 14 grams of fiber. Slide over to the ingredient list, however, and it tells a different story. Yes, it contains no sugar, but it does contain aspartame, a very controversial artificial sweetener that causes headaches and stomach issues for a fair percentage of people. In many low-fat and fat-free foods, natural sugar is replaced by artificial sweeteners like Splenda, which are hundreds of times sweeter.

In 2006, Merisant filed a lawsuit against McNeil Nutritionals, the maker of Splenda. Splenda contains zero calories, yet its tagline was “Made from sugar, so it tastes like sugar”; it is, in fact, not a natural product. Some long-term studies show that regular consumption of artificially sweetened beverages reduces calorie intake and promotes weight loss or maintenance; others show no effect, and some show weight gain. The human brain responds to sweetness with signals to eat more, then to slow down and stop eating. Artificial sweeteners may confuse the intricate feedback loops among the brain, stomach, nerves, and hormones by providing a sweet taste without any calories. The body then loses its ability to accurately gauge how many calories are being consumed.

Studies in rats support the idea of this “sweetness confusion.” Researchers at Purdue University have shown that rats eating food sweetened with saccharin took in more calories and gained more weight than rats eating sugar-sweetened food. Recent brain studies reveal that sugar and artificial sweeteners affect the brain in different ways: certain regions activate when the brain receives a “food reward,” and the lack of satisfaction from artificial sweeteners leads people to eat more. At the University of California, San Diego, researchers performed functional MRI scans as volunteers took small sips of water sweetened with sugar or with sucralose. Sugar activated the brain’s food-reward regions; sucralose didn’t. Susie Swithers, a professor at Purdue University, argues that the taste of fat normally signals to the body that calories are arriving, and the body prepares for them by releasing hormones that aid digestion and tell it when to stop eating. With low-fat and artificially sweetened foods, the body cannot react appropriately: it releases inadequate levels of these hormones, which drives it to eat more.

In the Nurses’ Health Study, researchers evaluated levels of different types of dietary fat and their association with the risk of heart disease over a 14-year follow-up. Based on about 1,000 incident cases of coronary heart disease, trans fats emerged as the worst type of dietary fat, just as their deleterious effect on blood lipids would have predicted.

The bottom line is that turning to substitutes like artificial sweeteners or fat substitutes is not the way to lose weight, because they can lead to overeating in the long run. Instead of drinking Diet Coke, what if we drank only tea and water? What if we ate more veggies and less meat, oil, and sugar? What if we actually tried to taste and appreciate the natural flavor of food?

Frackin' Problems

Hydraulic fracturing, also known as fracking, is a method of improving or creating wells for water, oil, and most commonly natural gas. The process involves creating high pressure in rocks deep below ground, causing them to crack apart and making it easier for gas to flow. This relatively untapped source of natural gas provides a local and cheap way of producing much-needed fuel. While the practice is very popular in many areas of the country, it has gotten a very bad reputation, for several reasons. The health risks and environmental hazards of fracking as it is currently done can seriously damage the communities where these extraction methods are used. In Ithaca, for example, people are so opposed to fracking that anti-fracking signs stand on every corner. What people may not realize, however, is that fracking can be done well. Its environmental hazards are due largely to the bad practices of gas companies and poor government regulation.

One of the most visually striking images of fracking comes from a video that circulated on the internet of someone living near a natural gas well lighting their tap water on fire. The chemicals used in fracking have in several cases been found to contaminate groundwater, and over 1,000 cases of health damage have been recorded as a result. Methane leaking from the pipes used to pump water deep into the earth can push chemicals in drinking water to dangerous levels. This, however, does not need to be the case: most of these problems stem from practices that are avoidable or fixable. The water used for fracking is laced with dangerous chemicals and completely unusable afterward, even for further fracking. A process for treating this water so it could be reused, or a reasonable substitute for water altogether, would solve the most objectionable problem with fracking.

There are other, somewhat less well-known problems that result from fracking, such as minor earthquakes, air contamination, and land clearing. These too can be fixed or controlled by changing practices and being more careful. Air contamination is mostly caused by leaving the natural-gas-laden water sitting in open air, which is easily remedied. Earthquakes are usually the result of wells running through different types of rock than expected. Gas companies claim that fracking can’t cause earthquakes above magnitude −3 (humans can perceive roughly magnitude 3), but many communities have experienced much stronger and very unusual seismic activity. When the wrong type of rock is cracked open, it can trigger slippage along faults. Carefully surveying wells for this, and monitoring seismic activity before it reaches perceptible levels, would address the problem. Yet even with all the effects mentioned above, gas companies claim they are not at fault, and political pressure and poor regulation make it difficult to perform the scientific studies that could prove them wrong.

Hydro-fracking comes at a cost, but the benefit of the natural gas it supplies is great enough that our priority should be fixing fracking, not eliminating it. The practices and problems of fracking are not unique to the United States; countries all over the world use the same methods and have suffered the same consequences. The response most other countries, most notably Great Britain, have settled on is to impose heavy restrictions on fracking rather than outlaw it outright. Probably the greatest problem with fracking in the United States is the shockingly small number of restrictions on it. Enforcing rules would force companies to be responsible, and hold them accountable for the dangers their bad practices impose.

The natural gas, or shale gas, produced by fracking is cheaper and burns cleaner than many other fossil fuels. That does not change the fact that it is a fossil fuel, and using it for energy creates more CO2 emissions. Fracking is not the answer to the energy crisis; our first priority should be finding renewable sources of energy that do not deepen our dependence on fossil fuels. But given the current state of the economy and the country, overcoming that dependence will not happen in the near future. Until then, the need for natural gas will grow; production is expected to rise by 44% over the next 30 years. Fracking is a necessary evil, and it should be done in a way that is safe and has as little impact as possible on communities and the people who live in them.

Working from home is one of the ways technology has changed our everyday lives and the way we work over the past years. Starting with the widespread use of personal computers at home, and continuing with collaborative suite technologies that allow telecommuting, technology seems to offer us the chance to do everything from the convenience of our own homes. The benefits for employees include a flexible work schedule, the opportunity for parents to raise their children while continuing their careers, and the ability to work from remote areas without the need for a daily commute.

Perhaps the most important advantage of telecommuting concerns the environment and fuel usage. According to the U.S. Energy Information Administration's Annual Energy Review for 2011, 28% of energy consumption in the U.S. comes from transportation, 93% of which is petroleum based. The everyday commute to work contributes heavily to this fuel use, producing CO2 and other greenhouse gas emissions and driving air pollution and climate change. Multiply the number of employees whose jobs could be done from home by the emissions generated on every trip over a whole year, and you get a sense of the enormous potential for environmental protection, if they chose to avoid the trip to the office.
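As a rough sketch of that multiplication (every figure here is an illustrative assumption, not data from the report cited above: a 30-mile round trip, about 0.4 kg of CO2 per mile for a typical gasoline car, 250 working days, and 10 million eligible telecommuters):

```python
# Back-of-the-envelope estimate of CO2 avoided by full-time telecommuting.
# All inputs are illustrative assumptions, not measured figures.
ROUND_TRIP_MILES = 30                 # assumed average daily commute
KG_CO2_PER_MILE = 0.4                 # rough figure for a gasoline car
WORK_DAYS_PER_YEAR = 250
ELIGIBLE_TELECOMMUTERS = 10_000_000   # hypothetical count

kg_per_worker = ROUND_TRIP_MILES * KG_CO2_PER_MILE * WORK_DAYS_PER_YEAR
total_tonnes = kg_per_worker * ELIGIBLE_TELECOMMUTERS / 1000

print(f"{kg_per_worker:,.0f} kg of CO2 per worker per year")
print(f"{total_tonnes:,.0f} tonnes of CO2 avoided in total")
```

Even under these modest assumptions the total runs to tens of millions of tonnes a year, which gives a feel for the scale of the opportunity.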

The effects of telecommuting on transportation do not stop at energy efficiency. Rush-hour congestion in metropolitan areas could be significantly reduced if some of the employees with the flexibility to work from home did so at least a few days every week. Telecommuters would recover the time spent traveling to work, which averaged 25.2 minutes each way from 2006 to 2010, and people who still have to drive would encounter less traffic, cutting their travel time and trip costs significantly. Time and money, however, are not the only problems with peak-hour congestion. Starting your day sitting in an idling car, trying to get through traffic, causes unnecessary frustration, and arriving at work in such a negative state certainly does not enhance an employee's productivity.
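The recovered time itself is easy to quantify. A small sketch using the average commute above (the 250 working days per year is my assumption):

```python
ONE_WAY_MINUTES = 25.2    # average U.S. one-way commute, 2006-2010, as cited
WORK_DAYS_PER_YEAR = 250  # assumed

hours_per_year = 2 * ONE_WAY_MINUTES * WORK_DAYS_PER_YEAR / 60
print(f"About {hours_per_year:.0f} hours recovered per full-time telecommuter per year")
```

That is more than five standard 40-hour work weeks handed back to each employee every year.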

Then why did Yahoo ban telecommuting? Its policy change this February might look like a self-defeating move from an environmental standpoint, and the employees who used to take advantage of the company's flexible rules were certainly not pleased with the new one. Yet Yahoo hopes that having all its employees spend the work day together at the office will boost productivity, and there are reasons to believe it will. The company is trying to increase communication among its employees; engineers have to be creative, and the best way to achieve this is to be in touch with people who share the same goals but have different mindsets.

Groups work better than individuals, and this is a general truth all engineering companies abide by. An innovative contribution to a project is more likely to occur while discussing ideas with people outside your team, with more objective opinions. Working at an office with common areas, where employees constantly interact with each other, increases the number of random encounters and therefore the input of ideas into a project. Yahoo’s new regulation sounds absolutely reasonable, considering that at the same time, its primary competitor, Google, was building new headquarters designed specifically with the purpose of maximizing the probability of random collisions of employees within the office. Areas where employees can take a productive break are an important factor that is missing from the cozy and comfortable environment that working at home provides; socializing with other employees can prove more effective than sitting in a cubicle, analyzing the same information over and over again. Working at the office takes full advantage of the entire employee population, just by having them interact and communicate.

All in all, telecommuting could reduce carbon emissions and benefit the environment, but tech companies should not commit to it without considering the effects that working at the office can have on productivity. Working from home may be ideal for other professions, but not for engineers hoping to make pioneering contributions to the technology world. After all, they could work together to find alternative ways of saving energy, and they are more likely to succeed at that task if they are all working in the same office.


  • Chris Muth: Moore's Law and Processor Expansion

Gordon Moore himself acknowledged the fact that transistors are becoming too small. He states, "In terms of size [of transistors] you can see that we're approaching the size of atoms which is a fundamental barrier, but it'll be two or three generations before we get that far—but that's as far out as we've ever been able to see." (Dubash)
This poses the question: what can we do? Actions are already being taken to accommodate ever-smaller transistors, such as refined photolithography and supercooling, but these cannot be the only answers. When we hit the atomic barrier, what happens then?
It would be in the best interest of computing for funds and research efforts to be allocated towards developing alternatives to making smaller transistors. We will eventually be computing at a microscopic level, but the budgets for transistors at that point will be overwhelming, as will the energy consumption and heat dissipation hardware required to control the processing power of several billion transistors.
In conjunction with developing smaller transistors, researchers should focus on techniques that trade size for speed. Bigger chips let processors incorporate more hardware, which in turn allows a technique called parallel processing to be used. This approach integrates many central processing units, or cores, so that user instructions can be executed in a fraction of the time a single microprocessor would take to achieve the same results.
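As a minimal illustration of the idea (a Python sketch, not how a CPU schedules work internally): one CPU-bound job is split into four chunks, and four worker processes, one per core, each take a chunk.

```python
import math
from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division -- deliberately CPU-bound."""
    lo, hi = bounds
    def is_prime(n):
        if n < 2:
            return False
        return all(n % d for d in range(2, math.isqrt(n) + 1))
    return sum(1 for n in range(lo, hi) if is_prime(n))

if __name__ == "__main__":
    # One big range, split into four chunks that run in parallel.
    chunks = [(lo, lo + 25_000) for lo in range(0, 100_000, 25_000)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        total = sum(pool.map(count_primes, chunks))
    print(total, "primes below 100,000")
```

On a four-core machine the chunks are checked concurrently, so the wall-clock time approaches a quarter of the sequential time, minus the overhead of launching the worker processes.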
Although people appreciate the fact that many of their gadgets are small and lightweight, they should realize that at some point in the near future, we will not be able to achieve processing speeds that we would like to. Many companies such as Intel and IBM already realize the fact that additional hardware and software components must be produced, but are still primarily focused on decreasing transistor size because that is currently the most efficient way of achieving processing speed increases without investing too much of their budget.
We would all like research to keep increasing the speed at which computers run; no consumer will argue with that. But to accomplish such a feat, and to keep pace with Moore's Law, new systems must be developed beyond just shrinking transistors.

Dubash, Manek. "Moore's Law is dead, says Gordon Moore." Techworld, 13 Apr. 2005. Retrieved 17 July 2013.

Is technology guilty?
As today's technology develops rapidly, people have started to worry that our lives will be invaded by it. This has been a controversial topic for a long time, and there have been incidents said to show the "guilt" of technology. A freshman named Clementi at Rutgers University committed suicide after his roommate and another student used a webcam to broadcast live images on the Internet of the 18-year-old having an intimate encounter with another man. In another incident, Andrew, an assistant attorney general in Michigan, used his personal blog to attack 21-year-old Chris, the openly gay student body president at the University of Michigan; Chris was constantly harassed and his life was deeply affected. A study done at the University of Maryland found that people prevented from using any technology for 24 hours show feelings of anxiety, and it is claimed that people who frequently use technologies such as the iPad and text messaging can easily become addicted. Technologies like Twitter and Facebook have come to occupy nearly everyone's daily life; it is difficult to stay away from them even for someone who does not actively use them. Thus many people think that technology has invaded our lives with very bad effects.
When a new technology appears, people automatically question its ethics, and the issue becomes one of the technology's supposed sin. If someone uses a webcam to spy on other people, it is the fault of technology. If someone uses Facebook or another social medium to attack innocent people, it is the fault of technology. Let's follow this logic. If civilians are killed by high-tech weapons in a conflict, it is the fault of technology; if you are late for work because of traffic, it is again the fault of technology, since technology put so many cars on the road. Obviously, this is not a correct deduction. Should technology be fully responsible for all the incidents mentioned above? My answer is definitely no. In analyzing the causes of those incidents, people tend to focus on the impact of technology but forget to consider the dominant cause: humans themselves.
When we pass judgment on such incidents, we cannot say that the technology is guilty; it is always the person using the technology who is guilty. In the example from the first paragraph, the two students disregarded Clementi's privacy and recklessly broadcast his private life, leading to a consequence that could not be undone. Some may say the webcam technology assisted them, but they would have found other ways to invade Clementi's privacy regardless of the method. People created technology, and technology can be used to do whatever they want. If people are evil, they will use technology to do bad things, and the technology will be regarded as evil. If people are good-natured, the technology will work in the right ways and be beneficial.
Some people claim that even if these incidents are the fault of humans, it is still true that technology is invading our lives. I doubt it. What has technology brought to our lives so far? Freedom, creativity, and efficiency. Freedom will always raise difficult issues, especially about how to handle privacy; in a diverse society, freedom will always give rise to invasions of privacy. But this privacy issue is not a problem created by technology; it is a philosophical question that may never be answered. The ethics of technology has been discussed widely in recent years, especially after the many incidents involving privacy. The development of technology has realized the imagination of human beings and offered us more opportunities and freedom to achieve our goals. Some people, however, have exploited that freedom and harmed others. It is human nature to pursue freedom, but it is also human nature to break rules. Therefore, laws need to take effect and keep the freedom created by technology working properly under oversight. In this way, technology can produce more freedom without concern, and human rights will not be violated.

Violent video games do not cause violence
As video game graphics become more and more realistic, some continue to fear that violent video games make players more aggressive in the real world and more likely to inflict harm on another person. This fear has been around since the 1970s, leading to much research and public debate. In 2011, the Supreme Court struck down, on First Amendment grounds, a California law that banned the sale of mature-rated video games to minors. The Court noted that studies connecting video games with aggression do not prove causation and that their effect is small compared to that of other media. New, improved research has supported the Court's decision.
Christopher J. Ferguson, a clinical psychologist and chair of the Department of Psychology and Communication at Texas A&M International University, has led the way in researching the influence of violent video games. He concludes that video game violence does not correlate with either positive or negative behavior, while competitiveness, among many other variables, increases both temporary aggression and cognition. He also points out problems with previous research that found a negative impact from violence in video games. The first is poorly validated outcome measures: a temporary increase in aggression, originally thought to be caused by seeing violence, does not mean that individuals will commit acts of violence. The second is a failure to control for significant confounding variables, including social variables such as personality and family situation. The third is possible confirmation bias, a common problem in this field due to its political ramifications.
So if the amount of violence in video games does not affect the actions of players, it does not matter how realistic the games are. But should we be concerned about competition causing violence? As mentioned above, aggression while playing is not a predictor of violent acts. In fact, despite the rapid increase in video game prevalence, violent crime has decreased steadily. It has been suggested that violent, competitive video games are an outlet for aggression, particularly in youth. Children tend to learn to differentiate between reality and fiction quickly, preventing any residual effects from appearing while not gaming. And while these benefits are not unique to video games, Ferguson found that competitive video games correlate with an increased ability to focus, better hand-eye coordination, and improved spatial reasoning. Many video games also balance violence with positive values, such as justice, honor, and teamwork.
In the end, parents should decide what they allow their children to play, using the current rating system as a guide if they wish. But realistic, violent video games should not be seen as a source of violence simply because mass murderers often play them. More research is always needed, but for now our quest to prevent violent acts should focus elsewhere.

Ferguson, Christopher J., et al. "Not Worth the Fuss After all? Cross-Sectional and Prospective Data on Violent Video Game Influences on Aggression, Visuospatial Cognition and Mathematics Ability in a Sample of Youth." Journal of Youth and Adolescence 42.1 (2013): 109-22. Print.
Nieuwenhuis, Jared M. "Op-ed: Video-game industry not to blame for gun violence." The Seattle Times, 17 Mar. 2013. http://seattletimes.com/html/opinion/2020569751_jarednieuwenhuisopedxml.html

Shryan Appalaraju

Many people have begun to worry about the growing power of major Silicon Valley companies such as Google and Facebook; these companies have access to so much data about our lives that they are more than capable of invading our privacy. But even these tech giants are beginning to voice concerns about a larger threat to our privacy: the NSA. With recent technological breakthroughs in data mining, the NSA can autonomously analyze enormous volumes of data. This means the NSA does not have to "eavesdrop" on calls or watch people live on camera; it can track the movements and communications of people all over the world without actually watching them.

The recent leak of the NSA's PRISM program by Edward Snowden reveals that the NSA uses large corporations as sources for its vast data sweeps, monitoring the public on a daily basis. For instance, Verizon Communications has an advanced fiber optic line connected to a military base in Virginia that allows call records to be transferred to the NSA every day. That access to data from companies Americans depend on daily raises troubling questions about privacy and civil liberties that officials in Washington, insistent on near-total secrecy, have yet to address.

Officials in the White House, most prominently President Barack Obama, defend the NSA programs as vital to national security, allowing the prevention of terrorist attacks and the monitoring of suspicious activity. Obama also states that the NSA is not targeting ordinary Americans and "does not directly listen to the content of phone calls," but rather looks at time logs and other less private data. But new advancements by the NSA seem to contradict government assurances that it does not intend to invade the privacy of the American public without a warrant. New techniques such as trilateration allow the NSA to track the movements of individuals moment to moment. And the Federal Aviation Administration says we may have up to 30,000 drones monitoring us from the sky by 2020. Commercial drones are currently illegal but will be coming to a sky near you starting in September 2015.

Activists such as Senator Jim Tomes of Indiana filed a bill earlier this year to prohibit a drone from taking photos of a subject without the subject's approval. A bold move; yet the bill never even got a first hearing. Meanwhile, the NSA's continuing expansion of technological surveillance continues to worry politicians. Is this intense and continuous monitoring necessary to protect Americans, and has it helped stop any attack thus far? A better question: what information does the government actually access about individual members of the American public? The truth is, we don't know, and for now that information is known only to the government.

But there is a possible compromise. The public's greatest fear about the NSA's activities is fear of the unknown: not knowing whether at any moment a drone could be watching you or someone could be listening to a phone call you just made. To ease this fear, the NSA, with its vast technological resources, needs to make the public aware of how it monitors activity; with great technology comes greater responsibility. An alliance of 63 companies, including Google, Facebook, and Apple, has begun pressing for greater transparency in NSA surveillance. For instance, if the NSA requests information from these companies in order to monitor activity, the companies should be allowed to report specific figures to the public: the number of government requests for information about their users, and the number of requests that sought communications content or other personal data.

This would be a major step toward compromise between the NSA and the American public, helping Americans stop speculating about the extent to which their lives are monitored and start seeing some facts. Assuming the NSA's surveillance is necessary to protect us, the least it can do is release information that gives the American public an awareness of what it actually collects. Kentucky State Representative Diane St. Ogne says it best: "it's time to educate the American people about the institutions that are designed to protect them." A transparency report, which is what the alliance of companies is requesting from the NSA, would shed light on where the eye of the NSA is looking and help ease the fear of the unknown currently haunting the American public.

  • Joseph Begun
Based on: http://articles.latimes.com/2012/apr/13/opinion/la-oe-ropeik-california-nuclear-risk-20120413
Nuclear Fusion: Energy of the Sun
We are in dire need of an alternative energy source. Our planet could quickly run out of coal and gas; in fact, HSBC (as reported by the New York Times) estimates that we have as little as fifty years of oil left. Even if we double that figure, we still don't have much time. Add the facts that the oil we use now is polluting the atmosphere and causing catastrophic climate change, and that the United States is tied to OPEC for most of its oil, and the need for change is clear. Where to turn? Ethanol? Solar? Wind? The answer is none of the above.

Nuclear. You're probably cringing in your seat at the mere mention of nuclear power. Nuclear power has gotten a bad rap, particularly in the United States. We recently had the incident at Fukushima, where large amounts of radioactive material were released into the environment, and many still remember Three Mile Island and even Chernobyl. According to a Gallup poll from March 15, 2011, only forty-four percent of Americans are in favor of nuclear energy, while forty-seven percent are opposed. The government has said very little in support of nuclear energy.

You may very well be one of those people who oppose the use of nuclear energy in the United States. However, there is a safer, more efficient, and less waste-producing way to get nuclear energy. To date, we have only nuclear fission reactors, which split heavy uranium nuclei and capture the energy released. The products are radioactive and dangerous, and we don't really know what to do with them. Fission reactors are also dangerous because once a reaction gets out of control, it is hard to stop.

Nuclear fusion is the simple answer to our energy needs. Fusion is simple. All you need is deuterium, which is found in seawater in large quantities (enough to last us about a hundred million years); tritium, which is less abundant but easily created from lithium or a lithium compound; and a lot of lasers or magnets. Once these atoms are heated enough, using one of a few methods, the nuclei fuse and you get energy out.

Nuclear fusion is clean. The only byproducts of a fusion reaction are helium and neutrons; we draw the energy from the fast-moving neutron, a neutral particle. Put a balloon factory next door and you have zero waste. There is no dirty plutonium to get rid of. The only radioactive material that comes out of a fusion reactor is the reactor walls, and their radioactivity has been shown to last no longer than about 100 years, far shorter than waste from fission reactors.

Nuclear fusion is safe. Unlike a fission reaction, which, as mentioned, is hard to stop once it has started, a fusion reaction cannot continue if anything goes wrong: any failure that could leak radioactivity also destroys the extreme conditions fusion requires, so the reaction immediately stops and a major blast or radioactive release is impossible.

Lastly, nuclear fusion is efficient. It has been theorized that we could build fusion reactors producing 5 GW of electrical power for the grid (even after taking some of that power to keep running the lasers or magnets). Given that the U.S. uses about 25,155 TWh of energy per year, an average draw of roughly 2,900 GW, on the order of 575 such plants running at full output could supply all of the country's energy. Compare that to the roughly 6,600 power plants operating in the United States today. Even if the plants delivered only half their theoretical output, we would still need far fewer than we run now.
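That arithmetic is worth laying out explicitly (assuming, optimistically, that each plant runs at its full 5 GW around the clock):

```python
US_ENERGY_TWH_PER_YEAR = 25_155  # total U.S. energy use, as cited above
PLANT_OUTPUT_GW = 5              # theorized fusion plant output
HOURS_PER_YEAR = 8_760

avg_demand_gw = US_ENERGY_TWH_PER_YEAR * 1_000 / HOURS_PER_YEAR  # TWh -> GWh, / hours = GW
plants_needed = avg_demand_gw / PLANT_OUTPUT_GW

print(f"Average demand: {avg_demand_gw:,.0f} GW")
print(f"Plants needed:  {plants_needed:,.0f}")
```

In practice no plant runs at full output all the time, so the real number would be somewhat higher, but still an order of magnitude below the roughly 6,600 plants operating today.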

The downside of nuclear fusion is the capital cost. We need to build the plants and do more research. Luckily, the rest of the world is interested in nuclear fusion, so the research is already being done. On the plus side, once the capital for a fusion plant is put down, it needs only about two million dollars a year for upkeep, which works out to roughly seven cents per person per decade.

Nuclear fusion is a clear choice. Countries all around the world are jumping on the idea. With a little more research, the country could be running completely on clean fusion energy in just a couple of decades. So why is the country not fully behind this technology? Completely clean energy that cuts our dependence on foreign oil? Sounds like a dream come true.

  • Molly Tierney: Taking the Wind out of the Sail


5.2 miles from Cape Cod, 9.3 from Martha's Vineyard, and 13.8 from Nantucket: these are not great distances for the wind farm projected for Nantucket Sound. Cape Wind was proposed by Energy Management Inc. (EMI) in 2001, and since then the company has been working its way through some very thick layers of red tape. Not only did it have to obtain all the proper permits, it also had to come to an agreement with some very hesitant Massachusetts energy companies, NSTAR and National Grid. Fighting off more than twelve lawsuits from various groups has already cost the project $50 million, and construction has not even started. Cape Wind is currently in the financing stage, with the price of the project still growing, now past $2 billion.

There is a lot of give and take with this project from an environmental standpoint. Does it provide sustainable energy for the area? Yes. Does it interfere with the region's natural wildlife? Also yes. The claim that this is green energy remains questionable. If anyone tried to put a wind turbine in the Grand Canyon, it would be rejected instantly without question; the same should go for Nantucket Sound. The site will lose not only its natural beauty but its historic character too: according to the Massachusetts Historical Commission, the project will impair the views of 16 historical sites on both the Cape and the Islands.

EMI chose Horseshoe Shoal, the 25-square-mile area where the farm would be built, for its shallow waters and short distance to shore. Ultimately, the site was chosen because it is profitable for EMI; the company did not consult the various groups that would be affected. The lives this one project will alter are endless, ranging from homeowners who will lose property value to migrating songbirds and sea ducks that will die in the path of the whirling turbines. And because this is one of the most densely traveled waterways on the Atlantic Coast, the list continues: the two ferry companies, Steamship Authority and Hy-Line Cruises, that are the main connection between the Islands and the mainland; whale-watching boats; and the commercial and recreational fishermen and sailors who have spent lifetimes in these waters.

The turbines themselves will stand 440 feet tall, higher than the highest points of the Cape, Nantucket, or the Vineyard. They create visual pollution along the horizon during the day, and the lights added to warn airplanes will hide the stars at night. They add noise pollution that can be heard even onshore. All of these drawbacks feed into the decline in tourism that will plague the area. These summer hot spots will no longer be as popular, pushing regulars to relocate to other waterfront areas; it may even drive permanent residents back to the mainland.

Energy Management Inc., the company behind the entire project, boasts that it is pioneering wind energy for the country. As forerunners, however, they are creating some terrible headaches for themselves. There are the obvious problems of high cost and the clashing views of federal and local authorities, and as the project moves forward, finer details start to reveal themselves. For one thing, lodging a turbine in the ocean floor is no easy task; in fact, not a single ship in the United States is big enough or equipped to do the job. There will be challenges no matter the location, of course, but their number could be reduced if EMI chose a site farther offshore that is not a popular and historic waterway, or, better still, chose to work onshore.

There are more problems with this project than the obvious ones. It is not simply wealthy homeowners versus environmentalists; so much more has to be taken into account that one has to question whether the project is even worth it. Because of the small amount of space, Cape Wind is limited to only 130 turbines. If the developers chose another offshore site a little farther from the coast, they could build thousands of turbines and make a real difference in the battle against global warming. The idea itself is great: wind energy is free, so why not take advantage of it? It just comes down to location, location, location, and this simply isn't prime real estate for this type of project.

  • Online Piracy is Wrong and Permanent

By Thomas Babineau

There is no doubt that online piracy is a problem. Over 75% of computers contain some pirated software, the average iPod holds $800 worth of pirated music, and 95% of all music downloaded online is downloaded illegally. There is also no doubt that this is a legal issue and a moral wrongdoing. The music industry suffers an estimated $12.5 billion in economic losses annually due to piracy, and with so much stolen music, in most cases we are not talking about petty theft but grand larceny. So the question remains: what can we do about this?

The first step was to go after the suppliers. In 1999, Napster was sued, and by 2001 it was ordered to shut down and pay $26 million in damages to copyright owners. Then pirating got more complex. Instead of actually hosting pirated materials, peer-to-peer file sharing sites became mere intermediaries between individuals, creating millions of small-time criminals rather than one large one. The Pirate Bay is currently the world's largest pirating site, featuring movies, TV shows, music, software, eBooks, and more, all free of charge. Originating in Sweden, The Pirate Bay has survived raids and legal action and has preemptively moved its servers around the world to avoid legal consequences. It remains up and running today.

The United States recently took its own shot at online piracy with the Stop Online Piracy Act (SOPA). SOPA would have helped US law enforcement fight online copyright infringement by legally barring advertisers from associating with piracy sites and by preventing search engines from linking to them. The act also had provisions to stop internet service providers from allowing access to these websites. In response, Google, Wikipedia, and 7,000 other websites came out publicly against SOPA, arguing mainly that the bill was an unreasonable form of censorship; the pressure was enough to kill the previously popular bill.

So far, we have not found an answer to the piracy problem because we are looking in the wrong place. The internet, being the international vehicle that it is, is a lawless domain, and legal action to end internet crime is futile. Imagine trying to fight the war on drugs, except those drugs are now free and all you have to do is press a button in your home to get them. Based on the sheer volume of offenders, you cannot go after individual pirates. The owners of these websites have successfully evaded being shut down thus far, and even after small shut downs, there is nothing stopping more pirating sites from popping up. Trying to legally eliminate these cyber criminals is like sweeping leaves on a windy day.

To answer the piracy problem, there must be a fundamental change in the entertainment industry, combating the problem with economics, not laws. The fact remains that pirating is not just popular because everything is free, but also because it is arguably the best delivery system we have. You can find nearly any TV show or movie just a click away, and you can download it at some of the best speeds available. While I don’t blame entertainment companies for pursuing legal action, if they want real results, they must change their business models.

The rise of Netflix is a possible answer, but the media selection on Netflix is not yet enough to seriously combat pirating. Spotify is a possible answer, but it still requires bandwidth, limiting use and eating into data limits. Some musicians have changed their approaches with some success, releasing free mixtapes to raise fame and then profiting from endorsements, merchandising, and live shows. Jay-Z even sold one million copies of his new album Magna Carta Holy Grail to Samsung to be available as a free app.

Despite piracy (or possibly with its help), the entertainment industry is still growing. In the digital age, music, movies, and TV shows are as popular as ever. While ample suppliers (criminals included) will drive down the prices of entertainment media, there are still billions of dollars to be made. If the entertainment industry stays traditional, it will suffer at the hands of piracy indefinitely; if it can change and adapt, it might just be able to do what the law cannot and stop online piracy.


Based on:



The Growth of Solar-Powered Homes

In recent years, solar energy has been garnering recognition from people as a viable power source. In fact, a September 2012 poll showed that 92% of Americans believe it is important for the United States to develop and use solar power, and 70% think the federal government should do more to promote solar power. However, this has not translated into the use of solar energy to power households. There remains a gap between support and adoption.

The reasons the gap exists include initial costs, permits and inspections required for installation, and the practical issue of whether a given location is exposed to enough sunlight to produce the required amount of power.

Initial costs for installation of solar panels can be pricey. In a study comparing the best current options for solar panels, initial prices ranged from about $24,000 to $28,000. However, prices have been dropping rapidly – an estimated 70 percent in the past two years. Also, federal and state tax rebates are offered for a total of about $8,000 on average. Some solar companies are even offering a way to curtail the upfront payment; they install panels for the customer but maintain ownership of them. The energy from those panels is then sold to the customer.

In the long run, having solar panels installed will actually save the customer money. Although the initial costs are high, they are a one-time payment. From then on, the customer saves money through drastically reduced power bills. Solar energy cannot power a house entirely, owing to nighttime and cloudy days, but when the panels generate more power than the house consumes, that surplus is credited against the power the customer is billed for, a system called "net metering." Average monthly electric bill savings range from about $60 to $70. This means that the solar panels will eventually pay for themselves.
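The arithmetic behind that claim can be sketched quickly. The inputs below are the article's own rough figures ($24,000-$28,000 installation cost, about $8,000 in rebates, $60-$70 in monthly savings); the helper function is purely illustrative, and it ignores rising electricity rates, which would shorten the payback.

```python
# Back-of-the-envelope payback estimate using the rough figures above.
# None of these inputs are authoritative; they are the article's estimates.

def payback_years(install_cost, rebate, monthly_savings):
    """Years until cumulative bill savings cover the net upfront cost."""
    net_cost = install_cost - rebate
    annual_savings = monthly_savings * 12
    return net_cost / annual_savings

# Best case: cheapest install, largest savings.
best = payback_years(24_000, 8_000, 70)
# Worst case: priciest install, smallest savings.
worst = payback_years(28_000, 8_000, 60)

print(f"Payback in roughly {best:.0f} to {worst:.0f} years")
```

Under these assumptions the panels pay for themselves in roughly two to three decades, so the case rests heavily on the continuing fall in upfront prices.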

Another concern is getting permits and going through inspections that are necessary for installation. State and local regulations require these. Some areas make installation much easier than others do, which causes solar companies to only target these locations; it is simply not profitable otherwise. The simplest solution to this would be a single federal policy to simplify and streamline the process.

With the ever-increasing practicality of using solar panels, solar companies must increase their advertising and establish relationships with community, state, and even national leaders in order to bridge the gap between support and adoption of solar energy. Though there is support for the notion of solar energy, people do not necessarily realize that it is already a viable power source.

Though the general public has not yet fully embraced the practicality of solar power, utility companies are beginning to sense a threat. They have already lobbied against net metering, arguing that it raises prices for non-solar customers. This is actually a good sign, as it validates the viability of solar panels.

Another positive sign is that organizations and companies have begun to try out solar power. For example, panels have been installed on more than 100 Walmart stores, as well as on buildings in a number of school districts.

Solar panels can also be used on a community-wide basis rather than by a single household or building. Large "fields" of solar panels can be built, and members of the community then buy shares, which earn them a deduction from their power bills.

In order to close the gap between support and adoption, there are a number of steps that must first be accomplished. There have to be fewer regulations regarding installation of solar panels. People will not turn to solar power if it is not convenient for them. It makes little sense for installation of furnaces and other household appliances to require none of the inspections and permits that solar panels do. This can be accomplished through individual states changing their policies, or a single federal policy being introduced.

Advertising must also increase so that the general public is aware of the viability of solar power. Without it, many people who would otherwise be able and willing to convert to solar will never make the transition. Finally, prices must continue their decline so that the initial cost is affordable.

If these are all accomplished, use of solar power will continue to increase and could become a common household power source.


Nick Thompson
ENGRC 3500: Op Ed
19 July 2013

On the Disregard for Creativity in Education

Four years ago, if you had asked me whether or not I thought I was creative, intelligent, and motivated, I probably would have responded with a shy and awkward variant of "I don't know." Today, if you ask me the same question, my answer will probably be equally awkward. "Not as much as I used to be."

At first, that might be a surprising response. Surely, after completing the "esteemed" Computer Science program at Cornell University, I must be at least a little more intelligent than I was four years ago, right? Well, I suppose that would depend on which definition of intelligent you mean. I'm afraid that, through the role our educational system plays in our lives, "intelligent" and "smart" have come to mean something like "good at avoiding mistakes." After all, that's how we define academic ability, isn't it? And it's our academic performance that will determine the quality of the job we can get when we graduate, which will determine how successful we are in life, right? Or so we're told.

My time at Cornell has been dominated by a constant and seemingly uphill battle for a decent GPA. The concept of free time was something I lost somewhere around sophomore year and didn't find again until I took a leave of absence. Becoming so helplessly consumed in the struggle for gratification in the form of a three-digit number meant that I was becoming a different kind of intelligent: I was becoming good at avoiding mistakes. Four years ago, things were different. I had an incessant curiosity for all things computer, access to more knowledge than I could possibly consume in a lifetime, ideas, problems, and, most importantly, time. That's what I remember when I think of intelligence, but also what I remember when I think of creativity. In a sense, these two words now have the same meaning to me.

By the end of high school, I had somehow learned how to tap into my emotional center in a way that was like turning on a fire hose of musical ideas and experiments, which I now consider an incredible talent. I could write music for hours without moving or eating because I was too far in the zone to realize how much time I was spending. It’s one of the most incredible feelings I’ve ever had. Sadly though, if you crack open the student handbook and look through the Computer Science curriculum, you won't find much room for such a talent. Instead you'll find a rigorous list of courses with a checkbox next to each one. Worse, if you choose to pursue this curriculum, and if you're like me, you'll find that you need to sacrifice certain personal interests so that you can keep that three digit number on your transcript within an acceptable range.

Today, despite the fact that I pick up my guitar, or sit down in front of the piano, or pull up a synthesizer on my computer several times a week, I can’t remember how to write a song. I guess four years is enough time to forget.

But the ramifications go beyond me, and beyond losing a facet of my creativity; in 2012, researchers from Dartmouth College released the results of a study in which groups of students were given the task of designing a "next-generation alarm clock." The study was divided such that roughly half the groups were made up of freshmen, and half of seniors. They found that "freshmen students generated concepts that were significantly more original than those of the seniors, with no significant difference in quality or technical feasibility of the concepts generated by the two levels of students." [1]

This hits awfully close to home with respect to my newfound inability to creatively express myself through music. What is it about four years at a top university that should yield such suppressive results? To paraphrase creativity expert Ken Robinson's brilliant explanation in his 2006 TED Talk [3]: kids will take chances because they're not afraid of being wrong. We know that if you're not prepared to be wrong, you will never come up with anything original, but by the time these children become adults, most have become frightened of being wrong. We run our companies and our national education systems on the premise that mistakes are the worst thing you can make. We're educating people out of their creative capacities.

Unfortunately, I sincerely believe Ken’s explanation is true. I fear that I, and many of my peers, are living proof of the idea that our educational system, and our engineering curricula in particular, are producing a mass of drones; “diligent students […] who rather work consistently on given tasks than on finding new problems, questions, and solutions on their own and in discussion with others." [2] Maybe we are perfectly adept at solving today’s problems, but what about the young students who are soon expected to lead us into a future that we can’t possibly predict? We expect them to be able to find answers to questions that we can’t yet ask, but who will first find those questions? I like to think that recognizing this as a problem is a first step towards a solution, but I don’t have any of the answers. Just questions.

"All children are artists. The problem is how to remain an artist once he grows up." - Pablo Picasso

[1] Anonymous. "Lost Creativity." ASEE Prism 21.7 (2012): 17. Print.
[2] Haertel, Tobias, Claudius Terkowsky, and Isa Jahnke. "Where Have All the Inventors Gone?: Is There a Lack of Spirit of Research in Engineering Education Curricula?" IEEE, 2012. 1-8. Print.
[3] http://www.ted.com/talks/ken_robinson_says_schools_kill_creativity.html

Loosely modeled after: http://www.nytimes.com/2013/06/10/opinion/keller-affirmative-reaction.html?pagewanted=1&_r=0&ref=education

Masaki Yamaguchi

Labeling ADHD as a Fictitious Disease

One simple cure: Children with attention deficit hyperactivity disorder, also known as ADHD, have one simple route to getting focused: medical treatment. Adderall, Ritalin, Focalin, and Dexedrine are among the common medications prescribed every day to patients. With one simple pill, the medication acts on the patient's central nervous system and makes them calm and focused for a period of time. It works like magic.

What if everybody starts claiming that they have ADHD?

In the United States, one in ten ten-year-old boys already swallows an ADHD medication on a daily basis. The reason is simple: at the slightest suspicion that their child is not focused enough or interested in everyday activities, parents take the child to a psychologist or psychiatrist. Their best hope is for the child to be diagnosed with ADHD, which would justify the child's misconduct and lack of attention. So does every child who has difficulty focusing at school have ADHD?

It is too easy to say that every child with a lack of attention has ADHD. In fact, there is no known single cause of ADHD; it is often thought to arise from a combination of factors. The cause remains a mystery, but it is generally understood to involve exposure to lead and other environmental toxins, genetic contributions from the parents, and social factors. According to Dr. Jennifer Tellett of the University of Nottingham, the observable problems with attention may be the result of difficulty in processing information quickly and in manipulating information in memory. It is difficult to conclude that ADHD is a definite disorder; it could easily be a personal characteristic of an individual.

With the advancement of technology, there are now ways to test a patient for ADHD. The NEBA system is designed to measure and compare two kinds of brainwaves through electrodes on the scalp. Scientists working with this gadget hypothesized that children with ADHD have different brainwave ratios than those without the disorder. The technology has attracted skepticism from experts, however, because behavior, not brain activity, remains the basis for diagnosing ADHD. In addition, there is no evidence that the difference in brainwave ratios actually improves diagnosis.

An additional controversy behind the existence of ADHD has to do with money. There is a tremendous amount of money associated with ADHD medication. As a result, the incentive for psychologists and psychiatrists to prescribe it increases. One associate professor of psychiatry at Harvard Medical School received $1 million from drug companies between 2000 and 2007. Thus, ADHD is often thought to be over-diagnosed, which in turn calls the legitimacy of the disorder into question.

With no evident cause and widespread over-diagnosis among children, it is reasonable to question the validity of the disorder. Leon Eisenberg, the American psychiatrist known as the "scientific father of ADHD," said in an interview seven months before his death that "ADHD is a prime example of a fictitious disease." What he truly meant is unknown, yet the remark shines a light on whether ADHD should be considered a legitimate disorder.

The diagnosis of ADHD can be seen as a get-out-of-jail-free card for a child who does not intellectually perform to the parents' expectations. Classifying the child as "special" reassures parents that the child's inattentiveness and/or hyperactivity is beyond their control. Think of it this way: people have different IQ levels. Does this mean that a person with a significantly lower IQ has a disorder? Similarly, attentiveness can differ between individuals. It may therefore not be reasonable to classify somebody as having a disorder just because they are not capable of focusing for an extended period of time.

The last question to ask is why we even bother treating ADHD, setting aside the economic benefits to psychologists and psychiatrists. If everybody took this "magical" pill, would they all be intellectually well off? This may sound a little cynical, but it is a valid question given that there is no concrete proof that the disorder even exists. Perhaps "attention deficit" should be a responsibility of the parents, not the child. It is how parents raise a child, not a disorder, that shapes the child's character.


Based on Op-ed:

By: Anonymous

Offshore Wind Energy: The New Breezy Energy

The United States needs to look for alternative, cleaner sources of energy. Rising gas prices, increased dependence on foreign oil, and global climate change should all be motives to speed up the search for new energy. Offshore wind energy offers hope for a future of energy independence and a clean energy economy.

The United States still lags in developing offshore wind farms. Other regions, such as Europe, have already implemented such farms, which have been providing jobs and clean energy since 1991.

One of the reasons offshore wind energy is so effective is that these winds are stronger and steadier than onshore winds. The total potential of wind energy on land and near-shore is somewhere around 72 TW, which is over five times the world's current energy use in all forms.

Another advantage is that offshore breezes can be strong in the afternoon, as well as during heat waves, matching the times when people are using the most electricity and the demand for energy is highest. In fact, the East Coast of the United States has enough offshore wind to provide the entire country with electricity, which would cut greenhouse gases by almost 60%, if the industry becomes fully developed. According to North Carolina's Scientific Advisory Panel, "a large offshore farm with 450 3.6 MW turbines would be equivalent to displacing one year's emissions from 9 million cars or from 12 coal-fired power plants over a 20-year period."

So why don’t we push towards this innovative new energy?

Harry Braun of Hydrogen News states that "the cost of electricity is a major factor in hydrogen production costs," and wind fares well here. Although any solar energy option can generate the electricity needed for hydrogen production, electricity from photovoltaic solar cells is approximately ten times more expensive than electricity from megawatt-scale wind machines. State-of-the-art wind systems, which have an installed capital cost of approximately $1,000 per kW and a 35% capacity factor, can generate electricity for approximately 4 cents per kWh. If wind systems are mass-produced like automobiles for large-scale hydrogen production, their capital costs are expected to drop to well below $300/kW, which would reduce the cost of electricity to 1 or 2 cents per kilowatt-hour (kWh).
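As a sanity check on those figures, a simple levelized-cost sketch gets from dollars per kW to cents per kWh. The $1,000/kW capital cost and 35% capacity factor come from the paragraph above; the 8% financing rate and 20-year lifetime are my own illustrative assumptions, and operations and maintenance costs are ignored.

```python
# Rough capital-only levelized cost of wind electricity, in cents/kWh.
# Assumed: 8% discount rate, 20-year lifetime (illustrative, not sourced).

def lcoe_cents_per_kwh(capital_per_kw, capacity_factor, rate=0.08, years=20):
    # Capital recovery factor: spreads the upfront cost over the lifetime,
    # accounting for the cost of financing.
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    annual_cost = capital_per_kw * crf      # dollars per kW per year
    annual_kwh = 8760 * capacity_factor     # kWh generated per kW per year
    return 100 * annual_cost / annual_kwh

print(f"{lcoe_cents_per_kwh(1000, 0.35):.1f} cents/kWh")  # ~$1,000/kW systems
print(f"{lcoe_cents_per_kwh(300, 0.35):.1f} cents/kWh")   # mass-produced ~$300/kW
```

Under these assumptions the capital share alone comes out near 3.3 cents/kWh at $1,000/kW, so adding maintenance lands close to the quoted 4 cents; at $300/kW the capital share drops to about 1 cent, consistent with the 1-2 cent projection.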

Other critics argue that "wind is not predictable, so other forms of power must be available to make up any shortfall." Improved ability to predict the intermittent availability of wind would enable better use of this resource. In fact, in Germany it is now possible to predict wind generation output with 90% certainty 24 hours ahead. This makes it possible to deploy other plants more effectively, so that the economic value of the wind contribution is greatly increased.

Though the benefits are tremendous, the main disadvantage of offshore wind farms is high construction cost. Offshore wind projects cost more than those on land because the turbines must withstand rough weather conditions, and their maintenance is highly complex and thus very expensive. The price gap between offshore and onshore wind farms remains significant, but many energy experts believe it is likely to narrow in the next few decades. Offshore wind technologies still need time to mature, and the development of new technologies should drive down the total costs of offshore wind projects.

So what should we do?

First off, the government should invest more money and time in creating and implementing offshore wind farms in the United States. This would bring an increase in electricity costs, but only for a limited time, to offset the costs of building. The government should also facilitate the development of the transmission infrastructure necessary to interconnect offshore wind farms, and create a program for research and data collection to ensure there is adequate information to evaluate the impacts of offshore energy exploration and development. Then we let the wind farms do the rest, while we bathe in all their electric glory and the low-cost, affordable, clean energy soon to come.

Modeled after:

Henry Ly
The Need for Continued Government Backing for Alternative Energy in Automobiles
Modeled after: http://www.popularmechanics.com/science/earth/4237539.html?page=2

Currently, there are approximately 250 million registered passenger vehicles in the United States. However, only around 100,000 of the vehicles on the road are actually electric powered. With approximately 10,000 vehicle miles traveled per capita per year in the United States, the country's reliance on fossil fuels, specifically foreign fossil fuels, has become a major economic and environmental concern. The price of gasoline continues to increase, but our consumption of it doesn't seem to change, and that consumption has had tremendous environmental impacts. Gas-powered vehicles account for one third of America's greenhouse gas emissions, and the carbon footprint left behind by traditional combustion-engine vehicles grows daily. If the rate at which people consume fossil fuels continues, the environment in which we live could be destroyed.

In recent years, the government began to realize just how serious the consequences of our insatiable consumption of fossil fuels have become. As a result, bills are being pushed through Congress to aid the development of alternative energy sources, especially in the automotive industry. The benefits of alternatives such as electric powered vehicles cannot be ignored. Electric vehicles produce zero emissions and would therefore reduce a driver's carbon footprint, and drivers would never have to worry about the cost of gas. The government also offers electric car users benefits such as tax credits on the purchase of an electric vehicle and use of high-occupancy vehicle (HOV) lanes.

The government has recently made investments in alternative fuel companies. One of the most successful ventures so far has been its backing of Tesla Motors, which produces fully electric vehicles such as the Model S and the Model X. The company, and its founder Elon Musk, has shed light on what the future of vehicles can look like. The overwhelming success of the Tesla Model S demonstrates that electric vehicles can be a viable alternative energy option. The company is expected to sell an estimated 20,000 Model S vehicles this year alone. Some may believe that electric vehicles are inferior to the average gas-powered vehicle, but the Tesla Model S was actually rated the best car of the year by Motor Trend magazine, one of the leading auto magazines in the country.

Of course, there are still some downsides to an electric car in comparison with a traditional car. The most controversial issue surrounding electric vehicles is that they cannot travel as far as traditional gasoline-powered vehicles. With electric charging stations not yet widely available, an electric vehicle is limited in how far it can travel. However, new, more powerful and efficient electric motors continue to be developed, and every day more charging stations are being installed across the country. The Model S has a travel range of over 200 miles, which is more than satisfactory for most daily commutes, and Tesla is constantly installing more free charging stations all along the East Coast.

With every success of government-backed investment, critics will naturally bring up failures and blunders the US government has made in the past, such as Solyndra, a solar panel development company that failed to ever meet expectations. However, the prospect of failure comes with the nature of making an investment or backing any project. Some ventures will succeed, while others may fall short. The government has to continue taking risks and pushing companies to expand what is technologically possible. Our country wouldn't be where it is today if we weren't taking risks and constantly trying to push the boundaries of science and technology. And when you have an excellent product and a sound business plan like Tesla does, the possibilities are endless.

How Many Gadgets Do We Need? by Jen Qian

On April 11, 2012 a small startup company, Pebble Technology, launched a Kickstarter campaign. The company had previously attempted to raise money through venture capital firms, but was only able to raise $375,000 in support of its smartwatch, Pebble. The Kickstarter had a target of $100,000 with the main perk being that anyone who pledged $115 or more would receive a Pebble as soon as they went into production.

That $100,000 goal was met within two hours, and in a few days the Pebble campaign was the most successful in Kickstarter history. In fact, it was so successful that after a month Pebble Technology decided they had too many preorders and closed the campaign. In just over a month, the Pebble had raised $10,266,844 from 68,928 people.

This was great news for Pebble Technology; they had more than enough money to move their product forward, and had confirmed that there was a market for this fairly new idea.

The real question…why were consumers so eager to own the Pebble?

The Pebble is really only an extension of your smartphone. Of course, that's only if your smartphone is an Android device or an iPhone, the only phones compatible with the Pebble. The Pebble uses Bluetooth to connect to your phone, which in turn allows the Pebble to access the Internet. Without your phone, the Pebble's utility is pretty low; there aren't many useful things that can be done on a black-and-white 1.26-inch screen.

The Bluetooth connection also costs your phone 5-10% of its battery life. This is unfortunate because battery life is already a problem for many smartphone users. There is also an eight-application limit on the device, restricting the Pebble's versatility unless you do frequent installations. To put that limit in perspective, the iPhone comes with over three times that many applications pre-installed, which cannot be removed.

With so much stacked against it, why did people decide to put their money behind the Pebble? There seems to be no real utility gained from the device; at best a Pebble owner can get a glimpse of notifications from their smartphone without reaching their hand into their pocket.

Pebble seems to be a piece of technology born out of a new mentality where people want to shoehorn tech features into every device possible. For example, Samsung has a fridge with a touchscreen and Wi-Fi, which can show you news headlines…why? Then there’s an LG washer and dryer set that allows you to check the status of your laundry online. This might be a bit useful (we all know that walking through your house to check the laundry is just too much hassle), but the fact that LG also touts the fact that you can start your washer or dryer remotely is ridiculous. In what situation would you load the washer or dryer and not immediately start it? Why do we need these features?

The short answer is, we don’t. New features are being added to devices because this allows the manufacturers to increase prices and make more money. People buy the devices because they believe that having technology everywhere is the future. This isn’t necessarily true. Technology should only be added to things where there’s real value, not just for the sake of technology.

A good example of a device that may actually allow us to interface more effectively with technology is Google Glass. Glass will allow us to see the world through a device, inserting a new layer of information between us and the world. In contrast, the Pebble only lets us use a small subset of smartphone features from our wrist.

In the end, everything needs to be put into context. A report from the Pew Research Center recently found that 56% of all American adults are now smartphone users. This is a huge number for devices that have only been available for a few years, but it took smartphone companies a lot of money to get to this point. For example, when Motorola and Verizon first launched the Droid smartphone, they ran a $100 million marketing campaign. Devices like the Pebble can't compete with this kind of budget, so they'll likely stay small for a while.

In the world of technology there are always going to be gimmicks, so it’s important for consumers to really think about what they’re buying and what it’ll add to their lives. Not all gadgets were created equal, and surrounding yourself with tech won’t necessarily make your life any better. Perhaps a small niche group will put the Pebble to good use, but I don’t believe that it’s the device for everyone. For now, I’ll stick to wearing a watch that tells me the time and nothing else.


Op-Ed Read Before Writing:

Space, the Next Frontier? : an Op Ed
The cover of the October 2012 issue of MIT Technology Review featured a photo of astronaut Buzz Aldrin and the line "You promised me Mars colonies. Instead, I got Facebook." It's no accident that such a rueful reflection appeared on the 40th anniversary of the last of the Apollo missions. On Dec. 7, 1972, Apollo 17 blasted off toward the moon and brought an era to an end. Never again has mankind set foot on another celestial body. For 40 years, with the exception of a dozen unmanned space probes, mankind's ingenuity, along with its ambition, has hovered within near-Earth orbit.

The missions to the moon were the high-water mark of human activity in space. Although we have since sent probes and rovers to other planets and to the edge of the solar system, no other space endeavor has marshalled the same energy, presented the same challenge, or changed the human experience as the moon landings did. Why did we stop? There are political reasons as well as economic ones. As cynical as it sounds, the US space program is a direct product of the space race and the Cold War. As the US and the Soviet Union vied for technological superiority, there was real incentive for the government to put resources into the space program. But after the success of Apollo 11, the US had scored a major victory over the Soviet Union, and continuing the missions had much diminished payoff and urgency. Economically, spending billions of dollars on projects that generated no short-term return made little business sense. Moreover, the stagflation of the 70s and the boom of personal computing and the internet in the 80s and 90s diverted attention and investment elsewhere.

Yet it would be a mistake to allow the space program to continue limping along, especially in light of its immense benefits and the new challenges facing mankind. Statistics have shown that for every dollar invested in NASA, seven to eight dollars of return were generated. In NASA's heyday in the 60s and early 70s, it created or saved hundreds of thousands of jobs and spun off thousands of inventions each year into the consumer market. The innovations spurred by those missions caused great leaps forward in microelectronics, materials science, and manufacturing. Few could have foreseen these immense benefits when the space program first got under way, yet fewer still could deny the wisdom of this investment today. Valuing the space program myopically, by its immediate economic return, can only cost us the grand prize in the long run.

Space exploration not only broadens economic opportunity here on Earth; it also opens new windows of opportunity for growth. Colonizing Mars might be far in the future, but colonizing the moon and exploiting its resources is a feasible venture. The moon has vast deposits of He-3, a viable fuel for fusion reactors that is extremely rare on Earth. Nuclear fusion is three times more efficient than nuclear fission, and one of the major obstacles to its implementation is the lack of inexpensive fuel. Lunar colonization would give us access to an abundant source of He-3 and could potentially solve the world's energy problem. And there is more to the moon's resources than He-3. Long daylight hours and zero atmospheric deflection make solar power much more efficient there than it is on Earth; vast banks of solar cells deployed on the lunar surface could generate electricity to be beamed back for use on Earth. The moon's low gravity would also allow us to employ processing techniques impossible on Earth.

And there’s even existential reasons why we should be more active in research and development of space technology. Potentially hazardous objects, or PHO, are small asteroids or comets that fly too close to earth and can cause great damage in the event of impact. A space rock larger than 35 meters could wipe out a small city or town, an asteroid larger than 100 meters could spell the end of humanity. Currently there are 1300 PHO on NASA’s list, up from 1006 in 2009 and the list keeps growing. Granted, the odds of actually having such a fatal encounter is miniscule, but if it does happen, history in a post apocalyptic world will record with the greatest astonishment that a generation which had the most to lose, did the least to prevent its happening.

So the answer to the opening question is an emphatic yes. For the continued prosperity and survival of our species, there is an urgent need to ramp up our quest in space.

Tech-nified Education by Yueming

In a world with frequent technological advances, many of us have unquestioningly adapted our lifestyles to adopt these cool, state-of-the-art gadgets. The question remains, do these gadgets actually improve our lives?

It’s quite difficult to picture life without the technologies that have improved the efficiency of how we conduct our everyday business. Who, for example, would opt to hand-wash clothes and navigate in unfamiliar areas using a hard-to-read map when washers and GPSs are readily available? While technology has undeniably provided many benefits, it does not always bring about changes for the better.

One aspect of life that has not always improved with technology is education. When new technological gadgets enter the market, armies of promoters seek out teachers, principals, and superintendents with the single-minded goal of convincing them that these gadgets will help produce the next generation of American geniuses. What they often neglect to mention is that using technology to teach students how to write essays and add numbers will detract from the quintessential learning experience. With autocorrect applications, thesauruses, and calculators within easy reach, students are robbed of the chance to retrace their steps and discover ways to improve or correct their work.

This is especially true in grade school. While students do need access to computers when learning programming, the material taught in elementary schools all contributes to building a solid foundation for further learning. Learning how to properly construct sentences requires only a few sheets of paper, some sharpened pencils, and a qualified, encouraging teacher. When schools adopt technology, kids are actually at a disadvantage when it comes to writing. Pen and paper tend to be more conducive to good writing: whereas longhand writing is more likely to result in well-reasoned, intricate prose, typing tends to yield more abrupt, “staccato” prose due to its punch-key nature. Furthermore, kids who rely on computers to type out every thought miss the opportunity to practice penmanship.

If this argument is not convincing enough, consider the digital divide created in a classroom full of eight-year-olds. Introducing new technology in school naturally leads to requiring parents to purchase new gadgets for their kids to use. The reality is, not every family can afford to purchase iPads and laptops for their children to learn pronouns and multiplication. To make matters worse, a growing number of schools and school districts are adopting an even more controversial approach to implementing gizmos and gadgets in school, known as “B.Y.O.T.” (bring your own technology). While some large school districts in Central Florida and near Houston and Atlanta have already adopted this approach and claim to see improvements, many are rightfully skeptical about the benefits of using more technology in classrooms. According to Amanda Ripley of Slate, while schools in Finland do not emphasize the use of technology, Finnish teenagers still ranked first in math and science among 30 OECD countries in 2006, whereas teenagers in the US ranked 25th in math and 21st in science.

There is also a psychological impact to widely implementing technology and widening the digital divide in schools. Young students who cannot afford state-of-the-art gizmos learn the differences in social class at an early age. Back in the day, parents were able to convince their kids that it was unnecessary to indulge in frivolous gadgets. Nowadays, however, these gadgets are apparently essential for learning and have become (at least in the kids’ eyes) technological must-haves. We are all taught at a young age that education is the key to success. With the passing of “No Child Left Behind” and laws enforcing compulsory school attendance until a certain age, we are reassured that receiving a proper education is a right, not a privilege. But now that technology has inadvertently widened the divide between the haves and have-nots, it has become painfully evident that not everyone is entitled to an equal education.

This is not to say that technology is unable to help students learn more effectively. Technological advances now enable students to take online courses that would otherwise be unavailable to them, promote collaboration among students where physical barriers would otherwise hinder group work, and much more. While there is irrefutable evidence that technology has the potential to enhance a student’s educational experience, it is equally important, if not more so, to recognize that technology can sometimes hinder students’ education.

Before we reach into our pocketbooks to purchase the newest laptops, graphing calculators, and iPads for first graders, let’s ask the essential question: if we have this technology, how will it really benefit these schoolchildren, if at all?

Modeled after: OPINION: School technology: Does it help?


Jeanette Liu
ENGRC 3500

What is a GMO? GMO is the acronym for genetically modified organisms, which are plants and animals that have been genetically altered by the gene splicing techniques of biotechnology. In America, GMOs are widely produced and consumed by everyday people, but is this smart?

Experts say that 60% to 70% of the processed foods on US grocery shelves contain genetically modified ingredients. But surprisingly, researchers from the Food Policy Institute at Rutgers’ Cook College found that only 52% of Americans realized that genetically modified foods are sold in grocery stores, and only 26% believed that they had ever eaten genetically modified foods. In America, GMOs are widely unlabeled, yet as early as 1997 the European Union approved mandatory GMO labels. Why is America not jumping on this?

In 2012, Proposition 37 was proposed in California. This statute would have required the labeling of genetically engineered food, with some exceptions. Big food companies that produce GMOs, such as Monsanto, poured tens of millions of dollars into defeating Prop 37. Why is this? Why do companies that produce GMOs not want to notify consumers of their presence?

When many Americans hear the words genetically modified organism, they panic. Despite this, GMOs are being heavily produced in America because of certain benefits. GMOs can be engineered for increased pest and disease resistance and drought tolerance, which can increase the food supply. Why, then, are people not in full support of GMOs? Because despite the benefits, there are also consequences, many of which are unknown.

Although there are some benefits to creating GMOs, there are also negative consequences, which is why many people prefer GMO-free food. These consequences include introducing allergens and toxins into food, accidental contamination between genetically modified and non-genetically modified crops, antibiotic resistance, adverse changes to a crop’s nutrient content, and the creation of “super” weeds and other environmental risks.
Many European countries won’t accept GMO-contaminated crops such as alfalfa, both because many of their citizens oppose GMOs and because such imports can hurt their economies. If GMOs imported from America are sold at lower prices than local crops, the local economy suffers: the money goes to large food corporations instead of hardworking local small-farm owners. With export markets closed, those GMO crops end up in many US foods, which is why the percentage of GMOs in US food products is so much higher than many people suspect. So while other countries take responsibility for protecting their citizens, Americans suffer from their own government’s lack of regulation.

Although GMOs may eventually be the answer to the larger issue of world hunger, they should be tested more thoroughly before being served to the public. One day, after much more research, GMOs and all their side effects will be fully understood; until then, I’d be cautious of them.
