Gun Control Essay Draft

Gun control has been a heated topic in the news lately, and for good reason. With the Parkland school shooting in February and the Great Mills school shooting the following month, many are calling for change. But therein lies the question: what really needs to change? There will always be the argument that the government needs to pass stricter gun laws, or that there need to be more thorough background checks for mental health. Of course, we can’t forget those who propose smaller magazine capacities, or those who propose changing what kinds of guns civilians are allowed to own. Any idea is a great idea if it means there will be less gun violence, because really, that’s the underlying issue, right? However, just like any great ending, there needs to be an even better beginning.

As I started my research for this paper, I found thousands of articles discussing gun violence, the recent school shootings, and everyone’s opinion on the matter. It was quite overwhelming and actually made the whole topic more confusing. What I really needed to find was the government’s research on gun violence and the correlations in between. As I previously stated, every idea is a great idea if it means there will be less gun violence. So now is the time to pick up pen and paper and start at the beginning. Just as the government allocates funds for research on cancer, automobile safety, and other public health threats, it needs to take a serious look at gun violence as well. For there to be less gun violence in the United States, the government first needs to make research on the issue a higher priority.

According to an article from the University of Utah, the federal government spends less annually on gun violence research than private organizations and citizens do. Currently, gun violence research is not a federal priority, unlike other public health concerns with similar or lower fatality rates (Quijada, 2018). This leads to the question: why isn’t the government spending money on gun violence research? After doing a little reading, I noticed that several articles blamed the Dickey Amendment for the lack of government research on gun violence. That led to a further question: what is the Dickey Amendment? The Dickey Amendment is an amendment to a spending bill that forbade the Centers for Disease Control and Prevention from using money to “advocate or promote gun control” (Zhang, 2018). The amendment was passed in 1996, sponsored by Jay Dickey, a congressman from Arkansas. Since then, medical and public health professionals have been pushing back, more and more forcefully in recent years. The American Public Health Association and the American Medical Association have both taken to calling gun violence a public health problem. In 2016, more than 100 medical organizations signed a letter to Congress asking it to lift the Dickey Amendment (Zhang, 2018). Although the American Medical Association has dubbed gun violence a public health crisis, the effort to lift the Dickey Amendment failed, and funding for research on gun-related deaths has remained the same.

The American Public Health Association and the American Medical Association have recognized gun violence as a public health problem. As Nancy Krieger, an epidemiologist at the Harvard T.H. Chan School of Public Health, told NPR in 2015, “We in public health count dead people. It’s one of the things we do. And we count them in order to understand how to prevent preventable deaths” (Zhang, 2018). Some may argue that gun violence is not a public health problem, but the CDC studies more than sickness. It is also responsible for studying drownings, accidental falls, brain injuries, car crashes, suicides, and much more. Suicides by firearm account for nearly half of the gun deaths per year in the United States. However, because of the laws restricting the necessary research, we are unable to dig deeper into the issue.

Not only are there laws that forbid the CDC from using money to advocate or promote gun control, but there are also laws that prevent the tracking of firearms themselves. The Tiahrt Amendment, first sponsored by Kansas Republican Rep. Todd Tiahrt and recently reauthorized as part of another appropriations bill, prohibits the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) from maintaining a searchable database. Instead, officers attempting to trace a gun used in the commission of a crime must use a card catalog and phone system to track the weapon (Dooley, 2017). Not having a readily searchable database of the firearms in public circulation is very inefficient and, in turn, costs officers time and taxpayers money. ABC News reported that officers often find themselves forced to comb through boxes and boxes of paper records, many of them barely legible, by hand. The antiquated system, which stretches the average processing time from hours to days, cost taxpayers around $60 million over the course of 12 years, the ATF Tracing Center estimates (Dooley, 2017). Although we cannot effectively track the guns already in the public’s hands, perhaps we should start tracking the ones we are about to sell. Doing so would make it easier to tie a gun to its owner and allow better oversight of the guns in circulation.



1. After Parkland, States Take A Fresh Look At Gun Legislation : NPR. (n.d.). Retrieved March 19, 2018, from

2. Basic Bullet Guide: Sizes, Calibers, and Types. (2016, January 31). Retrieved April 4, 2018, from

3. Categories of Prohibited People. (n.d.). Retrieved April 22, 2018, from

4. GOP Congresswoman Questions The Need For Government-Funded Research On Gun Violence | HuffPost. (n.d.). Retrieved April 22, 2018, from

5. Gun Control. (n.d.). Retrieved March 21, 2018, from

6. Gun Violence. (n.d.). Retrieved April 22, 2018, from

7. Gun Violence Research. (n.d.). Retrieved April 22, 2018, from

8. Gun Violence Research in America | S.J. Quinney College of Law. (n.d.). Retrieved April 22, 2018, from

9. Here’s a Timeline of the Major Gun Control Laws in America. (n.d.). Retrieved March 21, 2018, from

10. Here’s why the federal government can’t study gun violence – ABC News. (n.d.). Retrieved April 22, 2018, from

11. How Guns Work. (2016, February 11). Retrieved April 4, 2018, from

12. How the NRA Suppressed Gun Violence Research. (n.d.). Retrieved April 22, 2018, from

13. Is the return of government gun research near? – CNN. (n.d.). Retrieved April 22, 2018, from

14. Jouvenal, J., George, D. S., & Truong, D. (2018, March 20). Student gunman dies after Maryland school shooting; two other students injured. Washington Post. Retrieved from

15. Kaplan, S. (2018, March 12). Congress Quashed Research Into Gun Violence. Since Then, 600,000 People Have Been Shot. The New York Times. Retrieved from

16. Maryland school shooting: Officer stops armed student who shot 2 others – CNN. (n.d.). Retrieved March 21, 2018, from

17. Spending Bill Lets CDC Study Gun Violence; But Researchers Are Skeptical It Will Help. (n.d.). Retrieved April 22, 2018, from

18. Stark, D. E., & Shah, N. H. (2017). Funding and Publication of Research on Gun Violence and Other Leading Causes of Death. JAMA, 317(1), 84–85.

19. The 2nd Amendment of the U.S. Constitution. (n.d.). Retrieved March 19, 2018, from

20. The Health 202: Gun violence research by the government hasn’t been funded in two decades. But that may soon change. – The Washington Post. (n.d.). Retrieved April 22, 2018, from

21. Zhang, S. (2018, February 15). Why Can’t the U.S. Treat Gun Violence as a Public-Health Problem? The Atlantic. Retrieved from






Choose Your Own Demise: Or Your Car Will Plan It for You


This paper explores the choices tied to self-driving cars, particularly the choice of when to sacrifice passengers in order to minimize casualties in a crash. Multiple fields invested in research on this question are noted, and the role of morality in the decision-making algorithms for these cars is touched upon. The question of whether a single version of morality could ever be agreed upon by all is raised, using moral relativism as an illustration. Further exposition documents the human reaction to these questions. Thought is given to the consequences of freedom of choice regarding self-sacrifice in car crash scenarios, as counted in the number of lives such a choice could potentially cost. The link between choice and death is examined, and this connection is ultimately used, along with information gathered from various sources, to argue that owners of self-driving cars should have the right to choose what they would be willing to sacrifice their lives for in an emergency.

Keywords: self-driving cars, morality, choice, mortality, individuality, greater good


Choose Your Own Demise: Or Your Car Will Plan It for You

Figure 1. Death at the wheel. (apokusay, n.d.)


You are driving down the road, observing the speed limit of 55 miles per hour, when a deer runs into your path. You hit the brakes, mind grasping for some kind of protocol to follow. A truck bears down on you from the oncoming lane, so you can’t swerve. Thick forest walls the roadside. You have no options. The deer caves in your hood as you collide. Ten seconds have elapsed since the moment you first saw the deer, and your car has just now come to a stop, engine stalled, hot and ticking. It takes a few more seconds for it to sink in that you are alive.

Figure 2. Deer hit by Kentucky police cruiser. Depicting the time frame of an accident.  (Kenton County Police, 2015)

Further inspection reveals that the deer is not. But have you killed it? Your car struck it, obviously, but it entered the roadway in front of you, you reacted as quickly as possible to its appearance, and in the safest way possible. Of course you can’t be blamed. But what if it was a child, and not a deer, that had entered the roadway? How culpable would you be if you were to follow the same course of action? Should you, in that circumstance, chance swerving into the forest to save the child, even if it could mean your own life? What about a grown man? An elderly woman? A robber? A nun? How does the situation dictate what you should and should not do? How does one decide?

These are the questions facing car companies today as they develop the self-driving cars that may become, in the future, the norm when it comes to travel. Because of the potential for safer driving, much talk has been given to how casualties from automobile accidents can be minimized with this new technology. However, more attention must be given to the manner in which we achieve this reduction in casualties. With our cars controlling themselves, we must make sure that the programming behind them, and the companies that create them, adequately take passenger lives into account when looking at the bigger picture. Most importantly, if our cars are programmed to sacrifice us in the interest of saving the most lives, we should know, and we should have some say as to whether, and particularly when, this is to occur. We as owners of self-driving cars should have the right to choose what we would be willing to sacrifice our lives for in the event of an emergency.


Accidents of the Future

If you were to find yourself in an automobile crash today, your response to events would determine your safety as well as the safety of those around you. That response would be recognized as a split-second reaction to an unforeseen event, and therefore you would be seen as the unfortunate victim of an accident. The label “accident” would apply even if you were talking on the phone, or doing your makeup, or speeding on a slick road. That label would remain as long as you did not mean to crash your car, barring extremely reckless or drug-influenced driving. Without premeditation, a crash is labeled an accident. But do self-driving cars react, as we do, to unforeseen events? If not, will they even be able to get into accidents, as we define them today, barring some gross malfunction?


The Moral Equation

This is where the trolley problem comes into play. The trolley problem, proposed by philosopher Philippa Foot (1978)4, is actually a set of hypothetical scenarios designed to study people’s responses to situations where they must either indirectly set events in motion that will lead to the death of one person to save many, or actively kill one person to save many (pp. 19-32). Programmers must use the same concepts to plan vehicle reactions to billions of emergency situations. In essence, your car may know, before you buy it, that in a situation where swerving left to avoid a rock slide entering the road will kill five bystanders, swerving right will total a minivan packed with kids, but maintaining course will kill you alone, you will die to minimize casualties. This raises the question: if your car knows in advance under which circumstances it will sacrifice you for the greater good, is that planning? If so, and if in the event of those circumstances arising it carries out that plan, has it committed premeditated murder? Of course, the machine itself is not planning to murder you, as a programmer plans these scenarios, but is it not disconcerting that some programmer somewhere out there in the wide world, who most likely can’t conceive of you as a person because they’ve never met you, gets to decide when your death is worth more than your life? And how could such a thing as the worth of one life against another ever be weighed?

The answer lies in morality. The code your vehicle will use to determine the worth of your life is essentially an algorithm that judges the particular situation of the crash based on one agreed-upon definition of morality, and as impossible as that sounds, it is something that, according to an article by Olivia Goldhill (2018)5, philosophers are already working on. By plugging various ethical theories into a number of scenarios based on that same trolley problem mentioned above, and observing the reactions cars would make while following each school of moral thought, they are able to gather data on the situations likely to cause passenger sacrifice under each theory (para. 4).
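To make the shape of such an experiment concrete, here is a minimal, purely hypothetical sketch in the spirit of the work described above: two toy ethical theories are applied to trolley-style crash scenarios, and we record which maneuver each theory selects. The scenario numbers, function names, and casualty counts are my own illustrative assumptions, not anything from the cited research.

```python
# Each scenario maps a possible maneuver to a pair:
# (passenger deaths, bystander deaths). Numbers are invented for illustration.
scenarios = [
    {"maintain": (1, 0), "swerve_left": (0, 5), "swerve_right": (0, 4)},
    {"maintain": (0, 2), "swerve_left": (1, 0), "swerve_right": (0, 3)},
]

def utilitarian(outcomes):
    """Pick the maneuver that minimizes total deaths, whoever dies."""
    return min(outcomes, key=lambda m: sum(outcomes[m]))

def self_protective(outcomes):
    """Pick the maneuver that minimizes passenger deaths first,
    breaking ties by bystander deaths."""
    return min(outcomes, key=lambda m: outcomes[m])

for theory in (utilitarian, self_protective):
    choices = [theory(s) for s in scenarios]
    # Count the scenarios in which this theory sacrifices the passenger.
    sacrifices = sum(s[c][0] > 0 for s, c in zip(scenarios, choices))
    print(theory.__name__, choices, "passenger sacrificed in", sacrifices, "scenario(s)")
```

Running many such scenarios through each “theory” function is one way to gather the kind of per-theory sacrifice statistics the article describes; the interesting part is that the two theories disagree already in the first scenario, where utilitarianism accepts the lone passenger’s death.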

Yes. We have come to a point in history when morality, that ever-shifting amalgam of idea and ideal, that indefinable north that guides our ethical compasses, must somehow be whittled down into code to be fed into our cars if they are to be allowed to take us from point A to point B. This is the task. The question facing us now is: what does morality look like? Which flavor do we choose? Is it utilitarian? Based on virtue ethics, or divine command theory? Some mixture? Something new? According to Alfred University philosophy professor Emrys Westacott (n.d.)7, writing for the Internet Encyclopedia of Philosophy, morality can be thought of as shifting across time and society, with moral judgment applicable only relative to a particular standpoint, a concept called moral relativism (Moral Relativism section, para. 1). For those who adhere to this viewpoint, what is wrong now may not have been wrong then, and what is right here is not necessarily right there, and certainly isn’t right according to everyone. If this moral view were the one chosen, following the moral norm would require multiple algorithms to start with, reflecting different societal moral views, and all of them would have to be updated constantly as morals changed over time. Even then, the morals being used might still not be acceptable to you personally, especially if following them would mean your death.

The Human Equation

So, what would be acceptable? It appears, as written in an article by Evan Ackerman (2016)1, that people on average preferred a utilitarian approach to morality, one focused on minimizing casualties, as related in six online surveys of nearly 2,000 people. At least in theory. In practice, people confessed that they would be unlikely to buy a car programmed to sacrifice them, and even less likely to purchase one subject to government regulation of its programming (para. 11). With safety features being a major selling point of cars today, this outlook isn’t really all that surprising. On the whole, people value their lives and the lives of those they know over the lives of strangers. This concept is illustrated well by Dunbar’s number, which, as explained in an article written for The New Yorker by Maria Konnikova (2014)6, is the maximum number of individuals that people can keep track of and care about in their social group. Humans have a maximum group size of around 150, which means that we literally cannot care about the other 7,600,000,850 people, give or take, in the world whom we don’t know on more than a theoretical level (para. 2). And if we cannot care about them, how can we be asked to die for them? These are problems that must be overcome if self-driving cars are to move forward.

Figure 3. Google self-driving car. A representative of current driverless technology. (n.a., n.d.)

What people seem to want is the ability to make decisions about their safety, and any potential sacrifices, themselves. But what effect would handing the moral programming of self-driving cars over to their owners have on the safety of our roads? Looking into this, Abigail Beall (2017)2, writing for New Scientist, quotes Edmond Awad of MIT as saying that “If people have too much control over the relative risks the car makes, we could have a Tragedy of the Commons type scenario, in which everyone chooses the maximal self-protective mode” (Me, me, me section, para. 1). But what would the harm be in that? Don’t we value self-protection now in our automobiles? With driverless cars obeying all safety laws, never driving distracted, and able to communicate with all other self-driving cars around them, won’t automobile casualties have dropped enough? Do we really need the very last slice of the pie if we must lose so much to achieve it? According to an article from UC Berkeley’s wellness website (2018)3, self-driving cars could potentially save 10,000 to 20,000 lives annually if perfected, but the article also stresses that any improvement over human drivers will save lives, with a potential half-million lives saved over a period of 30 years in the United States alone, and that with only a 10 percent increase in driving safety (para. 1, 4). Clearly, even if every car owner were to choose to maximize the priority of their own safety, lives would be saved with this technology. But the very fact that there is so much debate on the subject of morality suggests that not everyone would choose the same morality settings, were they presented with the choice. Some people may choose to be completely altruistic, others neutral. Those people should have the chance to decide for themselves. We all should have that chance.
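What an owner-selectable morality setting might look like can be sketched very simply: a single weight on passenger harm relative to harm to others. This is a purely hypothetical illustration; the setting names, weights, and casualty numbers are my own assumptions, not any manufacturer’s actual design.

```python
# Hypothetical owner-selectable "morality settings". A larger weight makes
# the car more reluctant to sacrifice its own passenger. All values invented.
SETTINGS = {
    "altruistic": 1.0,        # passenger counted equally with everyone else
    "neutral": 2.0,           # moderate preference for the passenger
    "self_protective": 10.0,  # strong preference for the passenger
}

def choose_maneuver(outcomes, setting="neutral"):
    """Pick the maneuver with the lowest weighted harm score.

    outcomes maps each maneuver to (passenger deaths, other deaths).
    """
    w = SETTINGS[setting]
    return min(outcomes, key=lambda m: w * outcomes[m][0] + outcomes[m][1])

# The dilemma from the essay's opening: stay the course and die alone,
# or swerve and kill several bystanders.
crash = {"maintain": (1, 0), "swerve": (0, 3)}
for setting in SETTINGS:
    print(setting, "->", choose_maneuver(crash, setting))
```

Under these toy numbers, an altruistic or neutral owner’s car maintains course (one death instead of three), while a self-protective owner’s car swerves; the Tragedy of the Commons worry quoted above corresponds to every owner simply picking the largest weight.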

It all comes down to the age-old debate over the balance of individual freedoms with the greater good of the community. Which is the most important? Which has more weight? In American society, and most of Western society, individualism is king. In many Eastern societies, the good of the whole is embedded in their cultures and therefore held in higher regard than the rights of the individual. Should this dictate the way that self-driving cars are approached around the world? Should the car companies decide how safe your car is for you based on your geographical location? Should your government impose its will and dictate in which scenarios your life ceases to matter? Some choices are bigger than companies or governments or societal views. They are bigger, because they are permanent, because they are personal, and they should not fall under any jurisdiction other than that of the individual who is affected by the decision. Some choices should never be taken from us.


Profound Choice

Figure 4. Road sign choice. (Burner, n.d.)

Mortality is an intensely personal subject. Our own death is something that only we can experience, and it is something that, if possible, we should have the final say over. When people are faced with a deadline to their lives, due to sickness, or age, many make plans for that end. They choose the ceremony with which to be remembered, what will become of their remains, how their funds will be distributed, and most importantly, how they will face death, fighting, or accepting, or perhaps a little of both. People plan. They choose. And no choice is more important, more affecting, than the choice of what you would be willing to die for. That is a choice that should not be forfeited or denied. It is a choice that should be counted among our basic human rights.

If owners of self-driving cars were given a choice about their vehicle’s response to an emergency, the type of emergency where morals become fuzzy and nobody wins in the end, the roads would still be far safer than they are today. Choice lets us take matters into our own hands even as we give up control. It allows us peace of mind. The offer of choice inspires trust; the withholding of choice inspires fear. Car companies, programmers, philosophers, government agencies, and consumers must work together to create a balance between choice and utility, between fear and sacrifice, between morality and machinery, if we are to move forward into a future where we choose to let our cars drive us.




1. Ackerman, E. (2016, June 23). People want driverless cars with utilitarian ethics, unless they’re a passenger. IEEE Spectrum. Retrieved from:

apokusay. Grim reaper drive car pop art vector image [Online image]. Retrieved from:

2. Beall, A. (2017, October 13). Driverless cars could let you choose who survives in a crash. New Scientist. Retrieved from:

3. Berkeley Wellness. (2018, March 13). Self-driving cars: How many lives will they save? Berkeley Wellness. Retrieved from:

Burner, T. [untitled image of road sign labeled choice]. Retrieved from:

4. Foot, P. (1978). Virtues and vices and other essays in moral philosophy. Berkeley, CA: University of California Press; Oxford: Blackwell.

5. Goldhill, O. (2018, February 11). Philosophers are building ethical algorithms to help control self-driving cars. Quartz. Retrieved from:

Kenton County Police. (2015, December 1). Deer collision. [Animated GIF]. Retrieved from:

6. Konnikova, M. (2014, October 7). The limits of friendship. The New Yorker. Retrieved from:

Nelson, E. (2017, January 12). Would you sacrifice one person to save five? Retrieved from:

[untitled image of a Google self-driving car] Retrieved from:

7. Westacott, E. (n.d.). Moral relativism. Internet Encyclopedia of Philosophy. Retrieved from: